


We may be experiencing a new revolution in artificial intelligence. Advances in image processing and recognition marked 2012, paving the way for facial recognition and the autonomous car. Today, it is text generation that is undergoing an unprecedented upheaval, as evidenced by GPT-3, the third version of a text generator from OpenAI, a company founded in 2015 by Elon Musk.

A considerable amount of data

The program was trained without supervision on a huge body of text sourced from the web and from digital books. To give a sense of scale, the entirety of Wikipedia's English-language articles (six million of them) represents only 0.6% of the data on which it was trained, which includes cooking recipes as well as fiction and computer manuals. GPT-3 was not, however, trained to perform any specific task.
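The idea behind this kind of unsupervised training is that raw text supplies its own labels: the model simply learns to predict the next word. As a minimal sketch (a toy bigram counter, nothing like GPT-3's actual transformer architecture with billions of parameters), the autocomplete principle looks like this:

```python
from collections import Counter, defaultdict

# Toy illustration of unsupervised language-model training:
# the only "labels" are the next words in the raw text itself.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count, for each word, which words tend to follow it (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(prompt_word, length=3):
    """Greedily autocomplete by always picking the most frequent next word."""
    out = [prompt_word]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # → "the cat sat on"
```

No human had to annotate anything: the counts come straight from the corpus. GPT-3 applies the same next-word objective, but with a vastly larger model and corpus, which is what makes its completions coherent across whole paragraphs.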

This autocomplete tool was made available in beta a few days ago, and researchers immediately seized on it to conduct various experiments, with results whose coherence is very impressive. Some have succeeded in generating computer code, scientific articles, a chatbot bringing together historical figures, and literary pastiches of Emily Dickinson and T.S. Eliot, and some have even obtained medical diagnoses.

Some reservations are in order, however. The Verge points out that the researchers shared only what worked, and that some tests produced far-fetched answers. Still, some experts believe the program will improve each time its training data scales up, perhaps one day leading to artificial general intelligence, that is, a system with the same cognitive capacities as a human.

If GPT-3 can achieve this kind of result, it is because it was trained without supervision. That allowed it to ingest data quickly and on an unmatched scale, eliminating the need for tedious and costly sorting by humans. This strength, however, is also a weakness.

A biased corpus

GPT-3 has drawn on scientific articles, but also on unfiltered racist, conspiratorial, and sexist content. Its training data is therefore biased.

Jérôme Pesenti, a Frenchman who works for Facebook, shared frightening examples on Twitter of posts generated by GPT-3 from keywords such as Jewish, Black, woman, or Holocaust.

GPT-3 has shown exceptional ability, but it could become a formidable propaganda weapon if it fell into the wrong hands.

Source: The Verge

