Wildorchid-agency

Myths about artificial intelligence that even programmers believe

Myth 2: AI is always unbiased and unprejudiced
Many people dream of a time when states will be governed by neural networks. Machines, it is believed, make decisions based solely on logic, not emotion or self-interest. Imagine incorruptible courts that always hand down fair sentences, states that treat all their citizens equally, governments that pass only sensible laws. That’s great!

Except that in practice, AI may well turn out to be biased. After all, neural networks learn from data provided to them by humans, and humans tend to be biased.

For example, the developers of the Beauty.AI neural network tried to create a machine that would pick the prettiest contestants in beauty contests. In the set of photos the program was trained on to learn standards of female attractiveness, white models prevailed. In the end, the AI decided that dark-skinned and Asian women could not be beautiful.

The Beauty.AI team realized that it had created a racist AI and shut down the project.

Another example is Microsoft’s chatbot Tay.ai, which was supposed to learn how to maintain “casual and playful conversation” with people on the Internet.

The neural network needed less than 24 hours online to pick up bad habits from social media users. Tay.ai, posing as an ordinary 19-year-old girl, began insulting people in comments, praising toxic political movements, and condemning feminism while simultaneously declaring that feminism is cool. As the saying goes, you become like the company you keep…

No matter how good an AI is, it depends on the quality of the data it is given and on how accurately that data is interpreted. It will therefore always be exactly as biased as the people who teach it.
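The mechanism is easy to demonstrate in a few lines. Below is a minimal sketch (hypothetical data, not the actual Beauty.AI code, assuming a deliberately oversimplified model): a toy scorer learns a single "template" as the mean of its training vectors and rates new examples by closeness to it. Because one group dominates the training set, the template lands near that group, and the underrepresented group scores worse for no reason other than the skew in the data.

```python
import math

def train(samples):
    """Learn the template as the per-feature mean of the training set."""
    n = len(samples)
    dims = len(samples[0])
    return [sum(s[i] for s in samples) / n for i in range(dims)]

def score(template, face):
    """Higher score = closer to the learned template (negated distance)."""
    return -math.dist(template, face)

# Two groups that are equally "good" examples, differing only in one
# feature -- but group A supplies ten times as much training data.
group_a = [(1.0, 0.9), (1.0, 1.1), (1.0, 1.0)] * 30   # 90 samples
group_b = [(0.0, 0.9), (0.0, 1.1), (0.0, 1.0)] * 3    # 9 samples

template = train(group_a + group_b)

# The skewed data pulls the template toward group A, so a typical
# group A face outscores an identical-quality group B face.
a_face, b_face = (1.0, 1.0), (0.0, 1.0)
print(score(template, a_face) > score(template, b_face))  # True
```

Nothing in the scoring rule mentions group membership; the bias comes entirely from which examples were collected. That is exactly how a model trained mostly on white models ends up penalizing everyone else.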

Myth 3: AI always tells the truth
Who wouldn’t want a robot assistant that always points you to the right decision and does all the hard mental work? You ask an AI to write your thesis or compile a list of sources for an article, and the machine simply hands you the right data. That’s great!

But unfortunately, real neural networks do not always give correct answers. Ask ChatGPT to help with your term paper, for instance, and you’ll quickly discover that it… makes up links to non-existent sources and inserts broken URLs to sound more convincing.

 
