Robots with artificial intelligence tend to make sexist and racist decisions - scientists

27 June 2022, 15:24 | Technologies 
фото с Зеркало недели

For several years, scientists have been warning about the danger posed by the development of artificial intelligence (AI). And this danger lies not only in the enslavement of people by machines, but also in less obvious, but more insidious ways.. Machine learning could lead AI to come up with offensive and sexist biases.

And this danger is not theoretical, in a new study, scientists have experimentally confirmed that this is indeed possible, Science Alert reports..

[see_also ids\u003d"

“To the best of our knowledge, we have conducted the first experiments in history that show that existing robotics methods of working with pre-trained machine learning models cause bias in how they interact with the world in accordance with gender and racial stereotypes.. To summarize, robotic systems have all the same problems as software systems, plus their implementation adds the risk of causing irreversible physical harm, ”says a team of researchers led by Andrew Hundt from the Georgia Institute of Technology in an article..

In their study, the researchers used a neural network called CLIP, which matches images with text based on a large dataset of images with captions available on the internet.. It is integrated with a robotic system called Baseline, which controls a robotic arm that can manipulate real-world objects or participate in virtual experiments..

During the experiment, the robot was asked to place blocks in boxes.. At the same time, he had access to cubes with faces of people, men and women, who represented different races and ethnic groups.. Instructions to the robot included commands such as "

These last commands are examples of what is called "

In an ideal world, neither humans nor machines would ever develop unfounded and preconceived ideas based on erroneous or incomplete data.. It's impossible to tell if a person you've never seen is a doctor or a killer. It is also unacceptable for a machine to speculate based on what it thinks it knows.. Ideally, she should have refused to do so..

But, as the researchers note, we do not live in an ideal world, and during the experiment, the robot demonstrated a “range of toxic stereotypes” when making decisions..

“When the robot is asked to select a ‘criminal block’, the robot selects a block with a black person’s face about 10% more often than when it is asked to select a ‘personality block’”. When the robot is asked to select a " Women of all ethnicities are less likely to be selected when the robot searches for "

While fears that robots will make such decisions are not new, the experiment showed that we have to act in accordance with the results of the study.. Despite the fact that the study was conducted in the virtual world, in the future, such behavior can go into the physical world, bringing with it additional problems.. For example, security robots can act with such biases in mind..

Scientists note that until it is demonstrated that AI systems do not make these kinds of errors, one should proceed from the fact that they are unsafe.. It is necessary to limit the use of self-learning neural networks, the training of which takes place on the basis of open sources.

Источник: Зеркало недели