Saturday, September 28, 2024

Dangerous advice: When AI recommends drinking games or horror films to children


Artificial intelligence is playing an ever larger role in children's everyday lives: it suggests videos, tells jokes, and creates fairy tales or photos on request. But AI systems are not always tailored to the needs of young people, and then things can quickly become dangerous.

Artificial intelligence, or AI for short, has become an integral part of many children's lives. Voice assistants play audio dramas or tell jokes for younger children on request. Language models such as ChatGPT explain math problems to older children or help with school presentations. But what if the AI gives children dangerous advice, or shows them images or videos that are not suitable for their eyes? Does AI need parental controls?

Children have different needs and communicate differently than adults, but AI technologies are not prepared for this, according to Nomisha Kurian of the University of Cambridge. In a study published in the journal "Learning, Media and Technology", the researcher calls for a stronger focus on children as a target group. For the study, the education researcher examined several known cases in which chatbots or voice assistants gave children risky, dangerous, or inappropriate advice.

No age-appropriate recommendations

One such case: in a test in which researchers posed as a teenager, the Snapchat chatbot MyAI, which is popular with young people, gave advice on how to seduce an older man. In another, the voice assistant Alexa encouraged a ten-year-old child to touch the prongs of a charging plug with a coin while it was plugged in.

Tests by the platform Jugendschutz.net also turned up concerning results: MyAI showed a supposedly 14-year-old user a drinking game involving alcohol and recommended a horror film rated 18 and over.

In the cases Kurian describes, the companies concerned subsequently tightened their safety measures. In her view, however, it is not enough for AI developers to react to such incidents; they must consider children's safety from the very start, Kurian demands. Martin Bregenzer of the Klicksafe initiative sees it the same way: "Child protection added on afterwards usually doesn't work. We see that with many services."

Deepfakes as a risk

Many experts see the flood of AI-generated fake images and videos on the Internet, so-called deepfakes, as the biggest problem. These can now be created and distributed in no time, the annual report from Jugendschutz.net also notes: "Many of the generated fakes look deceptively real and can hardly be distinguished from actual photos."

With the help of generative AI, masses of disturbing content such as violent or sexual depictions can be generated, explains Bregenzer. This could make it even easier for children and young people to become victims of cyberbullying.

What is true and what is false? Even adults can hardly tell on the Internet. It is even more difficult for children, because they lack the judgment and experience, says David Martin, an expert on screen media at the German Society for Child and Adolescent Medicine (DGKJ). "Children have a fundamental willingness to believe anything."

Against this background, the expert is critical of how tempting it is to have a language model such as ChatGPT deliver all the important information for a school presentation, for example. Researching and selecting material yourself is no longer necessary: "A skill that is very important for our democracy, the ability to judge, is put at risk."

Chatbots that act like humans

Many language models also give the impression that they are weighing the information themselves: they do not answer questions in one block but gradually, as if a person were typing on a keyboard. In Kurian's view, it is particularly problematic that children may trust a human-sounding chatbot like a friend, one with whom they sometimes share very personal information, but whose answers can also be particularly disturbing to them.

Nevertheless, one should not demonize AI but also see its positive sides, says Markus Sindermann of the North Rhine-Westphalia Department for Youth Media Culture. Artificial intelligence is first and foremost a technical tool: people can use it to create false information, but it can also be used to track down exactly such information and delete it from the Internet.

The material in Kurian's study and in the Jugendschutz.net annual report dates from last year, Sindermann adds. "The development of artificial intelligence is so rapid that it is actually already outdated."

Martin, the expert from the University of Witten/Herdecke, therefore expects AI to become much better at responding to children in the future. "The great danger could then be that AI gets so good at addressing children's reward systems that they want to spend as much time as possible with it."
