The Dangers of Internet Data – How Alexa Told a 10-Year-Old Girl to Touch a Live Plug

Recently, the mother of a 10-year-old girl was shocked when Amazon's Alexa told her daughter to touch a penny to a live plug. What exactly happened in this incident, how does it demonstrate the dangers of data collected from the internet, and how can engineers take action in the future?

In December 2021, a mother and daughter were looking for things to do around the house in bad weather. To help pass the time, the Amazon Alexa assistant can suggest challenges sourced online, such as finding every item in a kitchen that starts with a specific letter or timing how many push-ups someone can do in a given period.

However, the indoor activities took a dark turn when Alexa presented the two with a new challenge known as the “Penny Challenge”. Simply put, Alexa told the 10-year-old to plug a phone charger halfway into a wall outlet and touch a penny to the exposed prongs.

Anyone familiar with electricity knows that this challenge is dangerous. First, touching a penny to the exposed prongs can easily lead to fatal electrocution. Second, the resulting high current and sparks can start a fire at the outlet or in the wiring.

The challenge originated on TikTok, where the Penny Challenge had gone viral. While Alexa relies on AI for voice processing, text-to-speech, and intent recognition, it could not recognize the dangers of the challenge. Fortunately, both the daughter and the mother realized the challenge was downright reckless, and the mother took to social media to educate others about the problem with Alexa.

Creating a smart, responsive system is easier said than done, but there are methods that can create the perception of intelligence. For example, CleverBot is an online AI system that learns from millions of conversations to produce a chatbot that closely resembles a real person. The truth, however, is that CleverBot understands neither the context of its conversations nor the responsibility for what it says.

In the case of Amazon Alexa, the engineers responsible for its design wanted to give Alexa functions that would keep it interesting and relevant. One way to create such a system would be to build a genuine AI that could think and feel for itself, watch what the world is doing, and then report back on whatever it found interesting and relevant. However, this is currently impossible, as no such AI exists (and may not for a very long time).

Another option is to draw on content created by the general public. In the case of Alexa challenges, any challenge trending on social networks (such as TikTok) can be scored by its popularity and then presented to users. However, such a system has no way to determine whether a trending challenge is safe, nor does it understand the moral issues associated with the challenge.
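To make that weakness concrete, the sketch below ranks trending challenges purely by engagement. The Challenge fields, weights, and numbers are invented for illustration and are not Amazon's or TikTok's actual metrics; the point is simply that a popularity-only pipeline has no notion of safety.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    title: str
    views: int
    likes: int
    shares: int

def popularity_score(c: Challenge) -> float:
    # Weight shares more heavily than passive views: a pure engagement metric.
    return c.views * 0.1 + c.likes * 1.0 + c.shares * 5.0

def pick_challenge(trending: list[Challenge]) -> Challenge:
    # Returns the most "popular" challenge; nothing here models safety or ethics.
    return max(trending, key=popularity_score)

trending = [
    Challenge("Kitchen alphabet hunt", views=20_000, likes=1_500, shares=300),
    Challenge("Penny challenge", views=900_000, likes=40_000, shares=25_000),
]

print(pick_challenge(trending).title)  # "Penny challenge" wins on engagement alone
```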

The internet is an amazing development that has boosted productivity and technological progress, but it is also a dumping ground for rotten data. Whether it is comments and opinions from exceptionally loud individuals or sites stating facts without credible sources, engineers should be careful when using data from the internet.

The only reason this incident with Amazon Alexa happened was that the challenge feature designed by Alexa's engineers had no verification process. Trending data was gathered automatically by AI algorithms, a challenge was identified (using various algorithms), and that challenge was then added to the list of challenges. Adding a human review step would prevent dangerous tasks from being presented to users, but it creates a bottleneck in a system meant to think for itself and present challenges based on real-time data.
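A rough sketch of that human review step is shown below, assuming a simple pending/approved split. The names and data are hypothetical and not a description of Alexa's actual pipeline; the trade-off is that nothing reaches users until a moderator has looked at it.

```python
# Challenges gathered automatically wait in a pending list until a human
# approves them; safer, but it adds the latency bottleneck described above.
pending_review: list[str] = []   # challenges awaiting a human decision
approved: list[str] = []         # challenges the assistant may present

def ingest(challenge: str) -> None:
    """Output of the automated trend pipeline never goes straight to users."""
    pending_review.append(challenge)

def moderator_review(decisions: dict[str, bool]) -> None:
    """A human approves or rejects each pending challenge."""
    global pending_review
    for challenge in pending_review:
        if decisions.get(challenge, False):
            approved.append(challenge)
    pending_review = []

ingest("Kitchen alphabet hunt")
ingest("Penny challenge")
moderator_review({"Kitchen alphabet hunt": True, "Penny challenge": False})
print(approved)  # ['Kitchen alphabet hunt'] - the dangerous challenge is held back
```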

Fundamentally, engineers need to recognize that any data generated outside of their control and published online will have integrity issues. As such, systems that automatically take data from the internet must have measures in place to classify that data. In the case of the Penny Challenge, a basic Google search would surface many news articles describing the dangers of the challenge, and an AI algorithm could mark comments on Penny Challenge videos as negative or positive. If the overwhelming majority of comments are negative, it could mean that the challenge is unpopular and/or dangerous even though it is trending.
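As one illustration of that comment-screening idea, the sketch below flags a challenge when the share of negative comments is overwhelming. The keyword matcher stands in for a real sentiment model, and the comments and threshold are invented for the example.

```python
import re

# Placeholder vocabulary standing in for a trained sentiment classifier.
NEGATIVE_WORDS = {"dangerous", "fire", "hospital", "stupid", "don't", "warning", "kill"}

def is_negative(comment: str) -> bool:
    # Tokenize on letters/apostrophes so punctuation doesn't hide matches.
    words = set(re.findall(r"[a-z']+", comment.lower()))
    return bool(words & NEGATIVE_WORDS)

def should_block(comments: list[str], threshold: float = 0.7) -> bool:
    if not comments:
        return True  # no evidence either way, so err on the side of caution
    negative = sum(is_negative(c) for c in comments)
    return negative / len(comments) >= threshold

comments = [
    "this is so dangerous, do not try it",
    "my cousin started a fire doing this",
    "warning: this can kill you",
    "lol looks fun",
]
print(should_block(comments))  # True - overwhelmingly negative despite trending
```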
