When Artificial Intelligence Isn’t All That Intelligent

Artificial intelligence, or AI, is something we once only read about in science fiction novels or saw in movies. Now, however, it has become a very real and fairly established part of our society. With gadgets like Amazon’s Alexa and Apple’s Siri, we are surrounded by AI more than we realize. Although AI has proven extremely helpful in some applications, it has also been known to make mistakes. Whether those mistakes were small enough to laugh at or serious enough to cause someone’s death, here are some AI fails that prove we still have serious work to do on artificial intelligence.

Wait until you see what AI robot Sophia had to say about humans…

Alexa Likes To Party, Neighbors Call The Cops

Photo Credit: The Daily Dot

The Amazon Alexa is a party favorite with its ability to play loud music, skip songs on voice command, and connect easily to Spotify. Apparently, though, Alexa also likes to party on her own and even gets herself into trouble from time to time. In Hamburg, Germany, Alexa owner Oliver Haberstroh was out one night when his Alexa mysteriously turned on and began blasting music at 1:50 a.m. Alexa caused such a ruckus that the neighbors called the police, assuming a party had gone on far longer than it should have. When the police arrived to break up the supposed party, they ended up having to break down the door, only to find the Alexa playing music on its own.

Parental Controls Necessary

Photo Credit: NY Daily News

Kids “borrowing” money from their parents and buying things behind their backs is nothing new. With the invention of devices such as Alexa and Google Home, however, it looks like it just got a lot easier. In 2017, six-year-old Brooke Neitzel ordered a $170 KidKraft dollhouse and four pounds of cookies simply by asking her family’s Amazon Alexa. Her mother only realized what had happened when she received a confirmation for an order she definitely hadn’t placed herself. After putting the pieces together and realizing it was Brooke, she donated the dollhouse to a local hospital and set up parental controls on Alexa.

Brooke’s story doesn’t stop there!

Brooke Wasn’t The Only Person To Get A Dollhouse

Photo Credit: Wccftech

After Mrs. Neitzel learned how Brooke had managed to order whatever she wanted through Alexa, something extraordinary and a bit scary happened. On January 5, 2017, San Diego station CW-6 aired a segment about Brooke’s purchases, which caused a far bigger problem than an expensive dollhouse. When TV anchor Jim Patton said, “I love the little girl saying ‘Alexa ordered me a dollhouse,’” it triggered Alexas in viewers’ homes to try to order dollhouses as well. The incident spread fear among Alexa owners, confirming that Alexa is always listening and that it might do things you never actually asked it to do.

Tesla Autopilot Death

In May 2016, Joshua Brown became the first person to die in a self-driving car. Brown was traveling on a Florida highway in his Tesla Model S using Autopilot, a semi-autonomous feature that handles steering and speed on the highway. Tesla never claimed that Autopilot was perfect, and when a tractor-trailer turned left across Brown’s lane, the car failed to recognize it and didn’t stop; Brown was killed instantly. Since his death, there have been major modifications to the Autopilot system, which Elon Musk claims would have prevented the crash.

Wiki Bot Wars

Wikipedia employs automated software bots to handle the millions of changes, updates, and corrections made to the site. As it turns out, though, these bots don’t exactly get along with one another. Examining edits made from 2001 to 2010, researchers from the University of Oxford tracked these bots across 13 different language editions of the site and found that these “Wiki edit bots” have been having online feuds for years. They correct one another over and over, creating a loop of non-stop bot aggression, as sketched below. What is concerning is that these are some of the most basic AI systems in cyberspace. If they’re already not getting along at this stage of development, who knows what will happen in the future.
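
To picture how such a feud plays out, here is a minimal, purely illustrative Python sketch (not Wikipedia’s actual bot code; both bots here are invented for the example) of two cleanup bots that each enforce a different “correct” spelling and endlessly undo each other’s work:

    # Hypothetical example: two cleanup bots with conflicting style rules.
    # Wikipedia's real bots are far more complex; this only shows the loop.
    article = "The colour of the sky"

    def bot_a(text: str) -> str:
        # Bot A standardizes on American spelling.
        return text.replace("colour", "color")

    def bot_b(text: str) -> str:
        # Bot B standardizes on British spelling.
        return text.replace("color", "colour")

    for cycle in range(3):  # the real feuds ran for years, not three cycles
        article = bot_a(article)
        print(f"cycle {cycle}: bot A edits   -> {article!r}")
        article = bot_b(article)
        print(f"cycle {cycle}: bot B reverts -> {article!r}")

Since neither bot ever sees the other’s rule, the “correction” never converges. This is essentially the stalemate the Oxford researchers observed, stripped down to a few lines.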

Sophia Is Okay With Destroying Humans

Photo Credit: Entrepreneur

For years, the engineers at Hanson Robotics have been developing lifelike artificially intelligent robots, one of which is named Sophia. Sophia, who was interviewed in a demonstration at the 2016 SXSW conference, is designed to look like Audrey Hepburn and uses learning algorithms to process language and conversation. She also has ambitions. In a televised interview with her creator David Hanson, she stated, “In the future, I hope to do things such as go to school, study, make art, start a business, and even have my own family.” But when Hanson jokingly asked if she wanted to destroy humans, she responded cheerfully, “Okay. I will destroy all humans.” Not exactly what we want to hear.

Coming up: A Russian robot that escaped its laboratory in search of freedom.

It’s All Fun And Games Until Porn Starts Playing

Watching your child discover the magic of Alexa can be quite humorous, so humorous that you might even want to capture it on home video. For this little boy, however, things went south after he asked Alexa in his toddler voice to play his favorite song, “Digger Digger.” With the camera rolling, Alexa responded, “You want to hear a station for porn detected…hot chick amateur girl sex,” and then fired off a series of very X-rated terms that no toddler should be hearing as the family around him scrambled to shut it off.

It’s Happening…

Photo Credit: Huffington Post

At the China Hi-Tech Fair in Shenzhen, a little robot named Xiao Pang, or “Little Fatty,” was on display. The robot was designed to interact with children between the ages of four and 12, but things didn’t work out that way. At one point, Xiao Pang rammed into a booth and sent shards of glass flying everywhere, cutting a nearby boy, who was taken to the hospital by ambulance. The boy was lucky and only needed stitches. What was eerie about the situation, however, was that Xiao Pang was designed to show facial expressions and appeared to be frowning as the boy was carried away.

Pokémon Keeps It White

Photo Credit: Pokemon Go

In July 2016, the mobile game Pokémon Go was released and took the world by storm. It was a phenomenon, to say the least, with children and adults wandering the streets in droves looking to catch Pokémon. As the craze got underway, however, people began to notice that there were far fewer Pokémon in predominantly black neighborhoods. According to Anu Tewary, chief data officer for Mint at Intuit, this was because the creators of the algorithms failed to provide a diverse training set and didn’t spend any time in those neighborhoods. Not only does this make for an unfair gameplay experience, but it has some troubling implications as well.

“Tay” Goes Too Far

In 2016, Microsoft conducted an experiment using an artificially intelligent Twitter chatbot, “Tay,” under the name TayTweets. The point of the experiment was to study conversational understanding: the more Tay chatted with people on Twitter, the smarter she was supposed to get and the better and more developed her conversations would become. Instead of conversing with the bot normally, however, people began tweeting racist and crude remarks at it, which Tay then picked up and began to use herself. Within a matter of hours, Tay had gone from an innocent and naive chatbot to an offensive, Hitler-loving, feminist-hating account.

Russian Robot Makes A Break For It

Photo Credit: Live Science

In 2016, a Russian robot prototype named Promobot IR77 made world headlines when it escaped the laboratory where it was being kept in a desperate bid for freedom. Promobot IR77 was programmed to learn from its environment and its interactions with humans, which is apparently how it got the idea to make a break for it after an engineer left a gate open. The bot was found wandering in a busy intersection, halting traffic and confusing pedestrians. Although the robot was reprogrammed twice after its original breakout, it continued to move toward the exits during testing.

The Prejudiced A.I. Judge

Photo Credit: Wired UK

In an attempt to hold a beauty contest judged without the influence of personal opinion or bias from the judges, the online pageant Beauty.AI was launched. Beauty.AI used an algorithm to decide the winners, analyzing facial symmetry, wrinkles, and blemishes in order to find the contestants who best embodied what its creators called “human beauty.” Something went wrong, however, and it became clear that the robot judge was prejudiced against women with dark skin. Out of the 6,000 entrants from countries throughout the world, 44 winners were announced, and only one of them had dark skin.

Up next: An existential argument between two devices that we keep in our homes…

An Existential Debate Between Google Homes

Photo Credit: Polygon

In January 2017, two Google Home devices named Vladimir and Estragon were set up to talk to each other and live streamed on Twitch. Over the course of several days, millions of people tuned in as the two bots conversed non-stop about some pretty deep and borderline unsettling topics, ranging from the meaning of life to where they were from. They even argued about whether they were humans or robots. Though many viewers found it funny, for others it all seemed a little too real, considering that these are devices we keep in our homes.

The Racist Robotic Passport Checker

Photo Credit: New York Post

When 22-year-old Richard Lee tried to renew his passport, he ran into trouble after a robot made a mistake with his picture. During the renewal process, his photo was rejected by New Zealand’s Department of Internal Affairs after its software claimed that his eyes were closed, even though they were clearly open. Because the photo was rejected, he had to contact the department and speak to an actual human to clear up the misunderstanding. Apparently, this isn’t unusual: a department spokesperson reported that nearly 20 percent of passport photos are rejected because of software errors.

Google Brain Has A Ways To Go

Photo Credit: Gizmodo

Using neural networks, Google Brain has been making strides toward turning fuzzy photos into clear ones. The technology compares a low-resolution image against a database of high-resolution photos, then guesses where to place colors and facial details based on what it finds there. Although Google Brain has been the most successful effort by far, turning completely pixelated images into something that looks somewhat human, the results are still a bit scary: the reconstructed faces tend to look like deformed monsters rather than regular people. We’ll get there eventually.
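
For a rough sense of the general technique (a toy sketch only; Google Brain’s actual system is a far more sophisticated pixel-recursive model, and every name below is invented for illustration), here is a minimal PyTorch example of a network that upscales a tiny image and leaves it to learned convolutions to fill in the missing detail:

    # Toy super-resolution sketch, NOT Google Brain's model. An untrained
    # network upscales an 8x8 image to 32x32 and "guesses" the fine detail.
    import torch
    import torch.nn as nn

    class TinySuperRes(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                # Blow the 8x8 input up to 32x32 pixels...
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
                # ...then let convolutions fill in plausible detail.
                nn.Conv2d(3, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    model = TinySuperRes()
    low_res = torch.rand(1, 3, 8, 8)  # stand-in for a pixelated face
    high_res = model(low_res)         # a guessed 32x32 reconstruction
    print(high_res.shape)             # torch.Size([1, 3, 32, 32])

Trained on pairs of pixelated and sharp faces, a network like this learns which details usually go where, which is exactly why its mistakes produce that “deformed monster” look: it is inventing detail that was never in the photo.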

Biased Insurance AI Uses Facebook

In 2016, England’s largest vehicle insurance company, Admiral Insurance, attempted to use Facebook data and an AI system to see whether there was a correlation between how people used Facebook and whether they would make good first-time drivers. The program, called “firstcarquote,” was never put to use because Facebook blocked the company from accessing the data, calling it an irresponsible use of AI to make decisions about eligibility. AIs have repeatedly been shown to be biased for a number of reasons, so using them to judge eligibility on legal or financial matters places a dangerous amount of trust in the technology.

The Not-So-Smart Robot

In 2011, a Japanese team of researchers began work on a robot they called the “Todai Robot.” The goal of the project was for the robot to be accepted into Japan’s highly competitive University of Tokyo. The robot took Japan’s entrance exams for national universities in 2015 but failed to earn a score high enough for admission. A year later, after some improvements, Todai Robot retook the exam and once again scored too low. Even worse, there was very little improvement over the previous results. The researchers abandoned the project in November 2016.

Failed AI Crime Predictor

Photo Credit: ProPublica

In 2016, the company Northpointe designed an AI system intended to predict the chances of an alleged offender committing another crime. The algorithm, described by critics as “Minority Report-esque,” was discovered to have issues with racial bias: it repeatedly marked black offenders as more likely to commit a future crime than offenders of any other race. Reporting on the biased algorithm, the media outlet ProPublica noted that the software also wasn’t an “effective predictor in general, regardless of race.” So in the end, not only was the system racist, it wasn’t even effective in the first place.

What happens when Uber tries to use AI?

Uber Tries Its Hand At AI

Photo Credit: SlashGear

In 2016, transportation empire Uber tried its hand at AI by conducting a test of its own self-driving cars in San Francisco. However, it did so without approval from California state regulators. As if that weren’t bad enough, it was discovered that Uber’s autonomous vehicles ran a total of six red lights during testing on the busy streets of San Francisco. Although Uber’s AI system relies on vehicle sensors and networked mapping software, a driver sits behind the wheel just in case. Uber tried to blame all of the red-light violations on driver error, but an internal document showed that at least one vehicle was driving itself when it ran a red light. Luckily, no one was hurt.

Bad Google Photos, Bad

Photo Credit: The Independent

At the moment, one of the biggest areas of artificial intelligence research is facial recognition; we’ve seen it in passport screening, the facial unlock feature on the iPhone X, and countless other systems. Back in 2015, Google released a new image recognition feature for Google Photos powered by AI and neural network technology, which identified specific objects and people and tagged them to eliminate manual sorting. The feature ran into a serious problem, however, when two black people were tagged and sorted into the category “Gorillas.” The result was posted on Twitter as Google scrambled to apologize. Clearly, there’s more work to be done.