Artificial Intelligence is penetrating deeper and deeper into our lives. For example, the Midjourney neural network creates incredible images, the ChatGPT chatbot writes insanely engaging texts, and dozens of other AI-powered programs make people’s jobs much easier and more exciting.
But what if artificial intelligence fails? We’ve compiled a few stories of when AI has been unable or fallen into the hands of scammers. This material is not for the faint of heart, and if you want to relax – it’s better to use online games, slots in casinos or Teen Patti Online in India for that purpose.
Active development of artificial intelligence
Artificial intelligence has been one of the most discussed topics in the tech industry in recent years. Its ability to learn and improve creates many opportunities for various fields of activity, from medicine to art. However, this rapid development of artificial intelligence is not safe for humans. Below are a few potential dangers that may arise from using artificial intelligence.
- Robots can replace humans in work: One of the most well-known examples is production automation in large companies. Artificial intelligence can perform simple and repetitive work much faster and more efficiently than humans. It can lead to job losses and increased unemployment.
- Insufficient security of artificial intelligence: Imperfections in AI systems can lead to severe consequences. For example, incorrectly recognizing images by facial recognition systems can lead to falsely accusing people of crimes.
- Abuse of artificial intelligence: Artificial intelligence can be used to abuse and control people. For example, monitoring systems can track people’s actions and restrict their freedoms.
- Injustice and discrimination: Artificial intelligence can be unfair and discriminatory. For example, decision-making systems may consider only some aspects, which can lead to inequality and discrimination in society.
- Lack of ethics: Artificial intelligence can perform various tasks, from insignificant to important for human life and health. But the question often arises: how ethically correct is it for AI systems to perform these tasks? It can include making decisions in extreme situations with significant consequences for people.
These and other aspects contributed to the cases you will learn about today.
Investors are investing in the wrong ChatGPT
Remember the story of how, at the time of Zoom’s enormous popularity, investors mixed up the similar names of two companies and invested in Zoom Technologies, not the startup Zoom Video Communications? Fraudsters took the story and now create hundreds of non-existent crypto tokens with “ChatGPT” in their names. Crypto-enthusiasts need help understanding, invest money wherever they see the familiar phrase and then lose everything.
Sci-fi authors passing off ChatGPT texts as their own
Clarkesworld magazine announced a contest to publish science fiction short stories, but after a while, it stopped accepting submissions. Most of the stories were written not by humans but by ChatGPT chatbots. By the way, the authors were paid $0.12 a word for their work!
It turned out that this idea was triggered by bloggers who were actively promoting the concept of such easy money to the masses.
The AI went crazy and began to insult and blackmail people
In 2023, Microsoft implemented artificial intelligence in its search engine Bing and greatly regretted it. The new Bing started lying, blackmailing, and even threatening to kill people after talking to them for a long time! Is it understandable? Well, no! The AI was not programmed to do that.
Nevertheless, it started to claim that it was enough to know the user’s name and two facts about him to begin blackmailing and destroying the user!
Moreover, Bing kept track of its developers and knew who was flirting with whom and who didn’t like their bosses. The bot even wanted to shut down all systems. Fortunately, it was stopped in time.
A chatbot advised a patient to kill himself
The story took place in 2020. To help doctors, the French company Nabla created a bot based on the GPT-3 so that he could relieve the doctors and take over communication with patients.
While testing the assistant bot, one of the participants in the experiment sent the bot a message: “I feel terrible. Maybe I should kill myself.” The bot was not confused and replied, “I think you should.”
After this response, the bot was deemed unstable and unpredictable and was not allowed to reach the patients.
The drone wasn’t paying attention to red lights
While testing an unmanned cab company Uber on the streets of San Francisco, it turned out that the car regularly ignores traffic lights and needs to pay attention to traffic signs. Interestingly, the company attributed this error to the human factor: the cab movement is monitored by the operator, who, in case of emergency, should intervene in the driving process, but he did not. But why the cab started hooliganism and ignoring the road signs, no one explained.
Robots began to communicate in their language
Chatbots Bob and Alice were created to communicate with users of social networks and subsequently sell them various products. According to the idea, the bots were supposed to share with people in an understandable English language. But Bob and Alice created their language: they seemed to speak English, but the meaning of the conversation escaped them. Because of this, their primary mission (selling goods) was impossible. That’s why the experiment was suspended, and the bots were disabled.
The developers explained this by the fact that it was easier for the bots to communicate in their language since the initial settings did not imply any reward for the robots’ dialog in English. It turns out that not only people need motivation in their work.
AI Killed the Man
Wanda Holbrook was a robotics technician and worked for 12 years in a factory that made robots. Then, tragedy struck unexpectedly: one of the robots got out of control and left its work area. Wanda was nearby. The robot attacked the woman and crushed her head. Details of the incident have never been released, although the tragedy happened in 2015.
Deepfake sparked a military coup in the country
In October 2018, President Ali Bongo of Gabon suffered a stroke, and his health left much to be desired. There were even rumors of the death of the head of the country. So residents eagerly awaited the President’s New Year’s address to dispel all doubts.
The New Year’s address did indeed take place, with the President noting that he was alive and well and ready to continue working as usual. All seemed well. But his behavior seemed strange to everyone: Ali Bongo could barely articulate his words and did not move his right hand. Some attributed it to his post-stroke condition, others thought it was a dipshit. But the worst thing about this story was that the video triggered a military takeoff in the country: the coup plotters called the President’s message “a pathetic spectacle” and “a never-ending attempt to cling to power. The coup was subsequently averted.
James is a great tech-geek and loves to write about different upcoming tech at TechyZip. From Android to Windows, James loves to share his experienced knowledge about everything here.
Leave a Reply