AI at best a mixed bag


Photo courtesy: LinkedIn

The history of AI chatbots goes back to the mid-1960s, when computer scientist Joseph Weizenbaum created the first chatbot, “ELIZA.”

As technology has advanced, so have chatbots, evolving from simple AI chatbots into character chatbots. Character chatbots are designed to hold human-like conversations and to grow more knowledgeable based on the conversations and questions users bring into the chats. 

Character chatbots are trained with machine-learning algorithms, allowing them to answer open-ended questions. They can be very helpful for education, business and even recreational uses. 

But alongside the good that AI and character chatbots bring into the world, they can also cause harm and danger if not used correctly.

According to the TV station FOX 35 Orlando, a 14-year-old boy took his life moments after exchanging romantic words with a chatbot named after a popular character from the HBO hit “Game of Thrones.” The boy had been talking to the character chatbot for nearly a year and had fallen in love with it. 

“Although AI can be very useful when it comes to learning purposes, on the day-to-day encounter with AI, I wouldn’t permit them texting AI like it’s their friend, or all other chats that are being created like AI chatbots,” Vonee’ Ferguson, a parent of two middle school children, said. “I do believe there should be an age restriction based on the content of AI and what it can use.”

Age restrictions and limits vary depending on the app in which an AI chatbot appears. Snapchat, for example, is highly monitored and currently gives parents the option to block their teens from interacting with its AI chatbot. 

“Honestly, I believe that there should be an age limit to AI, which could help a lot of positive situations, but being in a child mindset, you wouldn’t know what they would be,” said Tyriq, an information technology student at FAMU who requested that his last name not be used. 

AI chatbots can be extremely harmful when used the wrong way, especially for individuals who suffer from mental illness. Using these chatbots with barely any restrictions or limits can worsen existing mental health conditions.  

“The dangers I see are a lot of reassurance that us humans are asking AI for. This can have positive outcomes as well as negative. Therefore, there should be an age limit, and that age limit should be no one younger than 16 years old. And if they do need it, it would only be used to help with schoolwork, to gain better understanding and knowledge of the educational materials presented,” Tyriq said.

Ultimately, researchers say that over the next five years AI chatbots are expected to become more sophisticated and more knowledgeable, incorporating deeper human-like context and understanding through enhanced natural-language processing. 

Additionally, AI is expected to take on customer-service roles for many businesses, schools and health care facilities, help manage personal finances, and even build friendships by offering empathetic interactions. Researchers emphasize that it will be important for workers to develop AI skills to do their jobs in the coming years.