“Beyond concerns about academic integrity, students are increasingly leaning towards shortcuts in their educational journey,” Micallef remarked.
There is rising unease that children may become overly dependent on artificial intelligence, especially chatbots, as they turn away from social media on their devices.
“It’s evident that many individuals can become captivated by the interactive nature of these systems,” Micallef noted.
“These technologies are designed to mimic human interactions, often leading people to mistakenly believe there’s a human element within the machine or that the system genuinely comprehends their words.”
“Consequently, apps aimed at forming friendships and building relationships could pose significant risks to vulnerable users, including children.”
In response, Australia has implemented new guidelines to regulate AI bots, restricting the type of content they can deliver to teenagers.
“We’re the only country in the world that will be tackling AI chatbots and AI companions,” eSafety Commissioner Julie Inman Grant told 9News in early December. 
“They will be prevented from serving any pornographic, sexually explicit, self-harm, suicidal ideation or disordered eating (content) to under-18s.”
Given says the chatbots are hazardous for vulnerable people.
“What we’ve certainly seen around the world is that there are people who will listen to what these systems are telling them and take it to heart,” she said.
“So if a system says to you, ‘that’s a really great idea, you should totally pursue that’, it gives a boost to our ego, and we think someone’s listening.
“But we’ve seen that sometimes that goes to a very dark place, particularly if people start saying to the system, ‘I’m really depressed’, or ‘I’m having really horrible feelings’, or ‘I’m thinking about suicide’.
“We see that some of those apps are actually responding in ways that encourage that thinking rather than trying to respond to a person and push them towards getting help.
“That means people who are very vulnerable, who are already at risk, can actually be really taken down to a dark hole by these computers.”
Micallef believes teens should not have access to AI chatbots at all, chiefly because of the safety risks they pose.
“With some of the concerns outlined by the eSafety Commissioner, such as access to pornographic, graphic or self-harm content, it would not be wise to give impressionable teenagers the ability to utilise such applications,” he said.
Given thinks the restrictions should go even further, so as to protect everyone, not just teenagers, although she conceded any such regulation would be difficult to enact.
“These systems are not just harmful to children,” she said.
“We know we’ve got evidence of adults who also get entranced by these systems. For anyone who is particularly vulnerable, or has existing mental health issues, that’s a huge concern.
“If it’s safe for adults, it should be safe for kids.”