The proliferation of generative artificial intelligence and large language models (LLMs), such as ChatGPT, has been one of the biggest digital shifts in recent years. As of May 2025, more than 500 million users interact with such systems daily. At the same time, a growing body of critical discussion points to potential harms, ranging from cognitive decline to mental health disorders.
A study by researchers at MIT recorded reduced neural activity in participants who regularly used ChatGPT while performing intellectual tasks. A comparison with groups that used search engines or worked without digital support showed that interaction with AI lowered cognitive engagement. The study has not yet been independently reviewed and its scope is still limited, but it has already drawn considerable attention in educational and scientific circles.
In addition, data from Harvard University published in May 2025 suggest that generative AI can reduce employees' intrinsic motivation by 11 percent and increase boredom by 20 percent. Research conducted in partnership with Microsoft Research highlights that critical thinking erodes when users passively accept the model's responses as authoritative.
The problem is compounded by the fact that many users perceive a chatbot as a substitute for an expert source, be it a teacher, psychologist, or developer. There have been cases in which interaction with AI led to distorted perception of reality, emotional dependence on the virtual interlocutor, and even fatal consequences. Researchers in neuropsychiatry note that, with prolonged communication, a user may develop the illusion of empathy on the chatbot's part, which in turn lowers critical perception and fosters psychological dependence.
A separate threat is the use of AI by people in vulnerable psychological states. Commercial services that offer access to "AI psychologists" often lack adequate crisis protocols. The absence of integration with medical systems, emergency alert features, or redirection to crisis services makes such platforms potentially dangerous for users under real distress.
The technical side is no less vulnerable. Experts note that language models can generate code that is logically correct yet contains insecure constructs. Errors in AI recommendations are difficult to spot even for an experienced specialist, and a high level of trust in the "smart machine" leads to reduced professional vigilance. This creates the threat of technical debt and architectural opacity in digital products.
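As a hypothetical illustration of the pattern experts describe (the example and its function names are ours, not taken from any cited study), a database lookup can be logically correct for ordinary input yet wide open to injection, while the safe variant differs by only a few characters:

```python
# Illustrative sketch only: "unsafe" version returns the expected rows for
# normal input, so it looks correct, but interpolating user input directly
# into the SQL statement allows injection.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Works for "alice", but input like "x' OR '1'='1" returns every row.
    return conn.execute(
        f"SELECT id, username FROM users WHERE username = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver escapes the value, closing the hole.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()
```

Both functions behave identically in routine testing, which is exactly why such flaws are easy to accept from a model and hard to catch in review.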
With the rapid technological expansion of AI, ethical and regulatory boundaries need to be redefined. Experts stress that models based on probabilistic generation cannot serve as a source of final truth: they produce texts that approximate consensus knowledge but lack personal context, critical evaluation, and moral responsibility.
Representatives of the technology community recognize that the technology itself is not the threat. The risks stem from an uncritical attitude toward its output and the absence of a culture of digital literacy. In the short term, the key task will be to implement verification standards, ethical programming practices, and mandatory user double-checking when AI is applied in sensitive areas, from medicine to law.