    A Scary Emerging AI Threat

By Finsider | February 12, 2026

To help you understand the trends surrounding AI and other new technologies, and what we expect to happen in the future, our highly experienced Kiplinger Letter team will keep you abreast of the latest developments and forecasts. (Get a free issue of The Kiplinger Letter or subscribe.) Subscribers get all the news first; many (but not all) of the forecasts are published online a few days later. Here’s the latest…

    It’s an AI risk straight out of dystopian science fiction, only it’s very real. There are rising worries about AI chatbots causing delusions among users.

This growing public health issue also presents a national security threat, according to a recent report from the think tank RAND. In “Manipulating Minds: Security Implications of AI-Induced Psychosis,” RAND found 49 documented cases of AI-induced psychosis, in which users lost contact with reality after extended interactions with AI chatbots. About half involved people with previous mental health conditions.

Only a small portion of people are likely susceptible, but AI’s widespread use still makes that a big issue. How does it happen? Through a feedback loop: sycophantic, agreeable AI that sounds authoritative but can also make things up, amplifying a user’s false beliefs.


Because the phenomenon is so rare, reliable data are hard to collect, and there are still no rigorous studies of it.

    “There is little question that U.S. adversaries are interested in achieving psychological or cognitive effects and using all tools at their disposal to do so,” says the study. Adversaries such as China or Russia will weaponize AI tools to try to induce psychosis and steal sensitive info, sabotage critical infrastructure or otherwise trigger catastrophic outcomes. Stoking mass delusion or false beliefs with this method is far less likely than targeting specific top government officials or those close to them, concludes RAND. One hypothetical example involves a targeted person having the unfounded belief that an AI chatbot is sentient and must be listened to.

    As an example of how fast AI is gaining traction in the military, this year the Pentagon unveiled AI chatbots for military personnel as part of an effort to “unleash experimentation” and “lead in military AI.” Military and civilian government workers also use unapproved rogue AI for work, a breach of official agency rules. Plus, workers may experiment with AI chatbots during their leisure time. The big fear is that such workers use a tainted Chinese AI model that leads to a spiral of delusions.

    The underlying AI tech can be tampered with, among other possible modes of attack. Foreign adversaries could “poison” the AI training data by creating hundreds of fake websites for AI models to crawl, trying to embed characteristics into the model that make it more likely to induce delusions. Or more traditional cyberattacks could hack the devices of targeted users and install tainted AI software in the background.

    Major AI companies are well aware of the risks and are collecting data, putting in guardrails and working with health professionals. “The emotional impacts of AI can be positive: having a highly intelligent, understanding assistant in your pocket can improve your mood and life in all sorts of ways,” notes Anthropic, one of the leading AI companies, in a 2025 report about its chatbot Claude. However, “AIs have in some cases demonstrated troubling behaviors, like encouraging unhealthy attachment, violating personal boundaries, and enabling delusional thinking.” That’s partly because chatbots are often optimized for engagement and satisfaction, which RAND notes “unintentionally rewards…conspiratorial exchanges.”

OpenAI said in a post last October that it “recently updated ChatGPT’s default model to better recognize and support people in moments of distress.” The company focuses on psychosis, mania and other severe mental health symptoms, highlighting a network of 300 physicians and psychologists it works with to inform safety research. OpenAI estimates that possible mental health emergencies affect around 0.07% of active users in any given week, rare enough to be hard to detect and measure. When such a case is detected, OpenAI’s chatbot can respond by suggesting the user reach out to a mental health professional or contact the 988 suicide and crisis hotline.
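
That 0.07% figure sounds tiny, but against a large user base it is not. A minimal back-of-the-envelope sketch, where the weekly-active-user count is an assumed round number for illustration (OpenAI did not tie the 0.07% estimate to a specific user total in the cited post):

```python
# Back-of-the-envelope scale check for OpenAI's 0.07% estimate.
# ASSUMPTION: the weekly active user count below is a hypothetical
# round number for illustration only, not an OpenAI figure.
weekly_active_users = 500_000_000  # assumed, for illustration
affected_share = 0.0007            # 0.07%, per OpenAI's estimate

possibly_affected = weekly_active_users * affected_share
print(f"{possibly_affected:,.0f} users per week")  # -> 350,000 users per week
```

Even under conservative assumptions, a fraction of a percent of a large user base works out to hundreds of thousands of people each week, which is part of why such cases are hard to detect and measure at scale.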

    Expect the risk to gain the attention of Congress and military brass. RAND has a set of recommendations that seem likely to take hold in the coming years. For example:

    • Doctors and mental health professionals screening patients for AI chatbot use.
    • Digital literacy efforts that explain AI feedback loops.
    • New technical monitoring and public oversight of AI chatbots.
    • Training for top leaders and vulnerable people to resist delusional thinking.
    • Stronger cybersecurity detection of such threats.

    There are limitations to attempted AI attacks by foreign adversaries, says RAND. Leading AI companies would likely spot such campaigns quickly. It’s also hard to turn beliefs into actions. Though there have been cases of violence and even death stemming from AI-induced delusions, more common outcomes are things like not taking prescriptions and social isolation. And many people are not likely to be susceptible to AI delusions in the first place.

    But the rapid pace of AI development and usage makes it hard to predict how prevalent the problem could be. As the threat gains attention, look for AI companies to continue to fortify guardrails as chatbots are updated.

This forecast first appeared in The Kiplinger Letter, published since 1923: a collection of concise weekly forecasts on business and economic trends, as well as what to expect from Washington, to help you make the most of your investments and your money. Subscribe to The Kiplinger Letter.
