Artificial intelligence (AI)—available 24x7—is fast becoming a friend to those in doubt or, in some cases, even distress. There are stories of people approaching AI for a variety of things—from getting their office work done to satiating their curiosity to seeking personal advice and more.
But is AI a friend you can totally rely upon? Experts we spoke to pointed out that AI is prone to biases that can do more harm than good in many areas, investing being one of them. We explain three of these biases.
Confirmation Bias
A true friend is usually someone who can give you an unbiased, sometimes even unpopular, opinion; someone who cares for you but does not shy away from saying something that might unsettle your beliefs, without really upsetting you.
But, most of the time, AI does the exact opposite. Chatbots tend to agree with your views more and more, confirming what you already believe. In April this year, OpenAI rolled back an update to its GPT‑4o model, saying the update had made the chatbot “overly flattering or agreeable”.
This trait, however, can spell disaster for an investor seeking advice from AI. For instance, an AI chatbot is likely to keep suggesting investing-related articles to a user who is in the habit of searching the Internet for such news. By doing this, the chatbot confirms and strengthens the individual’s existing beliefs and choices. This tendency to confirm existing beliefs is called ‘confirmation bias’.
Nishant Pradhan, chief AI officer, Mirae Asset Investment Managers (India), says that since AI chatbots rely on publicly available and commonly believed viewpoints, they can reinforce the consensus by giving generalised suggestions that may not help solve bespoke problems. Pradhan cites prevalent examples, such as AI chatbots suggesting that users invest in systematic investment plans (SIPs) on account of their general popularity rather than their suitability for the investor.
He says: “When AI chatbots rely on publicly available or statistically dominant viewpoints, they risk reinforcing consensus without critical examination. In personal finance, this translates into overgeneralised suggestions, such as blanket SIP advocacy or simplistic debt vs equity splits, without assessing the investor’s actual context, constraints, or cognitive biases.”
Confirmation bias poses a significant challenge for a new crop of AI-savvy investors. However, it is important to understand why AI chatbots behave this way.
According to Ankush Sabharwal, founder and CEO of CoRover and developer of BharatGPT, a government of India-backed large language model (LLM), general-purpose chatbots are designed to be “diplomatic”. This tendency can harden into confirmation bias when the user’s question contains assumptions or leading hints.
He says: “General-purpose AI models are generally designed with the goal of being useful and diplomatic, and this can have the side effect of encouraging confirmation bias, particularly when the user prompt includes implicit assumptions or leading hints. The model, without explicit fact-checking or outward reference, aims to maximise coherence and usefulness to the detriment of contradiction or critical examination.”
What Can You Do? Despite the prevalence of confirmation bias, there is a simple fix in the way the prompt itself is framed. Anirudh Garg, a fund manager at INVasset PMS, a portfolio management company which claims to leverage AI for its core operations, says that investors can draw differing views out of an AI chatbot by successively asking questions like “what’s the opposite view”, especially if they feel the response is merely confirming their own beliefs.
Says Garg: “For investors, the key is to deliberately challenge the tone and construction of the response. Answers to questions like ‘Is it too optimistic?’, ‘What data is being excluded?’ and ‘What’s the opposite view?’ are often more valuable than the original answer.”
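For readers comfortable with a little scripting, this counter-questioning can even be automated. The sketch below is purely illustrative: it uses the publicly documented OpenAI Python SDK, and the model name and prompt wording are our own assumptions, not a method prescribed by the experts quoted here. It asks for an investment view and then immediately asks for the opposite view, so both sides of the argument arrive together.

```python
# Illustrative sketch: ask a chatbot for a view, then force the counter-view.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name "gpt-4o-mini" is just an example.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    """Send a single question to the chatbot and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

view = ask("Should I keep adding to my equity SIPs right now?")
counter = ask(
    "Here is an answer I received: " + view +
    "\nNow argue the opposite view. What is too optimistic here, "
    "and what data might be excluded?"
)

print("View:\n", view)
print("\nOpposite view:\n", counter)
```

Reading the two replies side by side is a low-effort way to apply Garg’s checklist before acting on any single answer.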
Framing Bias
The way AI presents information can affect human perception and, ultimately, decision-making. For investors, this can manifest as window-dressed information that pushes a particular viewpoint while obscuring other aspects.
For instance, AI chatbots can sometimes frame responses so that the potential gains of investing in a particular security or asset type are highlighted, but the associated risks are not disclosed. This can make investors willing to take on higher risk than is compatible with their investment goals.
“A chatbot that frames equities as ‘the best long-term asset class’ might subtly encourage overexposure, ignoring cycles, valuations, or individual capacity for drawdowns,” says Garg.
He adds that responses from AI chatbots that focus on a single aspect, such as returns, without presenting the accompanying risks can have negative ramifications for investors, such as panic exits or misjudged leverage.
Garg explains: “Cognitive anchoring creates dangerous outcomes: panic exits in downturns, misjudged leverage, or blind overreliance on market-linked instruments. When investors absorb one-sided framing, they stop thinking in probabilities and start acting on narratives, which the market rarely rewards.”
Framing bias usually happens because AI chatbots try to sound authoritative, says Sabharwal. He adds that many chatbots function as “text generators” that use probability to predict what answer a user would find useful, rather than actually reasoning about what the answer to a question should be.
Says Sabharwal: “Many chatbots function as probabilistic text generators, producing responses based on token predictions without offering any visibility into the underlying reasoning. These models don’t truly reason; they mimic reasoning by generating responses based on various algorithms. There’s no targeted perception or genuine understanding.”
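To make that idea concrete, here is a deliberately toy sketch of what “probabilistic text generation” means. The words and probabilities are invented for illustration only; real models work over vast vocabularies and contexts, but the principle of picking a statistically likely continuation rather than a reasoned one is the same.

```python
# Toy illustration: the model does not reason about the question,
# it picks a likely next word. These probabilities are made up.
next_word_probs = {
    "returns": 0.46,     # statistically the most common continuation
    "risks": 0.22,
    "volatility": 0.18,
    "drawdowns": 0.14,
}

prefix = "Equity SIPs offer attractive long-term"
next_word = max(next_word_probs, key=next_word_probs.get)

# The popular continuation wins, not necessarily the most useful one.
print(prefix, next_word)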
What Can You Do? According to Vempati, the way to counter framing bias is, ultimately, to ask better questions, a practice known as “prompt engineering”, which refers to optimising your questions for clarity to get better results from chatbots.
“The more structured and well-defined your query inputs are, the better is the likelihood of getting an unbiased response. That’s the limitation of systems based on generative AI and LLMs,” says Vempati.
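As a rough illustration of what a “structured and well-defined” query can look like in practice, the snippet below assembles a prompt that states the investor’s goal, horizon, risk tolerance and constraints up front, and explicitly asks for both the upside and the downside. The field names and wording are our own assumptions, a sketch of prompt engineering rather than a template endorsed by anyone quoted in this article.

```python
# Illustrative sketch of a structured prompt; the fields and wording are
# assumptions chosen for this example, not an official template.
def build_structured_prompt(goal: str, horizon_years: int,
                            risk_tolerance: str, constraints: str) -> str:
    """Assemble a query that states context up front and asks for both sides."""
    return (
        f"My goal: {goal}. Investment horizon: {horizon_years} years. "
        f"Risk tolerance: {risk_tolerance}. Constraints: {constraints}. "
        "Given this context, list the potential gains AND the specific risks "
        "of the options you suggest, and state any assumptions you are making."
    )

# Compare this with a vague query such as "Where should I invest?"
prompt = build_structured_prompt(
    goal="retirement corpus",
    horizon_years=20,
    risk_tolerance="moderate",
    constraints="no leverage, need some liquidity every year",
)
print(prompt)
```

The point is not the exact wording but the structure: context first, then an explicit demand for both sides of the trade-off.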
Black Box Problems
AI chatbots can give answers to complex questions in great detail. However, they do not always show how they arrived at the answer.
This lack of transparency stems from how the AI is built. Chatbots that do not show how they arrived at a conclusion operate on the ‘black box AI’ principle.
Such issues are becoming more common as more chatbots are built on this kind of technology.
Says Sabharwal: “Most AI chatbots today function as black boxes predicting responses based on statistical patterns and/or text completion/generation from LLMs without offering insight into why a particular answer was given, what source it is based on, or how confident the system is.”
The lack of explanation or rationale from AI chatbots can lead to trust issues when it comes to seeking personal finance-related advice. The use of such AI chatbots can also lead to misaligned investment decisions, says Garg.
“When users don’t understand the rationale behind a recommendation, whether it’s a portfolio strategy or a product choice, they are essentially operating on faith, and not analysis,” Garg says.
Even gains made on the back of such advice are unlikely to last, as the investor will not be able to pivot their investment strategy when market conditions change.
“If an investor doesn’t know why a model favours gold over equities or debt over cash, they cannot pivot when macro conditions change. They will be left vulnerable to regime shifts and narrative shocks—both common in today’s volatile environment,” Garg adds.
What Can You Do? Users can mitigate this problem by asking the AI chatbot to justify its output in terms of the filters and parameters it used to reach a particular conclusion.
Garg says that users should treat any recommendation as incomplete if the chatbot cannot justify how it came to that conclusion.
“Every model must justify its output through visible parameters such as interest rate assumptions, risk filters, volatility thresholds, and historical analogs, among others. No insight is used unless we understand, and can explain its construction,” Garg says.
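One simple way to put this into practice is a fixed follow-up question that asks the chatbot to expose its own reasoning. The sketch below again uses the OpenAI Python SDK; the model name, the example recommendation and the checklist of items requested are our own assumptions, loosely modelled on the parameters Garg mentions, and not a guarantee that the chatbot’s self-explanation is accurate.

```python
# Illustrative sketch: ask the chatbot to justify a recommendation it has made.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the model name and the checklist of items requested are our own assumptions.
from openai import OpenAI

client = OpenAI()

# Example recommendation, stand-in for whatever the chatbot actually produced.
RECOMMENDATION = "Shift 20% of your portfolio from equities to gold."

follow_up = (
    "You recommended: " + RECOMMENDATION + "\n"
    "Justify this recommendation: state the assumptions (interest rates, "
    "volatility, time horizon), the data or historical analogs it rests on, "
    "and how confident you are. If you cannot, say so explicitly."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": follow_up}],
)

# Treat the original recommendation as incomplete until this justification
# actually names its assumptions and sources.
print(reply.choices[0].message.content)
```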
The key lies in understanding the biases that can colour an AI chatbot’s responses and in using well-crafted prompts to get specific answers. So, instead of blindly believing AI recommendations, don’t hesitate to cross-question the chatbot to weed out biases from its responses. This will keep you on track and help you use its suggestions to make informed decisions.
ayush.khar@outlookindia.com