A sudden jump in a stock’s price, a clean breakout on the charts, high order volumes in the pre-market that disappear as soon as the market opens, or even a news headline that pops up at just the right moment. These are not always coincidences. For anyone who follows the bourses closely, these signs reek of market manipulation, a practice that has been around for as long as stock markets have existed. What is changing now is the way AI is being used for these manipulations.
While everyone seems to be talking about the rewards of embedding AI into stock market processes, not many are talking about the risks AI can bring in.
Many of these manipulation tactics exist already, but with advanced AI tools widely available, even manipulators with little expertise can wreak havoc. “With the explosion of AI to the masses, the barrier to launching these attacks has become lower and detection time has shrunk. This makes AI-enabled threats dangerous for global markets,” says Dean Gefen, CEO of NukuDo, a US-based cybersecurity training provider.
Anshuman Das, managing director (MD) and chief technology officer (CTO), JM Financial Services, says, “AI allows negative actors to weaponise more easily and without much technical knowledge. Thus, the number of such actors is likely to increase as the entry barrier of knowledge is reducing.”
AI Strengthening Existing Risks
Deepfake Impersonation: In April 2024, a video went viral on social media showing Ashishkumar Chauhan, CEO and managing director (MD) of the National Stock Exchange (NSE), giving stock tips, something someone in his position would never do.
At first, many people believed the video was real. But it was actually a deepfake. The video used AI to copy his voice, face, and expressions so well that it looked genuine.
Deepfakes are not new, but AI has made them easier to create. Now, anyone with basic tools can make them. And when used in the financial markets, they can mislead investors (see Trust N.R. Narayana Murthy? Check If It’s An AI Video).
AI-Powered Phishing: Phishing is similar to deepfakes. Here, scammers impersonate a trustworthy person on social media and trick individuals into revealing sensitive information.
Recently, a 37-year-old man from Asif Nagar, Hyderabad, was duped of ₹2.8 crore in a fake initial public offering (IPO) scam, according to a media report dated April 29, 2025. The fraud began with a Facebook advertisement offering early access to IPOs, which led him to a fake website. Scammers added him to a WhatsApp group where they regularly discussed IPOs to appear credible. One of them, introducing herself as “Priya”, lured him to use a fake investment app called ASKMIN.
To gain his trust, the scammers first transferred ₹4.90 lakh to his account as supposed “returns”. That convinced him the scheme was genuine, and he continued investing between March and April 2025. But when he tried to withdraw the money, they demanded a 15 per cent processing fee.
Manasvi Garg, a Securities and Exchange Board of India-registered investment advisor (Sebi RIA), CFA, and founder and CEO of Moneyvesta, says, “Scammers are impersonating stock experts on platforms like WhatsApp by creating fake trading groups. They use AI-generated voice and texts to mimic brokers or analysts and deceive users into following fraudulent advice.”
AI-Driven Spoofing: Spoofing in the stock market is nothing new, but AI has made it easier and more accessible to a wider range of miscreants.
“AI bots place large buy or sell orders to create an illusion of demand or supply, only to cancel them before execution. This misleads market participants and algorithmic trading systems into taking positions that benefit the manipulator, distorting price discovery and increasing volatility,” says Garg.
With AI, orders can be placed and withdrawn in milliseconds. This makes it tough for regulators to track and prove it in today’s high-frequency trading environment.
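To see what surveillance teams look for, the tell-tale footprint of spoofing, a trader cancelling almost everything they place, can be sketched in a few lines of Python. The data structure and the 95 per cent threshold below are purely illustrative assumptions, not any exchange’s actual detection logic:

```python
from dataclasses import dataclass

@dataclass
class OrderEvent:
    trader_id: str
    action: str   # "place" or "cancel"
    qty: int

def cancel_ratio(events, trader_id):
    """Share of placed quantity later cancelled before execution."""
    placed = sum(e.qty for e in events
                 if e.trader_id == trader_id and e.action == "place")
    cancelled = sum(e.qty for e in events
                    if e.trader_id == trader_id and e.action == "cancel")
    return cancelled / placed if placed else 0.0

def flag_spoofing(events, trader_id, threshold=0.95):
    """Flag traders who cancel nearly everything they place (toy rule)."""
    return cancel_ratio(events, trader_id) >= threshold
```

Real surveillance systems combine many such signals, such as order size, timing, and price impact, rather than relying on a single cancel ratio.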
New AI-Driven Risks
While deepfakes, spoofing, and phishing do not necessarily involve AI, some cybersecurity threats are exclusively AI-driven.
Adversarial AI, Autonomous Bots: Adversarial AI is used to trick AI systems using deceptive prompts and feeds. “Autonomous hacking bots are now constantly scanning trading systems for weaknesses, while adversarial AI is being used for manipulating algorithms or market sentiment to cause disruption,” says Sanat Mondal, head of private markets at Sanctum Wealth.
According to Gefen, the threat is no longer about brute-force attacks, but about precision. He says: “If attackers discover an unpatched vulnerability, they could steal sensitive data, like proprietary trading algorithms, or even shut down trading systems entirely. A well-crafted adversarial AI model could simulate market conditions that trigger faulty trades, flood systems with synthetic orders to create latency or instability or manipulate data inputs that feed into high-frequency trading strategies. With enough data, threat actors can zero in on the most vulnerable systems behind global markets and do so with incredible accuracy.”
Latency Arbitrage: This is a trading strategy that takes advantage of minute time delays between different stock exchanges. Traders with faster systems detect price differences and act before others, buying low on one exchange and selling high on another within microseconds to earn quick profits.
AI is now supercharging this strategy. “With AI, latency arbitrage exploiting microsecond gaps in high-frequency trading systems can become more precise and faster,” says Mondal. Smart algorithms can move even faster and make prices swing suddenly, making it difficult for regular traders to react in time.
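The underlying logic is simple: trade only when the price gap between two venues exceeds the cost of trading. A minimal sketch, with hypothetical venue names and costs rather than real market data:

```python
def arbitrage_signal(price_a, price_b, cost=0.0):
    """Return (buy_venue, sell_venue) when the cross-exchange price gap
    exceeds trading costs, else None. Venues "A"/"B" are illustrative."""
    if price_a + cost < price_b:
        return ("A", "B")   # buy cheap on A, sell high on B
    if price_b + cost < price_a:
        return ("B", "A")   # buy cheap on B, sell high on A
    return None
```

In practice the edge comes not from this check, which any trader can write, but from executing it microseconds before everyone else, which is where faster AI-driven systems change the game.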
DDoS Attacks: Another threat Mondal points out is Distributed Denial of Service (DDoS) attacks, where hackers flood trading platforms with massive traffic that slows down or crashes the system, causing chaos in the market.
“AI-led DDoS attacks can be timed to strike during peak market hours, to overwhelm the order-matching engines that process buy and sell orders on stock exchanges,” he says.
AI bots can rapidly submit and cancel thousands of fake orders to overload order books and strain matching engines. “This can confuse market participants and slow down system performance, potentially leading to trading halts or mispricing,” says Garg.
Threat To Trading Infra: AI-enabled malware poses a serious threat to brokerage platforms and payment gateways, says Garg. In 2024, C-Edge Technologies, a tech firm that supports over 300 Indian banks, was hit by a ransomware attack, which led to temporary disruptions in Unified Payments Interface (UPI), IMPS, and ATM services across India.
Are There Checks In Place?
Sebi first came out with a framework on cybersecurity for market infrastructure institutions (MIIs) in 2015. Over the years, it has shared advisories with regulated entities on how to follow best practices.
In August 2024, Sebi introduced an enhanced Cybersecurity and Cyber Resilience Framework (CSCRF) to tackle evolving cyber threats. Under the framework, stock exchanges, clearing corporations, brokers, mutual funds, depositories, merchant bankers, registrars, and other regulated entities are required to have a Security Operations Centre (SOC).
“These institutions have now upgraded their SOCs with AI-powered monitoring systems capable of flagging anomalous behaviour, such as bot-like traffic patterns, credential stuffing attempts, or deepfake phishing, much faster than traditional methods,” says Garg.
Further, NSE deployed the CyberArk Identity Security Platform in March 2025. The platform ensures that only the right people or machines get access to trading systems, and blocks password-based attacks, while keeping a constant watch for threats.
BSE has a Data Analytics and Surveillance Technology (DAST) system. “DAST has effectively identified circular trading patterns in small-cap stocks, where traders artificially inflated trading volumes to create a false demand,” says Garg.
AI Against AI: AI’s strength also lies in its ability to detect patterns that humans might miss. With the help of AI, regulators can spot manipulative trading activities within seconds.
Says Gefen: “AI can flag anomalies in real time, whether that’s irregular trading volume, unusual login behaviour, or subtle shifts in order flow. It can also automate threat hunting, adapt to new attack patterns, and free up human analysts to focus on higher-order strategy.”
In India, both the Central Depository Services (CDSL) and the National Securities Depository (NSDL) have deployed AI to detect and prevent fraud at the individual account level. In 2024, several demat accounts were temporarily suspended after AI tools flagged suspicious logins from unknown devices, foreign IP addresses, and during non-trading hours.
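The kind of rule reportedly applied here, such as unknown device, foreign IP, or off-hours login, can be sketched as a simple risk score. The rules, thresholds, and market hours below are illustrative assumptions, not CDSL’s or NSDL’s actual logic:

```python
def login_risk(known_devices, device_id, country, hour,
               home_country="IN", trading_hours=range(9, 16)):
    """Count risk signals on a login attempt (toy scoring rule)."""
    score = 0
    if device_id not in known_devices:
        score += 1   # unknown device
    if country != home_country:
        score += 1   # foreign IP address
    if hour not in trading_hours:
        score += 1   # outside market hours
    return score

def should_suspend(score, threshold=2):
    """Suspend the account when enough signals stack up."""
    return score >= threshold
```

Production systems use far richer behavioural models, but the principle is the same: no single signal triggers action; it is the combination that raises the alarm.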
MIIs are keeping pace with the developments in AI and making constant efforts to safeguard the interests of investors. Yet, there’s a need to tread with care.
rishabh.raj@outlookindia.com