Generative AI in Retail: Opportunity Meets Risk
When retail executives proclaim “AI is the future,” they often envision seamless chatbots, hyper‑personalized shopping experiences, and efficient inventory forecasting. But as generative AI tools surge into the industry’s core, a more concerning narrative is emerging: the security cost. A new report from Netskope warns that while adoption is near ubiquitous in retail, the risks are scaling just as fast.
From Wild West to Enterprise Control
Retailers have been among the most aggressive adopters of generative AI. According to Netskope, 95% of retail organizations now use generative AI in some capacity, up from 73% just a year ago. What’s notable is not just the speed, but the shift in how they use it.
Until recently, staff often experimented with AI tools through personal accounts, so-called “shadow AI.” That has declined: use of personal accounts dropped from 74% to 36%, while use of company-approved tools more than doubled, from 21% to 52%. The shift signals a maturing posture: retailers are wresting back control of unsanctioned use.
Still, the transition is perilous.
The Hidden Costs: Exposure, Misconfiguration, Leakage
The strength of generative AI—its capacity to digest and respond to data—is also its Achilles’ heel.
One of the clearest risks is data leakage. The report finds that the most frequent type of policy violation in AI apps is exposure of a company’s own source code (47%), followed by regulated, confidential customer or business data (39%). That’s not a trivial risk in retail, where proprietary algorithms, pricing models, customer profiles, and supply chain details are gold mines.
Retailers are reacting. Some are outright banning tools judged “too risky.” The most commonly blocked app is ZeroGPT, banned by 47% of organizations over concerns that it stores user content and routes it to third parties.
But bans are a blunt instrument. The real battleground is secure integration.
Many retailers now embed generative AI deeper into their operations. Some 63% connect directly to OpenAI’s API, wiring its capabilities into backend systems and workflows. That amplifies risk: a misconfiguration, a lax permissions setting, or a compromised credential could open doors to critical systems.
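For a sense of what such an integration looks like in practice, here is a minimal sketch using OpenAI’s official Python client; the function and model name are illustrative, not drawn from the report. The one non-negotiable detail is credential hygiene: the key comes from the environment, because a key hardcoded into source code is precisely the kind of exposure flagged above.

    import os
    from openai import OpenAI

    # Read the API key from the environment rather than hardcoding it;
    # a key committed to a repository is exactly the kind of exposed
    # secret the report warns about.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    def summarize_ticket(ticket_text: str) -> str:
        """Summarize a customer-service ticket via the chat API (illustrative)."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": "Summarize this support ticket in two sentences."},
                {"role": "user", "content": ticket_text},
            ],
        )
        return response.choices[0].message.content

Even this toy example shows where the new attack surface sits: whatever permissions this service account holds, a leaked key holds too.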
Worse, cloud security hygiene is already a struggle. Attackers are blending in, abusing trusted names and services to deliver malware: the report notes that Microsoft OneDrive was implicated in 11% of retail malware incidents each month, and GitHub in 9.7%. Meanwhile, personal apps and social media remain vectors: 76% of policy violations involving regulated data stem from files uploaded to unapproved personal services.
Vendor Moves: Enterprise‑Grade over Ad Hoc
To counter these risks, many retailers are migrating to enterprise-grade AI tools, often offered by big cloud providers. These platforms promise more control: private hosting, stricter governance, and custom tool development.
In the retail sector, OpenAI via Azure and Amazon Bedrock lead in adoption, each used by about 16% of retailers. But these are not panaceas: misconfigurations or insufficient policy enforcement can still undercut the safeguards.
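The practical difference is where the traffic goes. Below is a minimal, hypothetical sketch using the AzureOpenAI client from the same Python package; the endpoint and deployment names are invented for illustration. Requests terminate at the retailer’s own Azure resource, where tenant-level logging, network rules, and access policies apply, instead of at a shared public endpoint.

    import os
    from openai import AzureOpenAI

    # Endpoint and deployment names are hypothetical. The point is that
    # requests go to the retailer's own Azure resource, not a shared
    # public API surface.
    client = AzureOpenAI(
        azure_endpoint="https://retail-example.openai.azure.com",
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="store-assistant",  # the private deployment name, not a public model
        messages=[{"role": "user", "content": "Draft a returns-policy FAQ for winter boots."}],
    )
    print(response.choices[0].message.content)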
What Retailers Must Do: Governance, Visibility, Controls
If generative AI is becoming core infrastructure, it needs the same rigor as financial systems or supply chains. The report’s prescription is blunt but essential: gain full visibility into web traffic, block high-risk applications, enforce strict data protection policies, and control what information can be fed into AI models.
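That last control is the easiest to prototype. The sketch below is entirely illustrative: it screens outbound prompts for obvious secrets before they leave for an external model. A production deployment would rely on a real DLP engine rather than a handful of regexes, but the gatekeeping pattern is the same.

    import re

    # Illustrative patterns only; real data-loss prevention uses far richer
    # detection (validated checksums, classifiers, document fingerprinting).
    BLOCK_PATTERNS = {
        "API key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
        "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "private key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    }

    def screen_prompt(prompt: str) -> str:
        """Refuse to forward any prompt that appears to contain secrets."""
        for label, pattern in BLOCK_PATTERNS.items():
            if pattern.search(prompt):
                raise ValueError(f"Prompt blocked: possible {label} detected")
        return prompt

Routing every prompt through a gate like this, before it reaches any of the integrations sketched above, is one concrete way to enforce the report’s final recommendation.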
Retailers can’t afford to treat AI as a toy. Without governance baked into the deployment, the next AI innovation could turn into the next headline breach.