
Shadow AI: The hidden security breach CISOs often miss

by Delarno


Security leaders and CISOs are discovering that a growing swarm of shadow AI apps has been compromising their networks, in some cases for over a year.

They’re not the tradecraft of typical attackers. They are the work of otherwise trustworthy employees creating AI apps without IT and security department oversight or approval — apps designed to do everything from automating reports that were once created manually to using generative AI (genAI) to streamline marketing automation, visualization and advanced data analysis. Fed with the company’s proprietary data, shadow AI apps end up training public models on private information.

What’s shadow AI, and why is it growing?

The wide assortment of AI apps and tools created in this way rarely, if ever, has guardrails in place. Shadow AI introduces significant risks, including accidental data breaches, compliance violations and reputational damage.

Shadow AI is a digital steroid that lets those using it get more detailed work done in less time, often beating deadlines. Entire departments have shadow AI apps they use to squeeze more productivity into fewer hours. “I see this every week,” Vineet Arora, CTO at WinWire, recently told VentureBeat. “Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore.”

“We see 50 new AI apps a day, and we’ve already cataloged over 12,000,” said Itamar Golan, CEO and cofounder of Prompt Security, during a recent interview with VentureBeat. “Around 40% of these default to training on any data you feed them, meaning your intellectual property can become part of their models.”

The majority of employees creating shadow AI apps aren’t acting maliciously or trying to harm a company. They’re grappling with growing amounts of increasingly complex work, chronic time shortages, and tighter deadlines.

As Golan puts it, “It’s like doping in the Tour de France. People want an edge without realizing the long-term consequences.”

A virtual tsunami no one saw coming

“You can’t stop a tsunami, but you can build a boat,” Golan told VentureBeat. “Pretending AI doesn’t exist doesn’t protect you — it leaves you blindsided.” For example, Golan says, one security head of a New York financial firm believed fewer than 10 AI tools were in use. A 10-day audit uncovered 65 unauthorized solutions, most with no formal licensing.

Arora agreed, saying, “The data confirms that once employees have sanctioned AI pathways and clear policies, they no longer feel compelled to use random tools in stealth. That reduces both risk and friction.” Arora and Golan emphasized to VentureBeat how quickly the number of shadow AI apps they are discovering in their customers’ companies is increasing.

Further supporting their claims are the results of a recent Software AG survey, which found that 75% of knowledge workers already use AI tools and 46% say they won’t give them up even if prohibited by their employer. The majority of shadow AI apps rely on OpenAI’s ChatGPT and Google Gemini.

Since 2023, ChatGPT has allowed users to create customized bots in minutes. VentureBeat learned that a typical manager responsible for sales, market, and pricing forecasting has, on average, 22 different customized bots in ChatGPT today.

It’s understandable how shadow AI is proliferating when 73.8% of ChatGPT accounts are non-corporate ones that lack the security and privacy controls of more secured implementations. The percentage is even higher for Gemini (94.4%). In a Salesforce survey, more than half (55%) of global employees surveyed admitted to using unapproved AI tools at work.

“It’s not a single leap you can patch,” Golan explains. “It’s an ever-growing wave of features launched outside IT’s oversight.” The thousands of embedded AI features across mainstream SaaS products are being modified to train on, store and leak corporate data without anyone in IT or security knowing.

Shadow AI is slowly dismantling businesses’ security perimeters. Many organizations aren’t noticing because they lack visibility into the groundswell of shadow AI use inside them.

Why shadow AI is so dangerous

“If you paste source code or financial data, it effectively lives inside that model,” Golan warned. Arora and Golan find that companies whose employees default to shadow AI apps for a wide variety of complex tasks are, in effect, training public models with their proprietary data.

Once proprietary data gets into a public-domain model, more significant challenges begin for any organization. It’s especially challenging for publicly held organizations that often have significant compliance and regulatory requirements. Golan pointed to the coming EU AI Act, which “could dwarf even the GDPR in fines,” and warns that regulated sectors in the U.S. risk penalties if private data flows into unapproved AI tools.

There’s also the risk of runtime vulnerabilities and prompt injection attacks that traditional endpoint security and data loss prevention (DLP) systems and platforms aren’t designed to detect and stop.

Illuminating shadow AI: Arora’s blueprint for holistic oversight and secure innovation

Arora is discovering entire business units that are using AI-driven SaaS tools under the radar. With independent budget authority for multiple line-of-business teams, business units are deploying AI quickly and often without security sign-off.

“Suddenly, you have dozens of little-known AI apps processing corporate data without a single compliance or risk review,” Arora told VentureBeat.

Key insights from Arora’s blueprint include the following:

  • Shadow AI thrives because existing IT and security frameworks aren’t designed to detect it. “Most of the traditional IT management tools and processes lack comprehensive visibility and control over AI apps,” Arora observes, leaving organizations without the insight into compliance and governance needed to keep the business secure.
  • The goal: enabling innovation without losing control. Arora is quick to point out that employees aren’t intentionally malicious. They’re just facing chronic time shortages, growing workloads and tighter deadlines. AI is proving to be an exceptional catalyst for innovation and shouldn’t be banned outright. “It’s crucial for organizations to define strategies with robust security while enabling employees to use AI technologies effectively,” Arora explains. “Total bans often drive AI use underground, which only magnifies the risks.”
  • Making the case for centralized AI governance. “Centralized AI governance, like other IT governance practices, is key to managing the sprawl of shadow AI apps,” he recommends. He’s seen business units adopt AI-driven SaaS tools “without a single compliance or risk review.” Unifying oversight helps prevent unknown apps from quietly leaking sensitive data.
  • Continuously fine-tune detecting, monitoring and managing shadow AI. The biggest challenge is uncovering hidden apps. Arora adds that detecting them involves network traffic monitoring, data flow analysis, software asset management, requisitions, and even manual audits.
  • Balancing flexibility and security continually. No one wants to stifle innovation. “Providing safe AI options ensures people aren’t tempted to sneak around. You can’t kill AI adoption, but you can channel it securely,” Arora notes.

Start pursuing a seven-part strategy for shadow AI governance

Arora and Golan advise their customers who discover shadow AI apps proliferating across their networks and workforces to follow these seven guidelines for shadow AI governance:

Conduct a formal shadow AI audit. Establish a beginning baseline that’s based on a comprehensive AI audit. Use proxy analysis, network monitoring, and inventories to root out unauthorized AI usage.
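As a concrete starting point for the audit step, proxy or DNS logs can be matched against known AI-service domains to build that baseline inventory. The sketch below assumes a CSV proxy-log export with `user` and `host` columns and a hand-picked domain list; both are illustrative, since a real audit would use your proxy’s actual export format and a maintained feed of AI-tool domains.

```python
import csv
from collections import Counter

# Hypothetical list of genAI service domains; a real audit would use a
# maintained, regularly updated feed.
AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "gemini.google.com", "claude.ai",
}

def audit_proxy_log(path):
    """Count requests to known AI services per (user, host) pair.

    Assumes a CSV log with 'user' and 'host' columns; adjust the field
    names to match your proxy's export format.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match exact domains and their subdomains.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits
```

Running this over a few weeks of logs and sorting by request count gives the baseline the audit calls for, and flags the heaviest unsanctioned users first.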

Create an Office of Responsible AI. Centralize policy-making, vendor reviews and risk assessments across IT, security, legal and compliance. Arora has seen this approach work with his customers. He notes that creating this office also needs to include strong AI governance frameworks and training of employees on potential data leaks. A pre-approved AI catalog and strong data governance will ensure employees work with secure, sanctioned solutions.

Deploy AI-aware security controls. Traditional tools miss text-based exploits. Adopt AI-focused DLP, real-time monitoring, and automation that flags suspicious prompts.
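Where traditional DLP keys on files and endpoints, an AI-aware control inspects the prompt text itself before it leaves the network. A minimal sketch, with purely illustrative regex patterns — real AI-focused DLP would combine far richer detectors (entropy checks, ML classifiers) with real-time monitoring:

```python
import re

# Illustrative patterns only: a token-style secret, a U.S. SSN, and a
# hypothetical internal hostname convention.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def flag_prompt(prompt):
    """Return the names of sensitive patterns found in an outbound prompt.

    An AI-aware gateway would call this before forwarding a prompt to an
    external model, then block or redact on any match.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
```

A gateway wired this way catches the “paste source code or financial data” scenario Golan warns about at the moment of exfiltration, rather than after the model has already trained on it.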

Set up centralized AI inventory and catalog. A vetted list of approved AI tools reduces the lure of ad-hoc services, and when IT and security take the initiative to update the list frequently, the motivation to create shadow AI apps is lessened. The key to this approach is staying alert and being responsive to users’ needs for secure advanced AI tools.
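The catalog can start as a simple lookup that grants pre-approved tools immediately and routes everything else to review, keeping the sanctioned path faster than sneaking around it. The entries and routing below are hypothetical:

```python
# Hypothetical pre-approved catalog; a real one would live in an
# asset-management or GRC system, not a dict.
APPROVED_AI_CATALOG = {
    "chatgpt-enterprise": {"data_training": False, "owner": "IT"},
    "m365-copilot": {"data_training": False, "owner": "IT"},
}

def request_tool(name):
    """Grant approved tools immediately; route unknown tools to review.

    Unknown requests go to the Office of Responsible AI rather than being
    silently denied, which keeps users inside the sanctioned process.
    """
    entry = APPROVED_AI_CATALOG.get(name.lower())
    if entry:
        return {"status": "approved", **entry}
    return {"status": "pending_review",
            "next_step": "submit vendor risk assessment"}
```

The design point is the fallback: a fast, visible review path for unlisted tools is what removes the incentive to go around IT in the first place.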

Mandate employee training that provides examples of why shadow AI is harmful to any business. “Policy is worthless if employees don’t understand it,” Arora says. Educate staff on safe AI use and potential data mishandling risks.

Integrate with governance, risk and compliance (GRC) and risk management. Arora and Golan emphasize that AI oversight must link to governance, risk and compliance processes crucial for regulated sectors.

Realize that blanket bans fail, and find new ways to deliver legitimate AI apps fast. Golan is quick to point out that blanket bans never work and ironically lead to even greater shadow AI app creation and use. Arora advises his customers to provide enterprise-safe AI options (e.g. Microsoft 365 Copilot, ChatGPT Enterprise) with clear guidelines for responsible use.

Unlocking AI’s benefits securely

By combining a centralized AI governance strategy, user training and proactive monitoring, organizations can harness genAI’s potential without sacrificing compliance or security. Arora’s final takeaway is this: “A single central management solution, backed by consistent policies, is crucial. You’ll empower innovation while safeguarding corporate data — and that’s the best of both worlds.” Shadow AI is here to stay. Rather than block it outright, forward-thinking leaders focus on enabling secure productivity so employees can leverage AI’s transformative power on their terms.



