The Rise and Risks of Shadow AI

Shadow AI, the internal use of AI tools and services without the express knowledge of enterprise oversight teams (e.g., IT, legal, cybersecurity, compliance, and privacy, to name a few), is becoming a problem!


Workers are flocking to third-party AI services (e.g., websites like ChatGPT), while savvy technologists are quietly importing models and building internal AI systems (it really is not that difficult) without telling the enterprise operations teams. Both behaviors are on the rise, and many organizations are blind to the risks.
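To illustrate how low that barrier is, here is a minimal sketch, assuming only Python, the open-source Hugging Face transformers library, and network access to its public model hub (the "gpt2" model name is just a placeholder):

    # A few lines are enough to pull a public model onto corporate hardware,
    # with no procurement, no security review, and no IT ticket.
    from transformers import pipeline

    # Downloads the model weights from the public hub on the first run.
    # "gpt2" is a placeholder; any hub-hosted model works the same way.
    generator = pipeline("text-generation", model="gpt2")

    print(generator("Summarize our customer escalations:", max_new_tokens=50))

Anything an employee types into a system like this, or into a third-party AI website, sits outside the normal enterprise data-governance path.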

According to a recent Cyberhaven report:

  • AI is Accelerating: Corporate data input into AI tools surged by 485%.
  • Increased Data Risks: Sensitive data submissions jumped 156%, led by customer support data.
  • Threats are Hidden: The majority of AI use occurs on personal accounts that lack enterprise safeguards.
  • Security Vulnerabilities: AI tool use raises the risk of data breaches and exposure.


The risks are real, and the problem is growing.

Now is the time to get ahead of this problem:

1. Establish policies for AI use and for the development/deployment of internal AI systems
2. Define and communicate an AI ethics posture
3. Incorporate cybersecurity, privacy, and compliance teams early into such programs (see the visibility sketch after this list)
4. Drive awareness and compliance by including these AI topics in employee and vendor training
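As a hypothetical starting point for item 3 above, the sketch below shows one way a security team could surface shadow AI use from exported web-proxy logs. The CSV column names (user, dest_host) and the domain list are illustrative assumptions, not any vendor's schema:

    import csv

    # Illustrative list of well-known AI service domains; a real program
    # would maintain this from threat-intelligence or CASB feeds.
    AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai",
                  "gemini.google.com", "huggingface.co"}

    def flag_ai_traffic(proxy_log_csv):
        """Return proxy-log rows whose destination is a known AI service."""
        hits = []
        with open(proxy_log_csv, newline="") as f:
            # Assumed export columns: user, dest_host (hypothetical format).
            for row in csv.DictReader(f):
                if row.get("dest_host", "").lower() in AI_DOMAINS:
                    hits.append(row)
        return hits

    if __name__ == "__main__":
        for hit in flag_ai_traffic("proxy_export.csv"):
            print(hit["user"], "->", hit["dest_host"])

Even a crude report like this gives oversight teams a factual basis for the policy and training conversations above.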




Overall, the goal is to build awareness and collaboration. Leveraging AI can bring tremendous benefits, but it should be done in a controlled way that aligns with enterprise oversight requirements.




“Do what is great, while it is small.” A little effort now can help avoid serious mishaps in the future!

*** This is a Security Bloggers Network syndicated blog from Information Security Strategy authored by Matthew Rosenquist. Read the original post at: https://infosecstrategy.blogspot.com/2024/05/the-rise-and-risks-of-shadow-ai.html
