The year 2018 was a watershed moment in the evolution of Large Language Models (LLMs), with the arrival of GPT-1 in June and BERT in October, the two foundational models that propelled AI into what it is today. The seminal 2017 paper on Transformers, "Attention Is All You Need", introduced the concepts that underpin nearly all modern LLMs. In the seven years since, the growth in this field has been staggering, arguably faster and more disruptive than in any comparable period since the Second World War. As AI capabilities accelerated, AI governance and risk management emerged as a critical priority for organizations aiming to innovate responsibly and maintain compliance.
The COVID-19 pandemic forced us into a culture of remote work and digital-first workflows and tools. Then, just as we were emerging from lockdowns, unprecedented illness-related deaths, and the pandemic itself, and a mere four years after GPT-1 and BERT, OpenAI launched ChatGPT, a tool that has had a more profound impact on daily life than almost anything else in recent memory.
For the first time, AI went from lab coats to laptops and hand-held devices. For the first time, non-tech users could benefit from AI directly and meaningfully. AI began to permeate every walk of life, often faster than organizations could adapt their AI policy frameworks or establish clear AI governance and risk management practices. This explosion brought innovation, but also AI risks around data exposure, IP leakage, and compliance violations, making responsible AI adoption more crucial than ever.
The AI Explosion
Soon after OpenAI's release of ChatGPT, Anthropic entered the scene with Claude 1 in early 2023, adding a major new player to the LLM market. Founded by former OpenAI researchers, Anthropic emphasized AI safety and alignment, principles that are foundational to responsible AI. The same year, Google announced Gemini 1.0, marking a significant shift in capability; its successor, Gemini 1.5, soon offered a massive context window of up to 1 million tokens, opening the door to entirely new ways of using AI at scale. Generative AI exploded into public consciousness.
Meanwhile, open-source LLMs like LLaMA, Mistral, and Falcon began to emerge. The parallel rise of proprietary and open-source LLMs created a new problem: Shadow AI. Developers and AI enthusiasts, eager to learn and experiment, began deploying unvetted models across teams and departments, everywhere from their laptops to CI/CD pipelines, often without approval or oversight. Suddenly, companies found themselves running multiple LLMs without any centralized governance, raising concerns about AI risk and policy violations. This highlights the growing necessity for companies to implement AI governance and risk management at both technical and executive levels.
Shadow AI
Shadow AI refers to the use of AI tools and models, often open-source or third-party, within an organization without official approval, oversight, or governance. Like shadow IT before it, Shadow AI can introduce significant risks around data privacy, compliance, and security.
At the same time, IT and security teams were already stretched thin. The world of SaaS had grown in parallel, bringing its own concerns around data protection and policy enforcement, especially PII handling and regulatory compliance, along with tools like CASB and ZTNA that IT teams had to learn and implement. It's also worth noting that GDPR had come into effect just before the release of GPT-1 and BERT in 2018, adding stricter regulations around data privacy and compliance, and another layer of complexity, to an already chaotic AI landscape.
AI becomes mainstream
As the generative AI boom accelerated, hyperscalers quickly recognized the market potential and began launching services tailored specifically for AI workloads. Until then, users had to rely on expensive GPU-backed VMs to run their AI models and workloads; there weren't any mainstream Platform-as-a-Service offerings for AI. That changed with the introduction of AWS Bedrock, Google Vertex AI, and Azure OpenAI Service, which allowed developers to access and deploy powerful models via APIs, without the burden of managing full infrastructure. Thus began the race to be the best AI platform, distinct from the existing race to be the best cloud IaaS/PaaS.
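To make this concrete, here is a minimal sketch of what API-based model access looks like, using the AWS Bedrock Converse API through boto3. The region and model ID below are illustrative placeholders, and the call assumes an AWS account with Bedrock access already enabled:

```python
import boto3

# Illustrative only: the region and model ID are placeholders for
# whatever your account actually has enabled in Bedrock.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Summarize our AI usage policy in one sentence."}]}
    ],
)

# The Converse API returns the assistant's reply as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```

A few lines of code now replace what used to require provisioning and maintaining GPU-backed infrastructure, which is exactly why adoption spread so quickly, with or without oversight.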
As mentioned earlier, this period also saw an explosion of SaaS apps. Combined with generative AI, it ushered in the era of AI integrations in SaaS tools, and every tool started sporting an "AI-powered" banner. For example, Microsoft launched Microsoft Copilot, Notion added Notion AI, and Atlassian brought AI features to popular tools like Jira and Confluence. Startups built add-ons for collaboration tools like MS Teams, Slack, and Zoom to summarize meetings and take notes. Software engineering tools joined the race too: GitHub introduced Copilot, which can complete code, implement features from a prompt, review code, and more.
But…
While all of this feels exciting, there are significant dangers hidden just beneath the surface. The AI explosion has put AI into every walk of life. While the organization itself might have subscriptions to tools like Microsoft Copilot and GitHub Copilot, employees might use AI tools of their own choosing, now embedded in the day-to-day tools they rely on. Suddenly, developers could fire up an editor, give it a reasonably detailed problem statement in plain English, and have the editor write the code for them.
Managers juggling back-to-back meetings started using note-taking and summarization plugins that join some meetings on their behalf and produce notes and summaries afterwards. Marketing teams started using AI tools for better content generation. Sales teams started using AI for better lead generation. Software teams started using AI-enabled tools like Notion AI and AI-enabled Jira and Confluence.
But with this newfound awesomeness come some critical unanswered questions:
- Is the code editor silently sending your entire source repository to a remote model for context?
- Could the AI-generated code be a verbatim copy of copyrighted or GPL-licensed material?
- Are you unknowingly exposing your organization to "slop squatting": relying on AI-suggested dependencies or versions that don't actually exist, which attackers can register to mount supply-chain attacks, or which can cause system instability? (A simple check is sketched after this list.)
- What happens if an AI meeting bot captures your company’s IP or strategic roadmap, and it’s accidentally logged, stored, or even leaked?
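To illustrate the slop-squatting question above, here is a minimal, deliberately simplistic sketch that checks whether the packages named in a Python requirements.txt actually exist on PyPI, using PyPI's public JSON metadata endpoint. The file path and parsing rules are assumptions for illustration, not a complete dependency scanner:

```python
import sys
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # a 404 means nobody has registered this name (yet)

def main(path: str) -> None:
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Naive parsing: keep only the bare package name, dropping extras and version pins.
            name = line.split("[")[0].split("==")[0].split(">=")[0].strip()
            if name and not package_exists(name):
                print(f"WARNING: '{name}' not found on PyPI -- possibly a hallucinated dependency")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
```

A check like this doesn't replace a proper software supply-chain policy, but it shows how cheaply the most obvious hallucinated-dependency risk can be caught in a CI pipeline.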
The intent behind these tools is usually good; productivity is the primary goal. But the risks aren't just theoretical. As users, we typically lack the insight to answer the questions above, and there have already been real-world incidents involving leaked sensitive data, inadvertent license violations, and model hallucinations with serious downstream consequences. These examples underline the urgent need for AI governance and risk management.
AI is here to stay. And so, for organizations looking to embrace its power, the key question becomes: How can we innovate safely, without compromising security, IP, or trust?
AI Governance and Risk Management
Governance, Risk Management and Compliance (GRC) has been a time-tested approach to these kinds of problems for decades. While the AI boom captured the world's imagination, standards bodies and researchers were quietly working on the other side of the equation: trust, safety, and accountability. Organizations like ISO and NIST, along with leading academic and industry experts, recognized the serious risks posed by uncontrolled AI adoption and developed GRC-based approaches to tackle the problem and promote responsible AI governance.
These standards bodies introduced standards and frameworks that provide clear guardrails, ensuring that innovation doesn’t come at the expense of privacy, compliance, or organizational integrity.
The most prominent standards/frameworks are:
- ISO/IEC 42001 — The first international standard for AI management systems, offering guidance on how organizations can implement AI governance and risk management responsibly across the lifecycle: from data collection to deployment and monitoring.
- NIST AI Risk Management Framework (AI RMF) — A voluntary framework developed to help organizations identify, assess, and manage AI risks. It promotes trustworthy and responsible AI through principles like transparency, fairness, and accountability.
By diligently adopting frameworks like these (either or both, based on need), organizations can shift from worry, fear, and reactive fire-fighting to proactive AI governance and compliance.
As an AI-powered GRC platform, CISOGenie offers the cutting-edge solutions necessary to turn Shadow AI into an opportunity for secure, compliant innovation. Don't let uncontrolled AI adoption threaten your organization. Visit https://www.cisogenie.com to learn more and schedule a demo to see our platform in action.