SlopSquatting: A New Dimension to Supply-Chain Attacks

The supply chain has long been a prominent threat vector in cyber-attacks. From the infamous SolarWinds breach to the Okta attack of 2023, cybercriminals have used the supply-chain route to gain access to their targets' systems and networks. The advent of AI agents that can create entire software solutions has opened a new approach to supply-chain attacks, introducing a dangerous variant called SlopSquatting.

Copilots and Autopilots (a.k.a. Agents)

Ever since GitHub introduced its “Copilot”, software engineers and vendors across the AI and software-tools verticals have been building increasingly sophisticated AI tooling. Whether we like it or not, we now live in an era where there are agents that:

  • Write code from requirements
  • Write unit tests from code written by humans or agents
  • Write system tests from requirements
  • Perform code reviews and more

All major LLM vendors (OpenAI, Anthropic, Google, etc.) and software-tool makers (Cursor, JetBrains, etc.) now build deep AI integration into their products.

There are also SaaS vendors that use common LLMs to offer AI agents that can create an MVP or v1 of a product from our requirements or screenshots (e.g. Replit, Lovable).

Chinks in the Armour

But here is the catch: all LLMs exhibit an intrinsic behaviour called “hallucination”. Hallucination, in the context of LLMs, refers to the generation of non-existent or fabricated content that is contrary to reality. In the world of software engineering this can turn out to be fatal, in a manner of speaking, and it is the direct cause of SlopSquatting: the use or inclusion of non-existent dependencies (direct or transitive) by an LLM, due to hallucination, resulting in malicious code being included as a dependency.

What is SlopSquatting?

  • Suppose there exists an agentic AI tool that reads a list of requirements and builds software. Let’s call it Achilles.
  • We give Achilles detailed requirements to build a “Budget and Expense Management” tool, to be deployed as a multi-tenant SaaS solution and implemented in Python.
  • Achilles builds the software and includes a dependency named piecharts (there is no such package on PyPI as of this writing).
  • A malicious actor scans GitHub for requirements.txt files (or package.json, build.gradle, pom.xml, etc.).
  • For each file found, the actor checks for dependencies that do not exist in the registry. In our case, that check flags piecharts.
  • The actor quickly uploads a malicious library named piecharts (for example, a data-exfiltration tool) to PyPI (or npm, or Maven Central).
  • On the next CI/CD run or Docker image build, when piecharts is installed, it is this malicious library that gets pulled in.

That, in outline, is how a SlopSquatting attack works.
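The pivotal step, for attacker and defender alike, is checking whether a name actually exists on the registry. Below is a minimal, illustrative Python sketch (written for this post, not taken from any real tool; the file name and parsing are assumptions) that reads a requirements.txt and asks PyPI’s public JSON API, which returns HTTP 404 for unknown packages, to flag dependencies that nobody has published:

```python
import re
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows the package, False on a 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # any other failure (rate limit, outage): fail loudly

def names_from_requirements(path: str) -> list[str]:
    """Crude parse: keep only the bare project name from each line."""
    names = []
    with open(path) as fh:
        for raw in fh:
            line = raw.split("#")[0].strip()  # drop comments and blanks
            if not line:
                continue
            # "piecharts==1.0.2" or "piecharts[extra]>=1.0" -> "piecharts"
            match = re.match(r"[A-Za-z0-9._-]+", line)
            if match:
                names.append(match.group(0))
    return names

if __name__ == "__main__":
    for name in names_from_requirements("requirements.txt"):
        if not package_exists_on_pypi(name):
            print(f"SUSPECT: {name!r} is not on PyPI (possible hallucinated dependency)")
```

The same one-line lookup, pointed at the npm or Maven Central registries instead, covers the other ecosystems mentioned above.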

Except to the trained eye, this error is hard to catch before the code is pushed to the version control system and runs through the CI/CD pipeline. This makes SlopSquatting one of the most subtle yet devastating styles of supply-chain attack today.
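Here is why it is so easy to miss: a hypothetical agent-generated requirements.txt (all entries invented for this illustration) in which the fabricated name sits comfortably between two real packages.

```text
flask==3.0.3        # real, widely used
sqlalchemy==2.0.30  # real, widely used
piecharts==1.2.0    # hallucinated: no such package on PyPI
```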

Cautious Optimism — The Simplest Step Forward

So, does this mean we should stop using AI-based tools and get back to doing it all manually?

Not really. The practices described in the next three sections let us keep the benefits of AI agents while staying in control of what they ship.

Human-in-the-Loop Practices for Safer AI Use

Use AI as a co-pilot and carefully review every edit it makes. This is very hard to do when we use AI on full auto-pilot, where an agent builds everything from scratch: reviewing its output has the same problem as reviewing a PR with a few hundred non-trivial lines added.

Dependency Whitelisting and Validation Tactics

As the erstwhile McAfee motto says, “Safe Never Sleeps”: the cybersecurity industry is alert and vigilant as always, and it will come up with tooling that detects and handles these attacks quickly. In the interim, a stop-gap solution is to maintain an authorised list of software dependencies, with versions, for the languages commonly used in your organisation. If a dependency-management file (requirements.txt, package.json, etc.) includes a dependency that is not on that list, and the software is agent-written, validate that the dependency, and the specified version, actually exist.
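As a concrete illustration of that gate, here is a minimal sketch, assuming the authorised list lives in a simple approved-dependencies.txt of name==version pins (both file names and the format are assumptions for this post, not an established standard). Wired into CI, a non-zero exit fails the build before anything gets installed:

```python
import sys

def load_pins(path: str) -> dict[str, str]:
    """Read 'name==version' lines into a {name: version} map."""
    pins = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.split("#")[0].strip()  # drop comments and blanks
            if "==" in line:
                name, version = line.split("==", 1)
                pins[name.strip().lower()] = version.strip()
    return pins

def audit(requirements_path: str, approved_path: str) -> list[str]:
    """Return violations: packages not on the list, or version drift."""
    approved = load_pins(approved_path)
    problems = []
    for name, version in load_pins(requirements_path).items():
        if name not in approved:
            problems.append(f"{name}: not on the authorised list")
        elif approved[name] != version:
            problems.append(f"{name}: pinned to {version}, approved is {approved[name]}")
    return problems

if __name__ == "__main__":
    violations = audit("requirements.txt", "approved-dependencies.txt")
    for v in violations:
        print("BLOCKED:", v)
    sys.exit(1 if violations else 0)  # non-zero exit fails the CI job
```

Any name flagged as unknown should then go through a registry existence check, like the one sketched earlier, before a human approves adding it to the list.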

While these steps might seem laborious and time-consuming, the effort is worthwhile, and the time spent is already more than offset by the agent having created the software in hours, if not minutes, instead of weeks.

Governance and Compliance in the Age of Generative AI

Finally, to have complete visibility and governance over the use of LLMs and generative AI, and to be prepared to handle and mitigate the risks, adhere strictly to strong compliance frameworks. If you use AI (and specifically generative AI) extensively, ISO 42001 is your friend: it enforces transparency and explainability, along with “human-in-the-loop” requirements and strong governance.

Need Help? Let’s Talk.

Talk to us at www.cisogenie.com about how we can help you implement compliance frameworks and manage risks with relative ease.
