Several mission-critical software systems rely on a single, seemingly insignificant open-source library. As the xz-utils incident showed, such libraries are prime targets for sophisticated adversaries with time and resources, and a successful infiltration that remains undetected can have catastrophic outcomes. A parallel scenario could unfold in the open-source AI ecosystem, where a select few of the most powerful large language models (LLMs) are reused again and again: fine-tuned for tasks ranging from casual conversation to code generation, or compressed to suit personal computers, then redistributed, sometimes by untrusted entities and individuals. In this talk, Andrew Martin and Vicente Herrera will explain methods by which an advanced adversary could leverage access to LLMs. They will demonstrate full kill chains that exploit the open-source nature of the ecosystem or gaps in MLOps infrastructure and repositories, and show how these can lead to vulnerabilities in your software. Finally, they will present both new and existing security practices that should be in place to prevent and mitigate these risks.
Principal Consultant at Control Plane, focusing on Kubernetes and AI cybersecurity for fintech organizations. Core member of the AI Readiness Group in FINOS, collaborating on defining security risks, controls, and mitigations. Lecturer at Loyola University in Seville for the Master's program...