Secure AI Summit 2024 (Powered by Cloud Native)
Tuesday June 25, 2024 11:15am - 11:50am PDT
Many mission-critical software systems rely on a single, seemingly insignificant open-source library. As the xz utils incident showed, such libraries are prime targets for sophisticated adversaries with time and resources, and a successful infiltration that goes undetected can have catastrophic consequences. A parallel scenario could unfold in the open-source AI ecosystem, where a select few of the most powerful large language models (LLMs) are reused again and again: fine-tuned for tasks ranging from casual conversation to code generation, or compressed to run on personal computers. These models are then redistributed, sometimes by untrusted entities and individuals. In this talk, Andrew Martin and Vicente Herrera will explain methods by which an advanced adversary could leverage access to LLMs. They will show full kill chains based on exploiting the open-source nature of the ecosystem or finding gaps in MLOps infrastructure and repositories, both of which can lead to vulnerabilities in your software. Finally, they will present the new and existing security practices that should be in place to prevent and mitigate these risks.
Speakers

Vicente Herrera

Principal Consultant, Control Plane
Principal Consultant at Control Plane, specialized in Kubernetes and cloud cybersecurity for fintech organizations. Maintainer of the FINOS Common Cloud Controls project, which defines a vendor-independent cloud security framework. Lecturer at Loyola University in Seville for the Master on Data...
Room 447