Secure AI Summit 2024 (Powered by Cloud Native) has ended
Integration and Challenges
Tuesday, June 25
 

9:15am PDT

Invisible Infiltration of AI Supply Chains: Protective Measures from Adversarial Actors - Torin van den Bulk, ControlPlane
Tuesday June 25, 2024 9:15am - 9:50am PDT
Malicious human and AI actors can infiltrate AI supply chains, compromising the integrity and reliability of the resultant AI systems through training data tampering, software or model backdoors, model interference, or new runtime attacks against the model or its hosting infrastructure. This talk examines the importance of securing the data, models, and pipelines involved at each step of an AI supply chain. We evaluate the efficacy of emerging industry best practices and risk assessment strategies gathered from the FINOS AI Readiness Working Group, the TAG Security Kubeflow joint assessment, and case studies with air-gapped and cloud-based AI/ML deployments for regulated and privacy-protecting workloads. In this talk, we:
- threat model an AI system, from supply chain, through training and tuning, to production inference and integration
- implement quantified security controls and monitoring mechanisms for an AI enterprise architecture
- mitigate the risks associated with adversarial attacks on AI systems
- address compliance and regulation requirements with actionable remediations
- look to accelerate AI adoption while balancing minimum viable security measures
Speakers

Torin van den Bulk

Cloud Native Security Engineer, ControlPlane
Torin is a Cloud Native Security Engineer at ControlPlane, where he specializes in threat-driven designs within cloud native environments. He holds a Bachelor of Science in Cybersecurity and a Master's degree in Computer and Information Technology from Purdue University, where he...
Room 447

10:45am PDT

⚡ Lightning Talk: Navigating the Intersection: AI’s Role in Shaping the Secure Open Source Software Ecosystem - Harry Toor, Open Source Security Foundation (OpenSSF)
Tuesday June 25, 2024 10:45am - 10:55am PDT
The intersection of AI, cybersecurity, and open-source software (OSS) is pivotal for the growth and development of companies and society. We discuss the four apparent corners of this intersection to help inform a growing ecosystem:
(1) OSS underpins AI systems and routinely faces security risks. Tools like Scorecard help consumers understand risks in the supply chain of OSS used in AI systems.
(2) Open-sourcing AI components accelerates OSS growth, requiring secure practices. Tools like sigstore can help secure these newly released open-source AI components entering the OSS supply chain.
(3) AI also revolutionizes OSS security by automating vulnerability management and enhancing development lifecycles.
(4) Lastly, AI's role is evolving; it now contributes to OSS, influencing both upstream creation and downstream use, marking a significant shift in open-source development.
These four corners, and the challenges within them, are crucial in shaping the future of technology.
Speakers

Harry Toor

Chief of Staff, OpenSSF
Harry is the Chief of Staff for the OpenSSF and comes to the Linux Foundation with over a decade of experience helping clients understand how they can harness technology to innovate, adapt, and evolve their enterprises. He has worked across industries including the Public Sector...
Room 447

11:15am PDT

Future Open Source LLM Kill Chains - Vicente Herrera, ControlPlane
Tuesday June 25, 2024 11:15am - 11:50am PDT
Several mission-critical software systems rely on a single, seemingly insignificant open-source library. As with xz utils, these are prime targets for sophisticated adversaries with time and resources, leading to catastrophic outcomes if a successful infiltration remains undetected. A parallel scenario can unfold for the open-source AI ecosystem in the future, where a select few of the most powerful large language models (LLM) are repeatedly utilised, fine-tuned for tasks ranging from casual conversation to code generation, or compressed to suit personal computers. Then, they are redistributed again, sometimes by untrusted entities and individuals. In this talk, Andrew Martin and Vicente Herrera will explain methods by which an advanced adversary could leverage access to LLMs. They will show full kill chains based on exploiting the open-source nature of the ecosystem or finding gaps in the MLOps infrastructure and repositories, which can lead to vulnerabilities in your software. Finally, they will show both new and existing security practices that should be in place to prevent and mitigate these risks.
Speakers

Vicente Herrera

Principal Consultant, ControlPlane
Principal Consultant at ControlPlane, specializing in Kubernetes and cloud cybersecurity for fintech organizations. Maintainer of the FINOS Common Cloud Controls project, defining a vendor-independent cloud security framework. Lecturer at Loyola University in Seville for the Master on Data...
Room 447

2:25pm PDT

Secure by Design: Strategies for LLM Adoption in Cloud-Native Environments - Patryk Bąk & Marcin Wojtas, BlueSoft
Tuesday June 25, 2024 2:25pm - 3:00pm PDT
This presentation will explore the common journey of software development companies in securely adopting AI technology within a cloud-native environment. It will unpack the challenges that development and platform teams face as they integrate AI into their systems. After initial resistance to using external LLM APIs, confidence grew and companies began building solutions with non-public data and open-source models. However, questions persist: is my shiny new AI/LLM app secure and safe? This session will discuss practical approaches to challenges such as data privacy in Retrieval-Augmented Generation architectures, the complexities of AI-agent architectures where actions are performed across integrated systems, and the general security hardening of AI/LLM applications. We will share insights from our practice, which began by defining a threat model for AI-based systems aligned with the OWASP Top 10 for LLM Applications, and progressed to incorporating solutions into our cloud-native platform. Both offensive and defensive approaches were implemented, including the integration of tools like garak (LLM vulnerability scanner) and NVIDIA NeMo Guardrails into our cloud-native stack.
Speakers

Patryk Bąk

Solutions Architect, BlueSoft
Patryk has over six years of experience in IT, with a diverse background encompassing roles as a Software Engineer, DevOps specialist, and team leader. He is a co-founder of Platform Engineers Poland and serves as a community leader. His current areas of focus include Platform Engineering...

Marcin Wojtas

Senior DevOps Engineer, BlueSoft
Marcin Wojtas has over seven years of experience in IT as a DevOps engineer. He has built experience through various projects using a wide range of technologies, particularly in developing large-scale platforms. His current areas of focus include LLMOps, Software Supply Chain Security...
Room 447
 