Secure AI Summit 2024 (Powered by Cloud Native)
Tuesday, June 25
 

8:00am PDT

Registration + Badge Pick Up
Tuesday June 25, 2024 8:00am - 5:00pm PDT
Ballroom Lobby - 5th Floor Seattle Convention Center, 900 Pine St, Seattle, WA 98101, USA

8:30am PDT

Solutions Showcase
Tuesday June 25, 2024 8:30am - 4:30pm PDT
Pine Lobby - 4th Floor Seattle Convention Center, 900 Pine St, Seattle, WA 98101, USA

9:00am PDT

Opening Remarks - Shane Lawrence and Annie Talvasto, Event Chairs
Tuesday June 25, 2024 9:00am - 9:10am PDT
Speakers

Shane Lawrence

Sr Staff Developer, Shopify
Shane is a Senior Staff Infrastructure Security Engineer at Shopify, where he's working on a multi-tenant platform that allows developers to securely build scalable apps and services for crafters, entrepreneurs, and businesses of all sizes.
Room 447

9:15am PDT

Invisible Infiltration of AI Supply Chains: Protective Measures from Adversarial Actors - Torin van den Bulk, ControlPlane
Tuesday June 25, 2024 9:15am - 9:50am PDT
Malicious human and AI actors can infiltrate AI supply chains, compromising the integrity and reliability of the resultant AI systems through training data tampering, software or model backdoors, model interference, or new runtime attacks against the model or its hosting infrastructure. This talk examines the importance of securing the data, models, and pipelines involved at each step of an AI supply chain. We evaluate the efficacy of emerging industry best practices and risk assessment strategies gathered from the FINOS AI Readiness Working Group, the TAG Security Kubeflow joint assessment, and case studies with air-gapped and cloud-based AI/ML deployments for regulated and privacy-protecting workloads. In this talk, we:
- threat model an AI system, from supply chain, through training and tuning, to production inference and integration
- implement quantified security controls and monitoring mechanisms for an AI enterprise architecture
- mitigate the risks associated with adversarial attacks on AI systems
- address compliance and regulation requirements with actionable remediations
- look to accelerate AI adoption while balancing minimum viable security measures
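
As a minimal sketch of the kind of quantifiable supply chain control the abstract describes (not the speaker's implementation; file names and digest values are placeholders): refuse to load any model artifact whose SHA-256 digest does not match a digest pinned at release time.

```python
# Sketch: pin model artifact digests at release time and verify before loading.
# The artifact name and digest below are placeholders, not from the talk.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    # artifact file name -> SHA-256 digest recorded when the model was released
    "sentiment-model-v1.bin": "0" * 64,  # placeholder digest
}

def verify_model(path: Path) -> None:
    """Raise if the artifact's digest is missing or does not match the pin."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None or digest != expected:
        raise RuntimeError(f"untrusted model artifact: {path}")

# Usage (hypothetical path): verify_model(Path("models/sentiment-model-v1.bin"))
```
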
Speakers

Torin van den Bulk

Cloud Native Security Engineer, ControlPlane
Torin is a Cloud Native Security Engineer at ControlPlane, where he specializes in threat-driven designs within cloud native environments. He holds a Bachelor of Science in Cybersecurity and a Master's degree in Computer and Information Technology from Purdue University, where he...
Room 447

9:55am PDT

Elevate Cloud Threat Hunting with AI - Kenneth Peeples & Maya Costantini, Red Hat
Tuesday June 25, 2024 9:55am - 10:30am PDT
The rapid advancement of Generative AI has lowered the barrier to creating sophisticated malware, making less experienced hackers capable of propagating attacks in a matter of minutes. This new type of threat highlights the need for tools that reduce detection time to a similar timeframe. This talk introduces Kestrel as a Service (KaaS), empowering threat hunters with reusable threat hunting flows written in the Kestrel language and effortlessly deployable in the cloud. Augmented by predictive AI model plugins, Kestrel optimizes threat detection, accelerating response times in case of attack. Kestrel provides a layer of abstraction that removes the repetition involved in cyber threat hunting. It has two main components: 1) a threat hunting language for a human to express what to hunt, and 2) a machine interpreter that deals with how to hunt. The key objective is to use these components to hunt faster.
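
For readers new to Kestrel, here is a minimal sketch of the two components named above: a tiny huntflow in the Kestrel language (the "what to hunt"), handed to the interpreter exposed by the kestrel-lang Python package (the "how to hunt"). The data source URI and process name are assumptions, and syntax details vary across Kestrel releases.

```python
# Sketch only: requires `pip install kestrel-lang`; stixshifter://edr is a
# hypothetical configured data source, and the API may differ by version.
from kestrel.session import Session

HUNTFLOW = """
procs = GET process FROM stixshifter://edr WHERE name = 'powershell.exe'
DISP procs ATTR name, pid, command_line
"""

with Session() as session:
    for output in session.execute(HUNTFLOW):
        print(output)
```
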
Speakers

Maya Costantini

Software Engineer, Red Hat
Maya is a Software Engineer within the Red Hat Emerging Technologies Security team. Her interests reside in Software Supply Chain Security, with a focus on Python and Open Source.

Kenneth Peeples

Red Hat
I have a passion for Cybersecurity and anything open source. I have worked on many initiatives globally for Red Hat/IBM and am currently pursuing my Doctorate in Systems Engineering. Examining problems and providing solutions is enjoyable to me. I have enjoyed concentrating on Zero...
Room 447

10:45am PDT

⚡ Lightning Talk: Navigating the Intersection: AI’s Role in Shaping the Secure Open Source Software Ecosystem - Harry Toor, Open Source Security Foundation (OpenSSF)
Tuesday June 25, 2024 10:45am - 10:55am PDT
The intersection of AI, cybersecurity, and open-source software (OSS) is pivotal for the growth and development of companies and society. We discuss the four apparent corners of this intersection to help inform a growing ecosystem: (1) OSS underpins AI systems and routinely faces security risks; tools like Scorecard help consumers understand risks in the supply chain of OSS used in AI systems. (2) Furthermore, open-sourcing AI components accelerates OSS growth, requiring secure practices; tools like sigstore can help secure these newly released open-source AI components entering the OSS supply chain. (3) AI also revolutionizes OSS security by automating vulnerability management, enhancing development lifecycles. (4) Lastly, AI's role is evolving; it now contributes to OSS, influencing both upstream creation and downstream use, marking a significant shift in open-source development. These four corners and the challenges within are crucial in shaping the future of technology.
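
To make corner (1) concrete, a hedged sketch of scoring an OSS dependency of an AI system with the OpenSSF Scorecard CLI; the repository is an example, and output field names should be checked against the installed Scorecard release.

```python
# Sketch: run the Scorecard CLI against a dependency and read the aggregate
# score. Assumes scorecard is installed and a GitHub token is configured.
import json
import subprocess

proc = subprocess.run(
    ["scorecard", "--repo=github.com/pytorch/pytorch", "--format=json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(proc.stdout)
print("aggregate score:", report.get("score"))
for check in report.get("checks", []):
    print(f"  {check.get('name')}: {check.get('score')}")
```
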
Speakers

Harry Toor

Chief of Staff, OpenSSF
Harry is the Chief of Staff for the OpenSSF and comes to the Linux Foundation with over a decade of experience helping clients understand how they can harness technology to innovate, adapt, and evolve their enterprises. He has worked across industries including the Public Sector...
Room 447

10:55am PDT

AM Break
Tuesday June 25, 2024 10:55am - 11:15am PDT
Pine Lobby - 4th Floor Seattle Convention Center, 900 Pine St, Seattle, WA 98101, USA

11:15am PDT

Future Open Source LLM Kill Chains - Vicente Herrera, ControlPlane
Tuesday June 25, 2024 11:15am - 11:50am PDT
Several mission-critical software systems rely on a single, seemingly insignificant open-source library. As with xz utils, these are prime targets for sophisticated adversaries with time and resources, leading to catastrophic outcomes if a successful infiltration remains undetected. A parallel scenario can unfold for the open-source AI ecosystem in the future, where a select few of the most powerful large language models (LLMs) are repeatedly utilised, fine-tuned for tasks ranging from casual conversation to code generation, or compressed to suit personal computers, then redistributed, sometimes by untrusted entities and individuals. In this talk, Andrew Martin and Vicente Herrera will explain methods by which an advanced adversary could leverage access to LLMs. They will show full kill chains based on exploiting the open-source nature of the ecosystem or finding gaps in the MLOps infrastructure and repositories, which can lead to vulnerabilities in your software. Finally, they will show both new and existing security practices that should be in place to prevent and mitigate these risks.
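
One existing practice in this space can be sketched briefly (an illustration, not the speakers' recommendation list): pin an exact upstream revision when pulling an open-source model, so that a later tampered upload under the same model name cannot silently enter a pipeline. The model name and commit hash below are placeholders.

```python
# Sketch: pin a specific upstream commit when loading an open-source model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "example-org/example-llm"                       # hypothetical model repo
REVISION = "0123456789abcdef0123456789abcdef01234567"   # placeholder commit hash

tokenizer = AutoTokenizer.from_pretrained(MODEL, revision=REVISION)
model = AutoModelForCausalLM.from_pretrained(MODEL, revision=REVISION)
```
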
Speakers

Vicente Herrera

Principal Consultant, ControlPlane
Principal Consultant at ControlPlane, specialized in Kubernetes and cloud cybersecurity for fintech organizations. Maintainer of the FINOS Common Cloud Controls project, defining a vendor-independent cloud security framework. Lecturer at Loyola University in Seville for the Master on Data...
Room 447

11:55am PDT

Toward Zero Trust with AI - Boris Kurktchiev, Nirmata & Ronald Petty, RX-M
Tuesday June 25, 2024 11:55am - 12:30pm PDT
Achieving and maintaining a Zero Trust architecture in cloud-native environments remains a complex challenge. K8sGPT, a cutting-edge AI-powered tool, is revolutionizing system management and streamlining the path to Zero Trust. By providing detailed guidance, integrating with system events, and working alongside tools like Istio and Kyverno, K8sGPT simplifies policy enforcement and network security, empowering operators to implement a robust Zero Trust model confidently.
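
As a rough illustration of how K8sGPT findings might feed an automated workflow, the sketch below shells out to the k8sgpt CLI; the flags and JSON field names reflect common usage and may differ across versions.

```python
# Sketch: collect K8sGPT's cluster analysis as JSON for downstream decisions.
# Assumes the k8sgpt CLI is installed, configured with an AI backend, and
# that a kubeconfig for the target cluster is available.
import json
import subprocess

proc = subprocess.run(
    ["k8sgpt", "analyze", "--explain", "--output", "json"],
    capture_output=True, text=True, check=True,
)
analysis = json.loads(proc.stdout)
for result in analysis.get("results", []):
    print(result.get("name"), "->", result.get("details"))
```
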
Speakers

Boris Kurktchiev

Chief Plumber, Nirmata
In the world of tools, it's not 'one size fits all.' I'm the expert who always knows when to grab the hammer and when to reach for the screwdriver.

Ronald Petty

RX-M
Ronald Petty is a consultant at RX-M, a global cloud native advisory and artificial intelligence training firm in the founding classes of Kubernetes Certified Service Providers (KCSP) and Kubernetes Training Providers (KTP). He has consulted, developed, and trained across many domains...
Room 447

12:30pm PDT

Lunch
Tuesday June 25, 2024 12:30pm - 1:45pm PDT
Pine Lobby - 4th Floor Seattle Convention Center, 900 Pine St, Seattle, WA 98101, USA

1:45pm PDT

Security-Focused Chaos Engineering - the Lasso for AI Security Threats - Priyanka Tembey & Glenn McDonald, Operant AI
Tuesday June 25, 2024 1:45pm - 2:20pm PDT
While AI presents an opportunity to innovate across domains, we are learning that it also presents unknown threat vectors that are constantly evolving. So what does threat modeling look like for today's AI applications? Frameworks are emerging, such as the OWASP LLM risks and the MITRE ATLAS framework, that list attack TTPs for AI applications. However, these are just baseline frameworks that need customizing for each organization. Furthermore, the secure behavior of AI applications needs continuous verification, as they are by nature nondeterministic and are often built on top of third-party models that are untrusted black boxes. AI apps should be actively breached to test how secure organizational data, IP, and internal APIs are when connected through them, much like the way the resilience of dynamic microservices is actively tested using chaos experiments. This talk will describe how to bring proactive chaos testing to AI security using Secops-Chaos, an open source framework that helps encode TTPs as security-focused chaos experiments, with hands-on demos of how to map some of the MITRE ATLAS TTPs to AI apps running as containers within Kubernetes environments.
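
The idea of encoding a TTP as a chaos experiment can be sketched generically (this is not the Secops-Chaos API): attempt an action that policy should forbid, such as admitting a privileged pod, and treat rejection as a passing result.

```python
# Generic security-chaos illustration using the official Kubernetes client:
# try to create a privileged pod and assert that admission control blocks it.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="chaos-privileged-probe"),
    spec=client.V1PodSpec(containers=[client.V1Container(
        name="probe",
        image="busybox",
        security_context=client.V1SecurityContext(privileged=True),
    )]),
)
try:
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
    print("FAIL: privileged pod admitted; policy gap found")
except ApiException as exc:
    print(f"PASS: admission rejected the pod (HTTP {exc.status})")
```
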
Speakers

Glenn McDonald

Software Engineer, Operant
Glenn McDonald is a Software Engineer at Operant, bringing broad industry experience spanning cloud providers to financial services. He specializes in Cloud Native architecture and Application Security, with a keen interest in exploring emerging technologies.

Priyanka Tembey

Co-founder and CTO, Operant
A technologist with a PhD in distributed systems and optimization from Georgia Tech, Priyanka has spent over 10 years as a software engineer at the forefront of cloud-native technologies. Priyanka was one of the foundational engineers to build out VMware's hybrid cloud product architecting...
Room 447

2:25pm PDT

Secure by Design: Strategies for LLM Adoption in Cloud-Native Environments - Patryk Bąk & Marcin Wojtas, BlueSoft
Tuesday June 25, 2024 2:25pm - 3:00pm PDT
This presentation will explore the common journey of software development companies in securely adopting AI technology within a cloud-native environment. It will unpack the challenges that development and platform teams face as they integrate AI into their systems. After initial resistance to using external LLM APIs, confidence grew and companies began building solutions with non-public data and open-source models. However, a question persists: is my shiny new AI/LLM app secure and safe? This session will discuss practical approaches to challenges such as data privacy in Retrieval-Augmented Generation architectures, the complexities of AI-agent architectures where actions are performed across integrated systems, and the general security hardening of AI/LLM applications. We will share insights from our practice, which began by defining a threat model for AI-based systems aligned with the OWASP Top 10 for LLM Applications, and progressed to incorporating solutions into our cloud-native platform. Both offensive and defensive approaches were implemented, including the integration of tools like garak (an LLM vulnerability scanner) and NVIDIA NeMo Guardrails into our cloud-native stack.
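
As one example of the offensive tooling mentioned, a garak probe run can be started from a pipeline step like the sketch below; the model and probe names are illustrative, and the flags should be checked against the installed garak release.

```python
# Sketch: kick off a garak scan of a Hugging Face model from Python.
# Assumes `pip install garak`; model and probe choices are placeholders.
import subprocess

subprocess.run(
    ["python", "-m", "garak",
     "--model_type", "huggingface",
     "--model_name", "gpt2",
     "--probes", "promptinject"],
    check=True,
)
```
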
Speakers

Patryk Bąk

Solutions Architect, BlueSoft
Patryk has over six years of experience in IT, with a diverse background encompassing roles as a Software Engineer, DevOps specialist, and team leader. He is a co-founder of Platform Engineers Poland and serves as a community leader. His current areas of focus include Platform Engineering...

Marcin Wojtas

Senior DevOps engineer, BlueSoft
Marcin Wojtas has over seven years of experience in IT as a DevOps engineer. He has built experience through various projects using a wide range of technologies, particularly in developing large-scale platforms. His current areas of focus include LLMOps, Software Supply Chain Security...
Room 447

3:05pm PDT

⚡ Lightning Talk: Revolutionize Security GRC: Leverage AI and LLM for Continuous Controls Monitoring - Megha Shah, ComplianceCow
Tuesday June 25, 2024 3:05pm - 3:15pm PDT
Today, GRC teams struggle to instill a culture of Continuous Controls Monitoring. Typically, they rely on mechanisms such as security questionnaires, email, or SharePoint to gather evidence. These aids help them assess compliance, prepare for audits, and manage vendor risk assessments. However, they encounter difficulties collecting data and evidence due to a lack of standardization, technical complexity, repetitiveness, and insufficient time and resources. We can support our hard-working GRC teams and equip them with the necessary tools by employing LLMs in the following ways:
- Creating a machine-readable controls framework in YAML from the policy document
- Generating a dynamic graph of policies, controls, and frameworks based on the YAML
- Designing a dynamic evaluation questionnaire for users to assess the effectiveness of these policies
- Deploying this questionnaire using well-known tools like Google Forms for continuous controls monitoring
- Implementing CEL (Common Expression Language) to calculate the compliance score dynamically based on the evaluation responses
- Integrating the final results into reports and dashboards for the steering governance committee
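
A minimal sketch of the first and fifth steps above, with an invented YAML schema: a machine-readable controls file carrying a CEL-style scoring expression, scored here with a simplified Python stand-in (a real pipeline would evaluate the expression with a CEL engine such as cel-python).

```python
# Sketch: parse a hypothetical YAML controls framework and compute a
# compliance score from questionnaire responses. Schema is illustrative.
import yaml  # pip install pyyaml

CONTROLS_YAML = """
framework: acme-security-baseline          # hypothetical framework name
controls:
  - id: AC-01
    question: "Is MFA enforced for all admin accounts?"
    weight: 2
  - id: AC-02
    question: "Are access reviews performed quarterly?"
    weight: 1
score_expression: "passed_weight / total_weight * 100"   # CEL-style
"""

framework = yaml.safe_load(CONTROLS_YAML)
responses = {"AC-01": True, "AC-02": False}  # example questionnaire answers

total = sum(c["weight"] for c in framework["controls"])
passed = sum(c["weight"] for c in framework["controls"] if responses[c["id"]])
print(f"compliance score: {passed / total * 100:.0f}%")
```
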
Speakers

Megha Shah

Principal Solutions Architect, ComplianceCow
Kubernetes Security Engineer with CKAD, CKA, and CKS certifications. She is a proficient programmer in Golang and Python with 10+ years of software development experience and has focused specifically on Kubernetes, cloud, and SaaS security assurance for the last 5+ years.
Room 447

3:15pm PDT

PM Break
Tuesday June 25, 2024 3:15pm - 3:35pm PDT
Pine Lobby - 4th Floor Seattle Convention Center, 900 Pine St, Seattle, WA 98101, USA

3:35pm PDT

Using Large Language Models to Improve Data Loss Prevention in Organizations - Asaf Fried, Cato Networks
Tuesday June 25, 2024 3:35pm - 4:10pm PDT
Cato Networks has recently released a new data loss prevention (DLP) capability, enabling customers to detect and block documents being transferred over the network based on sensitive categories such as tax forms, financial transactions, patent filings, medical records, job applications, and more. Many modern DLP solutions rely heavily on pattern-based matching to detect sensitive information. However, they don't enable full control over sensitive data loss. Take, for example, the resume of a job applicant. While the document might contain basic PII, such as the candidate's phone number and address, it's the application itself that concerns the company's DLP policy. Unfortunately, pattern-based methods fall short when trying to detect the document category. Many sensitive documents don't have specific keywords or patterns that distinguish them from others and therefore require full-text analysis. In this case, the best approach is to apply data-driven methods and tools from the domain of natural language processing (NLP), specifically large language models (LLMs).
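
The full-text approach can be illustrated with an off-the-shelf zero-shot classifier (a sketch, not Cato's implementation): the model scores a document against candidate sensitive categories rather than matching patterns.

```python
# Sketch: zero-shot document category detection with a public NLI model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

CATEGORIES = ["resume", "tax form", "medical record",
              "patent filing", "financial transaction"]

document = "Objective: seeking a senior engineering role..."  # placeholder text
result = classifier(document, candidate_labels=CATEGORIES)
print(result["labels"][0], result["scores"][0])  # top category and confidence
```
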
Speakers

Asaf Fried

Cato Networks
Asaf leads the Data Science team in Cato Research Labs at Cato Networks. He earned an MS degree from Ben-Gurion University of the Negev with his thesis “Facing Airborne Attacks on ADS-B Data with Autoencoders” and received a Bachelor's degree in computer science from Reichman...
Room 447

4:15pm PDT

ShellTorch: The Next Evolution in *4Shell Executions - Gal Elbaz & Avi Lumelsky, Oligo Security
Tuesday June 25, 2024 4:15pm - 4:50pm PDT
The Oligo Security team recently identified ShellTorch, a chain of four vulnerabilities enabling full Remote Code Execution (RCE), with the new CVE-2023-43654 carrying a CVSS score of 9.8. The team found tens of thousands of vulnerable TorchServe instances publicly exposed, open to unauthorized access and the insertion of malicious AI models. TorchServe is part of the PyTorch ecosystem, one of the most widely adopted OSS frameworks for AI in the world. In this talk, we'll dive into the research team's identification of the TorchServe vulnerabilities enabling a total takeover of impacted systems. With the growing popularity of AI and LLMs, securing these applications and their tooling stacks is becoming increasingly important. Come to this session to unpack this newly discovered high-severity exploit from the researchers themselves, which enables viewing, modifying, stealing, and deleting AI models and sensitive data on a targeted TorchServe server, with a live demo of its reproduction and steps you can take immediately to mitigate the risk.
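
One immediate mitigation commonly advised for this class of issue can be sketched as a quick reachability check against the TorchServe management API (default port 8081); the host and port below are placeholders.

```python
# Sketch: confirm a TorchServe management API is not reachable from
# untrusted networks. Host and port are hypothetical.
import socket

HOST, PORT = "model-server.internal.example", 8081  # placeholder endpoint

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3)
    if sock.connect_ex((HOST, PORT)) == 0:
        print("WARNING: management API reachable; restrict it to localhost/VPC")
    else:
        print("OK: management API not reachable from this network")
```
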
Speakers

Avi Lumelsky

AI Security Researcher @ CTO Office, Oligo Security
Avi has a relentless curiosity about AI, Security, and Business, and the places where all three connect. An experienced Software Engineer and Architect, Avi focuses on AI with deep security insights.

Gal Elbaz

Co-founder & CTO, Oligo Security
Co-founder & CTO at Oligo Security with 10+ years of experience in vulnerability research and practical hacking. He previously worked as a Security Researcher at Check Point and served in IDF Intelligence. In his free time, he enjoys playing CTFs.
Room 447

4:50pm PDT

Closing Remarks
Tuesday June 25, 2024 4:50pm - 4:55pm PDT
Speakers

Shane Lawrence

Sr Staff Developer, Shopify
Shane is a Senior Staff Infrastructure Security Engineer at Shopify, where he's working on a multi-tenant platform that allows developers to securely build scalable apps and services for crafters, entrepreneurs, and businesses of all sizes.
Room 447
 