Secure AI Summit 2024 (Powered by Cloud Native)
Machine Learning and Anomaly Detection
Tuesday, June 25
 

3:35pm PDT

Using Large Language Models to Improve Data Loss Prevention in Organizations - Asaf Fried, Cato Networks
Tuesday June 25, 2024 3:35pm - 4:10pm PDT
Cato Networks has recently released a new data loss prevention (DLP) capability, enabling customers to detect and block documents being transferred over the network based on sensitive categories such as tax forms, financial transactions, patent filings, medical records, job applications, and more. Many modern DLP solutions rely heavily on pattern-based matching to detect sensitive information. However, pattern matching alone does not give full control over sensitive data loss. Take, for example, the resume of a job applicant. While the document might contain basic PII, such as the candidate's phone number and address, it is the application itself that concerns the company's DLP policy. Unfortunately, pattern-based methods fall short when trying to detect the document's category: many sensitive documents have no specific keywords or patterns that distinguish them from others, and therefore require full-text analysis. In this case, the best approach is to apply data-driven methods and tools from the domain of natural language processing (NLP), specifically large language models (LLMs).
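To make the contrast concrete, here is a minimal sketch of the general approach described in the abstract, not Cato's actual implementation: a regex finds individual PII fields, while an off-the-shelf zero-shot classifier (from Hugging Face's transformers library) assigns the document a sensitive category that no pattern can infer. The category list and the scan_document helper are hypothetical names invented for this example.

```python
# Illustrative sketch only: pattern matching vs. LLM-based document
# categorization for DLP. Not Cato Networks' implementation.
import re

from transformers import pipeline  # pip install transformers

# Pattern-based detection: finds a phone number, but says nothing about
# whether the document as a whole is a resume, tax form, etc.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

# Hypothetical category list; a real DLP policy would define its own.
CATEGORIES = ["resume", "tax form", "medical record", "patent filing",
              "financial transaction", "other"]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def scan_document(text: str) -> dict:
    """Return both the regex hits and the LLM-predicted category."""
    result = classifier(text, candidate_labels=CATEGORIES)
    return {
        "pii_phone_numbers": PHONE_RE.findall(text),
        "category": result["labels"][0],   # top-scoring label
        "confidence": result["scores"][0],
    }

doc = ("Jane Doe, 555-867-5309. Objective: seeking a senior data "
       "engineering role. Experience: 8 years at ...")
print(scan_document(doc))
```

On a document like the one above, the regex surfaces only the phone number, while the classifier flags the text as a resume, which is the signal the DLP policy actually needs.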
Speakers

Asaf Fried

Cato Networks
Asaf leads the Data Science team in Cato Research Labs at Cato Networks. He earned an MS degree from Ben-Gurion University of the Negev with his thesis “Facing Airborne Attacks on ADS-B Data with Autoencoders” and received a Bachelor's degree in computer science from Reichman...
Room 447

4:15pm PDT

ShellTorch: The Next Evolution in *4Shell Executions - Gal Elbaz & Avi Lumelsky, Oligo Security
Tuesday June 25, 2024 4:15pm - 4:50pm PDT
The Oligo Security team recently identified ShellTorch, a chain of four vulnerabilities that together allow full Remote Code Execution (RCE), including the new CVE-2023-43654 with a CVSS score of 9.8. The team found tens of thousands of vulnerable, publicly exposed instances of TorchServe, part of the PyTorch ecosystem (one of the most widely adopted open source AI frameworks in the world), open to unauthorized access and insertion of malicious AI models. In this talk, we'll dive into the research team's identification of the TorchServe vulnerabilities enabling a total takeover of impacted systems. With the growing popularity of AI and LLMs, securing these applications and their tooling stacks is becoming increasingly important. Come to this session to unpack this newly discovered high-severity exploit from the researchers themselves: it enables viewing, modifying, stealing, and deleting AI models and sensitive data on a targeted TorchServe server. The session includes a live demo of its reproduction and steps you can take immediately to mitigate the risk.
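As a hedged illustration of the "steps you can take" point (not material from the talk itself): one of the misconfigurations the ShellTorch chain abuses is a TorchServe management API reachable from untrusted networks. The sketch below probes whether that API (the real ListModels endpoint, GET /models on the default management port 8081) answers publicly; the HOSTS list and check_host helper are hypothetical names for this example.

```python
# Illustrative exposure check, assuming you are auditing hosts you own.
import requests  # pip install requests

HOSTS = ["198.51.100.10", "198.51.100.11"]  # example addresses only

def check_host(host: str, port: int = 8081) -> None:
    # GET /models is TorchServe's management-API ListModels endpoint;
    # it should never answer from an untrusted network.
    url = f"http://{host}:{port}/models"
    try:
        resp = requests.get(url, timeout=3)
    except requests.RequestException:
        print(f"{host}: management API not reachable (good)")
        return
    if resp.ok:
        print(f"{host}: EXPOSED management API, models: {resp.text[:200]}")
    else:
        print(f"{host}: responded with HTTP {resp.status_code}")

for h in HOSTS:
    check_host(h)
```

Per the public ShellTorch advisories, further mitigations include upgrading TorchServe to 0.8.2 or later and restricting the allowed_urls setting in config.properties so model archives can only be fetched from trusted sources.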
Speakers

Avi Lumelsky

AI Security Researcher @ CTO Office, Oligo Security
Avi has a relentless curiosity about AI, Security, and Business, and the places where all three connect. An experienced Software Engineer and Architect, Avi focuses on AI with deep security insights.

Gal Elbaz

Co-founder & CTO, Oligo Security
Co-founder & CTO at Oligo Security with 10+ years of experience in vulnerability research and practical hacking. He previously worked as a Security Researcher at Check Point and served in IDF Intelligence. In his free time, he enjoys playing CTFs.
Room 447
 