What Does Security Look Like When Building AI?
Anyone working with AI, or considering it, should care about security. When building an AI-powered system or product, the traditional attack surfaces and mitigations still apply. However, new attack surfaces can appear depending on the specific AI approaches used. And because AI systems are typically more highly automated, a compromised AI system can do more harm.
In this talk, we’ll discuss how AI shares the same attack vectors as traditional software, and what those attacks look like. We’ll also discuss new attacks specific to generative AI (e.g. LLMs like ChatGPT), machine learning & computer vision systems, and optimization techniques. For each type of attack, we’ll point out how it can be thwarted, or at least mitigated.
Previous experience with AI or security is not required to benefit from the session. Attendees will see tools & techniques that help them write more secure software, AI-enabled or not. They will walk away with a better understanding of AI-specific attack vectors and their mitigations, and they will be equipped to find security education resources in the future.