F. Paul Greene, Felix Knoll, and Bruce Cheney
October 25, 2023 10:00 am - 10:50 am
Questions concerning Artificial Intelligence ("AI") are landing squarely on the desks of information security teams. But the technology itself is changing, regulators are struggling to keep up, and the actual rewards of AI adoption can be difficult to separate from the deafening hype. This panel brings together industry experts to discuss the risks inherent to AI, whether used internally or externally; the developing regulatory landscape; and the potential rewards an organization can realize by leveraging AI. Discussion will include an assessment of how AI is being leveraged by attackers, strategies for developing a responsible approach to AI, and how AI can help an organization better manage its data.
October 25, 2023 11:00 am - 11:50 am
SMTP has been a troubled protocol for the entirety of its existence. The one thing that has made it marginally livable has been filters and 'milters,' which trim 99% of spam at the provider level before it even hits the inbox. AI will be able to defeat every known mail filter we have, making it impossible to trim the spam. We could harness the power of AI for filtering as well, but is it worth it?
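To make the filtering model concrete: a real milter hooks into an MTA such as sendmail or Postfix via the milter protocol and renders a verdict before the message is accepted. The sketch below is a deliberately simplified stand-in for that idea, assuming an invented keyword-weight table and threshold; it illustrates the kind of static content check that an AI-generated message could easily evade.

```python
# Illustrative, simplified content filter in the spirit of a milter:
# score an inbound message body against weighted spam indicators and
# decide before it ever reaches the inbox. The phrases, weights, and
# threshold here are invented for demonstration only.

SPAM_WEIGHTS = {
    "free money": 3.0,
    "act now": 2.0,
    "limited time offer": 2.5,
    "unsubscribe": 0.5,
}
REJECT_THRESHOLD = 3.0


def spam_score(body: str) -> float:
    """Sum the weights of every indicator phrase found in the body."""
    text = body.lower()
    return sum(w for phrase, w in SPAM_WEIGHTS.items() if phrase in text)


def filter_message(body: str) -> str:
    """Return a milter-style verdict: 'REJECT' or 'ACCEPT'."""
    return "REJECT" if spam_score(body) >= REJECT_THRESHOLD else "ACCEPT"
```

A phrase-matching rule like this fails the moment a generative model paraphrases the pitch, which is exactly the concern the talk raises.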
October 26, 2023 3:30 pm - 4:20 pm
Amid an escalating threat landscape, healthcare has emerged as a prime target for malicious actors. As the sector grapples with a surge in attacks, our understanding of these threats is vital for strategic defense. This presentation will provide a detailed walkthrough of attacks on a crucial patient-facing web application. Using extensive data, we will dissect actual threat strategies, their objectives, and their potential effectiveness. The talk will delve into trend analysis, providing insight into the evolving patterns of attacks over the years. Furthermore, we will discuss our approach to threat identification, logging, and internal mitigation measures. Join us as we unpack the realities of cyber threats in healthcare.
October 25, 2023 2:00 pm - 2:50 pm
The rapid advancement of generative artificial intelligence (AI) foundation models, exemplified by ChatGPT, has revolutionized numerous domains, including natural language processing, content creation, and problem-solving. However, along with their remarkable capabilities, these models introduce significant security risks that demand careful consideration. This presentation delves into the potential security challenges posed by generative AI foundation models and proposes effective mitigation strategies.