Live Webinar
Apr 17, 2026, 01:00 PM EST
Artificial Intelligence is no longer experimental in healthcare; it is operational. AI models now assist radiologists in identifying abnormalities, support clinical decision-making, optimize hospital operations, power chatbots interacting with patients, and drive intelligent behavior in connected medical devices. While these technologies promise improved patient outcomes and operational efficiency, they introduce a new generation of cybersecurity and safety risks.
Healthcare organizations already operate in one of the most regulated and threat-heavy environments. The integration of AI expands the digital attack surface beyond traditional systems. AI models rely on large datasets, APIs, cloud infrastructure, identity systems, and integration points across Electronic Health Record (EHR) platforms and medical devices. Each layer presents vulnerabilities.
Threat actors are increasingly exploiting AI systems through techniques such as data poisoning, adversarial manipulation, model inversion attacks, API abuse, credential compromise, and ransomware targeting AI infrastructure. In clinical settings, these attacks are not merely data risks; they can directly impact patient safety and clinical decision accuracy.
Additionally, AI-enabled medical devices present unique regulatory and safety challenges. The FDA has introduced evolving guidance on Software as a Medical Device (SaMD) and AI/ML-based systems. Healthcare entities must consider secure development lifecycle practices, post-market monitoring, threat modeling, and vulnerability management as core requirements—not optional controls.
From a compliance perspective, HIPAA’s Security and Privacy Rules extend to AI systems that process PHI. Organizations must ensure confidentiality, integrity, and availability while maintaining auditability and accountability. Emerging AI governance frameworks, including the NIST AI Risk Management Framework, add further complexity.
This webinar examines AI in healthcare from both a technical and governance lens. Participants will explore how AI systems are architected, where vulnerabilities typically emerge, and how to apply structured threat modeling techniques to identify and mitigate risks. The session will also address how to align AI security practices with Zero Trust principles, identity governance, cloud security, DevSecOps, and incident response planning.
Learning Objectives
By the end of this session, participants will be able to:
Areas Covered
Background
Artificial Intelligence (AI) is rapidly transforming healthcare. From AI-assisted diagnostics and predictive analytics to smart infusion pumps and connected medical devices, healthcare organizations are integrating machine learning into clinical workflows at unprecedented speed.
However, as AI adoption accelerates, so do the risks.
Healthcare remains among the most targeted sectors for ransomware and data breaches. The introduction of AI systems expands the attack surface, introducing new risks such as model manipulation, adversarial attacks, data poisoning, prompt injection, API exploitation, identity abuse, and regulatory non-compliance.
Regulators are responding. The FDA is issuing guidance on AI-enabled medical devices. OCR continues enforcement under HIPAA. NIST is advancing AI Risk Management frameworks. Meanwhile, healthcare CISOs and compliance leaders must balance innovation, safety, and patient trust.
This session addresses the critical intersection of AI, cybersecurity, medical device safety, and regulatory governance.
Why Should You Attend
You should attend if you are asking:
This session moves beyond theory and provides practical, architecture-level and governance-level guidance.
Who Should Attend
Dr. Gus Hanna is a cybersecurity architect and technical leader with more than 25 years of experience securing healthcare systems, cloud infrastructure, and regulated environments. He has led enterprise security architecture, threat modeling, and AI governance initiatives across healthcare, government, and global SaaS organizations.
In senior cybersecurity leadership roles at multiple organizations, Dr. Hanna has designed and validated NIST-aligned security controls, implemented Zero Trust architectures, and driven secure DevSecOps integrations in complex cloud and hybrid environments. He brings extensive hands-on expertise in AI security, FDA-compliant medical device cybersecurity, HIPAA, HITRUST, FedRAMP, and NIST 800-53 compliance programs, as well as identity architecture and cloud risk management.
A sought-after conference speaker and panelist, he frequently delivers keynotes on AI in cybersecurity and regulatory compliance and regularly advises organizations on aligning innovation with regulatory requirements and operational resilience.
In addition to his industry leadership, Dr. Hanna is a university professor teaching graduate and undergraduate courses in cybersecurity, cloud security, incident response, and secure software development. His skill in translating complex regulatory requirements into clear, actionable guidance makes his presentations highly valuable to both technical professionals and executive audiences.