+1-(877) 629-3710 cs@conferencepanel.com

Apr 17, 2026, 01:00 PM EST | 22 Days Left

AI in Healthcare: Securing Clinical Systems, Medical Devices, and Patient Data in the Age of Intelligent Threats

Presented by Dr. Gus Hanna
Duration - 60 Minutes


Choose Your Options

Live Webinar
$219
Recorded Webinar
$219
Live & Recorded Webinar
$389
Transcript (PDF)
$219
Recorded Webinar & Transcript (PDF)
$389
Live + Recording + Transcript
$429
Live + Transcript
$389

Description

Artificial Intelligence is no longer experimental in healthcare; it is operational. AI models now assist radiologists in identifying abnormalities, support clinical decision-making, optimize hospital operations, power chatbots interacting with patients, and drive intelligent behavior in connected medical devices. While these technologies promise improved patient outcomes and operational efficiency, they introduce a new generation of cybersecurity and safety risks.

Healthcare organizations already operate in one of the most regulated and threat-heavy environments. The integration of AI expands the digital attack surface beyond traditional systems. AI models rely on large datasets, APIs, cloud infrastructure, identity systems, and integration points across Electronic Health Record (EHR) platforms and medical devices. Each layer presents vulnerabilities.

Threat actors are increasingly exploiting AI systems through techniques such as data poisoning, adversarial manipulation, model inversion attacks, API abuse, credential compromise, and ransomware targeting AI infrastructure. In clinical settings, these attacks are not merely data risks; they can directly impact patient safety and clinical decision accuracy.

Additionally, AI-enabled medical devices present unique regulatory and safety challenges. The FDA has introduced evolving guidance on Software as a Medical Device (SaMD) and AI/ML-based systems. Healthcare entities must consider secure development lifecycle practices, post-market monitoring, threat modeling, and vulnerability management as core requirements—not optional controls.

From a compliance perspective, HIPAA’s Security and Privacy Rules extend to AI systems that process PHI. Organizations must ensure confidentiality, integrity, and availability while maintaining auditability and accountability. Emerging AI governance frameworks, including the NIST AI Risk Management Framework, add further complexity.

This webinar examines AI in healthcare from both a technical and governance lens. Participants will explore how AI systems are architected, where vulnerabilities typically emerge, and how to apply structured threat modeling techniques to identify and mitigate risks. The session will also address how to align AI security practices with Zero Trust principles, identity governance, cloud security, DevSecOps, and incident response planning.

Learning Objectives

By the end of this session, participants will be able to:

  • Identify key cybersecurity risks introduced by AI systems in healthcare.
  • Analyze AI architectures to determine potential attack vectors.
  • Apply threat modeling principles to AI-enabled applications.
  • Evaluate regulatory implications for AI in medical and clinical environments.
  • Implement foundational controls to secure AI development and deployment.
  • Communicate AI risk effectively to executive and board leadership.

Areas Covered

  • AI use cases in healthcare and medical devices
  • AI threat landscape (data poisoning, adversarial ML, prompt injection, model theft)
  • AI attack surface analysis in clinical environments
  • Threat modeling AI systems (STRIDE, MITRE ATT&CK mapping)
  • Securing AI APIs and cloud infrastructure
  • Identity & Zero Trust for AI systems
  • AI in connected medical devices (FDA considerations)
  • HIPAA implications for AI processing PHI
  • AI governance frameworks (NIST AI RMF overview)
  • DevSecOps for AI/ML pipelines
  • Incident response considerations for AI compromise
  • Board-level AI risk communication

Background

Artificial Intelligence (AI) is rapidly transforming healthcare. From AI-assisted diagnostics and predictive analytics to smart infusion pumps and connected medical devices, healthcare organizations are integrating machine learning into clinical workflows at unprecedented speed.
However, as AI adoption accelerates, so do the risks.

Healthcare remains among the most targeted sectors for ransomware and data breaches. The introduction of AI systems expands the attack surface, introducing new risks such as model manipulation, adversarial attacks, data poisoning, prompt injection, API exploitation, identity abuse, and regulatory non-compliance.

Regulators are responding. The FDA is issuing guidance on AI-enabled medical devices. OCR continues enforcement under HIPAA. NIST is advancing AI Risk Management frameworks. Meanwhile, healthcare CISOs and compliance leaders must balance innovation, safety, and patient trust.

This session addresses the critical intersection of AI, cybersecurity, medical device safety, and regulatory governance.

Why Should You Attend

You should attend if you are asking:

  • How do we secure AI systems integrated into clinical environments?
  • What new cyber risks do AI-enabled medical devices introduce?
  • How do HIPAA, FDA, NIST, and emerging AI regulations apply?
  • How do we implement AI securely without slowing innovation?
  • How do we prepare our organization before AI-related incidents occur?

This session moves beyond theory and provides practical, architecture-level and governance-level guidance.

Who Should Attend

  • Healthcare CISOs and CIOs
  • Compliance & Risk Officers
  • Medical Device Security Engineers
  • Clinical Engineering Teams
  • Healthcare IT Directors
  • Information Security Architects
  • Healthcare Executives & Board Members
  • Biomedical Engineering Leaders
  • Health System Innovation Officers
  • Privacy Officers

Speaker

Dr. Gus Hanna

Dr. Gus Hanna is a cybersecurity architect and technical leader with more than 25 years of experience securing healthcare systems, cloud infrastructure, and regulated environments. He has led enterprise security architecture, threat modeling, and AI governance initiatives across healthcare, government, and global SaaS organizations.

In senior cybersecurity leadership roles at multiple organizations, Dr. Hanna has designed and validated NIST-aligned security controls, implemented Zero Trust architectures, and driven secure DevSecOps integrations in complex cloud and hybrid environments. He brings extensive hands-on expertise in AI security, FDA-compliant medical device cybersecurity, HIPAA, HITRUST, FedRAMP, and NIST 800-53 compliance programs, as well as identity architecture and cloud risk management.

A sought-after conference speaker and panelist, he frequently delivers keynotes on AI in cybersecurity and regulatory compliance and regularly advises organizations on aligning innovation with regulatory requirements and operational resilience.

In addition to his industry leadership, Dr. Hanna is a university professor teaching graduate and undergraduate courses in cybersecurity, cloud security, incident response, and secure software development. His skill in translating complex regulatory requirements into clear, actionable guidance makes his presentations highly valuable to both technical professionals and executive audiences.