From Regulation to Resilience: Best Practices for Securing Healthcare Data in an AI Era

In June, Rep. Brian Fitzpatrick of Pennsylvania and Rep. Jason Crow of Colorado introduced the Healthcare Cybersecurity Act of 2025 in the House of Representatives. The bill would appoint a liaison between the Department of Health and Human Services and the Cybersecurity and Infrastructure Security Agency who would promote real-time threat sharing to improve incident response and facilitate cybersecurity training for provider organizations.
The goal of the act is to minimize breaches and to limit data loss when breaches do occur. If the bill passes, healthcare organizations, especially rural, independent and community hospitals, will likely face increased compliance requirements in the short term.
In accordance with the executive order Removing Barriers to American Leadership in AI, the White House released America’s AI Action Plan, which frames AI dominance as a national security imperative.
However, there is some contention between the proposed bill and the plan. The action plan’s intent is to let more ideas and data flow freely among organizations to improve the nation’s innovation posture; the proposed bill is concerned with data security.
For example, large language models need as much data as they can possibly consume. But encouraging unrestricted training of AI models conflicts with the purpose of the cyber bill, which seeks to ensure that only the right people have access to certain data. Healthcare organizations need to be aware of this dichotomy.
Another security-related policy to be aware of is the proposed HIPAA Security Rule to Strengthen the Cybersecurity of Electronic Protected Health Information. If enacted, it would require healthcare organizations to maintain more detailed documentation than they currently do to support risk analyses. They would also need to notify affected entities of security incidents.
While the updated requirements would improve security, some of them could create additional burdens for organizations. For example, health systems would be required to implement multifactor authentication for email and to encrypt electronic protected health information (ePHI) at rest and in transit.
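To make the MFA requirement concrete, here is a minimal sketch of one common MFA building block: the time-based one-time password (TOTP) defined in RFC 6238, written with only the Python standard library. The function name and parameters are illustrative, not taken from any particular product; production systems should rely on a vetted identity provider rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, at=None) -> str:
    """Compute an RFC 6238 time-based one-time password.

    The shared secret is base32-encoded (as in authenticator apps);
    the code is an HMAC-SHA1 over the current 30-second time counter.
    """
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A server verifies the code by computing the same value for the current (and usually the adjacent) time window and comparing it to what the user submitted.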
Finally, healthcare organizations should also be aware of two policies from the Biden Administration: the Preventing Access to U.S. Sensitive Personal Data and Government-Related Data by Countries of Concern or Covered Persons final rule and the proposed Protecting Americans’ Data from Foreign Adversaries Act of 2024. In short, these aim to limit the amount of data that can be gathered and sent to certain adversarial countries.
The Impact of AI on Health Systems’ Approach to Data and Security

In addition to shifts driven by policy, the rapid growth of AI has acted as a catalyst for healthcare organizations to prioritize data and security. Having solid data and AI governance in place, with a strong understanding of the role security should play in both, is crucial as organizations plan to use AI tools and protect against them. Many organizations are finally recognizing that data and security deserve dedicated time, effort and investment.
With robust data and AI governance in place, healthcare organizations can take advantage of AI and automation. Take automated incident response, for example: Instead of waiting for a human to notice an event and react to it, AI-powered incident response solutions can recognize that an incident is occurring and automatically begin the process laid out in the organization’s security policies. Such a solution can recognize a suspicious login and place constraints on that identity to protect the health system’s data. That’s becoming more commonplace.
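The "recognize a suspicious login, then constrain the identity" flow above can be sketched as a simple policy engine. The event fields, thresholds and action names here are hypothetical; in practice the detection signal would come from an AI model or SIEM, and the playbook would come from the organization's own security policies.

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    country: str
    failed_attempts: int

# Hypothetical policy thresholds, standing in for a real security playbook.
TRUSTED_COUNTRIES = {"US"}
MAX_FAILED_ATTEMPTS = 5

def respond(event: LoginEvent) -> list:
    """Return containment actions for a login event.

    Each rule mirrors the article's flow: detect a suspicious signal,
    then place constraints on the identity automatically.
    """
    actions = []
    if event.country not in TRUSTED_COUNTRIES:
        actions.append(f"require step-up MFA for {event.user}")
    if event.failed_attempts > MAX_FAILED_ATTEMPTS:
        actions.append(f"lock account {event.user}")
        actions.append(f"revoke active sessions for {event.user}")
    return actions
```

A benign login produces no actions, while a login from an untrusted location with repeated failures triggers the full containment list, all without a human in the initial loop.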
While AI can help healthcare organizations address security concerns, it also introduces new risks. The large data sets health systems are amassing put them at greater risk, because a network holding centralized troves of data is a more valuable target for cybercriminals than the scattered data sets they could steal previously. That’s a security concern for hospitals that are training LLMs and other AI models.
Another risk healthcare organizations should consider is automation bias. It’s important to have a human in the loop to verify AI outputs, but over time people relax their oversight, giving these systems more power. If a bad actor gains access to an AI tool without solid oversight, it could be used against the organization.