6 AI Security Guidelines for Healthcare Organizations

1. Keep AI Experimentation In-House or Covered by a Data Privacy Agreement
To secure AI in hospitals, Pete Johnson, CDW’s artificial intelligence field CTO, recommends using an in-house solution that lets clinicians and other staff experiment with an AI chat app without exposing data publicly. Organizations can also work with a public model that has the right privacy protections in place.
“All of the Big Three hyperscalers — Amazon, Microsoft and Google — have in their data privacy agreements that they will not use any of your prompt content to retrain models,” Johnson says. “In that way, you’re protected even if you don’t have that AI program on-premises. If you use it in the cloud, the data privacy agreement guarantees that they won’t use your data to retrain models.”
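As a minimal sketch of the in-house option, the snippet below routes a staff prompt to a hospital-hosted chat endpoint so the text never leaves the organization’s network. The URL, payload shape and authentication scheme are illustrative assumptions, not any specific product’s API.

```python
import requests

# Hypothetical in-house chat endpoint; the URL, JSON fields and auth
# header are placeholders for whatever the hospital actually deploys.
INTERNAL_CHAT_URL = "https://ai.internal.example-hospital.org/v1/chat"

def ask_internal_model(prompt: str, token: str) -> str:
    """Send a clinician's prompt to the hospital-hosted model so the
    text stays inside the organization's network."""
    resp = requests.post(
        INTERNAL_CHAT_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]  # assumed response field
```

The same pattern applies to a hyperscaler deployment: the call goes to a tenant-scoped endpoint governed by the data privacy agreement Johnson describes, rather than to a consumer chat app.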
2. Establish an Action Plan in Case of an Attack
An action plan should detail what to do if a data breach occurs or if a mass phishing email circulates as part of a financial fraud attempt.
“It’s incredibly important for IT professionals to understand exactly what those new attack surfaces are and what they look like, and then start building a framework for addressing that,” Hawking says. “That includes everything — the hardware, software and actual IT architecture — but also policies and regulations in place to address these issues.”
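One way to make such a plan concrete is to capture it as data that can be versioned, reviewed and rehearsed like any other IT artifact. The sketch below is a hypothetical phishing playbook; the steps and contact are placeholders, not a prescribed procedure.

```python
# Illustrative incident action plan captured as data. The trigger,
# steps and owner address are placeholder assumptions.
PHISHING_RESPONSE_PLAN = {
    "trigger": "mass phishing email reported",
    "steps": [
        "Quarantine the message across all mailboxes",
        "Reset credentials for any user who clicked the link",
        "Notify the compliance officer and legal team",
        "File the incident report required by policy",
    ],
    "owner": "security-oncall@example-hospital.org",  # placeholder contact
}

def run_drill(plan: dict) -> None:
    """Print the plan as a checklist for a tabletop exercise."""
    print(f"Scenario: {plan['trigger']}")
    for i, step in enumerate(plan["steps"], start=1):
        print(f"  {i}. {step}")

run_drill(PHISHING_RESPONSE_PLAN)
```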
3. Take Small Steps Toward AI Implementation
As healthcare organizations experiment with AI, they should start small. For example, they can use ambient listening and intelligent documentation to reduce the burden on physicians and clinicians.
“Don’t take your entire data estate and make it available to some AI bot. Instead, be very prescriptive about what problems you are trying to solve,” Johnson says.
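A small illustration of that prescriptive approach is to forward only the fields a documentation task actually needs, rather than the entire data estate. The field names below are assumptions made for the example, not a standard schema.

```python
# Sketch of scoping what an AI tool can see: only fields needed for a
# documentation-summarization task pass the allow-list. Field names
# are illustrative assumptions.
ALLOWED_FIELDS = {"visit_reason", "clinician_notes", "medications"}

def scope_record(record: dict) -> dict:
    """Drop everything outside the allow-list before the record is
    passed to a summarization model."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "patient_name": "REDACTED",
    "ssn": "REDACTED",
    "visit_reason": "follow-up, hypertension",
    "clinician_notes": "BP improved since last visit.",
    "medications": ["lisinopril 10mg"],
}
print(scope_record(record))  # identifiers never reach the model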
4. Use Organization Accounts With AI Tools
Hawking warns that personal email accounts create entry points for data sharing, allowing information to be used to train models without consent.
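A hypothetical gatekeeping check makes the point: grant AI tool access only to addresses on the organization’s own domain, so prompts stay covered by its data agreements. The domain below is a placeholder.

```python
# Minimal sketch of enforcing organization accounts. The domain is a
# placeholder for the hospital's actual email domain.
ORG_DOMAIN = "example-hospital.org"

def is_org_account(email: str) -> bool:
    """Reject personal addresses so prompts remain covered by the
    organization's data privacy agreements."""
    return email.strip().lower().endswith("@" + ORG_DOMAIN)

assert is_org_account("dr.lee@example-hospital.org")
assert not is_org_account("dr.lee@gmail.com")
```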
5. Vet AI Tools No Matter Where They’re Used
Hawking also recommends that organizations create an oversight team to vet AI tools. The team could include stakeholders such as the IT department, clinicians and even patient advocates.
“It doesn’t mean lock down all AI, but understand exactly what’s being used and why it’s being used,” Hawking says.
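One way an oversight team might track exactly what is being used and why is a simple registry of vetted tools, recording what each is approved for and by whom. The tool names and fields below are illustrative assumptions.

```python
# Sketch of a vetting registry an oversight team might maintain.
# Tool names, uses and approvers are illustrative placeholders.
APPROVED_TOOLS = {
    "ambient-scribe": {"use": "visit documentation", "approved_by": "AI oversight team"},
    "coding-assistant": {"use": "billing code suggestions", "approved_by": "AI oversight team"},
}

def check_tool(name: str) -> None:
    """Report what a tool is approved for, or flag it for review."""
    entry = APPROVED_TOOLS.get(name)
    if entry is None:
        raise PermissionError(f"{name} has not been vetted; submit it for review")
    print(f"{name}: approved for {entry['use']} by {entry['approved_by']}")

check_tool("ambient-scribe")
```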
6. Conduct a Complete Risk Assessment and Full Audit
A thorough risk assessment allows healthcare organizations to identify regulatory compliance risks and develop policies and procedures for the use of generative AI.
“It’s really important, as part of an AI audit, to get a proper overview of how all of those things take place,” Hawking says. “That is the starting point of good governance.”
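As a rough illustration of where such an audit could start, the sketch below scores a tool against a few of the controls discussed in this article. The criteria are assumptions meant to show the shape of a checklist, not a regulatory standard.

```python
# Illustrative audit starting point: check each generative AI use
# against a few controls. Criteria are assumptions, not a standard.
CRITERIA = [
    "Data covered by a privacy agreement or kept on-premises",
    "Access limited to organization accounts",
    "Tool vetted by the oversight team",
    "Incident response plan covers this tool",
]

def audit(tool: str, answers: list[bool]) -> None:
    """Print which controls pass and which need a policy or procedure."""
    for criterion, ok in zip(CRITERIA, answers):
        status = "PASS" if ok else "GAP"
        print(f"[{status}] {tool}: {criterion}")

audit("ambient-scribe", [True, True, True, False])
```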