AI can be a big help to healthcare workers, but there are legal issues to consider

Photo: Carly Koza

As burnout among healthcare workers continues to be a major concern, the use of artificial intelligence, EHRs and other automation tools may have a positive impact on hospitals and health systems.

When it comes to artificial intelligence, a number of legal issues arise. That’s why we interviewed Carly Koza, an associate at Buchanan Ingersoll & Rooney and an authority on this topic. Buchanan Ingersoll & Rooney is a national law firm with 450 attorneys and government relations professionals across 15 offices, representing companies that include 50 of the Fortune 100.

Koza discusses what healthcare provider organizations should prepare for when it comes to growing AI implementation, how AI can help combat increasing demands on healthcare workers, ways AI can help healthcare provider organizations ensure quality patient care, and legal matters that arise from these issues.

Q. What should healthcare provider organizations prepare for when it comes to growing AI implementation in the near future? What legal issues arise?

A. Traditionally, providers alone assess a patient and directly deliver medical information based on observation, but AI is transforming the healthcare industry. AI can analyze data stored by healthcare provider organizations to find possible connections among patients, disease and treatment.

Data analytics can be broken down into four distinct categories – diagnostic, descriptive, prescriptive and predictive – all of which draw on current and historical data, with predictive analytics in particular used to forecast outcomes for an individual patient.

Healthcare provider organizations should prepare to use both clinical decision support software and image-recognition software to provide diagnoses and recommend individualized treatment to patients.

When applied properly, clinical decision support software can deliver treatment recommendations for oncologists, for example, based upon what the program has learned from large amounts of patient data, as well as information it pulls from reference materials, clinical guidelines and medical journals.

Additionally, image-recognition software and predictive analytics can help providers recognize skin cancer in photos, identify arrhythmias in EKGs and determine whether a nodule in a CT scan is malignant. Similarly, predictive analytics can assist providers in determining which patients are at risk of hospital readmission and have health conditions that require intervention.
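As a toy illustration of the readmission-risk idea described above, the sketch below scores patients with a simple weighted sum and flags those above a threshold. The features, weights and threshold are invented for this example, not drawn from any validated clinical model.

```python
# Toy readmission-risk score: weights and threshold are illustrative
# assumptions, not a validated clinical model.

RISK_WEIGHTS = {
    "prior_admissions": 0.4,    # count of admissions in the past year
    "chronic_conditions": 0.3,  # count of chronic diagnoses
    "age_over_65": 0.5,         # 1 if the patient is over 65, else 0
}

def readmission_risk(patient: dict) -> float:
    """Sum weighted features into a crude risk score."""
    return sum(RISK_WEIGHTS[k] * patient.get(k, 0) for k in RISK_WEIGHTS)

def flag_high_risk(patients: list[dict], threshold: float = 1.0) -> list[str]:
    """Return IDs of patients whose score meets or exceeds the threshold."""
    return [p["id"] for p in patients if readmission_risk(p) >= threshold]

patients = [
    {"id": "A", "prior_admissions": 3, "chronic_conditions": 2, "age_over_65": 1},
    {"id": "B", "prior_admissions": 0, "chronic_conditions": 1, "age_over_65": 0},
]
print(flag_high_risk(patients))  # patient A scores 0.4*3 + 0.3*2 + 0.5 = 2.3
```

Real predictive models are trained on large datasets rather than hand-weighted, but the output is the same in spirit: a ranked list of patients who may need intervention.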

However, because AI is constantly evolving, practitioners should keep the principle of informed consent in mind: liability could attach if a physician does not inform the patient of the risks and benefits of both proposed treatment and non-treatment.

To best avoid legal issues that may arise from data interpretation when making a diagnosis or providing treatment recommendations, practitioners should disclose the ever-evolving nature of the technology used, explain how the AI system works, and share the extent to which they relied on it. When used extensively, AI could even be considered another member of the medical team.

Finally, AI clinical decision support programs learn from the health information of some patients through data collection to make recommendations for other patients.

Thus, the treating physician should address the potential issues that may arise related to confidentiality and privacy as patients have a right to know how their protected health information is being disseminated for research, diagnosis and treatment purposes.

Q. How can AI help combat increasing demands on healthcare workers? Are there any legal issues that could impact healthcare workers?

A. AI can assist with administrative workflow. Because AI hosts data in the aggregate, it can help reconcile gaps in medical records and produce automated form fills and responses. Additionally, compiling data from thousands of patients creates streamlined processes for health systems that increase quality of care while maximizing profits.

Documenting chart notes and patient summaries along with writing test orders and prescriptions can take providers away from direct patient care. To alleviate this issue, healthcare practitioners can employ speech and text recognition for tasks like recording patient communication and capturing clinical notes.

Use of AI may also be beneficial as applied to billing and claims processing and prior authorization. Robotic process automation, a rule-based software robot that mimics human actions, can be used to enter, process and adjust claims. Natural language processing, a form of AI that understands and interprets spoken or written human language, can convert physician notes into standardized CPT and ICD-10 codes associated with billing.
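Production coding engines use far more sophisticated natural language processing, but a minimal keyword lookup conveys the note-to-code idea. The codes below are real ICD-10-CM categories; the substring matching and the phrase table are simplified assumptions for illustration only.

```python
# Minimal keyword-to-code lookup sketching how free-text notes map to
# billing codes. Matching by substring is purely illustrative; real NLP
# coding engines handle negation, synonyms, context and much more.

ICD10_KEYWORDS = {
    "type 2 diabetes": "E11.9",       # type 2 diabetes without complications
    "hypertension": "I10",            # essential (primary) hypertension
    "atrial fibrillation": "I48.91",  # unspecified atrial fibrillation
}

def suggest_codes(note: str) -> list[str]:
    """Return candidate ICD-10 codes whose keyword appears in the note."""
    text = note.lower()
    return [code for phrase, code in ICD10_KEYWORDS.items() if phrase in text]

note = "Patient with longstanding hypertension and type 2 diabetes, stable."
print(suggest_codes(note))  # suggests E11.9 and I10 for coder review
```

In practice such suggestions go to a human coder for confirmation rather than straight to the claim, which is also the prudent posture from a liability standpoint.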

Prior authorization includes confirming each patient’s individual health plan and any medical and drug benefits, along with any additional patient information that may be needed for treatment approval, including history, referral information or medical necessity justification.

As health systems continue to expand, the administrative burden of manually entering or searching for the aforementioned information can be reduced through greater dependence on AI that can organize information from electronic health records, emails, policies and medical protocols.

By using computer algorithms and natural language processing to assimilate spoken or written data, clinicians can deliver informed treatment plans to patients in a more efficient manner.

While AI can be used as a tool to combat increasing demands on healthcare workers, it is important to take bias into consideration. Healthcare provider organizations must closely monitor prior authorization decisions made for disadvantaged populations and those on Medicaid, for example.

Pre-existing bias among the programmers who develop the algorithms used in AI systems, whether intentional or unintentional, often leads to further algorithmic and data-driven bias in the AI itself.

For example, a facial recognition algorithm could more easily recognize a white individual than individuals of other races if images of white individuals dominate the training data. Biased inputs that do not reflect the patient population at large can lead to biased outcomes. Therefore, it is likely best practice to evaluate potential bias by completing risk assessments and ethics training.
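One simple form of the monitoring described above is comparing approval rates across patient groups. The sketch below computes per-group prior-authorization approval rates from decision records; the group labels and data are fabricated for the example, and a real risk assessment would use many fairness metrics, not rate parity alone.

```python
# Sketch of one basic fairness check: compare prior-authorization
# approval rates across patient groups. Records are fabricated for
# illustration; real monitoring uses multiple metrics and larger samples.

from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Compute approval rate per group from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += d["approved"]
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    {"group": "medicaid", "approved": 1},
    {"group": "medicaid", "approved": 0},
    {"group": "commercial", "approved": 1},
    {"group": "commercial", "approved": 1},
]
rates = approval_rates(decisions)
print(rates)  # {'medicaid': 0.5, 'commercial': 1.0}

gap = max(rates.values()) - min(rates.values())
print(f"approval-rate gap: {gap:.2f}")  # a large gap warrants human review
```

A large gap does not prove the algorithm is biased, but it is exactly the kind of signal that should trigger the risk assessments the answer above recommends.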

Q. What are some ways AI can help healthcare provider organizations ensure quality patient care and reduce potential liability?

A. AI can help healthcare provider organizations ensure quality patient care on both an individual and a population level. On the individual level, AI-driven insights can provide practitioners with progress updates, detailed history and other patient information to support a more streamlined treatment approach and help predict future outcomes.

Providers can also use AI to study broad community-related factors related to population health. If healthcare provider organizations rely on AI to target high-risk patients grouped by clinical conditions or comorbidities, studies show they can expect fewer and less severe interventions and reduced hospitalizations.

Additionally, when AI’s analytical power is paired with other technologies like robotics and their physical capabilities, even stem cell procedures used in regenerative medicine can improve and help restore or establish normal function in cells, tissues or organs. By spending less time on lengthy processes and manual tasks, providers can dedicate more hours to patient-centered care.

Beyond providers’ implementation of AI into their respective practices, manufacturers and developers are involved in AI progression. Thus, it is important to consider their roles when aiming to reduce potential provider liability.

Manufacturers and developers must ensure that products containing AI are safe when used in reasonably foreseeable ways so that healthcare provider organizations avoid implementing a defective AI system into their practice.

Thereafter, the healthcare provider organization should assess the impact of the automated systems it chooses to use and consider how much it will rely on these systems to make informed choices when treating patients.

AI is a beneficial tool, but ultimately the diagnosing or treating physician is responsible for medical decisions, and thus also responsible if the care provided is negligent or reckless.

By using AI to pair high-quality datasets with predictive algorithms, in addition to their own medical knowledge, practitioners can form complex connections among multiple patient characteristics, deliver a more precise diagnosis and, in turn, reduce potential liability.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email the writer: [email protected]
Healthcare IT News is a HIMSS Media publication.
