To ensure that healthcare ML is used ethically, it must be subject to medical ethics oversight. This can be achieved by leveraging the established research ethics framework for clinical ML development.
Physicians have two essential qualities that algorithms lack: empathy and benevolence, which are critical for establishing trusting patient-physician relationships. This article adds to the ongoing discussion by exploring these issues through the lens of professionalization theory.
Transparency is a critical component of ethical machine learning in healthcare. Following several high-profile cases in which ML systems have caused or been linked to harm, there is growing demand for a more deliberate approach to AI. This demand is reflected in activism from civil society, the emergence of over a hundred sets of AI ethics principles globally (Linking Artificial Intelligence Principles), and government moves worldwide to regulate AI.
However, defining what constitutes ethical ML and building systems that satisfy that definition is far from straightforward. Ethical principles often operate at a level of abstraction that makes them broadly universal, but how they apply in any given socio-political context must be negotiated. Moreover, there are often trade-offs between principles – for example, collecting more data about non-typical users to better tailor solutions to them (for fairness purposes) can come at the expense of privacy.
Fortunately, health care is one of the few sectors with a rich history of addressing ethical risks – a tradition proving helpful in developing practical, transparent ML tools. Leaders in the industry should take inspiration from this work, identifying the potential human rights impacts of their products and providing access to redress for those whose rights are violated by these technologies. In addition, they should consider the possibility that their algorithms may be biased and develop ways to identify and mitigate this risk.
Fairness – treating all individuals equitably – is critical in healthcare, including ensuring equal access to high-quality healthcare services and preventing discrimination against patients of different social statuses. This can be accomplished by building AI-driven healthcare systems that control for bias and provide accurate diagnoses across patient groups.
Biases in ML can originate from various sources, including the data used to train algorithms (data bias) and inherent design or learning mechanisms in the algorithm itself (algorithmic bias). Fairness in healthcare requires identifying, mitigating, and controlling these biases so that all patients receive equitable treatment.
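Data bias of this kind can often be surfaced with a simple audit of prediction rates across groups. The sketch below computes a demographic parity gap; the function name, toy predictions, and group labels are all illustrative assumptions, not a real tool or dataset.

```python
def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between the most and
    least favored groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g., "A" or "B"), same length
    """
    rates = {}
    for label in set(groups):
        members = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy example: group "A" is flagged positive far more often than group "B".
preds = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"parity gap: {gap:.2f}")  # 0.80 - 0.00 = 0.80: a large disparity
```

A gap near zero does not prove a model is fair (other criteria, such as equalized error rates, may still be violated), but a large gap is a clear signal that the data or the model needs scrutiny.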
A key aspect of fairness is respecting individual autonomy and avoiding coercion or other forms of manipulation in gathering, processing, and disseminating personal data. This is especially important when using ML in healthcare, where sensitive information such as health records is collected.
For example, it is unethical to use ML to determine a patient’s eligibility for a specific service based on their age, disease or disability, political affiliation, creed, ethnicity, gender, social standing, or sexual orientation without explicit consent. Similarly, using a person’s location or device data to deliver a tailored service without consent is unethical. These issues are often overlooked when deploying ML in healthcare, but addressing them is essential to maintaining patient trust and improving clinical outcomes.
ML systems should be held accountable for their decisions and for the impact of those decisions on people. The key to accountability is transparency: a clear explanation of how the system reaches its conclusions should be provided, both for human users of the system and for the organizations that develop and deploy it. Ultimately, ML systems should be interpretable enough that humans can oversee them, enabling impact assessment and auditing.
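One concrete way to make a system's choices auditable is to use a model whose per-feature contributions can be read off directly. A minimal sketch, assuming a hypothetical linear risk score; the feature names and weights are made up for illustration.

```python
# Hypothetical linear risk score: every contribution to the final score
# is visible, so a human reviewer can audit each individual decision.
WEIGHTS = {"age": 0.03, "systolic_bp": 0.02, "smoker": 0.5}
BIAS = -4.0

def explain_score(patient):
    """Return the raw score and the per-feature contributions behind it."""
    contributions = {k: WEIGHTS[k] * patient[k] for k in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return score, contributions

patient = {"age": 70, "systolic_bp": 150, "smoker": 1}
score, parts = explain_score(patient)
# Print contributions largest-first so the dominant factors stand out.
for name, value in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {value:+.2f}")
print(f"score: {score:+.2f}")  # -4.00 + 2.10 + 3.00 + 0.50 = +1.60
```

Inherently interpretable models like this trade some predictive power for oversight; for opaque models, post-hoc explanation methods (e.g., permutation importance) play a similar role, though their faithfulness must itself be validated.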
Another aspect of accountability is mitigating bias and ensuring fairness in machine learning. Biases in training data can be perpetuated and even amplified by deployed models, with significant consequences for people’s lives (as in hiring or credit scoring). This is particularly concerning in healthcare, where misdiagnosis can have life-threatening consequences.
While the ethical concerns of ML in healthcare are new, many of the solutions aren’t. These issues are not very different from the medical ethics and research standards that have existed for centuries – from the Hippocratic Oath to ethical guidelines for clinical trials. These are all about ensuring that outside accountability is in place and, ultimately, people can trust the healthcare system to keep them safe. Whether these principles are achieved through laws, policies, or best practices, healthcare organizations must take them seriously and ensure that external accountability measures are in place.
Privacy is a critical element of ethical machine learning, covering both the data used in training and the model’s outputs. Using sensitive personal information without an individual’s consent can lead to harm, including discrimination, monetary loss, reputational damage, or even identity theft. These harms can be especially severe in healthcare, where data collection is often tied to a specific patient’s treatment outcomes.
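A standard technique for protecting individuals in aggregate outputs is the Laplace mechanism from differential privacy: add noise calibrated to the query's sensitivity before releasing a statistic. A minimal sketch with an illustrative epsilon and synthetic records (not real patient data):

```python
import math
import random

def private_count(records, predicate, epsilon, rng):
    """Release a count with Laplace(1/epsilon) noise.

    A counting query has sensitivity 1 (one record changes the count by
    at most 1), so noise with scale 1/epsilon gives epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

rng = random.Random(0)  # seeded for reproducibility of this sketch
records = [{"diabetic": i % 3 == 0} for i in range(300)]  # true count: 100
noisy = private_count(records, lambda r: r["diabetic"], epsilon=0.5, rng=rng)
print(f"noisy count: {noisy:.1f}")  # close to, but not exactly, 100
```

Smaller epsilon means stronger privacy but noisier answers; choosing epsilon, and accounting for repeated queries against the same data, is itself a policy decision rather than a purely technical one.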
Moreover, if a clinician and an algorithm disagree on a diagnosis, the clinician will likely be asked to justify her decision. This puts the clinician at greater risk of being held responsible for harm to a patient than in a disagreement between two human clinicians. The underlying problem of responsibility for harm is not new to medicine or research; it has been a concern since the Hippocratic Oath.
Several academic studies have developed frameworks to help researchers address the ethics of ML in their work, but many questions remain, particularly around the scope of these frameworks. For example, a set of ethical principles applicable across a broad range of socio-political contexts may be essential, yet the definition of those principles will depend on local conditions and must be negotiated. It is also crucial to consider how to balance competing concerns within a given project (for instance, collecting more data about non-typical users for fairness purposes might come at the expense of privacy). The bottom line is that ML can profoundly impact people’s lives and should be designed to minimize potential harm.