Techniques such as AI and Machine Learning are creating new opportunities for healthcare and changing the way that people interact with healthcare services. There are many benefits associated with new technology – for example, supporting a move away from hospital-based care towards the home, increasing patients’ independence, helping us to make better decisions and reducing the burden on healthcare providers. This type of technology can be used to increase efficiency by speeding up the clinical trials process, reducing paperwork, limiting unnecessary procedures and getting more done with less.
Nevertheless, there are also some concerns about introducing modern technologies. For instance, which aspects of healthcare do not lend themselves to automation or AI? This is not a simple issue – consider, for example, questions around machines making life-and-death decisions. The cost of change in healthcare is high – it is a regulated environment and we need to consider the feasibility of implementation. We want to get things right, but we may not always know how (for example) certain types of algorithm arrive at the solution that is presented. It follows that there may be seemingly random, emergent and unpredictable side effects. We have a duty to understand these concerns before pressing ahead.
How can modern medical technologies be trusted? How can they be governed and regulated? We all get annoyed when software fails to work in the way we need it to, or gives the wrong results. In healthcare, the consequences can be serious: patients may receive the wrong diagnosis and, in extreme circumstances, they may be harmed. Some of the repercussions may not be immediately obvious, or may only become apparent when considered across a population. Although no one wants to delay the introduction of new technologies, we also need to explore what can be done to get the most out of them, act in the public interest and include appropriate safeguards.
Who is to be held responsible?
We need to clearly define the boundaries of a system and who is responsible for it. ‘Intelligence’ may sit on a server and be enabled through data-sharing and connectivity. Such systems are inherently distributed across regions and organisations, and it can be hard to understand exactly how the technology is functioning, how safety is being assured and who is taking responsibility for it. These systems are not static; they have many interdependencies and interactions. They may have been tested, but we don’t know how, by whom or which parts. As users of network-enabled technology, we might find ourselves asking questions around provenance. For example, we will never have seen the software code, we won’t have met the people who wrote it and, in some cases, we will have little way of knowing whether the code has been written correctly (i.e. whether it is doing the job as intended). We don’t know when the software was written, for what reason, when it was last updated, what input it is receiving or how it arrived at a given output or calculation. We potentially put ourselves in this situation every day – we have no way of knowing how most of the systems that we use function, but we take it on trust that they will not cause any harm. Where is the evidence to show that this is the case? Do we know how to test and evaluate such systems, and how do we provide an appropriate level of assurance?
Although there have been great advances over recent decades, public understanding and appreciation of the broader implications of mass digitisation and these new forms of technology has lagged behind (especially in healthcare). The time has come to take stock and consider the profound impact that such technology can have. How informed is society, and how aware are we of the positive and negative aspects of this type of medical technology? For instance, technologies may not be applied in our best interest; they may substitute for human contact and, in rare circumstances, they may be unsafe.
Shared information such as that from GPS tracking tools, step counters, social media and internet browsing makes it possible to build personal profiles and track individuals in a way that wasn’t previously possible. AI, machine learning and big data allow these sources to be amalgamated into a much richer picture of who we are and what we do. Effectively, these techniques can be used to reverse anonymisation (although this may be prohibited). From a healthcare perspective, we don’t know how this information is being used. For example, it could be used to cherry-pick low-risk individuals or to deny treatment to high-risk individuals. Perhaps the most concerning aspect of this is that we have little to no way of knowing what is going on and to what extent. Healthcare is changing, but we have little understanding of why, how and who is responsible. It is also the case that virtual technologies may be substituted for face-to-face or human contact when it is not appropriate to do so.
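To make the re-identification risk concrete, here is a minimal sketch of a classic linkage attack: an ‘anonymised’ health dataset is re-linked to named individuals by joining it with a public dataset on shared quasi-identifiers (postcode, birth year, sex). All names, fields and records here are invented for illustration; this is not a description of any real system.

```python
# Hypothetical linkage (re-identification) attack: names have been
# stripped from the clinical data, but quasi-identifiers remain and
# can be joined against a public source to re-attach identities.
import pandas as pd

# "Anonymised" clinical records (invented example data).
clinical = pd.DataFrame({
    "postcode": ["SW1A 1AA", "M1 2AB", "LS1 4DX"],
    "birth_year": [1956, 1984, 1972],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# Public or commercial data (e.g. an electoral-roll-style extract).
public = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones", "Carol White"],
    "postcode": ["SW1A 1AA", "M1 2AB", "LS1 4DX"],
    "birth_year": [1956, 1984, 1972],
    "sex": ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
reidentified = clinical.merge(public, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

The point of the sketch is that no single field identifies anyone; it is the combination of ordinary attributes across datasets that does.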
In a recently publicised case, a doctor told a patient, via a robot, that he was going to die. The clinician (mediated through the robot) told the patient that ‘he has no lungs left only option is comfort care, remove the mask helping him breathe and put him on a morphine drip until he dies.’ The case raises many obvious questions. Technology can’t substitute for human contact, and we need to be sensitive to when this is the case. It follows that, as computational systems become progressively more capable, they may help predict when we will die – an even more contentious scenario would involve this type of robot relaying such information with no clinician involved. Do we feel comfortable with this scenario?
A degree of transparency
Another concern relates to the feasibility of assuring the safety of AI systems. Although we have an established track record of doing this in the field of automation, newer forms of technology challenge our approach. For example, AI systems are highly sensitive to their environment: their behaviour depends on the properties of the input data and can vary over time. This type of system could be sub-optimal in the way it is programmed or implemented, or it may have been built on limited, incomplete or poorly mapped data. Conditions at the time the system is tested may differ from conditions at the time it is used. Computational systems can be deliberately misled for profit; they can be hacked and hijacked. As the scale of the system grows, so does the potential for harm. We don’t know whether this type of technology allows for monitoring (e.g. do we have the means to know when things go wrong?) – traditional vigilance mechanisms may not work (for instance, an individual may not be aware of a loss of personal data, but they may still be impacted by it).
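One possible safeguard against the gap between test-time and use-time conditions is to monitor the inputs a deployed system receives and flag when they drift away from the data it was validated on. The sketch below illustrates this idea with a simple two-sample Kolmogorov–Smirnov test; the feature, data and alert threshold are all assumptions for illustration, not a prescribed method.

```python
# Minimal sketch of post-deployment input monitoring: compare the
# distribution of one input feature seen in production against the
# validation data. A significant difference suggests the system is
# operating outside the population it was validated on.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values the model was validated on (e.g. patient age).
validation_ages = rng.normal(loc=55, scale=12, size=1000)

# Values observed in production; deliberately shifted here to
# simulate use on a different population.
production_ages = rng.normal(loc=40, scale=12, size=1000)

statistic, p_value = ks_2samp(validation_ages, production_ages)
if p_value < 0.01:  # alert threshold is an assumption
    print(f"Input drift detected (KS={statistic:.2f}, p={p_value:.1e}): "
          "model may be running outside its validated population.")
else:
    print("No significant drift detected.")
```

A check like this does not assure safety on its own, but it gives operators a trigger for the kind of vigilance that traditional mechanisms may miss.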
By raising these questions, we don’t offer an opinion on the suitability of these forms of technology; we simply make the point that there needs to be a degree of transparency. A growing dialogue is occurring around these topics, and an increasing number of resources is becoming available in this field (see the list below), but how involved are manufacturers and pharmaceutical/technology companies? We encourage an open and informed dialogue between the public, the medical community and industry in moving things forward.