Suzanne Wait - Health Policy Partnership

Catherine Whicher

AI: the emerging role of policy

1 August 2024

Optimising the use of AI in healthcare will require solid policy frameworks coupled with a dose of humility.

Is AI going to take over the world as we know it? This question (or possibly a less dramatic iteration of it) has featured at many a dinner table, conference podium and media outlet since the launch of ChatGPT in late 2022.

Although this was not the first application of generative AI – that is, AI able to create new content – the capabilities seen when these technologies were put into practice far exceeded their developers’ expectations, causing excitement but also setting off some alarm bells.

In response, several founders of these technologies – with Mustafa Suleyman, co-founder of DeepMind, at the helm – have called for appropriate guardrails to control and monitor the use of AI, with independent oversight. Suleyman suggests applying the precautionary principle: all new AI models should be thoroughly stress-tested to identify and understand the full extent of any possible risks before full deployment.

Policy to the rescue?

Policy can play a significant role in ensuring our use of AI aligns with societal goals and does not take on a ‘will of its own’ – a concept known as AI alignment. The EU AI Act, which was published in the Official Journal of the European Union in July 2024 and came into force on 1 August 2024, is one of the most comprehensive AI policies to date.

The Act proposes a risk-based approach to regulating AI to ‘ensure safety and compliance with fundamental rights, while boosting innovation’. It recognises that some applications of AI are higher risk than others: healthcare would fall into this ‘high risk’ category, along with democratic processes (e.g. elections), employment and education.

For such higher-risk applications, the Act stipulates that developers and users must implement comprehensive risk assessment and mitigation strategies, maintain records of use, and ensure transparency, accuracy and human oversight. Critically, the Act also gives citizens the right to issue complaints and seek further explanations for any decisions based on high-risk AI systems that affect their rights – presumably including decisions about employment or healthcare.

A new era of AI diplomacy

The EU is not alone in advancing AI policies. The US is developing its own AI legislation and has published a Blueprint for an AI Bill of Rights to ensure AI-based technologies protect people’s rights, such as the right to privacy.

There is also growing international convergence when it comes to AI. At the 2023 UK AI Safety Summit, 28 governments and leading AI companies signed a joint commitment to advance regular, scientist-led assessments of AI capabilities and safety risks. More recently, a technical dialogue took place between the European AI Office and the US AI Safety Institute. A new era for AI diplomacy has begun.

The importance of global governance

AI transcends national borders, so global governance will be a necessary complement to national policies. The WHO has established principles for safe and ethical AI to protect autonomy, promote human safety and wellbeing, ensure transparency and explainability, foster responsibility and accountability, ensure inclusiveness and equity, and promote responsible and sustainable AI.

UNESCO also offers comprehensive recommendations for the ethics of AI. In healthcare, several multi-stakeholder organisations like HealthAI have emerged as neutral, non-profit partners focused on expanding countries’ capacity to regulate AI and leverage standards to the benefit of all. What is promising (and important to mention) is that much of this global momentum for independent oversight is being encouraged by the very companies that are developing these technologies.

The way forward – ‘worry wisely’ and shape progress

There is no doubt that the integration of AI into healthcare, and society more generally, will be transformational, but some of the potential risks are difficult to grasp. AI has already shown demonstrable impact in healthcare – for example, by drastically reducing the time taken to review brain scans – yet it is still in its infancy. Part of ensuring progress is building the right safeguards to mitigate risks.

Getting this right will require a collaborative approach: policies guided by up-to-date evidence, commonly agreed standards that are effectively monitored, and continuous knowledge-sharing between all sectors – with a constant commitment to engage with citizens and users.

We also need a big dose of humility, as many questions remain unanswered about the use of AI in healthcare. So let’s foster innovation carefully and responsibly, and be agile in our approach, as the policies established today will shape the role of AI in our societies and affect us for decades to come.

The opinions expressed in this blog are those of the authors and do not necessarily represent the views of The Health Policy Partnership.