Designing safe and ethical self-driving services: the role of public engagement

Guest author Ed Houghton - Head of Research and Service Design at DG Cities

Public awareness of artificial intelligence (AI) has never been higher. The seemingly overnight emergence of ChatGPT into the public consciousness has radically shifted the public conversation on the role of machine learning in our daily lives. News and social media have been captured by the potential of an AI-supported future. The workplace, home, leisure, and travel are all aspects of our daily lives where AI is expected to play an increasing role.

In many ways, the use of AI in self-driving services is one of the most visible examples of AI in our (future) daily lives. Whilst other applications, such as assisting with administrative tasks in the workplace or supporting children with their homework, are fully digital, the operation of a vehicle – something many people have learned, and many enjoy – is very different. The self-driving vehicle represents a physical manifestation of AI decision making in practice, and for this reason it is one of the most potent images of the AI future.

Much of the research into the self-driving services of the future has focused on the development of technology and testing its ability to operate within the legal framework of its environment. Legal rules, however, are only part of the wider system in which the self-driving vehicle must drive. A key element often missing from discussions is the ethical boundary of what is “right” when it comes to AI decision making. For a self-driving service operating on a road, knowing what is “right” is hugely complex and, by its nature, specific to the individual and their requirements as a potential user.

Understanding what is “right” driving behaviour in the minds of the public was the topic of recent research we conducted with Reed Mobility, TRL, Humanising Autonomy, and April Six. For ‘Ethical Roads’ we used deep public engagement methods to define, together with the public, the red lines of acceptable decision making. Through workshops we were able to explore complex scenarios in which a self-driving bus would need to make decisions: for example, should the bus speed up to make up for “lost time” when delayed, reducing rider comfort but benefiting journey times? Should the operator of the self-driving bus share incident data with other bus operators to improve the quality of services, or should data be protected as company intellectual property? Our study highlighted clear agreement on some ethical dilemmas, and clear differences on others, showing how operating to the legal minimum alone cannot meet the needs of a diverse population of future users.

The need to go beyond regulation and legal rules is further underlined by recent data from the Department for Transport study ‘The Great Self-Driving Exploration’, which highlights just how important public engagement is to the design of future services. In this work researchers noted the critical importance of strong regulation and rigorous testing for building public trust. As with other forms of AI, the public has high expectations of the design, rollout, and operation of self-driving technology. Key concerns over accessibility and trust are clear barriers to adoption, and as such, more must be done to co-design and develop services with public input.

What is clear is that the design, deployment, and operation of self-driving technologies, operated by AI, cannot come through top-down, closed-door decision making. Trust in AI, and in those who operate it, is only part of a complex puzzle: research regularly points to considerable mistrust in institutions – government, technology, and business – even when checks and balances such as ethics committees are deployed. Governance and assurance mechanisms designed to foster trust are therefore not, on their own, enough to enable diverse communities to feel trust, particularly when the subject is something many consider radical, like self-driving vehicles.

Instead, the self-driving ecosystem of technology developers, academic researchers, policy makers, and civil society partners must collaborate to draw the public into the design of future services. Asking what the ‘right’ behaviour is serves as a powerful tool: it not only builds in safety, but also demonstrates to the public their role in defining how a self-driving service should behave, and ultimately illustrates who self-driving services are intended to benefit.

The recent rapid development of self-driving technologies means that the UK is getting closer and closer to new services – but without deep engagement there are real risks to adoption and acceptance. Work defining the “red lines” of appropriate behaviour shows that continual engagement is critical to acceptance. As recent work highlights, there are co-design approaches that can help secure public buy-in at every stage of the design process. These methods can make widescale adoption more likely and help ensure that as many people as possible benefit from an AI-driven future.

References

1. REED Mobility (2022) Ethical Roads. https://www.reed-mobility.co.uk/ethicalroads

2. Department for Transport (2022) The Great Self-Driving Exploration. https://www.gov.uk/government/publications/self-driving-vehicles-public-perceptions-and-effective-communication

3. Edelman (2023) Edelman Trust Barometer – Global report 2023. https://www.edelman.com/trust/2023/trust-barometer
