A road traffic collision, a domestic violence alert, a potential arson, and one police unit. Who gets priority? Emergency dispatch centres have spent decades managing limited resources carefully to ensure that as many lives as possible are saved. Yet with cuts in public funding leading to fewer units on the road and a lack of staff affecting almost every Public Safety Answering Point (PSAP), matching dwindling resources to growing demand is a challenge. There may be a new solution: Artificial Intelligence (AI) systems are showing outstanding potential across emergency service responses, from routing ambulances through traffic to identifying important background information on calls.
But as public concern over AI grows and lawmakers move to curb its rapid rise in society, restrictions on AI systems that would benefit emergency services must strike a balance between fair limitations and room for progress. Are we at risk of squeezing innovation out of potentially life-saving technology? Is there a way to craft laws that ensure AI systems respect our societal values while still allowing them to be used to their full potential?
There is no denying that AI has never been more present in public discussion – nor more controversial. Much of the wariness is perpetuated by the idea that we’re all going to be replaced by robots in a sci-fi dystopian nightmare. Yet AI already plays a huge role in our lives. McKinsey’s 2022 Global Survey on AI found that over 50 per cent of companies have integrated AI into at least one business function, most commonly in product and service development and in service operations. Your streaming service recommends what to watch based on what you’ve already seen (and whether you turned it off half-way through a dull episode). Smart speakers use machine learning to respond to spoken commands, playing your favourite music or telling you the time. Vehicle navigation systems (like SatNavs) use AI to determine the best route for your journey, processing live traffic conditions, road quality, and historical traffic patterns to give you the quickest and most efficient route.
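To make that concrete, here is a minimal sketch of the idea behind traffic-aware routing: model the road network as a graph whose edge weights are expected travel times, adjusted for congestion, and search for the cheapest path. The network, travel times, and place names below are invented for illustration; production navigation systems use far richer data and learned models.

```python
import heapq

# Toy road network: edge weights are expected travel times in minutes,
# already adjusted by (hypothetical) live congestion data.
road_network = {
    "station": {"junction_a": 4, "junction_b": 6},
    "junction_a": {"incident": 14},   # short road, but heavy traffic
    "junction_b": {"incident": 7},    # longer road, nearly clear
    "incident": {},
}

def quickest_route(graph, start, goal):
    """Dijkstra's algorithm: returns (total_minutes, route)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in visited:
            continue
        visited.add(node)
        for neighbour, minutes in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + minutes, neighbour, route + [neighbour]))
    return None

print(quickest_route(road_network, "station", "incident"))
# -> (13, ['station', 'junction_b', 'incident'])
```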
It is easy to see how efficient route planning can benefit emergency services: getting an ambulance, fire truck, or police unit to an emergency more quickly can be the difference between life and death. Communication matters too: according to some estimates, miscommunication between medical staff contributes to up to 80 per cent of clinical errors. AI systems can give an emergency department an accurate estimated time of arrival for an ambulance, giving the hospital vital time to prepare for an incoming patient. Studies have also shown that AI can learn from complex urban geography to predict next-day demand for emergency services, making staffing and other preparations easier.
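The forecasting models in those studies are sophisticated, but the underlying idea can be illustrated with a deliberately simple baseline: predict tomorrow’s demand in a district from the average of past days that fall on the same weekday. The call counts and district names below are invented.

```python
from datetime import date
from statistics import mean

# Hypothetical history of daily emergency call counts per district.
history = [
    (date(2023, 6, 5), "north", 42),   # Mondays tend to be busy here
    (date(2023, 6, 12), "north", 45),
    (date(2023, 6, 19), "north", 40),
    (date(2023, 6, 6), "north", 31),   # Tuesdays are quieter
    (date(2023, 6, 13), "north", 29),
]

def forecast(history, district, target_day):
    """Average past demand for the same weekday in the same district."""
    same_weekday = [count for day, dist, count in history
                    if dist == district and day.weekday() == target_day.weekday()]
    return mean(same_weekday)

print(forecast(history, "north", date(2023, 6, 26)))
# -> 42.33..., i.e. roughly 42 calls expected on that Monday
```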
In a study conducted for the Brabant Zuid-Oost (BZO) EMS region in the Netherlands, researchers investigated the effect of AI systems on dispatch decisions, with the goal of meeting the Netherlands’ national target of responding to 95 per cent of highly urgent cases within 15 minutes. By feeding the AI the decision tree that dispatchers currently use, along with historical examples of how dispatchers had applied it, the system was able to make dispatch decisions much more quickly than human call-takers. On-time response performance for highly urgent requests increased by 0.77 per cent, the equivalent of adding more than seven weekly ambulance shifts, according to the study.
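A rough sketch of that approach, learning dispatch choices from historical examples, might look like the following. The features, labels, and training data are hypothetical stand-ins (the real study’s decision tree and inputs were far more elaborate), and scikit-learn is assumed to be available.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data distilled from historical dispatcher decisions.
# Features: [urgency (1 = highest), km to nearest free unit, units free]
X = [
    [1, 2.0, 3], [1, 9.0, 1], [2, 3.5, 2],
    [3, 1.0, 4], [1, 4.0, 2], [2, 8.0, 1],
]
# Labels: the action the human dispatcher actually took in each case.
y = [
    "send_nearest", "send_nearest", "send_nearest",
    "queue", "send_nearest", "queue",
]

# Fit a shallow tree so its decisions stay interpretable to dispatchers.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Suggest an action for a new highly urgent call.
print(model.predict([[1, 5.0, 2]]))  # e.g. ['send_nearest']
```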
But that isn’t where the benefits of AI in emergency services end. The European Emergency Number Association (EENA) partnered with Corti.ai to explore how out-of-hospital cardiac arrests (OHCAs) could be detected using artificial intelligence. With Corti.ai, when a bystander or victim calls the emergency services, the AI acts as an assistant to the call-taker, listening for particular signs or signals in what the caller is describing to help detect a potential cardiac arrest faster. With the current survival rate of OHCAs in the UK at 8.6 per cent, AI has the potential to save thousands of lives each year: research from Copenhagen indicates that Corti can increase that survival rate to 20 per cent, reducing undetected cardiac arrest cases by 40 per cent. The project concluded that artificial intelligence does have the potential to assist the decision-making of emergency call-takers by increasing the accuracy of OHCA detection. EENA Managing Director Jerome Paris noted that: “The EENA Corti project was an important learning experience for the use of AI in emergency services, demonstrating not only the potential of the technology, but also how to overcome significant challenges to pave the way for the future of emergency response.”
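As a toy illustration of the ‘assistant listening for signs’ idea, the snippet below scores a live call transcript against a hand-written list of OHCA cues. A system like Corti uses trained acoustic and language models rather than anything this simple; the cues, weights, and alert threshold here are invented.

```python
# Invented cue list and weights, for illustration only.
CARDIAC_ARREST_CUES = {
    "not breathing": 0.5,
    "no pulse": 0.5,
    "unconscious": 0.3,
    "gasping": 0.4,       # agonal breathing is a classic OHCA sign
    "turning blue": 0.4,
}

def arrest_suspicion(transcript: str) -> float:
    """Crude additive score in [0, 1] from cues heard so far in the call."""
    text = transcript.lower()
    score = sum(weight for cue, weight in CARDIAC_ARREST_CUES.items() if cue in text)
    return min(score, 1.0)

live_text = "He just collapsed, he's unconscious and he's gasping, sort of"
if arrest_suspicion(live_text) >= 0.6:
    print("Prompt call-taker: possible cardiac arrest, offer CPR instructions")
```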
Understanding audio through speech recognition offers other significant benefits for emergency services. While AI systems are unlikely to be listening out for ‘Hey Siri’ during an emergency call, they can identify background noises that are essential to understanding the situation. Emergency calls can be noisy and of poor quality, and callers may be panicked or unable to communicate clearly. That is the issue that inspired the first AI designed specifically with emergency dispatch in mind: a system employed by Magen David Adom that transcribes poor-quality phone calls and saves valuable time for call-takers. The AI identifies and automatically transcribes keywords related to medical emergencies, such as mentions of chest pain or a vehicle accident, saving the time a call-taker would otherwise spend asking the caller to repeat themselves and helping them dispatch the necessary resources efficiently.
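One way to picture the keyword-spotting step is fuzzy matching over an imperfect transcript, so that a garbled ‘chest pian’ still raises the ‘chest pain’ flag. The keyword list and similarity cutoff below are invented; a deployed system works on the audio itself with a full speech-recognition model.

```python
import difflib

# Hypothetical keywords a dispatch tool might scan for.
KEYWORDS = ["chest pain", "vehicle accident", "fire", "bleeding"]

def spot_keywords(noisy_transcript: str, cutoff: float = 0.8):
    """Flag keywords even when transcription slightly mangles them."""
    words = noisy_transcript.lower().split()
    found = set()
    for keyword in KEYWORDS:
        size = len(keyword.split())
        # Slide a window of the keyword's length across the transcript.
        for i in range(len(words) - size + 1):
            phrase = " ".join(words[i:i + size])
            if difflib.SequenceMatcher(None, phrase, keyword).ratio() >= cutoff:
                found.add(keyword)
    return sorted(found)

print(spot_keywords("he has chest pian and there was a vehical accident"))
# -> ['chest pain', 'vehicle accident']
```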
Disasters and large-scale events have not been neglected in the AI boom either. In scenarios where many people – sometimes thousands – require relevant, up-to-date information immediately, AI can act quickly. Social media listening can spot indications of a potential disaster before emergency services are alerted through emergency communications (such as calls to 999 or 112), identifying when a new topic is trending and notifying authorities and first responders in a timely manner. During a large-scale event, callers may face long waits to reach emergency services because a limited number of staff are handling thousands of calls. A chatbot, a computer programme designed to simulate conversation with users, can respond to multiple queries simultaneously, freeing up human call-takers for more complex enquiries.
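At its core, social media listening for emergencies is spike detection: flag a topic whose current volume far exceeds its recent baseline. The sketch below shows that logic with invented counts; a real pipeline would stream mention counts from platform APIs and use more robust statistics.

```python
# Invented example: mentions of a hashtag per 10-minute window.
baseline_windows = [3, 5, 2, 4, 6, 3]   # "#storm" mentions, quiet period
current_window = 48                     # sudden surge

def is_trending(history, current, factor=5):
    """Flag a topic when current volume is far above its recent average."""
    average = sum(history) / len(history)
    return current > factor * max(average, 1)

if is_trending(baseline_windows, current_window):
    print("Alert duty officer: unusual spike in '#storm' mentions")
```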
So, with AI offering such potential for emergency services, what is the problem? Inevitably, any new technology faces laws and regulations. There are worries that AI systems can lead to harmful outcomes, largely due to a lack of consent (should we be required to ‘opt in’ to AI systems?) and technical vulnerabilities in the systems themselves. There are also concerns about how data collected by AI systems is held, particularly with regard to facial recognition. While facial recognition is already employed by law enforcement in many circumstances as a counter-terrorism measure, faces cannot be encrypted in the way many other kinds of data (such as financial details) can. A captured facial scan that misidentifies someone could have long-term consequences, and research already indicates that feeding AI systems data containing racial or gender biases can lead those systems to discriminate in turn.
As a result, the European Commission proposed the first EU regulatory framework for AI. Under the proposal, AI systems that can be used in different applications are analysed and classified according to the ‘risk’ they pose to users, with different risk levels attracting different levels of regulation. The European Parliament adopted its position on the AI rulebook with an overwhelming majority on June 14, 2023, paving the way for the interinstitutional negotiations set to finalise the world’s first comprehensive law on Artificial Intelligence.
It should be noted that the specifics of this law are still very much up in the air. While the European Parliament has adopted a negotiating position, the act must now be discussed and agreed with the Council of the EU, made up of the member states, in a series of trilogue negotiations. This will take some time, particularly as some amendments are considered controversial, and the negotiations will inevitably reshape the law’s final form. This article discusses the law only as adopted by the European Parliament.
So, what does the law actually say? Firstly, it defines AI: “[It is] software that… for a given set of human-defined objectives, generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” The law then sets out the categories into which AI systems may be placed. Systems that would be banned outright include any AI that causes physical or psychological harm to a person, AI that implements social or trustworthiness scores, and, particularly relevant for emergency services, “real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.” There are some exceptions when such use is ‘strictly necessary’: searching for specific potential victims of crime, including missing children; preventing a specific, substantial, and imminent threat to the life or physical safety of natural persons, or a terrorist attack; and the detection, localisation, identification, or prosecution of a perpetrator or suspect of a criminal offence.
Some systems will not be banned but will be considered high-risk. These include the biometric identification of natural persons as well as AI systems managing critical infrastructure (such as road traffic and the supply of water, gas, heating, and electricity). The law specifically notes that “AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid” are to be considered high-risk.
What restrictions will these high-risk systems face? To summarise very briefly the 108 pages of the proposal and the 349 pages of amendments: a risk management system will need to be established and maintained, identifying known and foreseeable risks and adopting suitable mitigation measures. High-risk systems will need to record events automatically and be developed so that users can interpret the system’s output and use it appropriately. Every system will need to be designed so that a human can oversee it throughout its period of use, and regular testing will be required to ensure compliance with these rules. A series of transparency obligations will also ensure that users of a system know they are interacting with an AI.
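Two of those obligations, automatic event recording and human oversight, map onto familiar engineering patterns. The sketch below wraps a stand-in dispatch recommendation model so that every output is logged with a timestamp and a human must confirm before anything is dispatched. It illustrates the pattern only; it is not a statement of what the Act legally requires.

```python
import json
import logging
from datetime import datetime, timezone

# Automatic event recording: persist every model interaction to an audit log.
logging.basicConfig(filename="dispatch_audit.log", level=logging.INFO)

def recommend_dispatch(call_details):
    """Stand-in for a real model; returns a recommendation and confidence."""
    return {"action": "send_nearest_ambulance", "confidence": 0.91}

def assisted_dispatch(call_details):
    recommendation = recommend_dispatch(call_details)
    logging.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "input": call_details,
        "recommendation": recommendation,
    }))
    # Human oversight: the system only suggests; a person makes the decision.
    answer = input(f"AI suggests {recommendation['action']} "
                   f"(confidence {recommendation['confidence']:.0%}). Accept? [y/n] ")
    return recommendation["action"] if answer.strip().lower() == "y" else None

assisted_dispatch({"category": "chest pain", "location": "High Street"})
```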
These restrictions certainly will not stop the use of AI in emergency services altogether. So what can we expect in the near future? Chatbots, language detection and translation, quality assurance, triage, staffing, and mental health support are just a few areas of emergency response where AI has the potential to improve things. AI will feature heavily at the EENA 2024 Conference, and EENA has launched a special project on the use of AI in emergency services, bringing together companies offering AI products and PSAPs to trial and test AI in live environments.
Nothing will ever replace humans in emergency services. The ability to empathise, connect, and work selflessly to save the lives of others is what defines first responders. But humans are a valuable and limited resource. Rather than seeing AI as a replacement, there is ample evidence of AI systems being used as enhancements, allowing human call-takers to focus on what they do best. Humans should always make the final dispatch decision, but with AI guiding and accelerating the decision-making process, they can do so with the best information possible. It is imperative that legislation around AI keeps the interests of humans at heart, not only by ensuring that AI systems respect and follow our values but also by allowing careful and considered innovation where AI can save lives.
———————————
This article was published in the Crisis Response Journal, the global information resource that covers all aspects of human-induced disasters or natural hazards, spanning response, disaster risk reduction, resilience, business continuity and security. You can find more about CRJ here.
To read more blogs like this from EENA, please click here to visit our website.