To gain an insight into how accurate and useful generative AI systems could be, especially for primary health care, Silicon UK spoke with Dom Couldwell, Head of Field Engineering EMEA at DataStax.
“The biggest change would be around how the service is delivered. Using natural language patterns and establishing more of a conversation with patients should help those who are less familiar with today’s chatbots or online services, opening up these kinds of services to more users. At the same time, the success of generative AI within a service will depend on how people use and interact with that service in the first place, and any such service will have to pass rigorous standards on the security and safety of patient data before it can be delivered.
“What generative AI is good at is providing more natural responses to questions, and getting that conversational tone in place will help chat services move beyond simple issues. It’s about improving the bedside manner that these online services can have, so more people feel they can trust and use them.
“It can also be combined with other AI-powered services, like analysing pictures for potential issues. Using AI to scan cancer images is growing in popularity as it makes human experts more productive. Linking this kind of image recognition and scanning to natural language processing and AI agents could make it easier for patients to discuss what is in their images when it suits them, rather than only having a short session with a consultant.
“NHS England is already investing in pilot projects, with £123 million earmarked for projects over the next four years under the AI in Health and Care Award. There are 86 projects already awarded under this, so the NHS is already innovating around AI.”
“For many people, the biggest problem is just getting an appointment in the first place and reducing the waiting time. Using a service like this can help those that have simple issues to feel that they have had an appropriate response and got the help that they need, while also making it easier to direct those with more complex needs to where they need to go for help or for that next step in diagnosis by a human doctor.”
“There are ethical and emotional aspects to dealing with patients and treating their illnesses, so the question is whether patients would accept the service. For getting detailed individual information or assistance with booking appointments, acceptance will be high because there is a clear gain in convenience. For medical advice, the question of trust might outweigh that convenience benefit.
“Another more technical area is accuracy and reliability. These services will rely on their training data for how they function. If you look at services like ChatGPT, these have data up to September 2021. Keeping large language models up to date is expensive in terms of compute and storage, so you can’t just say that you will ‘update’ the model, and you will also want to protect individual patient data so that it can’t be leaked in other conversations.
“Instead, you will see a combination of different approaches used so that you can keep responses up to date and patient data secure. Retrieval augmented generation, or RAG, is a process where your AI service combines multiple models and data sets as part of delivering responses. This lets you keep the data relevant to each patient up to date while also updating the other data sets that you use overall. But more importantly, you will need to store the data in a manageable and secure database like Astra DB to be able to keep control over it.
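As a rough illustration of the RAG pattern Couldwell describes, the sketch below keeps retrieval scoped to one patient’s own records and passes only that context to the generative model, so the model itself never needs retraining to stay current. All names here (PatientVectorStore, embed, build_prompt) are illustrative placeholders; the bag-of-letters embedding stands in for a real embedding model, and the in-memory store stands in for a managed vector database such as Astra DB.

```python
import math
from dataclasses import dataclass

@dataclass
class Record:
    patient_id: str      # records are partitioned per patient
    text: str
    embedding: list

def embed(text: str) -> list:
    # Stand-in for a real embedding model: a crude bag-of-letters vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class PatientVectorStore:
    """Stand-in for a managed vector database; retrieval is always scoped
    to a single patient's own records."""
    def __init__(self):
        self.records = []

    def add(self, patient_id: str, text: str) -> None:
        self.records.append(Record(patient_id, text, embed(text)))

    def search(self, patient_id: str, query: str, k: int = 3) -> list:
        q = embed(query)
        own = [r for r in self.records if r.patient_id == patient_id]
        own.sort(key=lambda r: cosine(r.embedding, q), reverse=True)
        return [r.text for r in own[:k]]

def build_prompt(question: str, context: list) -> str:
    # The retrieved, patient-specific context is sent to the language model
    # alongside the question; only this prompt ever reaches the model.
    return "Answer using only this context:\n" + "\n".join(context) + "\n\nQuestion: " + question

# Usage: add clinic records for one patient, then answer their question.
store = PatientVectorStore()
store.add("patient-42", "Blood pressure reading from last visit: 128/82.")
store.add("patient-42", "Prescribed 5mg amlodipine once daily.")
prompt = build_prompt("What was my last blood pressure?",
                      store.search("patient-42", "blood pressure"))
print(prompt)  # this prompt would be passed to the generative model
```

The point of the pattern is visible in the last few lines: updating what the service knows means updating the store, not retraining the model, and the retrieval step is where per-patient data control is enforced.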
“Today, the rapid adoption of AI technologies like ChatGPT has outpaced the development of data protection laws and regulations for data privacy. Those rules will be defined soon, and they will require robust and reliable data management platforms.”
“OpenAI’s servers, where ChatGPT operates, are not HIPAA compliant. This poses a risk to the privacy and security of protected health information if you use public services. This needs to be sorted out first.
“Retaining patient data separately, so it can’t be used within public data sets or as part of responses to other patients, will be necessary. This is a great use case for RAG, as it supports bringing multiple sources of data into response generation. At the same time, this can be used to retain conversations with individuals so that the AI service has an ongoing record of past conversations and interactions.”
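Continuing the earlier sketch, one minimal way to keep that ongoing record per patient, without it ever entering a shared data set, is a conversation log keyed by patient identifier. The ConversationLog class below is a hypothetical illustration of the idea, not a specific product feature.

```python
from collections import defaultdict

class ConversationLog:
    """Keeps each patient's past turns under their own key, so one patient's
    history is never retrieved when generating another patient's response."""
    def __init__(self):
        self._turns = defaultdict(list)

    def record(self, patient_id: str, question: str, answer: str) -> None:
        self._turns[patient_id].append((question, answer))

    def recent(self, patient_id: str, n: int = 5) -> list:
        # Only this patient's most recent turns are used to build the prompt.
        return self._turns[patient_id][-n:]

# Usage: store a turn, then pull it back when the same patient returns.
log = ConversationLog()
log.record("patient-42", "What was my last blood pressure?", "128/82 at your last visit.")
history = log.recent("patient-42")
```

In a production service the log would live in the same kind of secure, managed database as the patient records, but the partitioning principle is the same: retrieval is always scoped to the individual patient.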
“This is where I see many patients starting their journey around using an AI service. Rather than waiting for appointments, these AI services can step in and help with that triage process that directs patients to their next steps. It can also potentially help manage cases based on priorities, so that those that might need more timely intervention can be flagged up compared to others that are less time-sensitive.
“At the same time, these kinds of services do not replace human doctors. For those that need urgent care, talking to a doctor will always be preferable. Where AI can come in is around making those doctors as efficient as possible, providing them with more time to spend on people.”
“This comes back to the training data side, and how much data the AI models are trained on. There are many illnesses that start with the same symptoms, so how can AI pick up on specifics?
“For more specific services, like scanning images for potential problems, the technology has been developed over the years and trained on thousands of images. One breast cancer screening trial found that AI reduced the workload on radiologists by at least 30% because it cut the number of images that needed review, while it also increased the cancer detection rate by 13% because the AI spotted more issues that needed to be analysed.
“We have to develop the ethical framework for augmented AI healthcare, similar to how human doctors have to follow the Hippocratic Oath and ‘do no harm.’ This will include how we handle medical data, how we make decisions around healthcare and delivering services, and how and when we move from automated services to human doctors and back again. Ultimately, any organisation that provides healthcare services to patients will be just as responsible for its work using AI services as it is for its human doctors.
“Everyone involved in delivering healthcare – from those front-line staff involved through to those on the IT side, and the suppliers supporting them – will share some of the responsibility to make AI more effective and more accurate over time. This has to be based on the results that we see over time, how those results are delivered, and how they can be improved in the future too.”
“Being able to use natural language in your interactions will make the biggest difference to patients. As more people try out services like ChatGPT and other chat services that use generative AI, using this same approach for healthcare conversations will become more natural. Embedding generative AI into online services or mobile apps will be where this starts, but over time I think more organisations will adopt an ‘augmented agent’ approach, where the interface will be geared more towards conversation and interaction than static pages.
“The biggest impact and the first adoptions will instead be seen among clinical staff; adding AI into the applications that they use today will be where they see the most difference. This will be added into their clinical pathways and workflows, rather than trying to prescribe new working methods. Clinical staff are the experts in how to design care systems, so AI will adapt to how they work.
“But to ensure successful integration, collaboration between AI developers and healthcare professionals is essential. Clinical staff, being experts in designing care systems, can guide the adaptation of AI to align with their workflows and preferences. This collaborative approach ensures that AI technology, such as ChatGPT, is seamlessly integrated into existing healthcare systems, supporting clinical decision-making and enhancing patient care.”
Dom is Head of Field Engineering EMEA at DataStax, a real-time data and AI company. Dom helps companies to implement real-time applications based on an open-source stack that just works. He has more than two decades of experience across a variety of verticals, including Financial Services, Healthcare and Retail. Prior to DataStax, he worked for the likes of Google, Apigee and Deutsche Bank. www.datastax.com