The difference between a robot and a human is curiosity. At least, that is what someone said on the radio recently, talking about why we are going to send people back to the moon to collect samples rather than just sending a mechanical soil collector. And that got me thinking about artificial intelligence (AI) again. Some people like to think that AI is, or soon will be, the answer to many of our problems.
My eldest son is a computer coder for a company in London which provides AI to regulate city traffic, aiming to optimise traffic flow and thus reduce pollution. Accessing data from hundreds of cameras and working out what to do with many traffic lights is surely something we need machine learning to organise. But what about using AI in veterinary practice?
A colleague at Moorfields Eye Hospital, who has been keen on computer games since he was a kid, asked DeepMind – the people behind AlphaGo, the program that beat the world Go champion Lee Sedol (not to be confused with IBM’s Deep Blue, which beat the chess grandmaster Garry Kasparov) – if they could help with analysing retinal scan images. If he gave them a thousand optical coherence tomography retinal scans, together with the diagnoses made by 10 of the hospital’s retinal specialists, what could their computers do? “Leave it to us,” said DeepMind. When DeepMind came back with their results, the first thing the computer said was that it could tell whether the patient was male or female. “Impossible…” was the response from the ophthalmologists. “It cannot be done.” But, lo and behold, it could be.
Nobody knows what the AI system is analysing in these scans; however, it can tell the sex of the patient, their age to within three years, and make the diagnosis in each case better than any of the individual ophthalmologists. Not as well as the joint opinion of all 10 ophthalmologists, it must be admitted, but it came pretty close. Remarkable.
Can we see the same sort of diagnostic feats in veterinary medicine? The thing is, all human retinas are pretty similar, and the pathology seen in the range of retinal diseases is relatively uniform. That is rather different from the range of even clinically normal dogs, from Chihuahuas to Great Danes. Even so, a quick web search shows companies ranging from VetAI, which produces the program Jooi, through Vetology, a hybrid AI service, to DeepTag, which takes veterinary notes and suggests diagnoses. Interestingly, as I was researching this online, up came a seemingly helpful robot that said it would answer my veterinary questions. When I asked it what role it thought AI would have in the future of veterinary diagnostics, it sat and thought for a minute or so (I wonder how many calculations it did in that time?!), then told me: “I am reaaly sorry we do not have that information with us.”
This response took me back to my first interaction with a computer when I was seven years old. I went to my mother’s work one day when school was closed so the teachers could be taught how to teach (which I always thought was a bit strange!). She had a technical job in medical physics at our local hospital. I sat down at a spare screen and keyboard and typed in: “1+1=?” After a few seconds (well, let’s face it, there was only one calculation to do!) it came out with, “Eh?” I remember thinking that this was absolutely ridiculous. Here was a machine that was meant to be amazing at maths, and it could not even do the simplest of sums. Looking back now, I am surprised it did not come back with something like “error 223 at line 423”. “Eh?” was a surprisingly human response. Not exactly something that would allow the computer to pass the Turing test, but rather a sign that someone with a bit of a sense of humour had programmed the machine for every eventuality!
Now back to veterinary medicine – a problem I see often these days is that, rather robotically, we put a blood sample in for biochemistry and come up with a list of 20 results, with one or two that are slightly out of the normal range. When I was at vet school – in the days when diagnostic labs had to deal with T. rex and Stegosaurus samples – we were taught to ask for specific tests for the presumptive diagnoses we had made on evaluating the clinical signs of the animal. The results would then confirm or refute our diagnosis. This meant we would maybe have five test results. Now, with 25 or more (most of which have nothing to do with the diagnosis we are thinking about), and with each reference interval covering only about 95 per cent of healthy animals, the odds are that at least one result will fall outside its range – more than two standard deviations from the mean – purely by chance. Well, maybe that’s right – I’ve never been very good at statistics and I’m sure someone will reply to me if I’m wrong.
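For anyone who wants to check my working, here is a rough back-of-the-envelope sketch in Python (my son’s department rather than mine). It assumes each analyte has a conventional 95 per cent reference interval and that the results are independent of one another – which real biochemistry panels are not, strictly – so treat the numbers as ballpark figures rather than gospel.

# Rough sketch: the chance that at least one of n test results falls
# outside a 95% reference interval in a perfectly healthy animal.
# Assumes the analytes are independent, which is only an approximation.
for n in (5, 20, 25):
    p_all_within_range = 0.95 ** n
    p_at_least_one_flagged = 1 - p_all_within_range
    print(f"{n} tests: {p_at_least_one_flagged:.0%} chance of a spurious 'abnormal' result")

On those assumptions, a five-test panel flags a healthy animal about a quarter of the time, while a 25-test panel does so roughly three times out of four.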
What we probably need is some AI to help us with all these numbers. A quick whizz through Google Scholar didn’t show any “AI for veterinary biochemistry” results, but of course I may not have put in the right search terms – the computer is only ever as good as the person asking the questions or the data it is given. But there are papers on using AI to predict outcomes in equine colic and to identify lesions on thoracic radiographs. If AI can do that, who knows where it might go? To my mind, that doesn’t matter, as long as we go there with it, leading the way.