How do we know what we know? - Veterinary Practice

InFocus

How do we know what we know?

“How long will it be before owners are invited to write out their concerns for their pets and wait a few seconds to see what AI comes up with before coming to the clinic?”

I have to say it was a bit of a wrench to drag myself out of the garden this sunny first Sunday of summer to write this column, but with clinical work and teaching, the weeks are so busy there really isn’t much time to sit down and type out some thoughts. I’ve spent the last half hour watching a little money spider crawling up my glass before it drops to the bottom and starts its attempt at an escape once again. For those who think this is entirely unethical of me, never fear; I rescued it and placed it on a leaf. But I wondered why these tiny arachnids are called “money” spiders.

These days, the easiest way to find out is to ask ChatGPT; in an instant, it told me that members of the family Linyphiidae are termed money spiders because of ancient myths that a spider brought good luck. That took three seconds. But I wasn’t convinced. A quick Google search led me to Frank Cowan’s Curious Facts in the History of Insects Including Spiders and Scorpions, published by J.B. Lippincott and Co. in 1865. I say quick, but it took about 20 minutes to find the book online and then read through it to find the reference to money spiders. And honestly, why am I more inclined to trust information from a book nearly 160 years old than data from artificial intelligence (AI) today?

The app performs a spectrographic analysis of the frequency, pitch, tempo and timbre of the song it’s listening to and compares it with the birds’ songs in the archive

But what did I drag myself out of the sun to write to you about? Not money spiders or AI, I have to say – it’s just too easy to get diverted. Well, maybe it was AI: I was listening to the birds in the garden and marvelling at how my iPhone managed to differentiate between wrens and robins, great tits and greater whitethroats. How on earth does it do that? Another question for ChatGPT, maybe. It tells me of the Macaulay Library of Wildlife Sounds housed at Cornell University in an instant. The app performs a spectrographic analysis of the frequency, pitch, tempo and timbre of the song it’s listening to and compares it with the birds’ songs in the archive. It’s as easy as that!
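For the curious, that matching step can be caricatured in a few lines of code. This is a toy sketch only: the species, pitch values and nearest-pitch matching rule below are all invented for illustration, and a real app works from trained models over full spectrograms of actual archive recordings, not a lookup of toy pitch lists.

```python
import math

RATE = 8000   # samples per second
N = 256       # samples per "note" window

def tone(freq):
    """Synthesise a pure tone: a stand-in for one note of a recorded song."""
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(N)]

def dominant_freq(samples):
    """Naive DFT peak-pick: return the loudest frequency bin, in Hz."""
    n = len(samples)
    best_f, best_mag = 0.0, -1.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_mag, best_f = mag, k * RATE / n
    return best_f

# Invented reference "archive": each species as a sequence of note pitches (Hz).
# The real Macaulay Library holds actual recordings, not pitch lists like these.
ARCHIVE = {
    "wren":  [3000.0, 3500.0, 3200.0],
    "robin": [2000.0, 2500.0, 2200.0],
}

def identify(recording):
    """Match a recording (a list of note waveforms) to the closest archive entry."""
    track = [dominant_freq(note) for note in recording]
    return min(ARCHIVE, key=lambda species: sum(
        abs(ref - heard) for ref, heard in zip(ARCHIVE[species], track)))

print(identify([tone(f) for f in (2000, 2480, 2210)]))  # → robin
```

Even this crude version shows the principle: reduce the sound to a frequency fingerprint, then find the archive entry it most resembles.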

This knowledge got me wondering whether I could use AI for something more personally important. I recently handed in the corrected version of my doctoral thesis in education, following my viva six months ago. I had asked veterinary students their views on the animals used in their preclinical veterinary education. This yielded page after page of responses, which I spent hours reading through, performing a thematic analysis to determine the students’ opinions. So, as I reviewed the thesis to answer the questions my examiners had posed, I took the opportunity to evaluate those responses using AI. The programme took a good 15 seconds to mull over the responses I had read and reread over weeks. In the end, it came up with the same themes I had unearthed, or rather “exposed”, from those responses, which was gratifying. I’m not sure what I would have done if it had come up with different themes!

How long will it be, I wonder, before owners are invited to write out their concerns for their pets and wait a few seconds to see what AI comes up with as a diagnosis before coming to the clinic, to evaluate whether this is a case requiring a real live vet to examine the animal? Thankfully, it will be a long time, I think, before a robot can handle a fractious cat to determine what the diagnosis is. However, I’m sure radiographs, blood biochemistry results and ECGs will soon be evaluated through AI. It’ll be up to us to use these technical advances just as we did when radiography or blood biochemistry came along in the first place.

The trouble is that for every test, 1 out of 20 ‘normal’ results will be outside the normal range, so do 20 tests in any normal animal and one is likely to be outside the normal range

When I was a student, I remember being told that I should first come up with my list of differential diagnoses and then work out which specific tests would rule each of these in or out. These days, a full panel of tests is far cheaper to perform than just one or two, so we get a whole list of results to mull over. The trouble is that reference ranges are conventionally set to include 95 per cent of healthy animals, so for every test, 1 out of 20 “normal” results will fall outside the normal range; run 20 tests on any normal animal and the chances are that at least one will come back flagged as abnormal. That’s fine if you know what you are looking for, but rely too heavily on the results to guide you and they can cause all sorts of trouble.
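To put a number on that, here is the arithmetic, assuming (a simplification, since real panels are not fully independent) that each test independently has the conventional 5 per cent chance of flagging a healthy animal:

```python
# Each reference range conventionally covers 95% of healthy animals,
# so any single test has a ~5% chance of flagging a perfectly normal
# patient. Across 20 independent tests, the chance of at least one
# spurious "abnormal" result is roughly two in three.

p_flag = 0.05   # chance one test falls outside its reference range
n_tests = 20

p_at_least_one = 1 - (1 - p_flag) ** n_tests
print(f"{p_at_least_one:.0%}")  # ~64%
```

In other words, the supposedly unlucky "one in 20" compounds quickly: the healthy animal with a completely clean 20-test panel is actually the minority case.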

AI is useful, that’s for sure, just as is a standard blood biochemistry, but only as long as we use it as our slave and not as our master

I remember when my birdsong app told me a rose-breasted grosbeak was in our garden. A what?! A quick Google search told me that its song sounds like a robin’s, though sweeter and more melodic, as if it had had operatic training. AI is useful, that’s for sure, just as a standard blood biochemistry panel is, but only as long as we use it as our slave and not as our master. That balance is getting more and more difficult to strike as technology gets better and better, or at least appears to be doing so.

David Williams

Fellow and Director of Studies at St John's College, University of Cambridge

David Williams, MA, VetMB, PhD, CertVOphthal, CertWEL, FHEA, FRCVS, graduated from Cambridge in 1988 and has worked in veterinary ophthalmology at the Animal Health Trust. He gained his Certificate in Veterinary Ophthalmology before undertaking a PhD at the RVC. David now teaches at the vet school in Cambridge.

