STUDY ABSTRACT – Veterinary Teleradiology Reporting Times

Veterinary Teleradiology Reporting Times: A Retrospective Analysis and Future Directions.

STUDY AUTHORS

Seth Wallack, Andrew Fox, Eric Goldman, Aziz Beguliev, Sommer Aweidah

STUDY PRESENTATION

This study abstract was presented as a poster at the Symposium on Artificial Intelligence in Veterinary Medicine
(SAVY 2.0) in Ithaca, NY | May 16-18, 2025

Details from the poster are captured in this article. Click the thumbnail image to view the full poster.

Background

In human radiology, several metrics are utilized to assess report usefulness, including diagnostic accuracy and Average Time Reporting (ATR). Studies from 2013 and 2017 found reporting times averaged:

  • 98 seconds per thorax image,
  • 111 seconds per abdomen image, or
  • an ATR of 85 seconds per report.1,2,3,4

No published study of veterinary ATR for radiographs, with or without AI assistance, could be found.5,6

Objectives

  • First objective: establish a baseline ATR for single-region veterinary radiograph reporting WITHOUT AI assistance.
  • Second objective: establish a baseline ATR for single-region veterinary radiograph reporting WITH AI assistance and compare it to the first.

Methods

Case Time Period: January 2023 through April 2025 (28 months).

Case Types: Single region reports were grouped into thorax vs. abdomen and further divided into non-STAT vs. STAT. No distinction was made between typing vs. dictating reports or between canines and felines.

Radiologists: Two US board-certified veterinary radiologists (Drs. Seth Wallack, DACVR, and Andrew Fox, DACVR) with between 10 and 20 years of experience.

Exclusion Criteria: For reporting both with and without AI assistance, cases with ATRs under 20 seconds or over 20 minutes were excluded.
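
The exclusion rule above can be expressed as a simple filter. A minimal sketch (the function name is ours, and the threshold boundaries are assumed to be exclusive):

```python
from datetime import timedelta

# Hypothetical sketch of the exclusion rule described above: keep only
# cases whose ATR falls strictly between 20 seconds and 20 minutes.
MIN_ATR = timedelta(seconds=20)
MAX_ATR = timedelta(minutes=20)

def eligible(atr_seconds: float) -> bool:
    """Return True if a case's reporting time passes the exclusion filter."""
    atr = timedelta(seconds=atr_seconds)
    return MIN_ATR < atr < MAX_ATR

# A 98-second report is kept; 15-second and 25-minute reports are excluded.
case_times = [15, 98, 554, 1500]          # ATRs in seconds
kept = [t for t in case_times if eligible(t)]
```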

Platform: Both radiologists used the patented Vetology® reporting platform, which features personalized generative AI conclusions and recommendations derived from the radiologists’ findings.

AI Generation Time: Separately, average AI generation time was calculated for thoracic and abdominal conclusions and recommendations. Two findings examples (~200 words and ~300 words) were tested for both thoracic and abdominal AI generation to assess the impact of findings word count on generation time. ATR includes all aspects of report creation, from claiming to finalizing each single-region case.

Results

WITHOUT AI:

    • 1696 total cases.
    • 1367 used for ATR.
    • 329 cases excluded (19%) for >20 minutes.

WITH AI Assistance:

    • 2489 total cases.
    • 2280 used for ATR.
    • 209 cases excluded (8%) for >20 minutes.

Chart: average time reporting (ATR) with and without AI.

Veterinary Radiologist Average Time Reporting

Graphic: two stopwatches side by side comparing ATR; reporting was 24% faster when AI was used.
Chart: number of single-region cases per hour.
Table: time to complete a 40-case day with and without AI.

Comparison of Time to Generate Conclusions + Recommendations Using Personalized AI Model

Table: time to generate conclusions and recommendations by findings word count.

Conclusions

These results reflect a specific veterinary teleradiology company’s baseline ATR with and without AI prelim reporting, recorded at the given points in time. The performance of the platform itself will continue to improve as AI evolves and processing speeds accelerate.

Using the approach described above, this study provides an initial benchmark for veterinary radiology reporting times and suggests that hybrid AI tools may help improve efficiency.

This baseline ATR may be comparable to industry-wide ATRs for US and European board-certified veterinary radiologists.

This study demonstrates that veterinary radiograph ATR can be reduced by 24% (range 11-33%) using a personalized hybrid generative and visual AI reporting solution. Vetology AI-assisted reporting increased the number of reports per hour from 6.5 to 8.0.

A hybrid model of radiologist-reported findings and Vetology AI-generated, personalized conclusions and recommendations could reduce a 40-case workday by up to 3 hours.

AI-assisted reporting has the potential to reduce radiologist workload fatigue. Generating personalized AI conclusions and recommendations takes seconds.

Funding Disclosures: Vetology Innovations provided funding for this study through salaries.

References
  1. Cowan IA, MacDonald SL, Floyd RA. Measuring and managing radiologist workload: measuring radiologist reporting times using data from a Radiology Information System. J Med Imaging Radiat Oncol. 2013 Oct;57(5):558-66. doi: 10.1111/1754-9485.12092. Epub 2013 Jul 12. PMID: 24119269.
  2. Rathnayake S, Nautsch F, Goodman TR, Forman HP, Gunabushanam G. Effect of Radiology Study Flow on Report Turnaround Time. AJR Am J Roentgenol. 2017 Dec;209(6):1308-1311. doi: 10.2214/AJR.17.18282. Epub 2017 Oct 5. PMID: 28981363.
  3. Boland GW, Halpern EF, Gazelle GS. Radiologist report turnaround time: impact of pay-for-performance measures. AJR Am J Roentgenol. 2010 Sep;195(3):707-11. doi: 10.2214/AJR.09.4164. PMID: 20729450.
  4. Mityul MI, Gilcrease-Garcia B, Mangano MD, Demertzis JL, Gunn AJ. Radiology Reporting: Current Practices and an Introduction to Patient-Centered Opportunities for Improvement. AJR Am J Roentgenol. 2018 Feb;210(2):376-385. doi: 10.2214/AJR.17.18721. Epub 2017 Nov 15. PMID: 29140114.
  5. Weissman A, Solano M, Taeymans O, Holmes SP, Jiménez D, Barton B. A survey of radiologists and referring veterinarians regarding imaging reports. Vet Radiol Ultrasound. 2016 Mar-Apr;57(2):124-9. doi: 10.1111/vru.12310. Epub 2015 Dec 17. PMID: 26677167.
  6. Adams WM. A survey of radiology reporting practices in veterinary teaching hospitals. Vet Radiol Ultrasound. 1998 Sep-Oct;39(5):482-6. doi: 10.1111/j.1740-8261.1998.tb01638.x. PMID: 9771603.

Want to see our Personalized Prelim Tool in action?

To tour the platform, learn more, or book a demo for a firsthand look at the radiology support tool, contact our team or click this box to schedule an appointment!

Clinical Review: Accuracy of AI Software for the Detection of Confirmed Pleural Effusion in Dogs

Accuracy of Artificial Intelligence Software for the Detection of Confirmed Pleural Effusion in Thoracic Radiographs in Dogs

Abstract

The use of artificial intelligence (AI) algorithms in diagnostic radiology is a developing area in veterinary medicine and may provide substantial benefit in many clinical settings. These range from timely image interpretation in the emergency setting when no boarded radiologist is available to allowing boarded radiologists to focus on more challenging cases that require complex medical decision making. Testing the performance of AI software in veterinary medicine is in its early stages, and only a small number of validation reports have been published.

The purpose of this study was to investigate the performance of an AI algorithm (Vetology AI®) in the detection of pleural effusion in thoracic radiographs of dogs. 

  • In this retrospective, diagnostic case–controlled study, 62 canine patients were recruited.
    • A control group of 21 dogs with normal thoracic radiographs
    • and a sample group of 41 dogs with confirmed pleural effusion were selected from the electronic medical records at the Cummings School of Veterinary Medicine.
  • The images were cropped to include only the area of interest (i.e., thorax).
  • The software then classified images into those with pleural effusion and those without.
  • The AI algorithm was able to determine the presence of pleural effusion with 88.7% accuracy (P < 0.05). The sensitivity and specificity were 90.2% and 81.8%, respectively (positive predictive value, 92.5%; negative predictive value, 81.8%).
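
The metrics reported above follow the standard confusion-matrix definitions. A minimal sketch of those formulas (the function name and the counts passed in are hypothetical illustrations, not the study's raw data):

```python
# Standard diagnostic-accuracy formulas, computed from a 2x2 confusion
# matrix: true positives (tp), false positives (fp), false negatives (fn),
# and true negatives (tn).
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration only.
m = diagnostic_metrics(tp=37, fp=3, fn=4, tn=18)
```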

The application of this technology in the diagnostic interpretation of thoracic radiographs in veterinary medicine appears to be of value and warrants further investigation and testing.

KEEP READING

Click one of the buttons below to continue reading. If you have a subscription to Wiley Online, you can access the article there; otherwise, click to load the PDF.

Clinical Review: AI vs. Veterinary Radiologist on Canine Cardiogenic Pulmonary Edema

Comparison of Artificial Intelligence to the Veterinary Radiologist's Diagnosis of Canine Cardiogenic Pulmonary Edema

Abstract

Application of artificial intelligence (AI) to improve clinical diagnosis is a burgeoning field in human and veterinary medicine. The objective of this prospective, diagnostic accuracy study was to determine the accuracy, sensitivity, and specificity of an AI-based software for diagnosing canine cardiogenic pulmonary edema from thoracic radiographs, using an American College of Veterinary Radiology-certified veterinary radiologist’s interpretation as the reference standard.

  • Five hundred consecutive canine thoracic radiographs made after-hours by a veterinary Emergency Department were retrieved.
  • A total of 481 of 500 cases were technically analyzable.
  • Based on the radiologist’s assessment:
    • 46 (10.4%) of these 481 dogs were diagnosed with cardiogenic pulmonary edema (CPE+).
    • Of these cases, the AI software designated 42 of 46 as CPE+ and four of 46 as cardiogenic pulmonary edema negative (CPE−).
  • Accuracy, sensitivity, and specificity of the AI-based software compared to radiologist diagnosis were:
    • 92.3%, 91.3%, and 92.4%, respectively
    • (positive predictive value, 56%; negative predictive value, 99%).
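
The figures above are mutually consistent, which can be checked with a short reconstruction. This is our inference, not data stated in the abstract: the true-negative count is derived from the reported 92.4% specificity over the radiologist-negative cases.

```python
# Reconstruction of the confusion matrix implied by the abstract.
# TP and FN are stated directly (42 of 46 CPE+ cases flagged); TN is
# inferred from the reported 92.4% specificity, and FP follows.
total, positives = 481, 46
tp, fn = 42, 4
negatives = total - positives     # 435 radiologist-negative cases
tn = round(0.924 * negatives)     # inferred true negatives
fp = negatives - tn

accuracy = (tp + tn) / total      # matches the reported 92.3%
sensitivity = tp / (tp + fn)      # matches the reported 91.3%
ppv = tp / (tp + fp)              # matches the reported 56%
npv = tn / (tn + fn)              # matches the reported 99%
```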

Findings supported using AI software screening for thoracic radiographs of dogs with suspected cardiogenic pulmonary edema to assist with short-term decision-making when a radiologist is unavailable.

KEEP READING

Click one of the buttons below to continue reading. If you have a subscription to Wiley Online, you can access the article there; otherwise, click to load the PDF.
