Every Image Screened, Not Just the Ones You Choose to Send

ARTICLE ABSTRACT

This article walks through how automated AI screening complements teleradiology in a real practice workflow: the veterinarian reads the film, the AI corroborates what they saw and surfaces incidental findings, and teleradiology adds a board-certified or board-eligible specialist’s depth on the cases that genuinely warrant one.

The result is in-appointment answers for the pet owner, fewer next-day callbacks, and more focused telerad referrals on the cases that actually need a specialist’s interpretation.

How automatic AI screening adds comprehensive coverage without changing how you practice

Teleradiology is a trusted and valuable part of veterinary practice. When a case calls for a board-certified radiologist’s interpretation, the ability to send images and receive a detailed report has improved patient care across the profession. Most practices have a teleradiology provider they rely on, and that relationship matters.

Vetology’s AI screening serves a different purpose. It is not a replacement for teleradiology; it is a complement that adds something teleradiology was never designed to do: screen every image, automatically, before anyone has to decide which cases need a specialist read.

The two services work hand in hand. AI screening provides the always-on baseline that catches findings on every study. Teleradiology provides a board-certified or board-eligible radiologist’s interpretation on the cases where a specialist’s depth adds the most.

The Difference Between Selective and Comprehensive

Teleradiology works on a case-by-case basis. The veterinarian evaluates the images, identifies cases that would benefit from specialist input, and submits those for review. This is exactly how the service is designed to work, and it works well.

AI screening works differently. When a practice uses Vetology, every radiograph submitted through the workflow is automatically analyzed across 91+ classifiers covering conditions in canine and feline thorax, abdomen, and spine/musculoskeletal categories. There is no selection step. Screening happens on every study, and the classifiers are validated against 300,000 cases reviewed by board-certified veterinary radiologists, with published sensitivity and specificity for every condition.

This means the AI is reviewing images that the DVM may already feel confident about. Here’s where the added value shows up: A thorax taken to evaluate a cough also gets screened for cardiac changes, lymphadenopathy, and spinal findings. An abdomen taken for GI signs is screened for organ size changes, mineralization, and structural abnormalities. The AI adds breadth to every study, regardless of the original clinical question.

What This Looks Like at 2 PM on a Tuesday

Standard Teleradiology-Only Workflow

A Labrador presents for a persistent cough. You take thoracic radiographs. With teleradiology alone, you submit the images, check the client out, and call tomorrow with results. The pet parent goes home without answers, and you add another callback to tomorrow’s list.

[Icon graphic: the veterinary diagnostic imaging workflow with AI]

Imaging Workflow with AI in the Mix

With AI screening, the report is available within minutes, while the client is still in the room. The AI flags a bronchial pattern consistent with your clinical suspicion, but it also notes early left atrial enlargement and mild thoracic lymphadenopathy.

You now have a more complete picture to discuss with the client before they leave. If the lymphadenopathy reading prompts a closer look, you can submit that single case for a teleradiology read knowing exactly why you are asking for one.

The AI surfaces the question; the teleradiologist provides the credentialed answer. When you recommend a specialist read to a pet owner, the recommendation is grounded in something specific you saw on the image.

For some veterinarians, that in-appointment information is exactly what they want. Clients leave with clarity and a treatment plan instead of waiting overnight.

For others, the value is more analytical: five fewer callbacks per day at 8 to 10 minutes each adds up to 40 to 50 minutes of recovered time. That is nearly one additional appointment, or time back in a day that already feels too short. The per-case cost of AI screening on an unlimited subscription is a fraction of a teleradiology read, and the information arrives while it can still shape the visit.

Comprehensive Screening in Practice

In practical terms, AI screening means:

  • A thorax submitted for cardiac evaluation is also screened for pulmonary patterns, pleural findings, mediastinal changes, and thoracic spine conditions.
  • An abdomen submitted for vomiting is also screened for organ size, masses, mineralization, and structural findings across all visible organs.
  • A spine study is also screened for degenerative changes, congenital anomalies, and adjacent soft tissue findings.

The AI presents its findings in a structured report alongside the veterinarian’s own assessment. Some findings will confirm what the doctor already noted. Others may highlight something worth a closer look. The veterinarian always makes the final clinical decision.

This is not about replacing clinical judgment. It is about having a consistent, validated screening layer that catches incidental findings the same way every time, whether it is 9 AM or 5:45 PM on a Friday. When something on the screen warrants a specialist’s interpretation, the AI report helps the veterinarian put a focused clinical question on the teleradiology submission. Instead of “please read these films,” the question becomes “please assess thoracic lymphadenopathy and confirm cardiac silhouette.” The radiologist still reads with fresh eyes, on their own, but the clinician’s question is sharper. That sharper question is what makes the report more useful when it comes back.

How the Workflow Plays Out

You read the radiograph the way you always have. The AI report arrives on the same case within minutes, and you check it against what you saw. Most of the time, the AI confirms your read and may add a few findings you were not specifically looking for. That confirmation is the value. You move to diagnosis and treatment planning with one more layer of corroboration behind your decision, and the structured AI report becomes fast and familiar to read over time.

Some cases do not resolve that easily. The findings are subtle, the clinical picture is unclear, or you and the AI together still cannot get to a confident diagnosis or treatment plan. That is where teleradiology adds a layer of human specialist support. The case goes to a board-certified or board-eligible radiologist who can give the read the depth it needs.

Over time, the rhythm becomes natural. You read, the AI corroborates, and you decide whether you have what you need or whether the case warrants a specialist’s interpretation. The AI is the helper that confirms what you already saw. The teleradiologist is the human specialist you turn to when confirmation alone is not enough.

How the Whole Practice Benefits

Practice managers benefit from more complete initial visits. When more information is available during the appointment, more treatment decisions happen while the client is in the room. This means smoother scheduling, fewer follow-up calls to coordinate, and better client retention. Clients who leave with answers are more likely to follow through on treatment plans and return for follow-ups.

Veterinary technicians gain a new dimension to their work. Techs who capture quality radiographs can see the AI’s findings on the images they produced. Over time, this builds familiarity with a broader range of imaging findings and adds professional development value to the imaging workflow.

Front desk and client services staff benefit from the downstream effect: when appointments are more complete, there are fewer follow-up calls to manage and fewer schedule adjustments to coordinate. The day runs more predictably for everyone.

AI Screening and Teleradiology Work Together

These services answer different clinical questions, and knowing when to use each is part of running an efficient imaging workflow.

Lean on AI screening when:

  • You want a baseline read on every study, including the ones you already feel confident about
  • The pet parent is in the room and a same-visit conversation matters
  • You want a structured screen for incidental findings outside the original clinical question
  • You want to triage which cases in a busy day actually warrant a specialist’s time

Lean on teleradiology when:

  • The case is complex, the findings are subtle, or the clinical stakes are high
  • You want a board-certified or board-eligible radiologist’s interpretation on the medical record
  • The owner is asking for a specialist opinion before committing to next steps
  • You want a credentialed reading you can cite to a referring practice or in a follow-up conversation

In practice, the workflow is straightforward. The AI screens every case automatically. The veterinarian reviews the AI report alongside their own assessment of the films. When something warrants a specialist’s interpretation, the case goes to teleradiology with a sharper clinical question on the submission. The radiologist reads with fresh eyes, the way they always have. The clinician gets a credentialed reading on the cases that need one. The pet parent gets answers in the room when answers are available, and a thorough specialist review when one is appropriate.

Vetology offers both AI screening and teleradiology reads by board-certified and board-eligible radiologists, including DACVR/ECVDI diplomates, board-certified cardiologists, and a board-certified dentist. STAT reads return in 2 hours; routine reads in 24. The AI subscription also works alongside whatever teleradiology provider a practice currently uses, so clinics do not have to choose between them.

Simple, Predictable Pricing

Vetology’s AI screening subscription is $200/month for unlimited studies. Not per-case, not tiered by volume. A flat monthly cost that covers every radiograph the practice submits, whether that is a handful per week or dozens per day. No contracts, no PACS required, and free DICOM storage included.

The AI subscription is the practice’s recurring imaging cost. Teleradiology fees, which start at $86 per single-region report, are billed to the pet owner on the cases the veterinarian sends for a specialist read.

The flat AI cost gives the clinic broad coverage on every study; the per-case telerad fee gives the pet owner a credentialed read on the cases that warrant one. Together they make for a more intentional imaging workflow.

The system integrates with widely used practice management systems including DaySmart Vet, ezyVet, VetRocket, ScribbleVet, and CoVet, with more on the way.

Want to see AI in action?

To tour the platform and learn more, contact our team, or book a demo for a firsthand look at our AI and teleradiology platform.

How to Use Vetology: 30 Days to a Smart, Practical AI Radiology Workflow 

ARTICLE ABSTRACT

Vetology is a veterinary AI radiology platform that pairs automated AI screening of canine and feline radiographs with on-demand access to board-certified and board-eligible radiologists.

Key Takeaways:

  • Setup is fast. A single 45–60-minute remote call configures DICOM and account settings and includes initial training.
  • The platform works with all major digital X-ray brands and includes unlimited DICOM storage, a built-in PACS, and integrations with select PIMS and AI scribe platforms.
  • Radiographs auto-route to the AI screening tool, and reports return within minutes with findings, conclusions, and recommendations.
  • The recommended workflow is to read the radiographs first, then review the AI report as a structured second look.
  • Board-certified and board-eligible radiologists are available on demand, with STAT reads in two hours, routine reads in 24 hours.
  • Most clinics reach full workflow integration within days, and support continues with a client services check-in at month one and beyond.

Adding a new tool to any workflow can feel intimidating. But with Vetology’s fast veterinary AI implementation, validated AI screening software, and access to board-certified (and board-eligible) radiologists, you can be up and running in no time. 

Here’s what the first 30 days with Vetology look like, including the teleradiology setup process, veterinary DICOM integration, and team training and support.

Setup and Support

The Vetology support team remotely manages all aspects of setup. During a one-hour scheduled call, the support team will: 

  • Configure your account settings
  • Establish and test your DICOM connection
  • Troubleshoot installation issues
  • Provide initial training

The Vetology platform supports all brands of digital radiograph equipment, and your subscription includes unlimited, free DICOM storage and a built-in PACS if you need one. This can eliminate the need for a separate, expensive storage system, though changing your PACS is at your clinic’s discretion. Vetology also integrates with several PIMS and AI scribe platforms, giving clinics the option to submit visit notes to the platform alongside images.

Once the support team configures your hospital’s settings and enables the “auto-send” feature, all images captured on in-house equipment will route to Vetology’s AI screening tool, with the option to consult a board-certified or board-eligible radiologist. You can use the platform within an hour of installation.

Learning To Use The Vetology Platform

After a guided onboarding, day-to-day use is simple. Radiographs from your existing X-ray equipment route to the Vetology cloud automatically, and within minutes an AI screening report is generated. The report is available on the platform, by email, and, in some cases, through PIMS integrations.

Each AI screening report lists detected findings, inferred conclusions, and recommendations for next steps. Our recommended workflow is to read the radiographs first, form your own impression, then review the AI report as a structured second look. Used this way, it offers support, catching subtle findings without replacing your clinical expertise.

In straightforward cases, the report often confirms what you already saw and supports faster treatment decisions. In complex cases, it can help you decide whether to escalate to a board-certified veterinary radiologist.

For practice managers, this same workflow doubles as a training tool. Newer veterinarians can compare their impression against the AI report and a senior colleague’s review, freeing senior vets to mentor rather than carry every second read.

Adjusting To A New Workflow

As veterinary teams gain more experience with the platform, they become more proficient at using the AI screening results, identifying radiographic findings they may have missed before, and making clinical judgments about whether they need more help with a given case.

For veterinary technicians and support staff, the platform and our support team can help improve radiograph positioning and image-quality best practices, which in turn improves the accuracy of both AI and radiologist reads.

After a few weeks, most veterinary clinics have a better understanding of where Vetology fits into their daily workflows. This is also a good time to check in with the support team and make adjustments to address any workflow hiccups.

Two common workflow “hacks” the support team recommends include assigning a staff member to confirm image receipt and retrieve reports, and being sure to review AI screening results before approaching client conversations, as the results may change the clinical picture and plan.

Escalating To A Radiologist

Vetology gives your clinic direct access to boarded and board-eligible veterinary radiologists who can read images from dogs, cats, small mammals, exotics, and large animals. When you’re unsure about an image or case, you can submit the images to a human radiologist via the platform and receive a comprehensive report within two hours for STAT reads and 24 hours for routine cases. Vetology’s board-certified veterinary radiologists are available for follow-up questions via phone or email.

Over your first 30 days using Vetology, you’ll get better at determining when to escalate a case. Common reasons include:

  • AI findings that contradict clinical findings or history
  • Inconclusive results, or unclear next steps
  • Unusual anatomy
  • Unfamiliar species or complex conditions
  • Second opinions before surgery or referral
  • A solo practitioner who wants more guidance before forming a treatment plan

Through our platform, a second set of expert eyes is always available to help with imaging cases.

Checking In: Your First Month With Vetology

Vetology is invested in each clinic’s success on our platform. After the first month, the support team will check in to collect feedback on both the AI screening tool and teleradiology services, helping you to work through anything that needs adjustment.

The Vetology client services team is made up of experienced veterinary technicians, technology specialists, and customer care professionals who handle more than 14,000 support communications each year. Call anytime to reach a real person right away, or get a quick response via email or live chat.

Want to see AI in action?

To tour the platform and learn more, contact our team, or book a demo for a firsthand look at our AI and teleradiology platform.

How to Read AI Metrics Like a Confident Veterinarian

A practical guide for veterinary professionals who want to understand AI validation data, not just trust it.

ABSTRACT

Vetology publishes 11 performance metrics for each of its 89+ veterinary radiology classifiers, built on a foundation of 300,000 multi-image patient cases. This article explains what each metric means in plain clinical language so veterinary professionals can interpret AI screening results with confidence. It covers sensitivity and specificity (how well the AI classifies cases), prevalence (how common a condition is in real-world practice), positive and negative predictive values (how reliable an individual prediction is once prevalence is factored in), confidence intervals, radiologist agreement rates, AUC, F1 score, and accuracy.

A key distinction: sensitivity and specificity evaluate model performance independent of prevalence, while PPV and NPV evaluate prediction reliability and are directly affected by how common a disease is. For rare conditions, a PPV that meaningfully exceeds the underlying prevalence indicates real predictive value. All metrics are published with full transparency at vetology.net/ai-classifier-performance.

Key terms: veterinary AI, classifier performance, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), prevalence, AUC, radiologist agreement rate, confusion matrix, veterinary radiology, AI validation, diagnostic AI screening

Why This Matters for Your Practice

We recently expanded our public AI performance dashboard from four metrics to eleven for each of our 89+ classifiers. That is a lot of numbers. And if you are like most veterinary professionals, you did not go to vet school to interpret ROC curves.

But these metrics directly affect how you use AI screening results in your clinical decisions. When an AI report flags cardiomegaly or rules out pleural effusion, the metrics behind that classifier tell you how much weight to give the result. Understanding a few key numbers can change how confidently you act on what the AI is telling you.

Here is what each metric means, in plain language, with real examples from our published data.

The Two Metrics You Probably Already Know

Sensitivity (the “catch rate”)

When the condition is present, how often does the AI detect it?
A sensitivity of 89.5% means the AI correctly identifies the condition in roughly 89 or 90 out of every 100 cases where it truly exists. The remaining cases are missed findings (false negatives).

What this means for you: Higher sensitivity means fewer missed findings. For conditions where early detection is critical, like heart failure, you want sensitivity to be as high as possible.

Specificity (the “all clear” rate)

When the condition is absent, how often does the AI correctly say so?
A specificity of 92.1% means that when there is no finding, the AI agrees 92 out of 100 times. The rest are false alarms (false positives).

What this means for you: Higher specificity means fewer unnecessary follow-ups. When the AI says “not present” and specificity is high, you can feel confident about that negative result.
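Both rates fall straight out of a confusion matrix. The sketch below uses hypothetical counts (not Vetology’s actual test data) to show the arithmetic behind numbers like 89.5% sensitivity and 92.1% specificity:

```python
# Hypothetical confusion-matrix counts, for illustration only:
# tp = true positives, fn = false negatives, tn = true negatives, fp = false positives

def sensitivity(tp: int, fn: int) -> float:
    """Of the cases where the condition is truly present, what fraction did the AI flag?"""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Of the cases where the condition is truly absent, what fraction did the AI clear?"""
    return tn / (tn + fp)

# 200 cases with the condition: the AI catches 179 and misses 21.
# 800 cases without it: the AI clears 737 and false-alarms on 63.
print(round(sensitivity(179, 21), 3))  # 0.895 -> the "catch rate"
print(round(specificity(737, 63), 3))  # 0.921 -> the "all clear" rate
```

Notice that neither number depends on how common the condition is; prevalence only enters the picture with PPV and NPV.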

Prevalence

How common is this condition in real-world practice?
We calculate prevalence from our clinical case database rather than the test set, so the number reflects actual clinical frequency. This tells you the baseline probability before the AI even looks at the image. A condition with 15% prevalence behaves very differently than one at 0.5%.

Why it matters here: Prevalence is essential for understanding the next two metrics, PPV and NPV. Without knowing how common a condition is, those numbers cannot be interpreted correctly.

REAL EXAMPLE

Our Heart Failure (Canine) classifier has 89.5% sensitivity and 92.1% specificity.

That means it catches about 9 out of 10 true heart failure cases, and when it says the heart looks normal, it is right about 92% of the time.

The Two Metrics That Answer Your Real Question

Sensitivity and specificity describe how the AI performs in controlled testing. But when you are looking at a patient’s results, the question you are actually asking is different: “The AI flagged this finding. Should I believe it?”

That is where PPV and NPV come in.

Sensitivity and specificity tell you what percentage of cases you can expect the model to classify correctly. Positive predictive value (PPV) and negative predictive value (NPV) tell you what percentage of the time a given prediction is correct.

The biggest difference is that PPV and NPV take into account how prevalent a disease is, while sensitivity and specificity do not.

Sensitivity and specificity are more useful for evaluating model performance, whereas PPV and NPV are more useful for interpreting model predictions.

Positive Predictive Value (PPV)

When the AI flags a finding, how often is it actually there?

PPV depends heavily on how common the condition is. A rare condition (low prevalence) will naturally have a lower PPV even with strong sensitivity and specificity, because most of the population does not have it.

We calculate PPV using real-world prevalence from our clinical case database so the number reflects what you would see in practice.

Negative Predictive Value (NPV)

When the AI says a finding is not present, how often is it right?
For most conditions, NPV is very high because most patients do not have any given condition.

An NPV of 99.9% means you can be extremely confident in a negative result. This is where AI screening is often strongest: helping you confidently rule things out.

REAL EXAMPLE

Our Heart Failure (Canine) classifier has 89.5% Sensitivity and 92.1% Specificity, with a PPV of 11.9% and an NPV of 99.9%. That looks lopsided, and it is supposed to.

Heart failure has a prevalence of about 1.2% in our clinical database. So when the AI flags it, there is roughly a 1 in 8 chance the condition is truly present, which is still a significant increase from the baseline 1 in 83 rate. A PPV notably higher than the underlying prevalence indicates the model is providing real predictive power beyond random guessing. When it says “no heart failure,” you can be very confident.
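That lopsided pair follows directly from Bayes’ rule. The sketch below plugs the published sensitivity, specificity, and prevalence into the standard formulas; the results land close to, but not exactly on, the published 11.9% and 99.9%, since those are computed from actual case counts rather than rounded summary figures:

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """P(condition truly present | AI flags it), by Bayes' rule."""
    true_pos = sens * prev               # flagged and truly present
    false_pos = (1 - spec) * (1 - prev)  # flagged but truly absent
    return true_pos / (true_pos + false_pos)

def npv(sens: float, spec: float, prev: float) -> float:
    """P(condition truly absent | AI says clear)."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

# Heart Failure (Canine): 89.5% sensitivity, 92.1% specificity, ~1.2% prevalence
print(round(ppv(0.895, 0.921, 0.012), 3))  # ~0.121: roughly a 1-in-8 chance the flag is real
print(round(npv(0.895, 0.921, 0.012), 4))  # ~0.9986: a negative result is very reliable
```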

How much PPV is clinically useful is ultimately a question for clinicians, and should be an ongoing discussion point as we continue to retrain and improve our models.

The clinical takeaway: a positive flag for a rare condition is a signal to look more closely, not a diagnosis. A negative result is a strong reassurance.

The Metrics That Give You Context

95% Confidence Interval

How precise is the measurement?

A confidence interval of “85%–93%” means the true sensitivity most likely falls within that range. Narrower intervals mean more cases were tested and the measurement is more precise.

Wider intervals (common for rarer conditions) mean fewer test cases were available.

We publish confidence intervals for both sensitivity and specificity so you can judge how much certainty is behind each number.
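To see how interval width depends on test-set size, here is one common method for a proportion, the Wilson score interval. The specific method behind the dashboard’s intervals is not stated here, and the counts are hypothetical:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed proportion (e.g. sensitivity)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# The same observed 89.5% sensitivity, from test sets of different sizes:
print(wilson_ci(179, 200))    # wider interval: fewer test cases
print(wilson_ci(1790, 2000))  # narrower interval: ten times as many cases
```

With ten times the cases, the same observed 89.5% produces a much tighter interval, which is exactly the pattern the dashboard’s intervals reflect.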

Radiologist Agreement Rate

How often do specialists agree with each other on this finding?

This might be the most important context metric on the dashboard. Some findings are straightforward and board-certified radiologists almost always agree; others are more subjective.

If specialists disagree 10–30% of the time on a given finding, an AI performing in that range is working within the natural variability of expert interpretation.

This number gives you a benchmark for what “good” means for each specific condition.

REAL EXAMPLE

Our Cardiomegaly (Canine) classifier has a Radiologist Agreement Rate of 93%. That means even board-certified radiologists disagree about 7% of the time on this finding.

The AI’s sensitivity of 75.6% and specificity of 86.3% should be understood in that context.

The Metrics for the Data-Curious

The remaining metrics are primarily used by data scientists and statisticians to evaluate classifier quality. They are published for completeness and for those who want the full picture.

AUC (Area Under the Curve)

How well does the classifier distinguish positive from negative overall?

A single number summarizing overall quality. 1.0 is perfect; 0.5 is no better than a coin flip. Values above 0.85 indicate strong performance.

Our Heart Failure classifier has an AUC of 0.95.

F1 Score

How well does the classifier balance catching findings with avoiding false alarms?

The harmonic mean of precision and recall. Useful for comparing classifiers where both false positives and false negatives matter.
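As a quick sketch with hypothetical confusion-matrix counts, F1 combines precision (how many flags were correct) with recall (another name for sensitivity):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision (tp / all flags) and recall (tp / all true cases)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 179 true positives, 63 false positives, 21 false negatives
print(round(f1_score(179, 63, 21), 3))  # 0.81
```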

Accuracy

What percentage of all cases did the AI get right?

This sounds like the most important number, but it can be misleading for rare conditions. If a condition has 1% prevalence, a system that always says “not present” would be 99% accurate while catching nothing. That is why we publish sensitivity and specificity alongside accuracy.
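The rare-condition trap is easy to demonstrate with hypothetical counts:

```python
def accuracy(tp: int, fp: int, tn: int, fn: int) -> float:
    """Fraction of all cases classified correctly."""
    return (tp + tn) / (tp + fp + tn + fn)

# 10,000 screened cases at 1% prevalence: 100 positives, 9,900 negatives.
# A degenerate "classifier" that always answers "not present":
print(accuracy(tp=0, fp=0, tn=9_900, fn=100))    # 0.99 -> 99% accurate, catches nothing

# A genuinely useful classifier (90% sensitivity, 95% specificity) on the same cases:
print(accuracy(tp=90, fp=495, tn=9_405, fn=10))  # 0.9495 -> slightly lower accuracy, catches 90/100
```

The useful classifier scores lower on accuracy while catching nearly every true case, which is why accuracy alone cannot be the headline metric for rare conditions.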

How to Use This in Your Practice

You do not need to memorize these metrics. But the next time you review an AI screening report, three quick checks can change how you use the results:

  1. Check the NPV for negative results. For most conditions, the NPV is 99%+. When the AI says “not present,” you can move on with confidence.
  2. Check the prevalence for positive flags. A positive flag on a rare condition (prevalence under 2%) is a signal to investigate further, not a confirmation. A positive flag on a common condition (prevalence above 10%) carries more weight.
  3. Check the Radiologist Agreement Rate for borderline calls. If specialists disagree 20% of the time on a finding, an AI result in the gray zone is reflecting genuine clinical ambiguity, not a system failure.

The full metrics for all 89+ classifiers are published at vetology.net/ai-classifier-performance. We publish them because informed trust is better than blind trust, and veterinary professionals deserve the data to make their own judgment calls.

View the complete AI performance dashboard

Sensitivity, specificity, PPV, NPV, confidence intervals, and Radiologist Agreement Rate for every classifier we validate.

Vetology’s Veterinary AI Dashboard Now Tracks 11 Public Metrics

FOR IMMEDIATE RELEASE

Vetology Expands Public AI Validation Dashboard to 11 Metrics Per Condition Classifier, Commits to Ongoing Model Retraining

Vetology publishes full statistical profiles for 89+ classifiers, reinforcing that building AI is only half the job.

March 31, 2026 – SAN DIEGO – Vetology, a provider of AI-generated radiology screening reports and board-certified veterinary teleradiology, today expanded its publicly available AI performance dashboard from four metrics per classifier to eleven. The update covers 89+ validated classifiers across canine and feline thoracic, abdominal, and musculoskeletal imaging.

The expanded dashboard now reports sensitivity, specificity, positive predictive value, negative predictive value, AUC, F1 score, accuracy, prevalence, confidence intervals, and Radiologist Agreement Rate for each condition. The data is available on our website. 

The update also reflects Vetology’s commitment to maintaining its existing classifiers alongside building new ones. Of the 89+ classifiers currently published, 31 are retrained versions of previously released models, revalidated against updated board-certified radiologist consensus data. The oldest classifiers in the current dashboard date to August 2024; all have been revalidated with confusion matrices generated as recently as February 2026.

“AI is changing fast, and we are working just as hard to keep pace. We put the same rigor into maintaining our older models as we do into building new ones. Publishing the data for all of them, new and retrained, is how we honor our commitment to our veterinary partners and patients.”

Eric Goldman, President, Vetology

New classifiers added in this update include Obscuring Pleural Effusion, Esophageal Enlargement, Intervertebral Disc Disease (Thoracic), Small Intestine Enlargement (Feline), Colon Diffuse Distension (Feline), and a consolidated Heart Failure classifier for canine imaging. The Heart Failure classifier reports 89.5% sensitivity and 92.1% specificity; the Obscuring Pleural Effusion classifier, designed to flag cases where fluid volume may limit diagnostic interpretation, reports 87.2% sensitivity and 96.7% specificity.

“We’re improving our classifiers every month, and every update is revalidated against fresh consensus reads from board-certified radiologists – not the same training set warmed over. That’s why we publish eleven metrics per classifier instead of the one or two you’ll see from other vendors. Sensitivity by itself doesn’t tell a clinician whether to trust a result. PPV, confidence intervals, specificity – that’s what lets a veterinarian decide how much weight to put on what the model is telling them. We think that level of transparency should be the baseline for veterinary imaging AI. As far as we can tell, nobody else is publishing it.”

Cory Clemmons, CTO, Vetology

Vetology’s validation data is built on a foundation of 300,000 multi-image patient cases, with classifier performance validated against board-certified veterinary radiologist consensus. The company publishes these metrics as part of its commitment to transparency in an industry where, according to a 2026 Frontiers in Veterinary Science audit, 63.3% of commercial veterinary AI vendors do not disclose validation data publicly.

ABOUT VETOLOGY
Vetology provides AI-generated radiology screening reports and on-demand teleradiology consultations from board-certified veterinary radiologists, cardiologists and a dentist, giving veterinary practices both speed and specialist depth in a single platform. The Vetology AI screening system covers a growing list of conditions across canine and feline thoracic, abdominal, and musculoskeletal imaging. Screening results are designed to fit naturally into existing clinic workflows, so veterinary teams can move from image to informed decision without adding steps to their day. Vetology was founded on the belief that humans and AI are better together.

Learn more at vetology.net.

Media Contacts



Vetology Strengthens Leadership Team with New Director of Sales

FOR IMMEDIATE RELEASE

Veterinary commercial leader Pierre D'Amours joins growing team as Vetology expands board-certified radiologist services and AI diagnostic platform

March 16, 2026 – SAN DIEGO, CA – Vetology, a provider of AI-assisted radiology and board-certified teleradiology services for veterinary practices, today announced the addition of Pierre D’Amours as Director of Sales. The newly created role reflects the company’s growth trajectory.

Eric Goldman, Vetology’s president, has led the company’s commercial efforts since its founding; he and D’Amours will partner closely to build a sales organization that brings Vetology’s services to more veterinary practices across North America and internationally.

Vetology’s platform now includes 94+ feline and canine AI classifiers that screen radiographs for conditions across thorax, abdomen, spine, and musculoskeletal studies, with new classifiers releasing monthly and all performance metrics published publicly. The company also provides on-demand access to board-certified veterinary radiologists for specialist-level interpretation. As the platform and radiologist team grow, Vetology is investing in the commercial infrastructure to match.

“We started Vetology to close the gap between the number of practices that need diagnostic imaging expertise and the number of board-certified radiologists available to provide it. AI was the solution — a way to give every practice access to consistent, validated screening regardless of where they are and when they need it. We paired that with our own team of board-certified radiologists so practices have both. I’ve been having this conversation with practices since day one. Pierre has the industry relationships and credibility to help us bring Vetology’s service and solutions to more practices, and I’m excited to work alongside him.”

Eric Goldman, President, Vetology

An Industry Insider

D’Amours brings seven years of veterinary commercial experience as Vice President of North America Sales & Operations at Movora (Vimian Group AB), where he ran a $70M+ veterinary medical devices and SaaS business. He is fluent in English and French, holds a Bachelor of Commerce from Concordia University, and has deep relationships across veterinary practices, distributors, and corporate groups throughout North America.

Pierre understands the challenges inherent in running a veterinary practice and how the right technology can solve real problems in day-to-day operations. At Vetology he will work with veterinary doctors and management teams to make sure that we are delivering on our promises both during and after the sale.

“When I evaluated Vetology, what stood out was a company that had done the hard work first — building the AI, hiring board-certified radiologists, validating the classifiers, and publishing all of it for the industry to review. That kind of transparency is rare in this space. I’ve spent years working with veterinary practices, and the right technology should solve real operational problems, not add complexity.

“My focus is to partner closely with DVMs to make sure we deliver on that promise — during the sales process and well after implementation — and to build a sales team grounded in trust, honest about where our solutions fit, and focused on long-term partnerships over transactions.”

Pierre D’Amours, Director of Sales, Vetology

# # #

ABOUT VETOLOGY

Vetology is a veterinary diagnostic imaging support company that provides AI-generated screening reports and traditional teleradiology services by board-certified veterinary radiologists. Built by radiologists, Vetology focuses on improving patient outcomes through accuracy, speed, and reliability in diagnostic imaging. Our platform is designed to integrate seamlessly into existing hospital workflows, helping clinicians make informed decisions quickly.

Learn more at vetology.net.

