Netherlands
June 1, 2022
Machine Learning

Caption Health

The Ultrasound Dilemma

Ultrasound (US) is a complex diagnostic tool that requires advanced skills, usually limited to those with specialized training. Caption AI provides a broad spectrum of features to address these limitations, chiefly the modality's operator dependency. Effective maneuvering of a US probe demands hand-eye coordination that takes years of practice to develop. Caption AI emulates a sonographer's expertise by providing real-time guidance on how to handle the transducer.

Caption Health’s AI Aspects

Caption AI can unlock the advantages of point-of-care US (POCUS) across multiple clinical settings, from more timely and accurate diagnosis and monitoring in the intensive care unit to improved perioperative management and fewer delays before surgery. (1)

Emulating expertise with AI: Caption AI offers a new POCUS technology that allows healthcare providers to confidently perform quality US exams regardless of prior experience.

Transforming care: Caption AI enables fast, precise, and timely evaluation of cardiac function to establish a patient diagnosis and improve clinical management.

Standardizing quality: Real-time feedback on diagnostic image quality and automated evaluation of overall exam quality help standardize and improve care.

Driving impact: Expanded access to ultrasound decreases costs, increases clinical effectiveness, and improves revenue for hospitals and institutions.

Starting smart: Caption AI provides a positional diagram suggesting where to place the US transducer for each view, along with a representative image of what should be seen from that window.

Caption Health’s IT Solutions

Caption Health AI assists physicians and health providers in analyzing US images. The tool provides instant feedback on diagnostic image quality and automated evaluation of overall exam quality, driving an increase in the quality of care. 

  • Expert guidance: handling a US probe demands excellent hand-eye coordination; Caption AI emulates this expertise by providing real-time guidance on how to position and move the transducer.
  • Automated quality evaluation: conventional US depends on an expert practitioner to identify anatomical structures and evaluate image quality. The software shows technicians how close they are to capturing a good image and automatically saves it once it meets the minimum diagnostic standard (a sketch follows this list).
  • Intelligent analysis: the system automatically calculates cardiac ejection fraction from any combination of up to three commonly acquired cardiac views: parasternal long-axis, apical four-chamber, and apical two-chamber.
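
A minimal sketch of how such quality-gated auto-capture could work, assuming a hypothetical frame-quality model; the names and threshold below are illustrative, not Caption AI's actual interface:

    from dataclasses import dataclass
    from typing import Iterable, List

    QUALITY_THRESHOLD = 0.7  # assumed minimum diagnostic-quality score (0..1)

    @dataclass
    class Frame:
        pixels: bytes   # raw ultrasound frame
        quality: float  # score from a hypothetical image-quality model

    def auto_capture(frames: Iterable[Frame], clip_len: int = 30) -> List[List[Frame]]:
        """Save a clip once enough consecutive frames meet the quality bar."""
        saved: List[List[Frame]] = []
        buffer: List[Frame] = []
        for frame in frames:
            if frame.quality >= QUALITY_THRESHOLD:
                buffer.append(frame)
                if len(buffer) == clip_len:  # a full diagnostic-quality clip
                    saved.append(buffer)
                    buffer = []
            else:
                buffer = []  # quality dropped below threshold; start over
        return saved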

Caption Health software is integrated with the Terason uSmart 3200T Plus portable US system, which offers a wide range of clinical applications such as thoracic, abdominal, and vascular scanning. The system supports multiple scan modes, including Color Doppler, Pulsed Wave Doppler, CW Doppler, and M-Mode.

Caption Health’s Finances

Caption Health (CH) recently secured $53 million in Series B funding to continue developing its FDA-cleared technology. Existing investor DCVC led the financing; Atlantic Bridge, Edwards Lifesciences, and existing investor Khosla Ventures also participated.

This capital will allow the company to expand its commercial operations, continue developing new AI technologies, and establish new partnerships. As use of the software grows, the company plans to add features that extend Caption AI to new clinical scenarios.

CH's objective is to bring Caption AI to market as soon as possible to assist clinicians caring for patients with COVID-19. The FDA recently granted CH clearance for Caption AI after multiple requests from hospitals across the country. (3)

AI Impact on Implanted Electrophysiological Devices

A new deep learning (DL) software system was developed to learn from transthoracic echocardiography (TTE) images, aiming to provide diagnostic imaging in patients with and without implanted electrophysiological devices and thereby evaluate right ventricular size and function. The study included 240 patients, each examined by a sonographer working without the trained software and by nurses working with AI support. AI-guided imaging provided high-quality images for right ventricular assessment in more than 80% of cases, with no statistically significant difference between the exams performed by the nurses and those performed by the sonographer. (4)

Automated Echocardiographic Quantification of Left Ventricular Ejection Fraction Using a Machine Learning Algorithm

Echocardiographic quantification of left ventricular ejection fraction (LVEF) depends on automatic or manual identification of endocardial boundaries, followed by a model-based calculation of end-systolic and end-diastolic left ventricular volumes. Current artificial intelligence models can automatically measure left ventricular volumes and function, but these systems are prone to errors in certain patients. In this project, researchers tried a new approach: they developed an algorithm that mimics the human expert's eye, estimating the degree of ventricular contraction and expansion independently of ventricular size.
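
For context, the conventional volume-based calculation that the researchers contrast with their approach reduces to a simple ratio of the two ventricular volumes; a minimal sketch in Python:

    def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
        """Volume-based LVEF (%): share of end-diastolic volume ejected per beat."""
        if edv_ml <= 0 or not 0 <= esv_ml <= edv_ml:
            raise ValueError("volumes must satisfy 0 <= ESV <= EDV, with EDV > 0")
        return 100.0 * (edv_ml - esv_ml) / edv_ml

    # Example with typical values: EDV 120 mL, ESV 50 mL -> LVEF about 58%
    print(ejection_fraction(120.0, 50.0))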

The group developed a machine learning (ML) algorithm trained to automatically estimate LVEF on a database of more than 50,000 echocardiographic studies, including AP2 and AP4 views. The system was tested on a group of 99 patients, and the ejection fraction (EF) values were compared with the average measurements of three experts using conventional volume-based techniques. The automated LVEF estimates were highly consistent and in excellent agreement with the reference values, with a sensitivity of 0.90 and a specificity of 0.92 for detecting EF ≤35%, similar to the expert clinicians' measurements. These results suggest important clinical implications: echocardiography exams could include automated estimates of ventricular function alongside the images. (5)
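
To make the reported accuracy figures concrete, here is a minimal sketch of how sensitivity and specificity for detecting EF ≤35% would be computed from paired automated and reference readings; the sample values are illustrative, not the study's data:

    def sens_spec(auto_ef, ref_ef, cutoff=35.0):
        """Sensitivity/specificity of automated EF for detecting EF <= cutoff,
        with the expert consensus readings taken as ground truth."""
        tp = fp = tn = fn = 0
        for a, r in zip(auto_ef, ref_ef):
            truth_pos, pred_pos = r <= cutoff, a <= cutoff
            if truth_pos and pred_pos:
                tp += 1
            elif truth_pos:
                fn += 1
            elif pred_pos:
                fp += 1
            else:
                tn += 1
        return tp / (tp + fn), tn / (tn + fp)

    # Illustrative paired readings (EF in %), not data from the study:
    auto = [30, 55, 40, 25, 60, 34]
    ref = [32, 58, 42, 28, 62, 37]
    sensitivity, specificity = sens_spec(auto, ref)  # -> 1.0, 0.75 here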

Machine Learning and Echocardiography

An ML algorithm can provide accurate diagnostic US images, allowing left ventricular ejection fraction (LVEF) calculation that applies the human approach of "eyeballing." The study included 19 medical students with no previous knowledge of ultrasound (US), who received a 2.5-hour online course on cardiac anatomy and the basics of echocardiography. Without having scanned any patients before, the novices obtained one of the three views, parasternal long-axis (PLAX), apical four-chamber (AP4), or apical two-chamber (AP2), in 91% of attempts, taking between 2.5 and 4 minutes per loop versus a mean of 32 seconds for the experts. Diagnostic image quality was obtained in most patients (91%), demonstrating that the system allows even novices with minimal training to perform an echocardiography study. (6)

Acquisition of Diagnostic Echocardiograms by Novices Using Artificial Intelligence

The use of AI in echocardiography has largely been limited to post-processing analysis of already-acquired images. A group of researchers developed a DL system capable of guiding image acquisition itself; with it, novices without prior US training can complete studies comprising ten standard TTE views. The algorithm was trained on more than 5,000,000 observations of image orientation and quality. Guided by the DL algorithm, eight nurses without prior US experience obtained studies from 30 patients.

The results indicate that the nurses were able to obtain diagnostic-quality studies comparable to sonographers' for most of the measured parameters, except for the tricuspid valve and inferior vena cava. This DL algorithm could expand the use of echocardiography in resource-limited environments or settings where immediate assessment of cardiac function and anatomy is needed. (7)
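
A minimal sketch of what such frame-level acquisition guidance could look like, assuming a hypothetical view classifier and quality model; the view labels, threshold, and messages are illustrative and do not reflect the study's actual implementation:

    def guidance_step(target_view: str, predicted_view: str, quality: float) -> str:
        """Turn per-frame model outputs into one on-screen instruction."""
        if predicted_view != target_view:
            return f"Reposition probe: seeing {predicted_view}, need {target_view}"
        if quality < 0.7:  # assumed diagnostic-quality threshold
            return "Hold position; fine-tune angle to improve image quality"
        return "Good image: hold still, clip will be recorded"

    # Example: operator is working toward the apical four-chamber (AP4) view
    print(guidance_step("AP4", "PLAX", 0.4))
    print(guidance_step("AP4", "AP4", 0.9))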
