An artificial intelligence model surpasses clinicians in assessing images of tympanic membranes obtained from children with possible middle ear effusions. The new AI tool was developed by G. Crowson, M.D., of Massachusetts Eye & Ear in Boston, and was tested through a retrospective cohort study at a tertiary academic medical center from 2018 to 2021. The researchers used a training collection of 639 images of tympanic membranes with otitis media with effusion and acute otitis media to train both a neural network and a proprietary commercial image classifier from Google.
The researchers found that the neural network achieved a mean prediction accuracy of 80.8 percent, while the Google classifier achieved 85.4 percent. In a validation survey in which 39 clinicians analyzed 22 otoscopy images, the clinicians' average diagnostic accuracy was 65.0 percent; the model attained 95.5 percent accuracy on the same image set.(1)
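To make the approach concrete, here is a minimal sketch of fine-tuning a pretrained image classifier on otoscopy images, assuming PyTorch and torchvision; the folder layout, architecture, and hyperparameters are illustrative assumptions, not the study's published pipeline.

```python
# Minimal transfer-learning sketch for a two-class otoscopy classifier.
# Hypothetical setup: the study's actual architecture and data are not public here.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Expects folders like tympanic_membranes/ome/ and tympanic_membranes/aom/.
train_data = datasets.ImageFolder("tympanic_membranes", transform=transform)
loader = DataLoader(train_data, batch_size=16, shuffle=True)

# Reuse an ImageNet-pretrained backbone; replace the final layer for our classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

With only 639 training images, reusing a pretrained backbone rather than training from scratch is the standard way to reach usable accuracy.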
The University of Chicago Medicine has joined the I3LUNG project to develop a decision-making tool for individualized lung cancer treatment plans. The project will use several kinds of artificial intelligence (AI) and machine learning to process and analyze clinical data, laboratory results, radiology images, and biological characteristics of tumors from 2,000 patients across numerous research centers. The University of Chicago team will draw on the expertise of researchers from Garassino's labs, and the analysis is expected to reveal new associations that can feed algorithms for predicting immunotherapy outcomes.
The project aims to use machine learning and artificial intelligence to anticipate the effects of immunotherapy and other curative treatments, something clinicians cannot currently do. According to the available data, fewer than 30% of patients have a strong response to immunotherapy, yet doctors cannot predict who those patients will be. Improving the ability to make these predictions can help patients and physicians better tailor treatments to individual cases.(2)
Artificial intelligence (AI) is pushing prosthetic hands well beyond most traditional prosthetics. Electromyography (EMG) measures muscular electrical activity in response to a nerve's impulse to move the hand. The new prosthetics also depend on EMG but use AI to improve their performance significantly.
The AI-enabled prosthetic hands are easier to control and master for daily use than standard prosthetic hands. Users report shorter training times, and the captured EMG signals are translated into more precise control of the prosthetic. Users' EMG signals are processed by an algorithm coupled to the device, and its output is transmitted to a cloud server.
The cloud servers incorporate a hand "signal training" database used to create a hand control model that fits the user's EMG and muscle-control patterns. Once the initial signal training is complete, the prosthetic hand is loaded with this control model. Over time, the machine learning (ML) system coupled to the prosthetic "learns" from the user's repetitive use patterns and refines motor performance to match the user's habits and needs more closely.(3)
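A simplified sketch of the kind of EMG featurization and gesture classification such systems rely on appears below; the window sizes, features, and random-forest classifier are illustrative stand-ins, since the article does not specify the vendor's algorithms.

```python
# Sketch: classic time-domain EMG features feeding a gesture classifier.
# All shapes, gesture counts, and the model choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def emg_features(window: np.ndarray) -> np.ndarray:
    """Featurize one (samples, channels) window of raw EMG."""
    mav = np.mean(np.abs(window), axis=0)                        # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))                  # root mean square
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)   # zero crossings
    return np.concatenate([mav, rms, zc])

# Hypothetical "signal training" phase: labeled EMG windows per hand gesture.
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 256, 8))   # 200 windows, 256 samples, 8 channels
labels = rng.integers(0, 4, size=200)          # e.g., 4 target gestures

X = np.stack([emg_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

# At run time, each incoming window is featurized and mapped to a hand command;
# periodic retraining on the user's accumulated data refines the control model.
print(clf.predict(emg_features(windows[0])[None, :]))
```

In a deployed device, the training step would run on the cloud server against the user's accumulated signal database, with only the fitted control model pushed back to the prosthetic.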
Google has been working for three years on a new tool powered by artificial intelligence, and it recently announced trials to help users understand skin, hair, and nail complaints. Once the tool launches, users will be able to use their phone's camera to take three pictures of a hair, skin, or nail concern from different positions and angles. The app then asks questions about symptoms to help the tool narrow down the possible cause. The AI algorithm analyzes this information, matches it against its knowledge base of 288 conditions, and gives users a distilled list of possible matches. For each candidate condition, the tool shows dermatologist-reviewed details and answers to commonly asked questions. Researchers developed and refined the model with 65,000 images and case data of diagnosed skin conditions across different demographics.(4)
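The final ranking step can be pictured as a softmax over the model's 288 condition scores followed by a top-k cut, as in the sketch below; the condition names and scores are placeholders, since Google has not published the tool's internals.

```python
# Illustrative top-k ranking over 288 condition scores; everything here
# (labels, logits) is a made-up placeholder, not Google's actual model.
import numpy as np

CONDITIONS = [f"condition_{i}" for i in range(288)]  # hypothetical label set

def top_conditions(logits: np.ndarray, k: int = 3):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax over all 288 conditions
    top = np.argsort(probs)[::-1][:k]        # indices of the k best matches
    return [(CONDITIONS[i], float(probs[i])) for i in top]

logits = np.random.default_rng(1).standard_normal(288)
for name, p in top_conditions(logits):
    print(f"{name}: {p:.1%}")
```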
Artificial intelligence can detect COVID-19 infection in people's voices through a mobile phone app, according to an investigation presented by the Institute of Data Science at Maastricht University, The Netherlands, at the European Respiratory Society International Congress in Barcelona, Spain. According to the researchers, the new model is more precise than PCR tests and is inexpensive, fast, and easy to use, features that make it appealing for low-income countries where PCR and other laboratory tests are expensive.
The app used data from the University of Cambridge's crowd-sourced COVID-19 Sounds App, which includes 893 audio samples from 4,352 healthy and non-healthy participants, 308 of whom had tested positive for COVID-19. The researchers used a voice analysis technique called Mel-spectrogram analysis, which captures voice features such as loudness, power, and variation over time. They found that one model, based on Long Short-Term Memory (LSTM) neural networks, outperformed the others: its overall accuracy was 89%, its ability to correctly detect positive cases (sensitivity) was 89%, and its ability to correctly identify negative cases (specificity) was 83%.(5)
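The described pipeline, a Mel spectrogram feeding an LSTM binary classifier, can be sketched as follows; the library choices (librosa, PyTorch), layer sizes, and file name are assumptions rather than the study's published code.

```python
# Sketch: Mel-spectrogram features into a small LSTM COVID-19 classifier.
# Sizes and the audio file are hypothetical; the model below is untrained.
import librosa
import torch
import torch.nn as nn

def mel_features(path: str) -> torch.Tensor:
    y, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)                   # (n_mels, frames)
    return torch.tensor(log_mel.T, dtype=torch.float32)  # (frames, n_mels)

class VoiceLSTM(nn.Module):
    def __init__(self, n_mels: int = 64, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # logit for "COVID-positive"

    def forward(self, x):                   # x: (batch, frames, n_mels)
        _, (h, _) = self.lstm(x)            # final hidden state summarizes the clip
        return self.head(h[-1])

model = VoiceLSTM()
x = mel_features("sample_voice.wav").unsqueeze(0)  # hypothetical recording
print(torch.sigmoid(model(x)))                     # probability-shaped output
```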
The Koch Institute for Integrative Cancer Research at the Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital (MGH) have created a deep-learning approach to help categorize tumors of unknown origin by analyzing gene expression profiles related to early cell development and differentiation. The researchers took gene expression data for tumor specimens from The Cancer Genome Atlas (TCGA) and split this knowledge base into parts, each corresponding to a particular point in a tumor's development. They then gave each of these parts a numerical value and fed the result into a machine learning model, the Developmental Multilayer Perceptron (D-MLP), which scores a tumor based on its developmental profile and then predicts its origin.
After that, an image-generation approach modeled on DALL-E, a system that uses natural language to transform words into images, is used to join separate elements or values into a single representation and build a set of images that can be matched, applying this concept to cancer.(6)
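In spirit, the D-MLP is a small feed-forward network over per-phase scores; the sketch below shows that shape, with the number of developmental segments and candidate origins invented for illustration.

```python
# Rough sketch of a multilayer perceptron over developmental-phase scores;
# N_PHASES and N_ORIGINS are invented, not the published D-MLP dimensions.
import torch
import torch.nn as nn

N_PHASES = 12    # hypothetical developmental segments, one score each
N_ORIGINS = 14   # hypothetical candidate tissue-of-origin classes

dmlp = nn.Sequential(
    nn.Linear(N_PHASES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_ORIGINS),    # logits over candidate origins
)

phase_scores = torch.rand(1, N_PHASES)            # one tumor's phase-score vector
predicted_origin = torch.argmax(dmlp(phase_scores), dim=1)
print(predicted_origin)
```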
Bugs + Drugs, a mobile app designed by Epocrates, shows the bacterium types prevalent in a given ZIP code, helping clinicians choose antibiotics more effectively and avoid worsening antimicrobial resistance. The tool draws on bacteria identified in urine, skin, and other sample types within each geographic area, and clinicians can also find appropriate antibiotic options, dosing, possible interactions, and safety information through the platform. With this, clinicians have an innovative digital tool in their pocket that offers the localized susceptibility data needed to make informed, practical point-of-care decisions based on knowledge of the bacteria in their patients' communities.
Bugs + Drugs draws its insight from de-identified ambulatory-care microbiology data within athenaClinicals, Athenahealth's electronic health record. The athenaClinicals network comprises over 145,000 clinicians and serves roughly 20% of the U.S. population; with such a broad reach, Epocrates expects to provide accurate, relevant bacterium data throughout the country.(7)
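Conceptually, the app's core lookup maps a ZIP code and organism to local susceptibility rates, as in this toy sketch; the organisms and percentages below are entirely fabricated placeholders, not Epocrates data.

```python
# Toy ZIP-code-keyed susceptibility lookup; all values are fabricated examples.
ANTIBIOGRAM = {
    "02115": {"E. coli (urine)": {"nitrofurantoin": 0.96, "ciprofloxacin": 0.78}},
    "60637": {"E. coli (urine)": {"nitrofurantoin": 0.94, "ciprofloxacin": 0.71}},
}

def local_susceptibility(zip_code: str, organism: str) -> dict:
    """Return antibiotic -> susceptibility rate for an organism in one ZIP code."""
    return ANTIBIOGRAM.get(zip_code, {}).get(organism, {})

print(local_susceptibility("02115", "E. coli (urine)"))
```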
The brain research and advocacy non-profit Cohen Veterans Bioscience (CVB) has announced the publication of results from its digital health research program, which analyzed data from the Parkinson's Progression Markers Initiative (PPMI) to detect the presence or absence of Parkinson's disease (PD). Sensor technology has shown promise in aiding the detection and classification of diseases like PD but has had minimal validation in real-world settings. As part of the PPMI study cohort, investigators collected data passively and continuously using the Verily Study Watch in subjects' natural environments. Using this data, investigators at CVB, in association with PPMI investigators, applied novel deep learning artificial intelligence (AI) techniques to explore whether real-life activity can indicate the presence of PD.
The results are promising: in a pilot sample, investigators could discriminate between subjects with and without a PD diagnosis with nearly 90% accuracy on single walk-like measures and 100% accuracy when assessing data accumulated over one day.(8)
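One plausible shape for such a classifier is a small 1-D convolutional network over accelerometer windows, with per-window probabilities pooled across a day; the architecture, sampling rate, and pooling rule below are assumptions, not the study's published model.

```python
# Sketch: classify walk-like accelerometer windows as PD vs. non-PD, then
# pool over a day. The model and data shapes are illustrative assumptions.
import torch
import torch.nn as nn

class GaitCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=9, padding=4), nn.ReLU(),  # 3-axis accel
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),       # PD-probability logit per window
        )

    def forward(self, x):           # x: (batch, 3, samples)
        return self.net(x)

model = GaitCNN()                          # untrained, for shape only
windows = torch.randn(24, 3, 3000)         # fake day of 60 s walk segments @ 50 Hz
probs = torch.sigmoid(model(windows))      # per-window PD probability
print(probs.mean())                        # one simple way to pool a day of data
```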