Google’s medical AI fails its first real-world tests despite its great potential





Artificial Intelligence, especially Google’s, has become a selling point, glorified almost to the status of miracle worker. Nothing could be further from the truth: it is an incredibly promising technology, but even Google’s has its flaws.

Google Health, Google’s medical division, built a deep learning model that examines photographs of the eye for evidence of diabetic retinopathy, one of the leading causes of vision loss worldwide. Google researchers tested it in rural clinics in Thailand, and the result was disastrous.

Although the model had shown very high accuracy in theory, the AI proved a complete failure in the field. It was impractical and deeply frustrating for the staff: its results were inconsistent and far removed from those it had produced in earlier tests.

Google’s AI doesn’t measure up

The lessons are hard ones. Medicine involves many factors that can push an AI, however well trained, away from the expected results; no matter how good the lab numbers look, an AI’s usability still has to be checked in real field tests.

The paper Google itself has published details the deployment of this tool, meant to augment the existing process by which patients at various clinics in Thailand are screened for retinopathy. Nurses see patients one by one, photograph their eyes, and send the images in batches to ophthalmologists, who evaluate them and return results. That turnaround takes a long time: between four and five weeks.


The idea behind the model is to deliver that same service in seconds. Internal tests showed 90% accuracy; nurses could make preliminary recommendations, and results that conventionally took an average of a month arrived with the AI in just over a minute. But the study shows that this is not how it played out in practice.

“We observed a high degree of variation in the eye-screening process across the 11 clinics in our study. The image capture and grading processes were consistent across clinics, but nurses had a high degree of autonomy in how they organized the screening workflow.

“The environments and places where the exams were carried out varied widely. Only two clinics had a dedicated screening room that could be darkened to ensure patients’ pupils were large enough to take a high-quality fundus photo. The images sent to the server did not meet the algorithm’s high standards.”

The problem, as is easy to imagine, lies in that variability. “The deep learning system has strict guidelines regarding the images it will evaluate. If an image has a bit of blur or a dark area, the system will reject it. […] The system’s high standards for image quality are at odds with the consistency and quality of the images the nurses routinely captured. This mismatch caused frustration and extra work.”
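To make that rejection behavior concrete, here is a minimal sketch of an automated quality gate of the kind the study describes, written in Python with OpenCV. The specific checks (variance of the Laplacian as a blur measure, mean intensity as a darkness measure) and both thresholds are illustrative assumptions, not Google’s actual criteria.

```python
# A minimal sketch of an image quality gate like the one described above.
# Checks and thresholds are illustrative assumptions, not Google's pipeline.
import cv2
import numpy as np

BLUR_THRESHOLD = 100.0  # assumed: Laplacian variance below this counts as blurry
DARK_THRESHOLD = 40.0   # assumed: mean grayscale intensity below this counts as dark


def accept_fundus_image(path: str) -> bool:
    """Return True if the photo passes the (hypothetical) quality checks."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False  # unreadable or missing file
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()  # low variance => blur
    brightness = float(np.mean(img))                  # low mean => underexposed
    return sharpness >= BLUR_THRESHOLD and brightness >= DARK_THRESHOLD
```

A gate like this illustrates the mismatch the study describes: a photo taken in a bright room with a slightly constricted pupil can fall below the thresholds and be bounced straight back to the nurse.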

The process gets complicated

Illustration of Artificial Intelligence in culture.

In short, a process that was supposed to be simplified ended up more complicated. Other factors compounded the problem: the clinics’ internet connections were slower and less reliable, and because the images had to reach the server at high quality, staff had to wait for each upload to finish. The system would simply reject low-quality images.

Images took 60 to 90 seconds to upload. That may not sound like much, but remember there is a screening queue, and that time limits the number of patients who can be examined in a day. To make matters worse, at one of the test clinics the connection dropped, leaving the internet down for two hours: the number of patients examined fell from 200 to 100.
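Some back-of-the-envelope arithmetic shows how quickly that upload time eats into a clinic’s day. The sketch below uses the article’s figures (60 to 90 seconds per upload, a two-hour outage); the eight-hour screening day and fully serialized uploads are assumptions made for illustration.

```python
# Rough upper bound on daily screenings if the upload is the bottleneck.
# Figures from the article: 60-90 s per upload, one 2-hour outage.
# Assumed for illustration: an 8-hour screening day, one upload at a time.

def daily_capacity(upload_s: float, day_hours: float = 8.0,
                   outage_hours: float = 0.0) -> int:
    """Patients per day if each one must wait for a serialized upload."""
    usable_seconds = (day_hours - outage_hours) * 3600
    return int(usable_seconds // upload_s)

print(daily_capacity(75))                    # ~384 at a 75 s average upload
print(daily_capacity(75, outage_hours=2.0))  # ~288 after a 2-hour outage
```

Even this optimistic bound counts only the upload; with consent, photography, and counseling also in the loop, real throughput was around 200 patients, and an outage hits harder than the linear model suggests, which is how one clinic’s screenings dropped from 200 to 100.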

One thing led to another: in the attempt to take advantage of the technology, fewer people received treatment, and the nurses had to busy themselves finding workarounds for new problems. In fact, patients were even advised not to participate in the study.

But surely some cases went well? Even the best-performing clinics had problems: many patients were not prepared for instant evaluations, much less for setting up follow-up appointments immediately after the images were sent.

Interpretation of a futuristic Artificial Intelligence.

“As a result of the design of the prospective study protocol, and potentially the need to make plans to travel to the referral hospital, we observed nurses at clinics 4 and 5 discouraging patients from participating in the prospective study, for fear that it would cause them unnecessary hardship.”

“[Patients] are not concerned with accuracy, but they are concerned with what the experience will be like. Will I waste my time if I have to go to the hospital?” says one nurse. “I assure [patients] they don’t have to go to the hospital. They ask: ‘Does it take longer?’, ‘Am I going somewhere else?’ Some people are not ready to go, so they will not join the study.”

It’s not all bad news

None of this means the idea of AI should be written off. The problem is not that AI is a bad tool, but that the solution must adapt to the problem and the place. Patients and nurses appreciated the system when it worked well, since it sometimes helped show that a condition was serious and needed attention soon. At times it also reduced the clinics’ dependence on a severely limited resource: local ophthalmologists.

Even so, the study’s own authors make clear that this is a very early application. They argue that “attending to people (their motivations, values, professional identities) is vital when planning these deployments.” It is therefore easy to imagine that, under the right conditions and with a few adjustments, the system could greatly benefit the settings where it is applied.
