Allen Institute’s VeriSci uses artificial intelligence to verify scientific claims




Researchers affiliated with the University of Washington and the Allen Institute for Artificial Intelligence say they have developed an AI system, VeriSci, that can automatically verify scientific claims. The system can not only identify abstracts within studies that support or refute a claim, but can also provide rationales for its predictions in the form of evidence sentences drawn from those abstracts.

Automatic fact-checking could help address the reproducibility crisis in the scientific literature, in which many studies have proven difficult (or impossible) to replicate. A 2016 survey of 1,500 scientists reported that 70% of them had tried but failed to reproduce at least one other scientist’s experiment. And a 2009 study found that 2% of scientists admitted to falsifying research at least once, while 14% admitted personally knowing someone who had.

The Allen Institute and University of Washington team sought to address the problem with a corpus, SciFact, containing (1) scientific claims, (2) abstracts that support or refute each claim, and (3) annotations with supporting rationales. They curated it using a labeling technique that draws on citation sentences, a natural source of claims in the scientific literature, and then trained a BERT-based model to identify rationale sentences and label each claim.

The SciFact dataset comprises 1,409 verified scientific claims paired with a corpus of 5,183 abstracts, collected from S2ORC, a public database of millions of scientific articles. To ensure that only high-quality articles were included, the team filtered out articles with fewer than 10 citations or incomplete text, randomly sampling from a collection of well-regarded journals spanning domains from basic science (for example, Cell, Nature) to clinical medicine.
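To make the filtering step concrete, here is a minimal Python sketch of the kind of corpus selection described above. The field names (n_citations, has_full_text, journal) and the journal list are assumptions for illustration only, not the actual S2ORC schema or the authors’ exact criteria.

```python
import random

WELL_REGARDED_JOURNALS = {"Cell", "Nature", "The Lancet"}  # illustrative subset, not the authors' list

def keep_article(article: dict) -> bool:
    """Keep only well-cited articles with full text from selected journals (hypothetical fields)."""
    return (
        article.get("n_citations", 0) >= 10
        and article.get("has_full_text", False)
        and article.get("journal") in WELL_REGARDED_JOURNALS
    )

def sample_articles(articles: list[dict], k: int, seed: int = 0) -> list[dict]:
    """Randomly sample k articles from the filtered pool."""
    pool = [a for a in articles if keep_article(a)]
    random.seed(seed)
    return random.sample(pool, min(k, len(pool)))
```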


To annotate SciFact, the researchers recruited a team of annotators, who were shown a citation sentence in the context of its source article and asked to write up to three claims based on its content, while ensuring the claims matched the study’s definition of a claim. This produced so-called “natural” claims: the annotators did not see the abstract of the cited article at the time they wrote the claims.

A scientific natural language processing expert then created claim negations to obtain examples in which an abstract refutes a claim. (Claims that could not be negated without introducing obvious bias were omitted.) The annotators labeled each claim-abstract pair as Supports, Refutes, or Not Enough Info, as appropriate, identifying all rationale sentences for the Supports and Refutes labels. The researchers also introduced distractors: for each citation sentence, they sampled articles cited elsewhere in the same document as the sentence, but in a different paragraph. A rough illustration of the resulting labeling scheme follows below.
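The sketch below shows how a labeled claim-abstract pair might be represented in Python; the field names and the example claim are hypothetical and do not reflect the actual SciFact release format.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Label(str, Enum):
    SUPPORTS = "SUPPORTS"
    REFUTES = "REFUTES"
    NOT_ENOUGH_INFO = "NOT_ENOUGH_INFO"

@dataclass
class Evidence:
    abstract_id: int
    label: Label
    rationale_sentences: List[int] = field(default_factory=list)  # sentence indices within the abstract

@dataclass
class Claim:
    claim_id: int
    text: str
    evidence: List[Evidence] = field(default_factory=list)

# Made-up example: one claim supported by sentences 2 and 5 of abstract 42.
example = Claim(
    claim_id=1,
    text="Drug X reduces viral load in patients with disease Y.",
    evidence=[Evidence(abstract_id=42, label=Label.SUPPORTS, rationale_sentences=[2, 5])],
)
```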

Above: VeriSci results on various COVID-19-related claims. In some cases, the label is predicted given the wrong context; the third evidence sentence for the first claim is a finding about lopinavir, but for the wrong disease (MERS-CoV).

The SciFact-trained model, VeriSci, consists of three parts: abstract retrieval, which retrieves the abstracts most similar to a given claim; rationale selection, which identifies rationale sentences within each candidate abstract; and label prediction, which makes the final label decision. In experiments, the researchers report that about half the time (46.5%), VeriSci correctly identified Supports or Refutes labels and provided reasonable rationales to justify the decision.
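A simplified sketch of such a three-stage pipeline appears below, with TF-IDF standing in for abstract retrieval and placeholder functions for the two model stages; the real VeriSci system uses BERT-based models for rationale selection and label prediction, and the function names here are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_abstracts(claim: str, abstracts: list[str], k: int = 3) -> list[int]:
    """Abstract retrieval: indices of the k abstracts most similar to the claim."""
    matrix = TfidfVectorizer().fit_transform(abstracts + [claim])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return scores.argsort()[::-1][:k].tolist()

def select_rationales(claim: str, sentences: list[str]) -> list[int]:
    """Rationale selection: stand-in heuristic keeping sentences that share words with the claim."""
    claim_words = set(claim.lower().split())
    return [i for i, s in enumerate(sentences) if claim_words & set(s.lower().split())]

def predict_label(rationales: list[str]) -> str:
    """Label prediction: a real system would run a BERT-based classifier here."""
    return "SUPPORTS" if rationales else "NOT_ENOUGH_INFO"  # placeholder decision

def verify(claim: str, corpus: list[list[str]]) -> list[dict]:
    """Run retrieval, rationale selection, and label prediction for one claim."""
    joined = [" ".join(sentences) for sentences in corpus]
    results = []
    for idx in retrieve_abstracts(claim, joined):
        rationale_ids = select_rationales(claim, corpus[idx])
        label = predict_label([corpus[idx][i] for i in rationale_ids])
        results.append({"abstract": idx, "label": label, "rationales": rationale_ids})
    return results
```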

To demonstrate VeriSci’s ability to generalize, the team conducted an exploratory experiment on a set of scientific claims about COVID-19. They report that most of the labels VeriSci produced for the COVID-related claims, 23 of 36, were deemed plausible by a medical student annotator, suggesting the model could successfully retrieve and classify evidence.

The researchers acknowledge that VeriSci is far from perfect, chiefly because it can be confused by context and because it does not perform evidence synthesis, the task of combining information from different sources to inform decision-making. That said, they argue that their study demonstrates how scientific fact verification could work in practice while shedding light on the challenge of understanding scientific documents.

“Scientific fact verification poses a set of unique challenges, pushing the limits of neural models in complex language understanding and reasoning. Despite its small size, training VeriSci on SciFact leads to better performance than training on fact-checking datasets built from Wikipedia articles and political news,” the researchers wrote. “Domain adaptation techniques are promising, but our findings suggest that additional work is needed to improve the performance of end-to-end fact-checking systems.”

The VeriSci and SciFact release follows the launch of the Allen Institute’s Supp AI, an AI-powered web portal that lets consumers of supplements like vitamins, minerals, enzymes, and hormones identify products or medications with which they may interact negatively. More recently, the nonprofit updated its Semantic Scholar tool to search 175 million academic papers.
