Another major announcement from Google’s Search On 2020 presentation on the company’s latest advances in artificial intelligence concerns updates to its search algorithms.
Last year, Google’s search engine switched to BERT (Bidirectional Encoder Representations from Transformers), a technology that gives the search engine the ability to understand natural human language. At Search On 2020, the company announced a number of further enhancements for interpreting user queries more accurately.
About 15% of the queries Google receives each day have never been searched before, according to Prabhakar Raghavan, head of Google Search and Assistant. This means the company must continually work to improve its search results.
This is partly due to poorly formed and error-prone queries. According to Cathy Edwards, vice president of engineering at Google, one in ten search queries is misspelled. Google has long handled misspelled queries with its “Did you mean:” feature, which suggests the correct spelling. At the end of the month, this feature will receive a major update: a new spell-checking neural network with 680 million parameters. It runs in under three milliseconds after each query, and the company promises that the new algorithm will offer even more accurate suggestions for misspelled words. According to a Google blog post, this single change improves spelling more than all of the company’s spelling improvements over the last five years combined.
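Google’s production spell checker is a 680-million-parameter neural network, but the behavior it implements can be illustrated with a classical edit-distance approach. The sketch below is a toy “Did you mean:” suggester built on Python’s standard-library difflib; the vocabulary list is a made-up stand-in for a real search dictionary, not anything Google ships.

```python
import difflib

# Tiny made-up vocabulary standing in for a real search index's dictionary.
VOCABULARY = ["weather", "restaurant", "calendar", "translate", "directions"]

def did_you_mean(query_term, vocabulary=VOCABULARY):
    """Return a 'Did you mean:' suggestion for a misspelled term, or None.

    get_close_matches ranks candidates by SequenceMatcher similarity;
    cutoff=0.7 discards terms that are too different to be plausible typos.
    """
    matches = difflib.get_close_matches(query_term, vocabulary, n=1, cutoff=0.7)
    return matches[0] if matches else None

print(did_you_mean("wether"))      # suggests "weather"
print(did_you_mean("restuarant"))  # suggests "restaurant"
print(did_you_mean("zzzzz"))       # no plausible match -> None
```

A neural model replaces the hand-set similarity cutoff with context: it can tell that “wether report” means “weather report” even when several dictionary words are equally close by edit distance.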
Another innovation: Google Search can now index not only entire web pages but also individual passages within those pages. By understanding the relevance of a specific passage rather than only the page as a whole, Google can more easily surface the exact information you need. Google says that this technology, rolling out next month, will improve 7% of search queries across all languages, including Ukrainian, and will be deployed worldwide.
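To see why passage-level scoring helps, here is a deliberately simple illustration (not Google’s actual ranking): each passage of a page is scored against the query independently, so a relevant paragraph can surface even when the rest of the page is about something else. The scoring function and the sample page are invented for this example.

```python
import re

def tokenize(text):
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query, passage):
    """Toy relevance: fraction of query terms that appear in the passage."""
    query_terms = tokenize(query)
    return len(query_terms & tokenize(passage)) / len(query_terms)

def best_passage(query, page_passages):
    """Rank individual passages instead of treating the page as one block."""
    return max(page_passages, key=lambda passage: score(query, passage))

page = [
    "Our store is open seven days a week.",
    "UV glass is checked with a spectrometer to confirm it blocks ultraviolet light.",
    "Contact us by phone or email for bulk orders.",
]

# Whole-page matching would dilute the relevant paragraph with store hours
# and contact details; passage scoring pulls out the one that answers the query.
print(best_passage("check if glass blocks ultraviolet light", page))
```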
Google will also apply neural networks to understand the subtopics of search queries. This will provide a greater variety of content when your query is broad (for example, it can help you find home exercise equipment suited to small apartments, rather than just general information about workout equipment).
Finally, Google is also beginning to use computer vision and speech recognition to understand the deep semantics of videos and automatically highlight their key moments. Automatic tags will divide a video into parts, much like chapters in a book. For example, cooking videos or sports recordings can be analyzed and split into chapters automatically, letting you jump straight to the part that interests you without watching or scrolling through the whole recording. Google has started testing this key-moments technology and expects it to be used in at least 10% of all searches by the end of the year.
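As a rough intuition for automatic chaptering (and nothing like Google’s actual vision-and-speech models), the toy sketch below segments a plain-text transcript by starting a new chapter wherever consecutive captions share few words, i.e., where the topic appears to shift. The transcript, threshold, and overlap heuristic are all invented for illustration.

```python
import re

def tokens(text):
    """Lowercase a caption and split it into a set of word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def chapter_boundaries(captions, min_overlap=0.2):
    """Return caption indices where a new chapter begins.

    A new chapter starts where consecutive captions have low lexical
    overlap (Jaccard similarity) -- a crude stand-in for the semantic
    shift a real model would detect from audio and video frames.
    """
    boundaries = [0]
    for i in range(1, len(captions)):
        prev, cur = tokens(captions[i - 1]), tokens(captions[i])
        overlap = len(prev & cur) / max(len(prev | cur), 1)
        if overlap < min_overlap:
            boundaries.append(i)
    return boundaries

transcript = [
    "first we chop the onions and chop the garlic",
    "keep the chopped onions and garlic to one side",
    "now the sauce: simmer tomatoes on low heat",
    "let the sauce simmer while the tomatoes reduce",
]

# Captions 0-1 talk about chopping, 2-3 about the sauce,
# so a chapter boundary is detected at caption index 2.
print(chapter_boundaries(transcript))
```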
Source: Google