Google announces “hum to search” machine learning music search feature




The company notes that people don’t need perfect pitch to take advantage of this new search capability.

Image: iStock/inueng

On Thursday, Google announced a new “hum to search” feature that allows people to identify songs simply by humming part of a track. In a press release, Google notes that people should not worry about their musical abilities: “you don’t need a perfect pitch to use this feature.”

The new search capability is available in the Google app on mobile devices, as well as through the Google search widget. When using the widget, people first need to tap the small microphone icon and request the feature, either by tapping the button labeled “search for a song” or by saying “what is this song?” They can then hum part of the song.

The hum to search feature is also available in Google Assistant and works in much the same way. To identify a song there, first ask “Ok Google, what is this song?” and then hum the tune. Keep in mind that the feature needs to hear enough of the melody to identify a particular track: according to Google’s statement, people will have to hum part of the song for 10 to 15 seconds.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

The feature uses machine learning to identify potential song matches based on a person’s hummed sequence. After a tune is hummed, Google surfaces the most likely matches based on the melody. People can then play these matches and browse related information on the artist, track, album, and more.

The machine learning identification process

A given song has countless identifying characteristics, and Google likens a song’s melody to its “fingerprint.” With this in mind, Google designed its machine learning models to “match your humming, whistling, or singing to the correct ‘fingerprint’.” These models produce a “number sequence representing the melody of the song” from a person’s hummed tune.
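To make the idea concrete, here is a minimal sketch in Python of how a hummed clip could be reduced to a melody-only number sequence. The helper name melody_fingerprint and the use of librosa’s pyin pitch tracker are illustrative assumptions, not Google’s actual pipeline; Google uses trained neural models rather than a hand-built pitch tracker.

```python
import numpy as np
import librosa  # assumption: librosa's pitch tracker stands in for Google's trained models


def melody_fingerprint(audio_path, hop_length=512):
    """Reduce a hummed clip to a number sequence describing only its melody.

    Pitch is estimated frame by frame, converted to semitones, and
    normalized so that key, octave, and loudness (and, by construction,
    timbre and instrumentation) are discarded. Only the melodic contour,
    the "fingerprint," remains.
    """
    y, sr = librosa.load(audio_path, sr=22050, mono=True)

    # Frame-wise fundamental frequency estimate; unvoiced frames come back NaN.
    f0, voiced_flag, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
        hop_length=hop_length,
    )

    f0 = f0[voiced_flag]                      # keep only frames where a pitch was heard
    semitones = 12 * np.log2(f0 / 440.0)      # Hz -> semitones relative to A4
    return semitones - np.median(semitones)   # center so the contour is key-invariant
```

Subtracting the median pitch is one simple way to get the invariance the feature needs: two people humming the same tune in different keys produce roughly the same contour.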

SEE: OpenAI introduces a neural network capable of making music and releases a debut mixtape (TechRepublic)

The models have been trained to identify songs from a variety of sources, such as studio recordings, singing, whistling, and humming, according to Google. All other elements of a recording, such as instruments and the voice’s pitch and timbre, are stripped away, leaving “the numerical sequence of the song, or the fingerprint.” These number sequences are then compared against thousands of other songs to determine potential matches.
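The comparison step can be pictured as a nearest-neighbor search over a library of precomputed fingerprints. The sketch below continues the hypothetical melody_fingerprint example above and uses a simple resample-and-distance comparison; a production system like Google’s would use learned embeddings and large-scale approximate search instead.

```python
import numpy as np


def resample(seq, length=128):
    """Stretch or compress a contour to a fixed length so clips of
    different tempo and duration can be compared directly."""
    idx = np.linspace(0, len(seq) - 1, length)
    return np.interp(idx, np.arange(len(seq)), seq)


def best_matches(hum_contour, catalog, k=5):
    """Rank catalog songs by melodic similarity to a hummed contour.

    `catalog` maps song titles to contours built the same way from
    reference recordings (studio tracks, sung or whistled versions, etc.).
    Returns the k most likely matches, smallest distance first.
    """
    query = resample(hum_contour)
    scores = {
        title: float(np.linalg.norm(query - resample(ref)))
        for title, ref in catalog.items()
    }
    return sorted(scores, key=scores.get)[:k]
```

With helpers like these, identifying a hum would amount to best_matches(melody_fingerprint("hum.wav"), catalog), returning a ranked shortlist much like the “most likely matches” Google describes.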

Currently, the feature is available on iOS in English only. On Android, hum to search is available in more than 20 languages, and the company hopes to expand the capability to more languages in the future.

