By Eberechi Obinagwam
Google announced six new features and developments in Search on Wednesday at Search On, its annual event.
A statement from Google said the event showcased how advancements in machine learning are helping people gather and explore information in new ways.
Here are the six features and developments announced at Search On.
Multisearch is expanding
Each month, people use Lens to answer more than 8 billion questions using their camera or an image. Earlier this year, Google introduced multisearch, a major milestone in how people search for information. With multisearch, you can take a picture or use a screenshot and then add text to it, similar to the way you might naturally point at something and ask a question about it. Multisearch is available in English globally and will roll out in 70 languages over the next few months.
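Google has not published how multisearch works internally, but the core idea, combining an image signal and a text signal into a single query, can be sketched in a few lines of Python. Everything below (the embeddings, the blending weight, the toy catalog) is hypothetical and for illustration only:

```python
# Hypothetical sketch of a multisearch-style query. Google's actual
# pipeline is not public; the embeddings and ranking here are toy
# stand-ins used purely to illustrate the idea.
from dataclasses import dataclass
import math

@dataclass
class Product:
    name: str
    embedding: list[float]  # pretend this came from an image/text encoder

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def multisearch(image_vec: list[float], text_vec: list[float],
                catalog: list[Product], text_weight: float = 0.5) -> list[Product]:
    """Blend the image and text signals into one query vector, then rank."""
    query = [(1 - text_weight) * i + text_weight * t
             for i, t in zip(image_vec, text_vec)]
    return sorted(catalog, key=lambda p: cosine(query, p.embedding), reverse=True)

# Example: a photo of an orange dress (image_vec) plus the text "green"
catalog = [Product("orange dress", [0.9, 0.1]), Product("green dress", [0.5, 0.8])]
print(multisearch([0.9, 0.1], [0.1, 0.9], catalog)[0].name)  # -> green dress
```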
Multisearch near me
“Multisearch near me”, which was previewed at Google I/O earlier this year, supercharges the multisearch capability, allowing you to take a screenshot or a photo of an item and then find it nearby. So if you have a hankering for your favorite local dish, all you need to do is screenshot it, and Google will connect you with nearby restaurants serving it. Multisearch near me will start rolling out in the U.S. later this year.
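Building on the idea above, the “near me” part is conceptually a distance filter applied to whatever dish the image recognizer identifies. Again, the restaurant data and function names here are invented stand-ins, not Google's actual system:

```python
# Hypothetical "near me" sketch: once a dish in a screenshot has been
# recognised, filter and sort nearby restaurants by distance. The
# recogniser output and restaurant list are invented for illustration.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

restaurants = [
    {"name": "Mama's Kitchen", "dish": "jollof rice", "lat": 6.52, "lon": 3.37},
    {"name": "Lagoon Grill", "dish": "jollof rice", "lat": 6.60, "lon": 3.35},
]

def near_me(dish: str, my_lat: float, my_lon: float, radius_km: float = 5.0):
    """Return restaurants serving the recognised dish, nearest first."""
    hits = [r for r in restaurants if r["dish"] == dish
            and haversine_km(my_lat, my_lon, r["lat"], r["lon"]) <= radius_km]
    return sorted(hits, key=lambda r: haversine_km(my_lat, my_lon, r["lat"], r["lon"]))

print([r["name"] for r in near_me("jollof rice", 6.52, 3.38)])  # -> ["Mama's Kitchen"]
```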
Translation in the blink of an eye
One of the most powerful aspects of visual understanding is its ability to break down language barriers. With Lens, Google has gone beyond translating text to translating pictures: Google already translates text in images more than 1 billion times per month, in more than 100 languages.
With major advancements in machine learning, Google is now able to blend translated text into complex images, so it looks and feels much more natural. For example, if you point your phone at text on a poster, the translated text will be realistically overlaid on the picture underneath. Google has also optimized its machine learning models to do all this in just 100 milliseconds, shorter than the blink of an eye. The feature uses generative adversarial networks (GAN models), the same technology that powers the Magic Eraser on Pixel. Google says the improved experience will launch later this year.
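The announcement implies a multi-stage pipeline: detect the text in the image, translate it, erase the original with a GAN-based inpainter, and render the translation over the restored background. A schematic sketch, with every model stubbed out and all names hypothetical, might look like this:

```python
# Schematic of the Lens translation pipeline described above. Each stage
# is a stub: real OCR, translation, and GAN-based inpainting models would
# slot in where noted. Names and signatures are illustrative only.
from dataclasses import dataclass

@dataclass
class TextRegion:
    bbox: tuple[int, int, int, int]  # x, y, width, height in the image
    text: str

def detect_text(image) -> list[TextRegion]:
    """Stage 1: OCR. A real system would run a text-detection model here."""
    return [TextRegion((40, 10, 200, 30), "Hola, mundo")]

def translate(text: str, target_lang: str) -> str:
    """Stage 2: machine translation (stubbed with a tiny lookup table)."""
    return {"Hola, mundo": "Hello, world"}.get(text, text)

def erase_text(image, bbox):
    """Stage 3: inpaint the original text. This is where, per the article,
    a GAN model (as in Pixel's Magic Eraser) reconstructs the background."""
    return image  # stub: return the image unchanged

def render_text(image, bbox, text):
    """Stage 4: draw the translated text over the restored background."""
    print(f"drawing {text!r} at {bbox}")
    return image

def translate_image(image, target_lang: str = "en"):
    for region in detect_text(image):
        translated = translate(region.text, target_lang)
        image = erase_text(image, region.bbox)
        image = render_text(image, region.bbox, translated)
    return image

translate_image(image=None)  # prints: drawing 'Hello, world' at (40, 10, 200, 30)
```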
Google app for iOS updates
Google is putting some of its most helpful tools right at your fingertips on iOS. From now on, you’ll see shortcuts right under the search bar to shop your screenshots, translate text with your camera, hum to search and more.
Even faster ways to find what you’re looking for
Google is working to make it possible to ask questions with fewer words, or even none at all, and still help you find what you are looking for. In the coming months, when you begin to type a question, Google will start providing relevant content right away, before you’ve even finished typing. And for those who don’t know exactly what they’re looking for until they see it, Google will help you refine your question. So if you’re looking for a holiday destination in Kenya, Google will suggest keyword or topic options like “best cities in Kenya for families” so you can find the results that are most relevant to you.
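As a rough illustration of suggestions appearing before you finish typing, here is a toy prefix-matcher in Python. Google's real system ranks candidates with far richer signals; this only shows the basic mechanics, and the stored queries are invented:

```python
# Toy prefix-based suggestion lookup, for illustration only. A production
# system would rank candidates by popularity, personalisation, and more.
import bisect

SUGGESTIONS = sorted([
    "best cities in kenya for families",
    "best cities in kenya for nightlife",
    "best time to visit kenya",
    "kenya safari packages",
])

def suggest(prefix: str, limit: int = 3) -> list[str]:
    """Return up to `limit` stored queries starting with `prefix`."""
    prefix = prefix.lower()
    start = bisect.bisect_left(SUGGESTIONS, prefix)
    out = []
    for query in SUGGESTIONS[start:]:
        if not query.startswith(prefix):
            break
        out.append(query)
        if len(out) == limit:
            break
    return out

print(suggest("best cities"))  # refinements surface before the query is complete
```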
New ways to explore information
Google is reinventing the way it displays Search results to better reflect the ways people explore topics. You’ll start to see the most relevant content, from a variety of sources, no matter what format the information comes in, whether that’s text, images or video, including content from creators on the open web. So if you’re searching for a city, you may see visual stories and short videos from people who have visited, tips on how to explore it, things to do, how to get there and other things you might want to know before you embark on your travels. These updates will roll out in the coming months.
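One way to picture this format-blending is as a merge of separately ranked lists of text, image and video results into a single feed. The scores and titles below are invented for illustration; Google has not described its actual ranking:

```python
# Illustrative sketch of blending results across formats into one feed.
# Scores and titles are made up; this only demonstrates the merge step.
import heapq

results = {
    "text":  [("Nairobi travel guide", 0.92), ("History of Nairobi", 0.70)],
    "video": [("Nairobi street food tour", 0.88)],
    "image": [("Visual story: Nairobi", 0.81)],
}

def blend(results: dict[str, list[tuple[str, float]]], k: int = 3):
    """Merge per-format ranked lists into one feed, highest score first."""
    merged = [(score, title, fmt)
              for fmt, items in results.items()
              for title, score in items]
    return [(title, fmt) for score, title, fmt in heapq.nlargest(k, merged)]

print(blend(results))  # top results across text, video and image formats
```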