Why inappropriate predictions happen
We have systems in place designed to automatically catch inappropriate predictions and not show them. However, we process billions of searches per day, which in turn means we show many billions of predictions each day. Our systems aren’t perfect, and inappropriate predictions can get through. When we’re alerted to these, we strive to quickly remove them.
It’s worth noting that while some predictions may seem odd or shocking, or prompt a “Who would search for that!” reaction, looking at the actual search results they generate sometimes provides needed context. As we explained earlier this year, the search results themselves can make it clearer that a prediction doesn’t necessarily reflect an awful opinion someone holds, but instead may come from people seeking specific content that isn’t problematic. It’s also important to note that predictions aren’t search results and don’t limit what you can search for.
Regardless, even if the context behind a prediction is benign, and even if a prediction is infrequent, it’s still an issue if the prediction is inappropriate. It’s our job to reduce such predictions as much as possible.
Our latest efforts against inappropriate predictions
To better deal with inappropriate predictions, we launched a feedback tool last year and have been using that data since to improve our systems. In the coming weeks, we’ll also put expanded criteria for policy removals into force, covering hate and violence.
Our existing policy protecting groups and individuals against hateful predictions covers only cases involving race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity. Our expanded policy for search will cover any case where predictions are reasonably perceived as hateful or prejudiced toward individuals or groups, without requiring that they fall into one of those particular demographics.
With the greater protections for individuals and groups, there may be exceptions where compelling public interest allows a prediction to be retained. With groups, predictions might also be retained if there’s a clear “attribution of source.” For example, predictions for song lyrics or book titles that might be sensitive may appear, but only when combined with words like “lyrics” or “book” or other cues that indicate a specific work is being sought.
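As a purely illustrative sketch of how an “attribution of source” exception like this could work in principle, the following is a minimal example in Python. It is not Google’s actual system; the phrase list, cue words and function name are all hypothetical:

```python
# Illustrative sketch only: the phrase lists and function below are
# hypothetical, not Google's actual autocomplete filtering system.

# Hypothetical sensitive phrases that would normally be filtered from predictions.
SENSITIVE_PHRASES = {"example sensitive lyric"}

# Cue words suggesting the user is looking for a specific work (song, book, etc.).
ATTRIBUTION_CUES = {"lyrics", "book", "song", "title"}

def should_retain(prediction: str) -> bool:
    """Return True if the prediction can be shown: either it contains nothing
    sensitive, or it clearly attributes a source via a cue word."""
    text = prediction.lower()
    contains_sensitive = any(phrase in text for phrase in SENSITIVE_PHRASES)
    if not contains_sensitive:
        return True  # nothing sensitive, show as usual
    # Retain only when a cue word signals a specific work is being sought.
    return any(cue in text.split() for cue in ATTRIBUTION_CUES)

if __name__ == "__main__":
    print(should_retain("example sensitive lyric"))         # False: filtered
    print(should_retain("example sensitive lyric lyrics"))  # True: attributed to a work
```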
As for violence, our policy will expand to cover removal of predictions which seem to advocate, glorify or trivialize violence and atrocities, or which disparage victims.
How to report inappropriate predictions
Our expanded policies will roll out in the coming weeks. We hope that the new policies, along with other improvements to our systems, will make autocomplete better overall. But with billions of predictions happening each day, we know we won’t catch everything that’s inappropriate.
Should you spot something, you can report it using the “Report inappropriate predictions” link we launched last year, which appears below the search box on desktop: