Undecided voters in the U.S. searching Google may find vastly different answers to similar questions, depending on how they phrase them. For example, a search for “Is Kamala Harris a good Democratic candidate” might yield positive results, such as a Pew Research poll indicating “Harris energizes Democrats” and an AP article stating that a majority of Democrats think she’d make a good president. Search instead for “Is Kamala Harris a bad Democratic candidate,” however, and the results turn critical: a Reason Magazine article stating, “It’s been easy to forget how bad Kamala Harris is,” and a US News & World Report piece describing Harris as “not the worst thing that could happen to America.” The contrast shows how Google’s results can reinforce a user’s existing beliefs, further polarizing opinions on issues ranging from politics to health.
This phenomenon isn’t limited to Kamala Harris. It applies to topics like Donald Trump, conspiracy theories, and medical information. According to experts, Google’s search algorithms tend to echo the user’s biases, potentially reinforcing personal and societal divisions. “We’re at the mercy of Google when it comes to what information we’re able to find,” notes Varol Kayhan, an associate professor at the University of South Florida.
Sarah Presch, a digital marketing director, calls Google a “bias machine.” In her work with search engine optimization, she’s found stark discrepancies in Google’s results for contested issues. For example, when she searched “link between coffee and hypertension,” Google highlighted a Mayo Clinic snippet stating that caffeine may cause a short-term blood pressure spike. Yet a search for “no link between coffee and hypertension” returned a contradictory snippet from the same source, stating caffeine has no long-term effect on blood pressure. The same split occurred with questions like “Is ADHD caused by sugar” versus “ADHD not caused by sugar.” In each case, Google provided snippets that validated both sides of the argument.
Google responds by saying its mission is to offer high-quality, relevant information and open access to diverse viewpoints. According to a Google spokesperson, “We provide open access to a range of viewpoints from across the web, and we give people helpful tools to evaluate the information and sources they find.” However, critics argue that the platform’s algorithm often serves up information aligned with the user’s perceived intent, creating an echo chamber that can deepen biases rather than foster balanced understanding.
Google processes around 6.3 million queries per minute, adding up to more than nine billion searches daily. Most internet users start their online activity with a Google search, rarely look beyond the top five results, and almost never reach the second page. This makes Google’s ranking system enormously influential in shaping our view of the world.
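Those two figures are consistent: 6.3 million queries per minute, multiplied by the 1,440 minutes in a day, comes to roughly 9.07 billion searches.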
Google says its algorithms avoid creating “filter bubbles,” or echo chambers. According to a company spokesperson, independent research has found no evidence that Google Search pushes people into such bubbles. Online echo chambers nonetheless remain a concern, though some studies suggest their effects may be less severe than commonly believed.
Varol Kayhan, who researches search engines’ impact on confirmation bias, the tendency to seek out information that aligns with one’s existing beliefs, says online systems profoundly shape our perspectives and even our political identities. A 2023 study supported Google’s claim that people’s exposure to partisan news depends largely on their own clicks, not Google’s recommendations. Yet the study also noted that Google still surfaces biased and unreliable sources, and that even minimal exposure to them can have significant effects.
Silvia Knobloch-Westerwick, a professor of mediated communication, argues that while users choose what to engage with, their choices are limited by the types of content Google presents. She emphasizes that algorithms play a critical role in reinforcing these bubbles.
Mark Williams-Cook, founder of the SEO tool AlsoAsked, believes the issue lies in search engines’ technical limitations and in public misunderstandings about those limitations. In a 2016 Google presentation, an engineer admitted the company doesn’t fully “understand” documents; instead, it gauges quality based on users’ reactions. Positive engagement signals that a document is relevant, so Google continues promoting similar content to meet perceived demand.
Google says those observations are outdated and that its methods for understanding queries and webpages have since advanced, making Search far more sophisticated at delivering relevant results.
Mark Williams-Cook suggests that Google’s approach to predicting user preferences can create a feedback loop that amplifies confirmation bias. If users consistently click on content that aligns with their existing beliefs, Google’s algorithm may learn to prioritize similar results, reinforcing those biases. He compares it to letting a child choose their diet—ultimately, they’ll gravitate toward unhealthy choices.
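The dynamic is easy to simulate. The Python sketch below is a toy model with an invented click rule and scoring scheme, not Google’s actual ranking code; it only shows how engagement-driven ranking can entrench whatever a user already prefers.

```python
import random

# Toy model of an engagement-driven feedback loop. The click model and
# scoring rule are invented for illustration; this is not Google's code.
random.seed(0)

docs = [{"slant": random.uniform(-1, 1), "score": 1.0} for _ in range(20)]
user_belief = 0.8  # the user leans strongly "pro" on some question

def click_probability(doc):
    # Users are far likelier to click results that echo their own belief.
    agreement = 1 - abs(doc["slant"] - user_belief) / 2  # 1.0 = perfect match
    return agreement ** 4

for _ in range(2000):
    # Exposure is proportional to score: higher-ranked pages get seen more.
    shown = random.choices(docs, weights=[d["score"] for d in docs])[0]
    if random.random() < click_probability(shown):
        shown["score"] += 1.0  # every click further boosts the ranking

top = sorted(docs, key=lambda d: d["score"], reverse=True)[:5]
print([round(d["slant"], 2) for d in top])
# The top slots skew toward slants near 0.8: results the user agrees with
# get clicked, clicks raise their scores, and higher scores win exposure.
```

After enough rounds, the highest-scoring documents are the ones closest to the user’s belief, not the most accurate or balanced ones, which is precisely the loop Williams-Cook describes.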
Williams-Cook also points out that Google’s search algorithm may not interpret nuanced questions accurately. For instance, if someone searches “Is Trump a good candidate,” Google may simply focus on keywords like “Trump” and “good candidate,” rather than truly answering the question. This can lead users to misinterpret search results, thinking they’re getting a direct answer when the algorithm is really just matching keywords.
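A bare-bones illustration of the problem: the scorer below counts shared keywords, a deliberate oversimplification standing in for lexical matching in general rather than Google’s actual retrieval pipeline. It rates a page that affirms the question’s premise and a page that rejects it as equally good matches.

```python
# Crude bag-of-words matcher, a stand-in for lexical retrieval generally,
# not Google's actual system. It counts shared words, so it cannot tell
# a page that answers "yes" from one that answers "no".
query = "is trump a good candidate"

documents = {
    "supportive": "trump is a good candidate according to his supporters",
    "critical": "critics argue trump is not a good candidate at all",
}

query_words = set(query.split())
for label, text in documents.items():
    overlap = len(query_words & set(text.split()))
    print(f"{label}: {overlap} of {len(query_words)} query words matched")
# Both documents match all 5 query words, so a pure keyword score calls
# them equally "relevant", even though they argue opposite conclusions.
```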
He believes that if users better understood Google’s limitations, they might view results more critically. However, Williams-Cook doubts Google will openly acknowledge these flaws, as doing so would mean admitting imperfections. Google, for its part, says it continually works to improve Search and offers tools like “About this result” to help users evaluate the information they see.
Google’s spokesperson emphasizes that users can find diverse perspectives if they look beyond the top results. For instance, critical viewpoints on topics like Kamala Harris or the British tax system are available further down the page. However, the popularity of Google’s Featured Snippets—summaries of information displayed at the top—often reduces the likelihood that users will explore deeper into search results.
Observers note Google’s shift from a traditional search engine to an “answer engine” that directly provides responses, often powered by AI. With features like AI Overviews, Google now generates answers itself rather than linking to outside sources. According to Williams-Cook, this transition compounds existing issues, as Google has only one chance to get the answer right, making accuracy even more critical.
The question remains whether Google should address these concerns. Kayhan points to the ethical complexity of letting one of the world’s most powerful companies control access to information and, in effect, arbitrate “truth.” He questions whether Google can, or should, attempt to resolve these issues, but argues that however limited the company’s efforts may be, more needs to be done.