AI Trust and Safety: How to Hide Google’s AI Overviews and Protect Your Searches
As generative AI spreads into every corner of the web, AI trust and safety have become everyday concerns, not just technical ones. Google's AI Overviews can be helpful, but they can also surface misleading answers, scams, or incomplete information. If you rely on search for work, research, or decision-making, you need ways to keep control over what you see and how you validate it.
This guide explains how to hide or reduce Google's AI Overviews, what their limitations are, and how to build safer habits around AI-generated answers—whether you are an individual power user or responsible for AI risk in your organization.
Source context: This article is inspired by reporting from WIRED on keyboard tricks to avoid AI Overviews and broader concerns about misleading AI summaries.
If your organization is already thinking about policies, controls, and governance around AI tools—beyond just Google Search—you may benefit from dedicated AI risk management support. At Encorp.ai, we help companies design AI risk management frameworks, automate assessments, and align with privacy regulations like GDPR.
Explore how Encorp.ai can help: AI Risk Management Solutions for Businesses.
You can also learn more about our broader work in secure, compliant AI at encorp.ai.
What Google AI Overviews are and why they matter
Google's AI Overviews are generative summaries that appear at the top of some search result pages. Instead of showing you only the familiar list of blue links, Google may first display a synthesized answer, with citations to the underlying pages.
What an AI Overview does
At a high level, an AI Overview:
- Generates a natural-language answer based on content from multiple web pages
- Attempts to cover common sub-questions in a single block
- Shows a small set of source links below the summary
For casual queries (for example, "How long to boil an egg?"), this can feel convenient. But from an AI trust and safety perspective, it introduces several issues:
- Unclear reasoning: You see a final answer, not the steps used to get there.
- Overconfidence: The tone may sound authoritative even when the answer is uncertain.
- Context loss: Nuances, caveats, and minority expert views are often dropped.
Why accuracy and trust matter
Soon after launch in 2024, AI Overviews attracted criticism for incorrect and sometimes bizarre answers—like suggesting people add glue to pizza, which traced back to a sarcastic Reddit comment that the system treated as a genuine source.
Industry bodies and researchers have repeatedly warned about the risks of over-relying on generative AI for factual questions:
- The OECD highlights the need for human oversight and transparency in AI system outputs.
- The UK Information Commissioner's Office (ICO) stresses explaining limitations and uncertainty to users of AI systems (Guidance on AI and data protection).
In short: AI-generated answers can be helpful, but they're not always right, and they rarely show you the full picture.
Quick fix: use the en‑dash "–ai" trick to remove Overviews
One of the simplest ways to avoid AI Overviews today in desktop browsers is a small search query hack using an en-dash.
How the "–ai" trick works
Google Search supports an exclusion operator: prefixing a term with a dash filters out results containing that term. As a side effect, appending such an excluded term can currently suppress AI Overviews entirely.
Steps:
- In your desktop browser, go to google.com.
- Type your query as usual.
- At the end of the query, add a space and then an en-dash followed by any letters or numbers, for example –ai, –1, or –z.
- Press Enter.
If the behavior is still active in your region, your results should now show traditional web links only, with no AI Overview at the top.
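The steps above can be sketched programmatically. This is a minimal illustration (the `build_search_url` helper is hypothetical, not a Google API) of how a query with the en-dash suffix is assembled and URL-encoded; whether the resulting page omits the AI Overview depends on Google's current, undocumented behavior.

```python
from urllib.parse import quote_plus

EN_DASH = "\u2013"  # "–" (U+2013), not the ASCII hyphen "-"

def build_search_url(query: str, suffix: str = "ai") -> str:
    """Append an en-dash suffix such as '–ai' and build a Google search URL."""
    q = f"{query} {EN_DASH}{suffix}"
    return "https://www.google.com/search?q=" + quote_plus(q)

# The en-dash is percent-encoded as %E2%80%93 in the final URL.
print(build_search_url("how long to boil an egg"))
```

Opening the printed URL in a desktop browser reproduces the manual steps exactly, which makes it easy to test whether the trick is still active in your region.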
Note: This relies on how Google currently interprets the en-dash operator. Because the impact on AI Overviews is likely incidental, it may change or be removed at any time.
Typing an en-dash vs hyphen
The trick depends on an en-dash (–), not a standard hyphen (-). Many users report that a plain hyphen works in practice too, but since the reliable tech reports describe the en-dash, it's worth knowing how to type one:
- Windows: Alt+0150 on the numeric keypad
- macOS: Option+- (hold Option and press the hyphen key)
For speed, you can experiment with just adding -ai and see whether Overviews disappear in your browser.
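If you're unsure which dash you actually typed, a one-line check of the Unicode code point settles it: the en-dash is U+2013, while the ordinary keyboard hyphen-minus is U+002D.

```python
# Print the code point of each dash character to tell them apart.
for ch in ["–", "-"]:
    print(f"{ch!r} -> U+{ord(ch):04X}")
# '–' -> U+2013  (en-dash)
# '-' -> U+002D  (hyphen-minus)
```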
Browser and OS differences
Behavior can vary by platform:
- Desktop browsers (Chrome, Firefox, Edge, Safari): The –ai trick commonly works to suppress Overviews and return a clean list of links.
- Mobile browsers and apps: On iOS (Safari and in-browser Google Search), some tests have shown AI Overviews still appearing, with a separate "Classic Search" option to revert to more traditional results.
- Android (especially Pixel devices): Reports suggest that adding –ai can work here as well, but this is not guaranteed and may change.
Because this is an undocumented side effect, think of it as a useful shortcut, not a permanent feature.
Using the Web tab and Classic Search
Even when the –ai trick doesn't work, you may have other options:
- On some result pages, Google shows a "Web" tab near the top. Selecting it prioritizes a list of text-based links.
- In other layouts, a "Classic Search" button appears on the right side of an AI-heavy page. Clicking it reloads to a more traditional mix of results.
These interface options are not perfect, but they give you a quick, official way to minimize AI summaries for specific queries.
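For bookmarking purposes, many users reconstruct the Web tab directly as a URL. The `udm=14` query parameter is widely reported to load Google's links-only Web view; like the en-dash trick, it is not officially documented and could change, so treat this sketch as an assumption about current behavior.

```python
from urllib.parse import urlencode

def web_tab_url(query: str) -> str:
    """Build a search URL that requests the links-only 'Web' view
    via the widely reported (undocumented) udm=14 parameter."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_tab_url("ai trust and safety"))
```

Some browsers also let you register this URL pattern as a custom search engine, so every address-bar search lands on the Web view by default.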
Alternate ways to avoid AI summaries
If you find yourself frequently fighting AI Overviews, it may be worth adjusting your tools and defaults instead of relying solely on query tricks.
Use Web filter or Classic Search by default
Some users get in the habit of:
- Clicking the "Web" tab immediately after each search
- Switching to Classic Search when it appears
This adds an extra click, but it creates a muscle memory similar to adding site:, filetype:pdf, or Reddit to niche queries.
Switch your default search engine in Chrome
If you want to reduce your exposure to AI-generated summaries across the board, you can change your default search engine.
In Chrome desktop:
- Open Settings.
- Go to Search engine > Manage search engines and site search.
- Under Default search engines, choose an alternative provider.
This lets you keep Chrome as a browser while using a search engine with different AI behaviors.
Try privacy-first search engines
Several search engines allow tighter control over AI and tracking, with settings geared toward AI data privacy and transparency:
- DuckDuckGo: Focuses on privacy by design and offers an optional AI-based "DuckAssist" with clear toggles.
- Brave Search: Positions itself as an independent index with optional AI-powered summaries that you can turn on or off.
These services are not necessarily "private AI solutions" in the enterprise sense, but they do offer:
- Less pervasive tracking than mainstream engines
- More explicit control over when generative AI appears
- Clearer documentation on what is logged and how queries are used
If your primary concern is minimizing tracking and surprise AI answers in personal browsing, they're worth testing.
Limits and risks of the en‑dash trick
From an AI risk management perspective, relying on an undocumented behavior is always fragile.
Why the trick may be temporary
The suppression of AI Overviews when you add –ai (or similar strings) appears to be a side effect of Google's exclusion operator. Google has not documented or committed to this behavior. As AI product interfaces evolve, the company could:
- Change how exclusions are interpreted
- Decouple AI Overviews from the underlying query tokens
- Introduce a formal setting for users who prefer "links-only" search
Any of these would make the current workaround unreliable.
Edge cases in mobile and app experiences
Mobile experiences add more variability:
- The –ai pattern may not remove all AI summaries in native apps.
- Some regions or rollout waves may not expose Web/Classic toggles.
For people designing AI governance and controls in organizations, this means you cannot simply tell employees "type –ai and you're safe." You need a more systematic approach.
When to rely on other controls
Instead of counting on a single query trick, combine multiple layers:
- Browser choice and configuration: Use privacy-focused browsers with stronger tracking protection and clear search settings.
- Search engine defaults: Choose providers and configurations that align with your risk appetite for generative AI.
- Organizational guidance: Define when it's acceptable to rely on AI summaries and when staff must consult original sources.
Verify and protect yourself: best practices
Even if you successfully hide most AI Overviews, you will still encounter generative AI in search and other tools. You need habits that strengthen AI trust and safety and AI data security at the same time.
1. Click through to original sources
Whenever you see an AI-generated answer:
- Scan the citations: Open at least two or three of the linked pages.
- Compare details: Check whether key facts—numbers, dates, legal terms—match across sources.
- Look for expertise signals: Reputable publishers, peer-reviewed journals, or recognized standards bodies.
Studies from organizations like NIST show that large language models can confidently produce incorrect information (NIST AI evaluations). Cross-checking is non-negotiable when decisions matter.
2. Watch for scams and social engineering
As highlighted in various reports on AI misuse and security best practices, attackers can exploit:
- Misleading summaries that point to malicious sites
- Fake "official" pages boosted by SEO
- Phishing attempts that look like legitimate support or login pages
Self-defense tips:
- Never enter credentials directly after clicking on an unfamiliar link.
- Check URLs carefully—look for HTTPS and well-spelled domain names.
- Use password managers; they often refuse to autofill on lookalike domains.
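The URL checks above can be partially automated. Below is an illustrative (deliberately non-exhaustive) heuristic: it requires HTTPS, flags punycode hostnames that can disguise lookalike domains, and compares the host against a domain you expect. The `expected_domain` value is a hypothetical example, and a real phishing defense needs far more than this.

```python
from urllib.parse import urlparse

def looks_suspicious(url: str, expected_domain: str) -> list[str]:
    """Return a list of warning strings for a link; empty means no flags raised."""
    warnings = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        warnings.append("not served over HTTPS")
    host = (parsed.hostname or "").lower()
    # Punycode ("xn--") hosts can render as visually confusable Unicode domains.
    if host.startswith("xn--") or ".xn--" in host:
        warnings.append("punycode hostname (possible lookalike domain)")
    if host != expected_domain and not host.endswith("." + expected_domain):
        warnings.append(f"host {host!r} does not match {expected_domain!r}")
    return warnings

# A lookalike domain ("examp1e" with a digit) trips two of the checks.
print(looks_suspicious("http://accounts.examp1e.com/login", "example.com"))
```

This mirrors what password managers do implicitly: they key autofill to the registered domain, so a lookalike host simply gets no credentials.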
3. Control what you share with AI systems
Generative AI tools and AI-enriched search experiences may log and analyze your prompts. For AI data privacy and security:
- Avoid entering sensitive personal data (IDs, medical details) into public AI tools.
- Do not paste confidential company information into consumer chatbots or search bars with integrated AI.
- Review the provider's privacy policy and data retention rules.
Regulators like the European Data Protection Board have issued guidance reminding organizations that using cloud AI services still requires full GDPR compliance in how data is collected, processed, and stored.
4. Use privacy and security extensions
Consider browser extensions or built-in protections that:
- Block trackers and third-party cookies
- Flag known malicious domains
- Limit or disable third-party scripts on unknown sites
These help reduce the downstream risks if an AI summary does send you to a less reputable site.
Long-term controls for individuals and organizations
The quick fixes above are useful, but sustainable AI risk management requires structural choices—especially for organizations.
For individual users
Think in layers:
- Tool selection: Prefer browsers and search engines that give you control over AI features.
- Account settings: Regularly review privacy and personalization settings in your Google, Microsoft, or other major accounts.
- Education: Stay up to date with changes in search behavior and AI interfaces. Follow reliable sources such as the Electronic Frontier Foundation for privacy updates.
For organizations and teams
If your employees rely on web search while handling customer, financial, or strategic data, you should treat search as part of your secure AI deployment and governance strategy.
Key steps:
- Define acceptable use: Document when generative AI (including AI Overviews) can be used, and for what tasks.
- Set defaults via IT: Configure browsers and search engines centrally to align with your policies.
- Train staff: Explain the limitations of AI summaries, how to verify information, and what not to paste into any AI tool.
- Evaluate enterprise-safe options: For high-risk environments, consider enterprise search or private AI solutions that:
- Run within your own cloud or on-prem environment
- Enforce AI GDPR compliance and local regulations
- Offer auditing and logging aligned with your internal controls
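As one illustration of setting defaults via IT, Chrome's managed policies can pin the default search engine for every user on a device. A minimal sketch, assuming the `DefaultSearchProvider*` policy names from Chromium's enterprise policy list (the managed-policy file location varies by OS, e.g. `/etc/opt/chrome/policies/managed/` on Linux):

```json
{
  "DefaultSearchProviderEnabled": true,
  "DefaultSearchProviderName": "DuckDuckGo",
  "DefaultSearchProviderSearchURL": "https://duckduckgo.com/?q={searchTerms}"
}
```

Deploying a policy like this is more durable than asking staff to remember query tricks, and it is auditable as part of your configuration baseline.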
Analyst firms like Gartner and Forrester increasingly recommend weaving generative AI and search tools into broader AI governance and security programs, not treating them as separate, "nice-to-have" add-ons.
Conclusion: practical next steps for AI trust and safety
Hiding Google's AI Overviews is one small but meaningful way to reclaim control over your information environment. To put AI trust and safety into practice today:
- Use the –ai or similar en-dash trick in desktop searches where it works.
- Rely on the Web tab or Classic Search when AI summaries are not desired.
- Consider privacy-focused search engines if you want fewer defaults toward generative AI.
- Always click through to original sources and verify key facts.
- Avoid sharing sensitive or confidential data with public AI tools.
- For organizations, treat search and generative AI as part of your broader AI governance, security, and compliance strategy.
If you're ready to move beyond individual workarounds and build a structured approach to AI risks, you can learn how Encorp.ai helps organizations automate assessments, integrate controls, and stay compliant: AI Risk Management Solutions for Businesses.
Thoughtful habits today will make your AI-enabled future more accurate, secure, and aligned with your values.
Martin Kuvandzhiev
CEO and Founder of Encorp.io with expertise in AI and business transformation