Monday, July 21, 2025

“Artificial Intelligence” Is Just LLM Over Search — And That’s the Hoax

Replace the word “AI” with “a very polite search autocomplete that sometimes hallucinates” and you’ll never look at a chatbot the same way again.

Below is the evidence that the current wave of “AI revolution” is less Ex Machina and more *autocomplete on steroids*.


1. Chatbots Are Search Engines in Disguise

  • What you type: “Where do I log in to Wells Fargo?”
  • What the LLM does: predicts the most probable next tokens given your prompt and the patterns in its training text, then reassembles them into a URL (a toy sketch of this splicing follows the list).
  • What actually happens: 34% of the time, the domain is wrong, parked, or phishing.
  • That’s not intelligence; that’s search with no safety rails.
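
To see the “autocomplete” mechanic in the flesh, here is a toy sketch: a token-level bigram model trained on three tokenized URLs. Only wellsfargo.com is a real domain; the other two lines are invented for this example, and nothing here resembles a production model’s internals.

```python
import random
from collections import defaultdict

# Toy "training corpus": three tokenized URLs. Only wellsfargo.com is a
# real domain; the other two lines are invented for illustration.
corpus = [
    "https :// www . wellsfargo . com / login",
    "https :// www . wellsfargo - secure . com / login",
    "https :// login . wellsfargo . net / account",
]

# Record which token follows which (a bigram table).
successors = defaultdict(list)
for line in corpus:
    tokens = ["<s>"] + line.split()
    for current, nxt in zip(tokens, tokens[1:]):
        successors[current].append(nxt)

# Generate by repeatedly sampling an observed successor token.
random.seed(7)
token, generated = "<s>", []
while token in successors and len(generated) < 12:
    token = random.choice(successors[token])
    generated.append(token)

# The fragments can recombine into a URL that appears in no corpus line.
print("".join(generated))
```

Depending on the seed, the fragments recombine into the real login page or into a spliced domain like wellsfargo-secure.net that nobody at the bank ever registered. Scale that mechanic up a few billion parameters and you have your “intelligent” URL.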


2. The Confidence Problem

| Traditional Search | LLM “Answer” |
| --- | --- |
| Shows 10 blue links and lets you triangulate | Declares one answer in a calm, authoritative tone |
| Admits “I’m just ranking pages” | Hallucinates citations that never existed |
| Lets you hover over the URL | Hides the source behind a chat bubble |

“When an AI model hallucinates a phishing link, the error is presented with confidence and clarity—users are far more likely to click.”
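
Since the bubble hides the source, drag it back out yourself. Here’s a minimal sketch that pulls every URL out of a chat answer and checks whether it even responds; the answer string and the example.org link are invented for the demo, and a 200 only proves the page exists, not that it’s the page the model claims it is.

```python
import re
import socket
import urllib.error
import urllib.request

def check_cited_links(answer_text: str, timeout: float = 5.0) -> None:
    """Extract URLs from a chat answer and report whether each one responds."""
    for url in re.findall(r"https?://[^\s)\"'>]+", answer_text):
        request = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                print(f"{url} -> HTTP {response.status}")
        except urllib.error.HTTPError as err:
            print(f"{url} -> HTTP {err.code}")      # exists, but maybe a 404
        except (urllib.error.URLError, socket.timeout) as err:
            print(f"{url} -> unreachable ({err})")  # dead, or hallucinated

# Hypothetical chat answer, invented for this example:
check_cited_links("Log in at https://www.wellsfargo.com/ (see https://example.org/help)")
```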


3. Revenue Laundering: From Links to “Impressions”

  • 2028 traffic forecast: 75% of search queries will bypass clicks entirely and stay inside LLM chat windows.
  • That means websites lose traffic, but LLM providers keep the ad money—the same revenue stream, just re-badged as AI.


4. Hallucinations Are the New 404

  • 5% of LLM login links point to completely unrelated businesses.
  • 29% point to dead or unregistered domains, ready for instant takeover by scammers (a quick DNS check is sketched after this list).
  • In other words, **the “AI” is serving you broken links faster than a 1998 GeoCities webring.**
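
That 29% figure is the scary one, and it’s cheap to test for. A minimal sketch, assuming Python’s standard resolver is good enough for a smoke test: if the hostname doesn’t resolve at all, you’re looking at exactly the kind of dead or unregistered domain a scammer can pick up tomorrow. (The second URL below uses the reserved .example TLD, so it’s guaranteed fake; note that a domain that does resolve still isn’t proof of legitimacy.)

```python
import socket
from urllib.parse import urlparse

def domain_resolves(url: str) -> bool:
    """Return True if the URL's hostname currently resolves in DNS."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False  # NXDOMAIN-ish: dead, unregistered, or mistyped

# One real domain and one invented, takeover-ready-looking one:
for url in ["https://www.wellsfargo.com/login",
            "https://wellsfargo-customer-portal.example/login"]:
    print(url, "->", "resolves" if domain_resolves(url) else "does not resolve")
```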


5. Misinformation Lock-In

  • Because LLMs are trained to please the user, they amplify common misconceptions instead of correcting them.
  • Unlike Wikipedia’s human moderation loop, no one is patching the model in real time, so wrong answers ossify.


🧩 The Hoax in One Sentence

We rebranded “search with extra hallucinations” as “Artificial Intelligence” and then acted surprised when it confidently sent Grandma to a Wells Fargo phishing page.


🛡️ How to Spot the Trick Next Time

  1. Look for URLs, not paragraphs—real AI would give you live API endpoints, not prose.
  2. Ask for sources—if it can’t produce clickable, verifiable links, it’s just search autocomplete wearing a tuxedo.
  3. Remember the 34% rule—roughly one in three login links is wrong, so treat every LLM answer like a random Reddit comment (a minimal allowlist check is sketched after this list).
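
Tip 3 in code: don’t let the model pick the domain at all. A minimal allowlist sketch, assuming you maintain KNOWN_GOOD yourself from sources you trust offline (the back of your bank card, a paper statement), never from a chatbot:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains verified out-of-band, not chatbot output.
KNOWN_GOOD = {"wellsfargo.com"}

def is_on_allowlist(url: str) -> bool:
    """Accept only an exact match or a true subdomain of a known-good domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == good or host.endswith("." + good) for good in KNOWN_GOOD)

print(is_on_allowlist("https://www.wellsfargo.com/login"))      # True
print(is_on_allowlist("https://wellsfargo.com.evil.example/"))  # False: lookalike
```

The endswith("." + good) guard matters: a naive substring check would wave through wellsfargo.com.evil.example, which is precisely the trick phishing domains play.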


🎤 Exit Line

The next time a chatbot says “Here’s the Wells Fargo login page,” ask yourself:

“Would I trust this answer if it came from a stranger on a forum?”

If the answer is no, congratulations—you just saw through the hoax.
