Disclaimer: This article is for informational purposes only and does not constitute financial advice. BitPinas has no commercial relationship with any mentioned entity unless otherwise stated.
Global traffic data shows shifting dynamics in the generative AI market as ChatGPT’s dominance continues to decline while rivals like Google’s Gemini and Perplexity AI gain traction, according to Similarweb’s latest Global AI Tracker report.
Meanwhile, new research warns of potential “brain rot” in large language models (LLMs) exposed to low-quality training data.
AI Global Traffic Share
According to Similarweb’s Global AI Tracker released in October 2025, ChatGPT’s global traffic share dropped to 74.1%, down from 87.1% a year ago and 76.4% just a month prior. Despite maintaining a commanding lead, the OpenAI-developed chatbot has seen a gradual decrease in its share over the past year amid intensifying competition.
Meanwhile, Google’s Gemini recorded significant growth, climbing from 6.4% to 12.9% in the same period. The platform steadily increased its user base throughout 2025, likely fueled by its integration with Google Search and Android devices.
Perplexity AI also showed strong momentum, surpassing the 2% global share threshold for the first time to reach 2.4%.
The report also showed that other competitors maintained stable but smaller shares.
Anthropic’s Claude and xAI’s Grok each held 2.0%, while Microsoft’s Copilot remained at 1.2%, continuing its role as an embedded assistant across Office and Windows products.
In addition, DeepSeek, a China-based AI platform that peaked earlier this year, slipped slightly to 3.7% from 4.0% in September.

Read more BitPinas articles about AI here.
AI Models Can Suffer “Brain Rot”
While Similarweb’s data reflects rising competition among AI tools, another study warned of the long-term risks of poor data quality in training these systems.
Researchers from the University of Texas at Austin, Texas A&M University, and Purdue University found that LLMs may suffer a long-term decline in reasoning ability when continuously trained on low-quality, engagement-focused data.
The study, titled “LLMs Can Get Brain Rot!,” found that AI systems exposed to large amounts of “junk” text, such as viral social media posts with poor semantic quality, showed significant drops in performance across reasoning, comprehension, and safety benchmarks.
Tests showed that accuracy on reasoning tasks, such as the ARC-Challenge with chain-of-thought prompts, fell from 74.9% to 57.2% as the proportion of junk data increased. Similarly, long-context understanding scores declined from 84.4% to 52.3%. Researchers said the main failure pattern was “thought-skipping,” where models began omitting key reasoning steps in their answers.

The team also noted that the decline was partly irreversible, as additional instruction tuning or retraining with high-quality data failed to fully restore lost performance. Even after remediation, degraded models showed around a 17% gap compared to control versions trained only on clean data.
Researchers warned that continual pretraining on low-quality web content could lead to “cognitive decline” in AI systems, similar to how humans are affected by poor information diets.
They urged developers to perform regular “cognitive health checks” on large models and to tighten quality controls on training datasets to prevent long-term degradation.
“The decline includes worse reasoning, poorer long-context understanding, diminished ethical norms, and emergent socially undesirable personalities. Fine-grained analysis shows that the damage is multifaceted in changing the reasoning patterns and is persistent against large-scale post-hoc tuning. These results call for a re-examination of current data collection from the Internet and continual pre-training practices. As LLMs scale and ingest ever-larger corpora of web data, careful curation and quality control will be essential to prevent cumulative harms.”
LLMs Can Get Brain Rot!
What are General AI Tools?
General AI tools are designed to handle a wide range of tasks by interpreting human language and generating responses through chat or search interfaces, according to the Similarweb Global AI Tracker report.
Many of these tools offer both free and paid versions that allow users to make advanced queries and access API integrations.
The report noted that sectors such as search, discussion forums, social media, and educational technology are among those being disrupted by the growing adoption of general AI tools. Similarweb also clarified that it bases its growth measurements on total website visits, excluding API usage and third-party integrations.
Lastly, Similarweb wrote that the diversification of user preferences toward models offering real-time access, personalization, and niche functions is reshaping the generative AI market as it enters its third year of mass adoption.
This article is published on BitPinas: ChatGPT’s Global Share Drops to 74%; Another Study Warns of AI “Brain Rot” Risks