Bluesky released its first transparency report this week documenting the actions taken by its Trust & Safety team and the results of other initiatives, like age-assurance compliance, monitoring of influence operations, automated labeling, and more.

The social media startup — a rival to X and Threads — grew nearly 60% in 2025, from 25.9 million users to 41.2 million. That figure includes accounts hosted on Bluesky’s own infrastructure as well as those running on their own servers as part of the decentralized social network built on Bluesky’s AT Protocol.

During the past year, users made 1.41 billion posts on the platform, which represented 61% of all posts ever made on Bluesky. Of those, 235 million posts contained media, accounting for 62% of all media posts shared on Bluesky to date.

The company also reported a more than fivefold increase in legal requests from law enforcement agencies, government regulators, and legal representatives in 2025, with 1,470 requests, up from 238 in 2024.

While the company previously shared moderation reports in 2023 and 2024, this is the first time it’s put together a comprehensive transparency report, one that covers areas beyond moderation like regulatory compliance and account verification.

Moderation reports from users jump 54%

Compared with 2024, when Bluesky saw a 17x increase in moderation reports, growth was more modest this year: user reports rose 54%, from 6.48 million in 2024 to 9.97 million in 2025.

Though the number jumped, Bluesky noted that the growth “closely tracked” its 57% user growth over the same period.


Around 3% of the user base, or 1.24 million users, submitted reports in 2025, with the top categories being “misleading” (which includes spam) at 43.73% of the total, “harassment” at 19.93%, and sexual content at 13.54%.

A catch-all “other” category captured 22.14% of reports that didn’t fall under those categories or under smaller ones like violence, child safety, rule-breaking, or self-harm, each of which accounted for a much smaller share.

Within the “misleading” category’s 4.36 million reports, spam accounted for 2.49 million reports.

Meanwhile, hate speech was the largest of the named sub-categories within the 1.99 million “harassment” reports, with about 55,400 reports. Other areas that saw activity included targeted harassment (about 42,520 reports), trolling (29,500 reports), and doxxing (about 3,170 reports).

However, Bluesky said the majority of “harassment” reports fell into the gray area of anti-social behavior: conduct such as rude remarks that didn’t fit into more specific categories like hate speech.

A screenshot showing a table of report categories with totals and percentages of the total, with misleading content and harassment leading the table.
Image Credits: Bluesky

Most of the sexual content reports (1.52 million) concerned mislabeling, Bluesky says, meaning that adult content was not properly marked with metadata — tags that allow users to control their own moderation experience using Bluesky’s tools.

A smaller number of reports focused on non-consensual intimate imagery (about 7,520), abuse content (about 6,120), and deepfakes (over 2,000).

Reports focused on violence (24,670 in total) were broken down into sub-categories like threats or incitement (about 10,170 reports), glorification of violence (6,630 reports), and extremist content (3,230 reports).

In addition to user reports, Bluesky’s automated system flagged 2.54 million potential violations.

One area where Bluesky reported success was a decline in daily reports of anti-social behavior, which dropped 79% after the company implemented a system that identifies toxic replies and reduces their visibility by putting them behind an extra click, similar to what X does.

Bluesky also saw a drop in user reports month-over-month, with reports per 1,000 monthly active users declining 50.9% from January to December.

A graph showing the number of reports per month, with January 2025 seeing the most reports, largely in the violence, harassment, and other categories.
Image Credits: Bluesky

Outside of moderation, Bluesky noted that it removed 3,619 accounts for suspected influence operations, most of them likely operating from Russia.

The company said last fall it was getting more aggressive about its moderation and enforcement, and that appears to be true.

In 2025, Bluesky took down 2.44 million items, including accounts and content. The year prior, Bluesky had taken down 66,308 accounts, and its automated tooling removed another 35,842.

Moderators also took down 6,334 records, and automated systems removed 282.

A pie chart breaking down Bluesky’s takedowns by policy.
Image Credits: Bluesky

Bluesky also issued 3,192 temporary suspensions in 2025, along with 14,659 permanent removals for ban evasion. Most of the permanent removals targeted accounts engaged in inauthentic behavior, spam networks, and impersonation.

However, the report suggests that Bluesky prefers labeling content to booting out users. Last year, the company applied 16.49 million labels to content, up 200% year-over-year, while account takedowns grew 104%, from 1.02 million to 2.08 million. Most of the labeling involved adult and suggestive content or nudity.
