Not every AI tool you stumble across in your phone’s app marketplace is the same. In fact, many of them may be more of a privacy gamble than you might think.

A raft of unlicensed or unsecured AI apps on the Google Play store for Android, including apps marketed for identity verification and editing, has exposed billions of records containing personal data, cybersecurity experts have confirmed.

A recent investigation by Cybernews found that one Android app in particular, “Video AI Art Generator & Maker,” leaked 1.5 million user images, over 385,000 videos, and millions of AI-generated media files. Researchers traced the flaw to a misconfigured Google Cloud Storage bucket that left personal files open to outsiders. In total, the publication reported, over 12 terabytes of users’ media files were accessible via the exposed bucket. The app had 500,000 downloads at the time.

Another app, called IDMerit, exposed know-your-customer data and personally identifiable information from users across 25 countries, predominantly in the U.S.

The exposed information included full names, addresses, birthdates, ID documents, and contact details, amounting to a full terabyte of data. Both apps’ developers fixed the vulnerabilities after researchers notified them.

Still, cybersecurity experts warn that lax security practices among these types of AI apps pose a widespread risk to users. Many AI apps, which often store user-uploaded files alongside AI-generated content, also rely on a widely criticized practice known as “hardcoding secrets”: embedding sensitive information such as API keys, passwords, or encryption keys directly into the app’s source code. Cybernews found that 72 percent of the hundreds of Google Play apps its researchers analyzed had similar security vulnerabilities.
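To see why hardcoding secrets is risky, here is a minimal, hypothetical sketch (none of this code or these values come from the apps in the report): a credential written as a literal ships inside the app itself, where anyone who decompiles the package can read it, while loading it from the environment at runtime keeps it out of the distributed code.

```python
import os

# Risky pattern: the secret is a literal in the source code, so it is
# bundled into every copy of the app and trivially extracted.
HARDCODED_API_KEY = "sk-example-12345"  # hypothetical placeholder value

# Safer pattern: fetch the secret from the environment (or a secrets
# manager) at runtime, so it never appears in the shipped code.
def get_api_key() -> str:
    key = os.environ.get("API_KEY")
    if key is None:
        raise RuntimeError("API_KEY is not set")
    return key
```

The same idea applies to passwords and encryption keys: anything committed to source code should be treated as public once the app is distributed.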
