The fast-growing popularity of AI-powered ChatGPT since its debut last year has not only created a buzz in the tech industry but also caught the attention of hackers.
In a recently released security report, Facebook’s parent company, Meta, warns that threat actors are distributing malware disguised as ChatGPT and other AI-related tools, targeting users through malicious browser extensions, ads, and various social media platforms in order to compromise accounts and steal personal information.
“Over the past several months, we’ve investigated and taken action against malware strains taking advantage of people’s interest in OpenAI’s ChatGPT to trick them into installing malware pretending to provide AI functionality,” Meta writes in the report.
Meta claims that since March 2023, it has detected nearly ten new malware families using ChatGPT and other generative AI-related themes designed to deliver malicious software to users’ devices.
The company has blocked more than 1,000 unique ChatGPT-themed malicious URLs, offering malware disguised as ChatGPT or other AI-related tools, from being shared on Facebook, Instagram, and WhatsApp. Meta says it has also notified “industry peers, researchers, and governments” about the links.
“In one case, we’ve seen threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools. They would then promote these malicious extensions on social media and through sponsored search results to trick people into downloading malware,” the company added.
“In fact, some of these extensions did include working ChatGPT functionality alongside malware, likely to avoid suspicion from official web stores.”
Meta said that in one recent campaign, it disrupted hackers’ activities that leveraged people’s interest in OpenAI’s ChatGPT to lure users into installing malware. However, after detection by its security teams and industry counterparts, the bad actors quickly pivoted to other themes, including posing as Google Bard, TikTok marketing tools, pirated software and movies, and Windows utilities.
According to Meta, the main aim behind the ChatGPT-themed malware is to run unauthorized ads from compromised business accounts across the internet.
The identified malware families included Ducktail, NodeStealer, and newer strains that hosted their payloads on a number of services across the internet, including the file-sharing services Dropbox, Google Drive, Mega, MediaFire, Discord, Atlassian’s Trello, Microsoft OneDrive, and iCloud.
Meta identified these malware operations at different stages of their lifecycle. In response to this development, Meta has added new controls to help businesses stay safe across their Meta accounts.
These controls include a new support tool that guides people step-by-step through identifying and removing malware, including with third-party antivirus tools. Meta has also given businesses more visibility and control over administrator changes in Business Manager, as well as increased protections for sensitive account actions.
Additionally, the company will begin rolling out “Meta Work” accounts later this year, allowing business users to log in and operate Business Manager without a personal account. “This will help keep business accounts more secure in cases when attackers begin with a personal account compromise,” Meta said in a statement.