Fraudsters steal accounts with fake ChatGPT tools: how will Meta combat this?

May 5, 2023  14:19

Meta plans to introduce new "work accounts" to protect companies from hackers, as ChatGPT and other generative artificial intelligence tools have fueled a rise in account takeovers and data theft.

In a statement, the company said that since March 2023 alone, its researchers have discovered ten families of malware that pose as ChatGPT and similar tools to steal user accounts. Meta said it has blocked more than 1,000 malicious links on its platform.

According to Meta, the scams often involve mobile apps or browser extensions that pose as ChatGPT tools. In some cases these tools do offer a degree of ChatGPT functionality, but their real purpose is to steal user data.

The company said the attackers are specifically targeting people who run businesses on Facebook or otherwise use the platform for work. Fraudsters often break into users' personal accounts to gain access to a linked business page or advertising account, which may have a bank card attached to it.

To combat this, Meta plans to introduce a new account type for businesses called Meta Work. These accounts will allow users to access Facebook Business Manager tools without logging into a personal Facebook account.

"This will help increase the security of business accounts in cases where attackers start with an attack on a personal account," the company said in a statement. Meta said it will begin limited testing of the new work accounts this year and expand it over time.

Meta's researchers are not the first to warn about fake tools trading on the ChatGPT name to hijack accounts. In March of this year, the security firm Guardio discovered a Chrome extension that posed as ChatGPT software but was used to compromise a number of Facebook accounts.
