Exploring the Threat Landscape: Brand Impersonation

With consumer to brand interactions increasingly taking place online, widespread impersonation is quickly becoming an everyday reality. From the rising tide of impersonation profiles on social media, to scam websites designed to mislead and defraud, consumers are being exploited by bad actors at an alarming rate.

In 2023, impersonation scams alone were responsible for a staggering global loss of $6.8b[1] – impacting consumer finances, worldwide sales tax revenue, and millions of jobs across all sectors. And on a business level, the impact of losing control over how a brand is experienced online can be catastrophic. Consumer safety issues, revenue leakage, and reputational damage are just three of the likely consequences of unmitigated brand abuse.

Our latest eBook, The Three Key Components of a Successful Brand Protection Program, analyzes the current threat landscape facing brands and the tools businesses can use to maintain consumer trust and brand reputation.

Bad actors exploit consumers through three distinct threats: counterfeits, brand impersonation, and grey market goods. In this blog we explore current and emerging brand impersonation threats in more detail and how you can adapt your Brand Protection strategy to safeguard your consumers’ digital trust.

The impact of impersonation on consumers and brands

Brand impersonation refers to the imitation of legitimate brands in order to promote counterfeits, scams, or other illicit items across social media, marketplaces, websites and other key channels. Impersonators will steal your carefully crafted logos, trademarks, official images, hashtags, campaign slogans, and other marketing materials to reach your consumers online.

It’s likely that you’ll only hear about these fake websites and adverts after the fact — alerted by victims, other business functions, or even senior management. By this point, it’s already too late; thousands of well-intentioned consumers have visited the websites and fallen victim. Some have likely had their valuable personal data harvested and resold to scammers for further attacks on consumer digital safety and privacy.

Real world examples of brand impersonation

As highlighted in our recent eBook, impersonators often use a combination of eCommerce channels, social channels, and P2P transaction services to sell infringing products and target consumers with phishing scams. These ‘infringer networks’ will often re-create old pages with very similar or identical layouts and photos.

Take, for example, a consumer typing “Brand X soccer boots” into a search engine. They will be greeted with thousands of website listings, and then bombarded with retargeted adverts for soccer boots across search engines, social media, and websites – but many of those results and adverts direct to sites selling fakes. Worse still, some of these sites aren’t even selling fakes – they’re simply harvesting personal information and credit card details.

AstraZeneca – Fighting impersonation websites & social media profiles

During the COVID-19 pandemic, AstraZeneca’s brand protection program focused on threats to its newly developed vaccine – this led to the discovery of countless scams both online and offline. The volume of fraudulent activity was extraordinary, with criminal entities offering hundreds of millions of doses to governments. Of course, these doses never existed. Soon after, AstraZeneca encountered fake websites offering vaccines directly to consumers.

AstraZeneca partnered with Corsearch to take these websites and their illicit operators down, safeguarding patients and other third parties. On the social media front, Corsearch and AstraZeneca’s Global Security Team were able to quickly establish strong relationships with social media platforms, with a focus on swiftly removing scam posts that purported to sell vaccines. The team also monitored on-the-ground activity, such as images of packaging being stolen and spread through online conspiracy theory groups.

After the pandemic, bad actors switched their focus to other parts of AstraZeneca’s portfolio, such as respiratory and oncology. Illegal product diversion became a significant issue – but impersonation profiles and fake websites remained a threat.

“Through our partnership with Corsearch, we are already taking down 50 to 60 replica and phishing websites a month that try to defraud our suppliers and the public. This leads to tens of millions of dollars each quarter in cost and risk avoidance for AstraZeneca.”

Dimeji Dimeji, Head of Assurance Services, AstraZeneca

Read the AstraZeneca story >

The threat of generative AI

Bad actors are increasingly using artificial intelligence (AI) to fuel their impersonation schemes and phishing scams.

Deepfakes

Deepfakes are hyper-realistic video or audio recordings generated by AI that can convincingly depict a person saying or doing things they never actually did. In a recent study, only 32.9% of participants detected something out of the ordinary when asked to view several videos that included deepfakes[2].

Fake social media accounts

The emergence of generative AI has given bad actors the tools to create convincing fake social media accounts and content at scale – and at a faster pace than ever. The quantity and quality of misinformation could increase, posing challenges to the authenticity of content on social media.

Phishing content

AI can generate convincing phishing content that mimics legitimate communications from trusted brand profiles, making it harder to distinguish between real and fake messages. It is also used to make scams more personalized and convincing, paving the way for large-scale social engineering attacks.

Online travel agency Booking.com recently warned of a steep rise in travel scams fueled by artificial intelligence. Marnie Wilking, Chief Information Security Officer at Booking.com, stated that the increase could be as much as 900% over the last 18 months[3] – an uptick attributed to phishing scams powered by generative AI tools.

eBook: The three components to fighting brand impersonation

Successful Brand Protection programs combine three key components to protect consumers and deliver substantial ROI.

Use our eBook to dive deeper into the technology and expertise you’ll need from your solution provider to support these components and realize your strategy.

Read eBook >

Get visibility and control of your brand, anywhere online

With impersonation threats continuing to evolve at a rapid pace, it’s essential to take a proactive approach and adapt your strategy quickly. Your team needs a solution that effectively blends advanced AI with human expertise to provide full visibility of threats, swift enforcement at scale, and lasting impact with tangible ROI.

Our AI-fueled and expert-guided Brand Protection solutions offer:

  • The broadest visibility of threats across all online channels
  • Instant identification of new impersonation profiles
  • Enforcement with proven success
  • Lasting impact to protect your brand, IP, and consumers

Learn more about the AI-assisted capabilities that will enable you to swiftly detect and remove brand impersonation across your key channels in part two of this blog series. Or request a demo to see our capabilities in action.


[1] Visual Capitalist (2024). Visualizing Global Losses from Financial Scams: https://www.visualcapitalist.com/global-losses-from-financial-scams/

[2] Royal Society Open Science (2023). Deepfake detection with and without warnings

[3] BBC News (2024). Booking.com warns of up to 900% increase in travel scams: https://www.bbc.co.uk/news/articles/c8003dd8jzeo