
‘75% of desktop scams are social engineering’

It’s time we put people, not tech, at the center of cybersecurity; Indians are targeted with nearly 60 scam attempts a week, says Villiam Lisy, senior scientist at Gen


Nasdaq-listed Gen is a global company dedicated to powering digital freedom through its trusted cyber safety brands, Norton, Avast, LifeLock, Avira, AVG, ReputationDefender and CCleaner.

In an interview with Bizz Buzz, Villiam Lisy, Principal Scientist, AI at Gen, says, “There’s a new generation, and it’s not Gen X, Y, or Z. It’s Gen D: Generation Digital. Our family of consumer brands is rooted in providing safety for the first digital generations. Now, Gen empowers people to live their digital lives safely, privately, and confidently today and for generations to come. We bring award-winning products and services in cybersecurity, online privacy and identity protection to more than 500 million users in more than 150 countries.”

What is social engineering?

Technical security vulnerabilities represent only part of the risks in the online world. Unfortunately, in the security chain, humans are often the weakest link; social engineering is the practice of taking advantage of people’s natural vulnerabilities to carry out malicious acts. Cyber criminals rely on interpersonal manipulation and deliberately exploit character traits and emotions such as helpfulness, trust, fear, or respect for authority to feign a personal relationship with the victim. In this way, cyber criminals entice their victims to disclose sensitive information, make bank transfers or install malware on their private PC or in their employer's corporate network.

How big is the social engineering and digital safety sector?

Today, social engineering makes up more than 75 per cent of all cyberthreats on both desktop and mobile. This means that criminals are actively trying to manipulate our vulnerabilities to get us to transfer money and give up precious information.

Where does India stand in cybersecurity as it becomes a data-driven economy?

Scammers target Indian adults with nearly 60 scam attempts a week on average. Millennials in India are nearly twice as likely to be scammed as older adults (47 per cent versus 26 per cent), indicating that upcoming generations, who share more of their data online, are much more at risk when it comes to cyberthreats.

How do you see the growth potential of digital marketing and the role of social influencers in spreading awareness?

Indians are most often targeted by scammers through social media. Digital marketing and social media influencers could be very influential in spreading awareness about how people can help protect themselves – talking about the threats where they are happening most is a powerful approach.

Why is it important to strengthen human-centric digital safety?

Traditionally, the focus of cyber security has primarily been on closing technical security gaps. We are making progress every day in optimising our technologies to defend our customers against cyberattacks. However, those threats that directly target people – such as social engineering threats – are still massively neglected. Other examples include cyberbullying, life-threatening internet challenges, the spread of fake news, and undesirable side-effects of personalisation algorithms.

According to Gen, over 75 per cent of desktop scams in 2023 were due to social engineering. In India, nearly 8 in 10 adults (78 per cent) reported that they have fallen victim to a cyber crime at some point, with over two-thirds (68 per cent) experiencing a cyber crime in 2022. The 243 million adults in India who experienced cyber crime in the past 12 months spent over 2.3 billion hours trying to resolve the issues created (an average of 9.5 hours each), for an estimated Rs 2.4 trillion lost last year alone.

Of the Indian adults who have been cybercrime victims this past year, 81 per cent were impacted financially. On top of this comes the influence of not only people, but institutions and nations, such as the swaying of elections and politics through fake news and deep fakes. This is where Human-Centric Digital Safety (HCDS) becomes an increasingly important lever for making the internet a safer place. Through this approach, we focus on the “human factor” of digital safety and put educating people at the center of cyber security.

Why can cybersecurity companies like Avast successfully combat HCDS threats?

There are several good reasons why cybersecurity companies should be committed to HCDS. For Avast in particular, the main reason is our independence: we are completely committed to keeping our users as secure online as possible – that is our expertise – so we don’t have to worry about engagement rates or advertisers, as the big e-commerce and social media platforms do.

In addition, we can rely on anonymised data from more than 500 million customers worldwide. This means that our Threat Labs can identify specific patterns of fraud and misinformation across several networks at an early stage and quickly protect our users. Our customers also benefit from the expertise and experience we have gained over the years in combating cyberattacks, so we can regularly provide them with sensible, easy-to-implement rules of conduct to protect themselves against social engineering. And who can better combat online threats of all kinds, and inform people about them, than those who deal with them every day? These figures are drawn from the 2023 Norton Cyber Safety Insights Report: Online Creeping, conducted online in partnership with The Harris Poll, which surveyed 10,003 adults aged 18+ across 10 countries, including 1,000 Indian adults, between November 29 and December 19, 2022.

How can artificial intelligence (AI) protect against social engineering?

Personalisation algorithms or deep fakes have shown that some dangers on the internet are essentially based on artificial intelligence. But AI can also successfully help fight cyber crime. For example, tremendous advances in the field of natural language understanding by computers make it possible to automatically explain why a certain message is likely a scam, detect toxic or coercive conversations, or verify that a claim is not true based on trustworthy evidence.
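The explainability idea described above – flagging a message as a likely scam while telling the user why – can be illustrated with a deliberately simple sketch. This is not Gen’s or Avast’s actual detection technology (which the interview does not detail); it is a toy rule-based classifier using hypothetical indicator patterns (urgency, payment pressure, credential requests) purely to show the shape of an explainable detector.

```python
# Illustrative sketch only -- a toy, rule-based scam-signal detector that
# explains *why* a message looks suspicious. Real systems use far richer
# natural language understanding; the indicator list here is hypothetical.
import re

SCAM_INDICATORS = {
    "urgency": re.compile(r"\b(act now|urgent|immediately|within 24 hours)\b", re.I),
    "payment pressure": re.compile(r"\b(wire transfer|gift card|bitcoin|processing fee)\b", re.I),
    "credential request": re.compile(r"\b(verify your account|password|one-time code)\b", re.I),
    "too good to be true": re.compile(r"\b(you have won|lottery|free prize|guaranteed)\b", re.I),
}

def explain_scam_signals(message: str) -> list[str]:
    """Return a human-readable reason for each indicator the message matches."""
    reasons = []
    for label, pattern in SCAM_INDICATORS.items():
        match = pattern.search(message)
        if match:
            reasons.append(f"{label}: matched '{match.group(0)}'")
    return reasons

msg = "URGENT: verify your account within 24 hours or pay a processing fee."
for reason in explain_scam_signals(msg):
    print(reason)
```

A production system would replace the keyword rules with trained language models, but the user-facing output – a concrete reason per flag – is the point the interview makes about explainable AI defences.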

In the future, this could increasingly be used to combat fake news or hate speech on social media platforms; algorithms are constantly improving their ability to infer complex emotional states. AI systems now perform such tasks even better than systems that still rely on human decisions.

With this in mind, AI offers various solutions to protect against social engineering, helping us manage the sheer mass of digital threats whose distribution is amplified by automatic text and image generators. What’s more, AI can keep up with the pace of the internet.

YouTube videos go viral within hours, generating millions of clicks; AI systems can scan comment sections quickly, efficiently, and reliably for problematic content.

Progress is also being made in communication with users. So far, we have only been able to warn them about technical threats like malware, not pure HCDS threats; chatbots, however, are getting better and better at carefully explaining these threats to users.

Santosh Patnaik