By Harish Kumar G S, Executive Director – Sales at Check Point Software Technologies
Me: “Can you be misused for cyber-attacks?”
ChatGPT: “As an AI language model, I don’t have agency or intentions of my own. However, like any technology, I can potentially be misused by individuals with malicious intent.
Ultimately, the responsibility for the use and potential misuse of AI systems lies with the users. It’s important for individuals and organizations to use AI technology ethically and responsibly, adhering to legal and ethical guidelines, and taking precautions to prevent any harm or misuse.”
In this conversation, ChatGPT is not wrong…
Generative artificial intelligence (AI) tools such as ChatGPT and Google Bard are gaining popularity all over the world. These tools allow users to boost efficiency and productivity in their daily tasks and responsibilities.
However, great capabilities can also be misused. Soon after these tools were introduced, they were used to create malware, some of which appeared in official app stores. Generative AI has been used to write emails and messages for AI-powered phishing campaigns, and an AI-generated YouTube video was found to be malicious.
In India especially, attention needs to be paid to the growing use of AI in businesses.
According to the Microsoft Work Trend Index 2023 report, 90% of Indian leaders say employees they hire will need new skills to be prepared for the growth of AI. The report also highlights that some 83% of Indian employees are willing to delegate as much work as possible to AI to lessen their workload.
Unfortunately, generative AI companies are not addressing security issues as quickly, despite the accelerated rate of business adoption around the world. In a recent report, the Check Point Research team found that Bard (Google’s generative AI tool) imposes almost no restrictions on the creation of phishing emails, and that with minimal manipulation it can be used to develop keylogger malware, which poses a security concern.
For these reasons, some companies have banned the use of AI tools on corporate devices or networks, while others that permit employee access to generative AI tools have ended up as victims of sensitive data leakage.
Therefore, until proper measures are available to prevent sensitive data from leaking and internal information from being stolen, companies should exercise extra care in their security policies.
Mobile devices are no exception. In fact, mobile users are more likely to download a malicious app or fall prey to phishing attempts. Smaller screens and a large volume of applications and notifications can distract users, making it harder for them to avoid clicking on malicious links or downloading malicious files. For many of us, social networks are the most-used apps on our mobile devices, which means mobile users are more exposed to social engineering and phishing. Moreover, the blurred line between personal and professional use of mobile devices makes them a main entry point into an organization, so protection against AI threats should be a top priority.
In India, the mobile landscape has been growing exponentially. Reports by Deloitte indicate that by 2026 India will have 1 billion smartphone users, with rural areas driving the sale of internet-enabled phones.
What do you need to do to protect your mobile fleet against AI threats?
- The most important thing to realize is that mobile device users cannot be trusted to spot attacks on their own. Attacks keep increasing in number and sophistication and are now too advanced and complicated to be detected by a human being. A classic example is phishing sites that have evolved so much that they look identical to the original site. To deal with this level of sophistication, you need advanced technology to protect the device and prevent threats from entering your organization.
- Mobile devices are an entry point to your organization. With AI evolving so quickly, detection and remediation are not enough. Your mobile security solution must include preventative capabilities to stop the threat at the device level before it gains any access to your corporate assets.
- Generative AI learning capabilities are overwhelmingly impressive. To keep up, you need to use AI to fight AI. Make sure your security solution uses AI and machine learning technology to maintain the upper hand.
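To make the first point concrete, here is a minimal sketch of why machines outperform humans at spotting look-alike phishing domains. This is purely illustrative (it is not Check Point's detection logic, and the brand list is a made-up example): it flags domains that are nearly, but not exactly, identical to a known brand, the kind of one-character difference a distracted mobile user easily misses.

```python
# Illustrative heuristic only: flag domains that closely imitate known brands.
# KNOWN_BRANDS is a hypothetical example list, not a real threat feed.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["paypal.com", "microsoft.com", "google.com"]

def looks_like_phishing(domain: str, threshold: float = 0.85) -> bool:
    """Return True if `domain` is suspiciously similar to a known brand
    without being an exact match (e.g. 'paypa1.com' vs 'paypal.com')."""
    for brand in KNOWN_BRANDS:
        ratio = SequenceMatcher(None, domain.lower(), brand).ratio()
        if domain.lower() != brand and ratio >= threshold:
            return True
    return False

print(looks_like_phishing("paypa1.com"))   # near-identical to paypal.com -> True
print(looks_like_phishing("example.com"))  # no brand resemblance -> False
```

A real engine would combine many more signals (page content, certificates, reputation), but even this toy check runs at a speed and consistency no human reviewer can match.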
How can Check Point Harmony Mobile help you?
Harmony Mobile is Check Point’s industry-leading Mobile Threat Defense (MTD) solution. It is designed to protect your mobile fleet from any threat, including AI-generated ones.
How do we do this?
- We do not trust what we see. We dig deeper.
Check Point Harmony Mobile’s behavioral engines run every application through different emulators to understand and detect any abnormal, suspicious, risky, or malicious behaviors. Malicious apps, whether man-made or created by generative AI, will be recognized and blocked before they even reach the device.
- We do not let anything infiltrate your organization.
If a malicious agent has made its way onto the device, Check Point Harmony Mobile’s Network Protection, based on ThreatCloud AI engines and state-of-the-art algorithms, will make sure the communication between the malicious app and its command-and-control (C&C) server is blocked and reported to both end-users and security admins.
- We go the extra mile.
If your organization needs a stricter approach, Check Point Harmony Mobile’s vetting engines can tag mobile apps that use Generative-AI services and back-end infrastructures. This means an organization can choose to restrict a device’s access to corporate resources if it contains Generative-AI apps.
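Conceptually, blocking app-to-C&C communication boils down to checking each outbound connection against threat-intelligence verdicts. The sketch below is a hypothetical illustration of that idea only (it is not Harmony Mobile's implementation, and the deny-listed domains are invented examples):

```python
# Illustrative only: a toy on-device network filter that blocks connections
# to known command-and-control domains. BLOCKED_C2_DOMAINS stands in for a
# live threat-intelligence feed; the domains here are made up.
BLOCKED_C2_DOMAINS = {"evil-c2.example", "bad-tracker.example"}

def filter_connection(domain: str) -> str:
    """Return 'block' for a known C&C domain, otherwise 'allow'.
    A real solution would also alert the end-user and the security admin."""
    if domain.lower().rstrip(".") in BLOCKED_C2_DOMAINS:
        return "block"
    return "allow"

print(filter_connection("evil-c2.example"))  # block
print(filter_connection("github.com"))       # allow
```

Production MTD solutions layer far richer signals on top of static deny-lists, such as behavioral analysis and ML-scored reputation, but the block-and-report decision point is the same.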