Express Computer
Home  »  Guest Blogs  »  Gartner: Addressing the top security risks in AI coding

Gartner: Addressing the top security risks in AI coding

0 3

By Aaron Lord, Sr Director Analyst, Gartner

Generative AI coding assistants are set to revolutionise the way software engineers develop applications. Gartner predicts that by 2028, the collaboration between humans and AI assistants could reduce coding task completion times by 30%. However, these emerging technologies come with several inherent risks.

A recent Gartner survey reveals that one in three IT leaders believe generative AI will fundamentally alter their organisational operations. Yet, they also cite data security, inaccuracy, misuse, and governance as their primary concerns. To mitigate these risks effectively, software engineering leaders must thoroughly understand the prevailing security threats, their implications, and the appropriate mitigation strategies associated with AI coding assistants, which are outlined below.

Vulnerable output
A large language model (LLM) is trained on collections of code samples, and a percentage of those code samples will contain security vulnerabilities, either through malice or mistake. When accepted as is, this vulnerable output may introduce security issues in applications. Inexperienced developers may have an overreliance on the AI coding assistant, exacerbating the risks.

To address this, employ Application Security Testing (AST). Automated security scanning in the form of static application security testing (SAST) and dynamic application security testing (DAST) can help discover vulnerabilities in code. Furthermore, ensure training is in place to educate software engineers on the risks of becoming overly reliant on AI coding assistants. Restrict the use of AI coding assistants to those who have completed training.
Additionally, favour tools that allow the user to choose a response from a few AI suggestions. This will enable the user to review and, if necessary, weed out inappropriate suggestions. Use code security assistants as an additional guardrail against vulnerable output.

Intellectual property (IP) violation
The LLM training code license and ownership may introduce IP and copyright issues, as AI coding assistants could generate copy-protected code that is owned by another organisation. This presents a significant risk for enterprises.
To mitigate this risk, software engineering leaders must perform a third-party review to evaluate how the vendor prevents IP violations. It is crucial to ask the vendor what data sources its LLM uses to train the AI coding assistant, as the risk of IP violations may increase depending on the source of the training code. Additionally, use software composition analysis tools that are adept at identifying licensing issues with software packages. These tools can provide an extra layer of security and ensure compliance with IP regulations.

Training data poisoning
Training data poisoning occurs when malicious actors intentionally add insecure or harmful data to the LLM. This can result in the AI coding assistant producing vulnerable, inaccurate, or even toxic output. To address this, perform a third-party review to evaluate how the vendor prevents training model poisoning. It is essential for the vendor to verify the provenance of the training data to ensure that malicious actors are not influencing the data on which the LLM is trained.

Additionally, discover how often the AI model is refreshed or updated. Regularly refreshing the LLM can include the removal of data that has violated privacy or IP policies, as well as updating the LLM can lead to improvements in data protection, performance, and accuracy over time. Employ Application Security Testing (AST) as another layer of defence. AST can help discover vulnerable output that is intentionally malicious, ensuring that any harmful code is identified and addressed before it impacts the application.

Adversarial prompting
Adversarial prompting occurs when malicious actors manipulate the behaviour of the LLM by crafting adversarial prompts. This can compromise open-source LLMs, which may then be transferred to commercial LLMs, leading to unintended changes in the behaviour of the AI coding assistant. A particularly dangerous subtype of adversarial prompting is “prompt injection”, where crafted prompts cause the LLM to take unauthorised actions.

To mitigate these risks, software engineering leaders should perform a third-party review to evaluate how the vendor accounts for adversarial prompting. This review should include an assessment of the vendor’s strategies for preventing both general adversarial prompting, including prompt injection. By implementing these strategies, software engineering leaders can better protect their systems from the risks associated with AI coding assistants and ensure their reliable performance.

Get real time updates directly on you device, subscribe now.

Leave A Reply

Your email address will not be published.

LIVE Webinar

Digitize your HR practice with extensions to success factors

Join us for a virtual meeting on how organizations can use these extensions to not just provide a better experience to its’ employees, but also to significantly improve the efficiency of the HR processes
REGISTER NOW 

Stay updated with News, Trending Stories & Conferences with Express Computer
Follow us on Linkedin
India's Leading e-Governance Summit is here!!! Attend and Know more.
Register Now!
close-image
Attend Webinar & Enhance Your Organisation's Digital Experience.
Register Now
close-image
Enable A Truly Seamless & Secure Workplace.
Register Now
close-image
Attend Inida's Largest BFSI Technology Conclave!
Register Now
close-image
Know how to protect your company in digital era.
Register Now
close-image
Protect Your Critical Assets From Well-Organized Hackers
Register Now
close-image
Find Solutions to Maintain Productivity
Register Now
close-image
Live Webinar : Improve customer experience with Voice Bots
Register Now
close-image
Live Event: Technology Day- Kerala, E- Governance Champions Awards
Register Now
close-image
Virtual Conference : Learn to Automate complex Business Processes
Register Now
close-image