Generative AI is dominating the news cycle — and with good reason — as companies embrace opportunities to boost productivity and improve customer experiences in a myriad of ways.
But amid this innovation lurks a new risk for enterprises: How can they keep trusted data secure?
The latest findings from Salesforce’s Generative AI Snapshot Research Series, an ongoing study of more than 4,000 full-time employees, show that 73% of employees believe generative AI introduces new security risks, even though most already use or plan to use the technology. Despite these concerns, the research reveals that few know how to protect their companies from these risks.
Generative AI adoption is moving quickly and will transform customer relationships
Employees see the potential of generative AI and are already using or planning to use the technology, citing benefits such as serving customers better and saving time.
As generative AI becomes more widely adopted, trust and security concerns surface. With little knowledge of how to use generative AI responsibly, employees risk introducing inaccuracies, biases, and security issues.
“Generative AI has the potential to help businesses connect with their audiences in new, more personalized ways,” said Paula Goldman, Chief Ethical and Humane Use Officer, Salesforce. “As companies embrace this technology, businesses need to ensure that there are ethical guidelines and guardrails in place for safe and secure development and use of generative AI.”