No free lunch: New security initiatives for AI and our investment in Harmonic

Generative AI applications are everywhere. Naturally, with over a hundred million people using them, many employees want to apply them to their jobs. There are now almost 9,000 AI apps serving a variety of functions, including LLM applications for finance, sales, and software development.

These developments are great for workforce productivity. For security, however, these applications can be a nightmare. Many of these platforms ask for access to confidential data, and data leakage is a serious problem. In March, OpenAI’s ChatGPT had a bug that allowed some users to see what other users were asking it. And for the wide swath of startups building most of these applications, security is not always a priority.

It’s not hard to see why CISOs and security leaders are worried. Apple has restricted some employees from using ChatGPT and other AI tools like Microsoft’s GitHub Copilot over data privacy concerns, and JPMorgan made a similar decision over compliance concerns — even as both companies build GenAI products internally for their own customers. Samsung banned the use of GenAI tools on company-owned devices; recognizing that some employees would simply turn to AI services on personal devices instead, it asked them not to input company data there either.

Existing data protection solutions are not well equipped for this challenge. Many are fundamentally rules-based, which limits flexibility around who can access which tools and what data they can input. They also generate a lot of false positives, creating headaches for security analyst teams that must painstakingly review every flagged issue.
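To see why rules-based approaches struggle, here is a minimal, hypothetical sketch (the `SSN_RULE` pattern and `flag` helper are invented for illustration, not any vendor's actual product): a classic regex-style DLP rule both over-matches benign text and misses a lightly reworded leak.

```python
import re

# A typical regex DLP rule for US Social Security numbers (illustrative only)
SSN_RULE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def flag(text: str) -> bool:
    """Return True if the rule would flag this text for review."""
    return bool(SSN_RULE.search(text))

# A real SSN-shaped secret is caught...
print(flag("Customer SSN: 123-45-6789"))              # True

# ...but so is a harmless order reference with the same shape —
# a false positive an analyst must now triage by hand.
print(flag("Order ref 555-12-3456 shipped yesterday"))  # True

# Meanwhile, a lightly reworded leak slips through entirely.
print(flag("The SSN is one-two-three, 45, 6789"))     # False
```

Rigid patterns cannot distinguish context or intent, which is exactly the gap that policy approaches with human-like understanding aim to close.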

At Storm, we’re excited to be investing in Harmonic’s $7M seed round as they work to solve this problem and help accelerate enterprise adoption of Generative AI. It’s great to have the opportunity to work with Alastair and Bryan again, having backed their previous company, Digital Shadows, which was sold to ReliaQuest/KKR in July 2022.

Harmonic is solving this issue by creating visibility into every AI application a company’s employees use and then identifying compliance and security risks. The solution can prevent the leakage of complex data and IP with human-like accuracy. Harmonic also lets security teams define their policies in plain English instead of complex DLP rules, which allows for more nuanced security policies and a better experience for end users who want to use AI applications. Finally, Harmonic’s software allows companies to automatically resolve incidents when they do arise, lightening the load on stretched security teams. As the GenAI landscape continues to develop, we’re excited to see how Harmonic can help lead companies to secure adoption.
