Conversation with Elad Schulman, CEO and Co-Founder of Lasso Security
Enterprise LLM security; LLM security threats; Industries using LLM security; Balance between LLM security and innovation; Future outlook for LLM security
In this conversation, I speak with Elad Schulman, CEO and Co-founder of Lasso Security, a startup and pioneer in the LLM security space.
In our conversation, we explore a wide range of topics, starting with Elad Schulman's personal journey into the world of AI and cybersecurity. We delve into the core mission and services of Lasso Security, specifically its focus on securing the adoption and integration of LLMs across various industries. We cover the unique challenges and needs of LLM security compared to traditional cybersecurity approaches, highlighting the importance of specialised solutions in this new era. We discuss the differentiators that set Lasso apart in a rapidly evolving market, touching upon the importance of being model agnostic and the company's approach to addressing the specific security needs of organisations using LLMs. Additionally, we touch on the broader implications of LLM adoption in industries, particularly those that are heavily regulated and the balance between innovation and security in deploying these technologies.
Here is the transcript of conversation.
Elad Schulman's background and founding of Lasso Security
Aditya: Before we get into the details of Lasso Security and AI security itself, I'd love to hear about your journey, your own personal journey. Can you tell us a little bit about your own background and what led you into the world of AI and security?
Elad: Sure, thanks. So first of all, thanks for having me here today. It's a pleasure. Let me tell you a bit about my background. So if I'll go way back, I'm actually, as a young kid, mathematics was running in my veins, but also, hacking into all sorts of games and systems since a very early age. So I was a geek, let's call it as such. But professionally, for the last 25 years, I've been in the software industry. Always on the vendor side, always ended up by software B2B security and monitoring. So I've been around the block for a lot of years now, and as we like to say, this is not our first rodeo, so it's not my first rodeo as well.
I started in the early 2000s as a software developer in a company called Mercury Interactive and relatively quickly, I moved on to their management teams, and later on, I moved to product management where I spent the majority of my career. We got acquired by HP software at one point, and then started the journey between startups and corporates. You have to work in one in order to understand what makes it tick. And I think this is where I the first time I learned what makes it tick, how it works. After a few years, I left and founded a startup in cybersecurity called Segasec, doing brand protection and anti-phishing. We had a quite successful journey and got acquired at the end of 2019 by Mimecast. So, the beginning of 2020, I got in there. Covid hit unfortunately, I left there. We parted ways later in the year, and then I took a break, but I moved to the other side and started doing a lot of angel investing on my own and working with funds, joining boards of a few companies, advising a few companies, and then definitely changing gears. And then that might relate to another story of how Lasso began, but fast forward a few years later, but that would probably relate to the story behind Lasso and what brought us there.
Aditya: In terms of what Lasso does and what it stands for, can you tell us a little bit more about what are the key services you offer? And what is it that you're trying to solve?
Elad: Okay, so basically, everyone is storming the LLM era. And we are saying that there are a lot of pioneers out there. And we want to bring these pioneers into the LLM era in a secure, safe way. So our mission is basically to help these organisations join the revolution and enable them to work with these new tools without compromising on security and safety. In the pace that everything is happening, security is not top of mind for them. They want to make sure that it's working and everyone is excited about it. Everyone is playing with it. But we want to make sure that this is happening again in a secure and safe way, without bringing too complex and too cumbersome a world, and definitely not something which is boring for them, but would be exciting for them to use and to help them. So this is basically the premise of our solution and services. And we'll talk about this in a bit, we are making sure that we address the coming world - the unknowns - what is happening today, but also what will happen tomorrow, and that the people can just join the revolution.
LLM Security as New Category
Aditya: What are the main differences between the traditional cybersecurity space and what you're seeing in the LLM security space? Why is it that we need a specialised security offering for LLMs rather than just use the traditional cybersecurity solutions that are out there?
Elad: So this is definitely a question that we're being asked a lot. And we're also writing about that. You can read also a blog post on our website. And we're doing a lot of thought leadership on that. But the basis of it, the way we see it, and I'll explain, why is that LLM security is a new category. It's not yet another feature that belongs in another platform. This new world is completely different. It is conversational, it is unstructured, it is situational, and you need to understand that. And if in the past a lot of the attack vectors were around the binaries of the world, today's new attack surface is text. It is conversation, and as such, you need to understand it. You need to have a deep understanding of how LLMs operate and what makes them tick. And more than that, there are so many, there are thousands and many more. There are tens of thousands of different models out there. You need to understand why one tool behaves in a certain way, and why a different tool behaves in a different way, and would give you different answers for the same question. And then there are the browser security tools, data security and the DLP(data loss prevention) tools. They're trying to solve a generic problem. And while you need browser security for LLMs, you need data security and need DLP for LLMs. They have a variance which is different, and this is why I think that they might provide a feature to address those. But in many cases, it is marketing that they're doing. They have big baggage, and they will not pivot completely to this world, and we can talk about the different attack vectors in this world, and why they cannot address it. But again, in order to understand the broader context and getting to a conversation. This is not their comfort zone, not their DNA, not where their solution lies. And this is why you need dedicated solutions for this world, and I think it's already getting proven, and it will be proven more and more as we go forward.
Aditya: You touched upon the cybersecurity side of things. What are the key differentiators for Lasso? I know there are a few companies that are out there. It's still a burgeoning space, it's still new. I've come across companies like Optiv Security, Aujas, WhyLabs and Mithril Security. But then there's also the foundation model companies like OpenAI and Anthropic and the rest of them. And then there's the cloud providers. And as you've described it, not all of them are focused specifically on LLMs. But can you tell me, what are the key differentiators compared to all these different providers that are targeting this market?
Elad: So around the ones which are not core LLM security. So this is not their focus. And while they can bring a lot of good innovation in that world, still it, it's not enough. And if you're talking about the model providers, they will bring innovation in their space and not something which is generic.
So first of all, it's being model agnostic. So we're not, although we're doing a lot with Chat GPT and OpenAI. We're not LLM security for OpenAI for Chat GPT, but also for others. So one of them is being model agnostic.
The second one is that there are a lot of different interaction points that an organisation has with the LLMs. And we look into all of those and providing products that would solve pains for the employees, for the developers, for the analysts, and for the applications which are connected to the different models. But everything has to be tied together into a single platform because everything interacts. And this is not a point solution for just working with GPT, for example, or chat applications in general, but also looking at a more broader context and integrating those and passing the know-how and the best practices and the specialties between the different products and intertwining them. And looking also at the lifecycle of LLM. And providing visibility into what's happening there, and providing you capabilities to know what you are using and how you are using it, and what you can do with it. And also with the deep expertise into the LLMs.
And also, and this is also key, to train models of our own to address some of the use cases and to identify these. So in each and every one of them, we need to have technical expertise and the secret sauce that we have, but also, and this is part of the concept of everything that Lasso is, is that it is a wild wild West, and it is changing, and there are. There will be new villains, and we will need to adapt.
And this is why I'm saying that the platform would need also to anticipate and be able to shift gears. Once new threats will come into play, and while there are good LLM security companies out there, I think that in each and every one of the areas that I've highlighted we have a slightly different angle, and we're tying it together in a unique way but that the journey it just started, and there probably will be several good LLM security companies, not just one. Of course, we are working to lead the pack. But we need more competitors. We need more to succeed in this in this realm, and I'm happy to see other companies getting into that space.
Aditya: Do you use LLMs within your own product stack?
Elad: So yes, we are. Those are things that we're using internally, and we're fine-tuning them to our needs and the use cases that we address. We're not just relying on LLMs. There's also plain NLP, and even regular expressions that are still valid for some of the cases. I think that in order to address some of the use cases without using your own LLMs, you don't stand a chance.
Aditya: And how does that compare to say traditionally, AI/ML within this space? Right? AI/ML has been used in cybersecurity to understand patterns and predict threat vectors. Do you see a marked difference in how those solutions might evolve, say, compared to using LLMs for the same thing?
Elad: So first of all, we see some of them that are pivoting which involves working with LLMs especially the ones that were trained on huge data. And as such, the attack surface is different. The fact that it is generative makes it a bit different. Also, the fact that a lot more are using it in various deployment models, the employees are using them a lot more. And so things are running very fast. So it is a leap from the previous world, and some of the previous AI/ML security tools would be able to adapt to this world. But still, it is also quite different than that world. But we do see some of those more traditional players moving into this world. And they will have a better chance than someone that was not in the space ever, so they also have a good positioning for fitting into this world.
LLM Security Threats
Aditya: Let's dive into a little bit more of the security features and some of the specific threats that Lasso addresses. Can you just walk us through some of them, and elaborate on one or two a little bit?
Elad: Sure. So let's talk about the attack vectors, or at least what's known today. And then we talk about how we can address them in some cases. So the first one is the more naive one. It is the data leak which are the employees which are mistakenly sharing information they're not supposed to give in. In other cases, they're getting information they're not supposed to get. So it is people that are uploading all the information they have. It's not just asking questions, but putting complete files or complete JSON files into GPT or Bard, and getting them to summarise some of them, or format them, or extract information from them. But within them, there could be sensitive pieces of information, and for that, there are different aspects that one can apply. That is, from the browser extension one can sit and listen to the traffic and block it. It can be a piece of software, each plugging into the gateway to the entrance to the organisation and basically tapping into the data, applying our models on it, and also not just alerting, but also masking some of the data and even blocking some of the interactions based on what we identified there. In some cases it's similar to a more traditional data leak, DLP, or even browser security aspect. But in other cases, when you're potentially putting core IP of an organisation into the LLM, this is not something that the basic tools can identify. Also if you're exposing a piece of code which is proprietary for the organisation - these are some of the concerns. Of course, it has to be done real-time. In other cases, it could be the more relevant attacks for this world. The jailbreaks, or the prompt injection, meaning that someone is trying to intervene with the interaction with the model, whether directly or indirectly, and manipulate and get the model to do something that it's not supposed to do like fit data that it shouldn't, or apply also all sorts of other malicious actions in the backend. And if we're working with the models that are deployed in, integrated into the applications via API calls, this is where it can be the secure gateway of the world that would secure the APIs, and also things that could be plugged into the environment of the developers and be part of the development lifecycle to be embedded within the application code itself, so that different mechanisms, different techniques that can be applied. But basically, wherever there's an interaction, we are thinking, okay, how that interaction is working and what is the best suitable solution to address this pain point, and also the operational considerations of the organisation, as I mentioned before. If you're working with that enterprise, you need to know how it thinks. So it's not just providing the functionality, but if I'm providing you a tool, can you deploy it easily? What is the friction that it introduces? Who are going to be the people that would object to deploying it, not just people from security, but people from it or the development teams. How can we reduce their concerns while integrating a new solution into their into their stack?
Aditya: Let's dive into a little bit more of the security features and some of the specific threats that Lasso addresses. Can you walk us through some of them, and elaborate on one or two a bit?
Elad: Sure. Let's talk about the attack vectors, or at least what's known today, and then we'll discuss how we can address them in some cases. The first one is the more naive one, the data leak, where employees mistakenly share information they're not supposed to. In other cases, they're receiving information they're not supposed to get. So, it involves people uploading all the information they have. It's not just asking questions but putting complete files or complete JSON files into GPT or Bard, and getting them to summarise some of them, or format them, or extract information from them. But within them, there could be sensitive pieces of information. For that, there are different approaches one can apply, such as a browser extension that can sit and listen to the traffic and block it. It can be a piece of software, each plugging into the gateway to the entrance of the organisation and basically tapping into the data, applying our models on it, and also not just alerting but also masking some of the data and even blocking some of the interactions based on what we identified there. In some cases, it's similar to a more traditional data leak, DLP (Data Loss Prevention), or even browser security aspect. But in other cases, when you're potentially putting IP, core IP of an organisation, this is not something that the basic tools can identify. If you're exposing a piece of code which is proprietary to the organisation, these are some of the concerns. Of course, it has to be done real-time.
In other cases, it could be the more relevant attacks for this world, the jailbreaks, or the prompt injection, meaning that someone is trying to intervene with the interaction with the model, whether directly or indirectly, and manipulate and get the model to do something that it's not supposed to do, like feed data that it shouldn't, or apply also all sorts of other malicious actions in the backend. And if we're working with the models that are deployed in, integrated into the applications via API calls, this is where it can be the secure gateway of the world that would secure the APIs, and also things that could be plugged into the environment of the developers and be part of the development lifecycle to be embedded within the application code itself, so that different mechanisms, different techniques that can be applied. But basically, wherever there's an interaction, we are thinking, okay, how that interaction is working and what is the best suitable solution to address this pain point, and also the operational considerations of the organisation as I mentioned before. If you're working with that enterprise, you need to know how it thinks so, it's not just providing the functionality. But if I'm providing you a tool, can you deploy it easily? What is the friction that it introduces who are going to be the objection? The people that would object to deploying it, not just people from security, but people from IT, or the development teams. How can we reduce their concerns while integrating a new solution into their stack?
Enterprise Integration and Journey
Aditya: How does your solution integrate with an existing enterprise system? Is it within the IT system, the cybersecurity infrastructure? Is it within the web browser? I mean, I guess there are different ways of integrating.
Elad: Of course. So the different areas, definitely, in some of the cases security provisions and IT for talking about the browser extension, then security operations and IT need to be involved, it would be deployed across the browser. Then different profiles of the users. If it's on the developers' workstations we have a plugin for the IDEs and then definitely the developers are involved. And of course, if we're plugging into the various gateways, or who are introducing gateways of our own, then people from application security or API security. So there, there are a lot of different stakeholders. And as I mentioned, because we're integrating with different areas, it's not that the same person we're talking with all the time. We need to bring other stakeholders, and the more that we take, more stakeholders need to get into the play. But you don't have to use all of our products and all of our solutions. You can start with one of them based on the pain that you have and based on the people that we talk with and then grow later on. So it's not a must that you have to deploy everything in order to get value right coming out of the gate.
Aditya: What has been your experience in terms of where enterprises are seeing the most pain? And where do they typically start off with on this journey? Can you also share any success story of how you've taken a long organisation from say, step one to other steps. I know you're still an early startup. But any experiences that or case studies that you can share would be helpful.
Elad: Sure. So basically, where we're starting is by asking the stakeholders who is using generative AI within the organisation? Who and what? It's not just about the people, but also the applications. Do you know that? Do you have a list? Do you understand how they're interacting with those applications? Do you know what data is being passed? Are you aware of any issues? This is where the discussion starts, and we begin to unveil whether they know or don't know. This is the point where we can start applying some of the tools they can use to gain this understanding. Once they've grasped this, we move into an intelligent discussion about their understanding of how it's being used, how the employees are using it, how the developers are using it. And then we address specific concerns, like what happens if a piece of code is being sent, or if PII (Personally Identifiable Information) is being sent. What can I do with it? Can I anonymise it? Can I stop it? Can I train the user? Then we start to apply the detection and prevention mechanisms. So, once you're out of the gate, you don't need to apply them immediately. You're just monitoring in a learning mode. And then you're learning what's happening. These are the ones that you know. You don't know too much, but they are the ones that require assistance. I already have the applications, the models, the data there. But now I have a question: How can I prevent internal users from being exposed to data they shouldn't see? For example, how can I prevent a developer from seeing salary data that I might have? That person is not allowed to access these systems, but within the models, I have this data, and someone can either mistakenly access it or manipulate the model to access it in a jailbreak scenario. And then we're talking about specifying what are the use cases? What is a normal question someone can ask? What is a typical response they should or shouldn't get? This is where either we already have the mechanisms, or we can fine-tune some of the models based on the use cases we have. So, there's also a different level of maturity within the organisations because some of them are saying, listen, people are using it. Originally, I might have blocked it. But I'm thinking, okay, but if you're blocking it, you're blocking progress. Do you know what to block? You do know what to block. Some of them are saying, yeah, but we've issued a policy. But then the question is, are you enforcing the policy? They're not enforcing it. So they're recommending people please don't share sensitive information with the application. But that is not something they can definitely enforce. And this is where the journey starts. But as you said, we are in an early state, but the market itself is also in a very exploratory mode. But everyone is using it. Every organisation that I'm talking with, everyone they have, one of these problems is not necessarily the most burning problem they have. But then it also comes down to the question, what is the value that I'm giving you? How much are you willing to pay for it? Not everyone will be willing to pay the same, and not everyone is getting the same value. But not for everyone. It is that top concern today. For some, it is top of mind. But not for everyone. But as we move forward, it will be top of mind for everyone.
Industry Verticals for LLM Security
Aditya: I'm guessing there'll be specific industries like financial healthcare, like highly regulated industries where this is possibly top of mind. Right? Because then it puts them into trouble. People can share sensitive information without knowing. So is that where you see the low hanging fruit or the growth market for you, at least initially as this technology develops. And they're also the ones that are the most restrictive in terms of policies, I'm guessing so. Again, there are ways that you can then balance the innovation and the security aspects. So any thoughts on the industries that you're seeing out there and the conversations you're having.
Elad: So we see this basically all across. It is perceived that the heavily regulated verticals would be more relevant and definitely as regulations is moving forward. But it's not just that. Because even in areas that are less regulated where you can lose pieces of information, or sensitive or core, IP, that you have might leak, and can provide even reputational damage to you, or you can lose your business you can lose your customers. Trust? It is, it is everywhere. So we see this. Of course, in financials. We see this in healthcare. We see this in retail. We see this in automotive. It's basically everywhere. Currently, we do see more traction with the more regulated, but also with the early adopters. The ones which are running very, very fast are saying, I'm going to increase adoption internally and I want to make sure that security and privacy is covered.
Balancing LLM Security with Innovation
Aditya: Its great to know that there's so much enthusiasm. At the same time they're in the exploratory phase. But its good to know that it's coming across the board. I mean, we are hardly 12 months into this into the LLM cycle. So it's still very early days. But what is great to see is the traction. So I wanted to also touch upon this debate that's happening around safety and innovation. You touched upon it a little bit. How to navigate this balance. How do you approach that debate? And how do you approach that within your own products? And different customers might have different thoughts on it. But do you have your thesis or philosophy around this, because you don't want to stifle innovation or have too many guard rails, like prevent the developers from developing any code. And so there's all of these aspects which I think come into play. So what's your view on this? And how do you approach it?
Elad: So, generally, security and usability are contradictory and they're constantly colliding. But for many years now, and I think the wave has shifted fairly recently, it is moving towards more enabling cybersecurity. So all of us need to make sure that progress happens and that the organisation can work and adopt innovation, develop new capabilities, and basically drive the business. Because, theoretically, we could unplug all computers from the network and say, "Listen, security is perfect. No one can do anything." You know, I'm taking this to the extreme. But we need to make sure that we're enabling and also within our products. So, we're protecting the employees, the developers, the applications. We need to make sure that there's minimal friction with some of the products. There is zero friction, which means that, or at least one of my former managers was calling it cinema, so zero to minimal the one that you do not feel. But it is really protecting you, because we can block everything, we can make sure that they're trained to death. And instead of accelerating everything that they do, and this is what generative AI is bringing to our world, is accelerating productivity. We need to make sure that we're reducing friction, and that we're also not in cybersecurity. There are dozens of tools that an organisation is using. And we're introducing yet another one. We want to make sure that it's not generating a lot of noise, definitely not a lot of false positives. It allows people to work and it is enabling progress. So it is the concept that we bring. It is the features that we bring. It's a deployment mechanism that we are plugging into. And eventually, we want to be silent. We want people to not notice that we exist. But to know that we've protected them. And this is eventually, if no one knows that we're there, but we actually protected them, the question is, did we do enough work? So part of our work is also to do PR that we have blocked it and reports about what we've blocked. And to tell them, "Listen, you're protected. And this is why." It's not just learning that something is happening. This is why I said originally that it's a real-time and in-line technology that could react in real time and prevent things from happening, prevent the mistakes. But later on to be as seamless as possible. And along the way, if we've prevented and we have stopped an interaction of a specific user along the way. We've also indicated, "Okay, but notice, something has happened for the next time. Please do not repeat the same behaviour." And then we've also trained the user along the way. But these are the things that we are constantly thinking about and concerned about. And this is going across the board, everyone within my organisation and also working with customers. We're stressing to them that we're here to help them and to work for them and not vice versa. I don't want them to work for me with the tools and the platform that I bring them.
Aditya: So you're kind of a silent sheriff. In a way, you don't want to be too noisy. You want to make sure to maintain law and order, but you know that there's a sheriff. You don't want to see all the law enforcement people around you, but you want to know that you're protected. There is a sheriff that is protecting you.
Elad: Yeah, but you don't necessarily want to see that person all the time.
Future Developments and Expectations for 2024
Aditya: Sure, that makes sense, that's great. So two questions I had in the end, one was around future developments and innovation. What's the roadmap for innovation looking like in this space? Looking ahead, what are some of the things that you have in the pipeline for Lasso's products, and any new features or tools that we should be excited about?
Elad: So first of all, it's changing all the time, and we're introducing new capabilities all the time. But if I'm thinking forward, looking at two things. There will be new technologies and new mechanisms in this world. It might be a shift from text to voice and video and new capabilities. So we constantly need to be aware of what's coming next. What's going to be the new attack surface, how people are going to use these tools. And keep an eye on that. But, on the other hand, we need to see not just what we think might be the new attack surface, but what actually the attackers are going to do, and in cybersecurity, we do not dictate the pace, they are dictating the pace, and once they move and they introduce new attack vectors on the new tools or on the existing tools, we would need to shift and pivot and adapt to what they're doing. And these two things will pull us into different directions all the time, and we need to be up to par all the time. So it's not just the innovation that I'm going to bring you next month in the new tools that I'm going to support, or the new models that I'm going to bring and capabilities. But also what are the things that I don't know? So the unknown unknowns are the things that make me stay awake at night and keep anticipating. And once things are surfacing we need to very quickly react to that. So these are the things that excite me. And this again I can't highlight enough, this is running at a pace that we've never seen before. This revolution is potentially larger than the cloud revolution and Internet revolution combined. So as such we need to have very fast horses.
Aditya: The last one I wanted to talk about, or get your opinion on was, do you expect in 2024, there might be a big LLM cybersecurity attack that becomes very public? Is that something you expect to see with the adoption of these tools?
Elad: For sure. With the adoption of these tools, we also need to keep in mind that the ease of use and the advancement of these tools are making the lives of the villains easier. They can generate more sophisticated attacks in a much simpler way, and with everyone adopting these technologies, they just need to go to where it's easiest. So, we will see major breaches, we will see major attacks. It's definitely something to stay on alert for in 2024, for sure.