Finding the balance: Productivity and innovation in IT’s AI era
Atera’s CPO and Wiz’s CISO offer an insider perspective on navigating AI’s future in IT.
In this webinar you’ll learn about:
- How Atera and Wiz develop and leverage AI
- The trade-offs IT leaders consider between platform and point solutions
- Strategies to balance productivity gains with a manageable tech stack
- A live Q&A
Webinar Transcript
Anna: Okay, how about we start with very quick intros. So, I’m joined today by Ryan, CIO and CISO at Wiz, and Tal, who is our very own CPO at Atera. I am going to hand it over to you. Tell us a bit about who you are, what Atera and Wiz are exactly for anyone that doesn’t know, but I doubt it.
Tal: Ryan, go ahead.
Ryan: Sure. Hi everybody, thank you so much for the chance to join you all virtually today. Again, my name is Ryan Kazanciyan, and I am the CIO and CISO at Wiz. If you’re not familiar with us, Wiz is a platform we built to help teams securely develop, build, and run everything they produce in the cloud, whether that’s on AWS, Azure, or GCP. Our focus is really on democratizing security across all of the teams that have to partner to use the cloud efficiently and effectively: R&D, DevOps, and security. As CIO and CISO, I lead our IT function, which covers IT engineering, operations, and business applications, as well as our security teams, which are responsible for security engineering, operations, risk, and compliance. I’ve been at Wiz for about two and a half years now, and previously came from Facebook, now Meta, and before that, Mandiant. Over to you, Tal.
Tal: Thanks, Ryan. I’m Tal, the Chief Product Officer at Atera. Atera is an AI-first, all-in-one remote IT management solution; we’ll probably touch on this later in the webinar, and many of you are already familiar with it. In my previous roles, I was the VP of Product at Redis and held several senior product roles at Converged and Flash Networks, among others. Fun fact: I started my career as a data scientist specializing in computer vision, so I have a warm spot in my heart for AI and data science.
Anna: Thank you both. Maybe quickly, I’m Anna, by the way. I missed that part. I’ll be moderating today’s session. I lead the content and communications team here at Atera, and I’m very excited to kick this off.
Just a quick minute before we start, I want to go over a few housekeeping rules. This webinar is being recorded, and the recording will be shared with everyone who registered. You can drop your questions or comments for Ryan and Tal in the Q&A section; please help us stay organized so we can take them in order. We’ll have 15 minutes at the end of the session for all your questions. That’s it, let’s dive right in.
Like I said, I’m super excited to have you both here. We’re going to explore how AI serves as a powerful catalyst for efficiency and streamlined operations, and how it frees teams to focus on more strategic initiatives. But I also want to touch on the complexity and the challenges that organizations must navigate carefully when introducing this new layer of AI: integration issues, the need for upskilling, and shifting team dynamics. We really want to use today to discuss how AI drives significant productivity gains while also addressing some of these complexities.
I’ll start. As this technology continues to evolve, how would you each describe its transformative role in enhancing productivity across industries and specifically within IT departments today?
Tal: Cool. I’ll start. Maybe a bit of a preface first. There have been a lot of big words thrown around about productivity and complexity. From my perspective, the main issue is to understand what AI’s impact is actually going to be on your organization, if there is one; I think it’s a big “if.” When I look at what AI has done already, I look at it at the macro level: the impact on GDP, which to date has been negligible. You don’t see a big impact on GDP at the macro level, so it’s very hard to see a real impact at all. There’s a lot of hype and big words around this, but no real effect on GDP yet, so I don’t assume there has been much impact on productivity inside companies either. I do, however, think it will have an impact.
The real question is how and when. There’s a line from Keynes, the famous economist: it’s easy to say what will happen eventually, because in the end everyone dies; the real question is when. I think it’s a similar question here.
Ryan: I’d agree with Tal. If you look at the demand curve, even just on the consumer side—let’s set aside business—all of the major services like ChatGPT are basically still doubling their user base year-over-year. So the demand growth is there. There’s a lot of experimentation, especially when it comes to translating that from casual usage into something that is deeply integrated and part of how you, as a business, are being productive. I think the disruptive effects are materializing more immediately in some industries and areas than others. Clearly, in areas like creative content or improving existing agent-based systems like support chatbots, where there’s a higher tolerance for some error and hallucination, it is having a small but quickly growing impact. These are also the areas where people have questions about what this means for jobs and economic growth.

For expert systems and use cases, I think it’s still in the experimentation stage. Enterprises are excited but cautious because while these features and promises are appealing, they come at a cost. We’ll talk about this, but adding vendors incurs complexity and debt. None of the products that are integrating or providing these features are doing so for free; they are, in part, passing along the charges from the major AI platforms they themselves are using. A lot of organizations have to start thinking about the cost-benefit analysis and can’t just blanket adopt all of the AI features from all of the SaaS vendors they use right out of the gate. That type of cost-benefit analysis, rooted back in requirements, is what you’ll see a lot of in the coming year. No company is going to be able to pay for every AI-enabled feature from every SaaS vendor it uses and turn it all on across the board. And that’s good; it’s going to lead to more judicious usage. We can certainly talk more about how that cost-benefit analysis will play out.
Anna: Yes, we’ll get to that a bit later. Maybe starting from the beginning: Tal touched on it a bit, saying that AI is everywhere, and obviously it’s not going anywhere. But looking at this past year’s wave of AI products and services, and from your personal experience either working with AI vendors or introducing this technology into your own products, would you consider generative AI to still be hype, or not?
Ryan: I’ll talk from our experience adopting it in our own product. There are a few areas within the Wiz product where we’ve experimented with and ultimately shipped AI-enabled features. One is search. If you have a cloud environment and we’re inventorying it and the risks in it, it’s really nice to have a simple way to ask what is, under the hood, a very complex graph query. A question like, “Show me virtual machines in my environment containing sensitive data that is exposed to the internet,” gets translated from simple natural language into a precise, machine-understandable graph query. This is actually a very good use case for large language models (LLMs), and one where you can also tolerate some imprecision. Another example is AI-suggested remediation. Given a risk or a problem—this is true in a lot of security domains—you can ask the system to generate a recommendation on what to do. In Wiz, we identify issues as we process the environment to look for risks, and then we have a remediation engine that says, “Based on what we know about your environment and these findings, and based on the expert data we’ve populated the system with, here are the steps you can take to fix it. Here are some example commands you could run or infrastructure code you could commit to do that.”
When you control the experience and box it in like that, in a very tailored path for the user, you can get a lot of value. So I think that’s where we’re going from experimentation to finding value. The flip side of that is a lot of the hype around fully autonomous agents that can just autonomously solve any class of problem and reason at a very high order. We’re still a few generations away from that. People who are not practitioners using these technologies day-to-day sometimes buy into some of the hype from all the investment going into the space and realize that in practice, a lot of these systems are still a bit goofy, still hallucinate a lot, and still make mistakes that are just not acceptable in certain production usage. That’s where I think a lot of organizations will have reality checks and shape their investments while still keeping close tabs on how these products are evolving.
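To make the natural-language search pattern Ryan describes concrete, here is a minimal, hypothetical sketch of natural-language-to-query translation. It is not Wiz’s implementation: it assumes the OpenAI Python SDK and an API key in the environment, and the toy JSON query schema and few-shot example are purely illustrative.

```python
# Minimal sketch of natural-language-to-query translation. NOT Wiz's implementation.
# Assumes the OpenAI Python SDK and an API key in OPENAI_API_KEY; the toy query
# schema and the few-shot example below are illustrative only.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You translate questions about a cloud inventory into a JSON query.
Respond with JSON only, for example:
Q: "Show me publicly exposed storage buckets"
A: {"resource": "bucket", "filters": [{"field": "exposure", "op": "eq", "value": "public"}]}
"""

def to_query(question: str) -> str:
    """Ask the model to translate a natural-language question into the toy query DSL."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # keep the translation as deterministic as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(to_query("Show me virtual machines containing sensitive data exposed to the internet"))
```

Pinning the temperature and constraining the output format is one way to keep a use case like this “boxed in,” as Ryan puts it, rather than open-ended.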
Tal: I agree with Ryan. I’d like to touch on this from a different angle. When you look at the AI solutions out there, you probably see dozens of ads every day for AI products for every vertical and every specific use case.
I think there are two types of use cases. The first is what I call enhancement: you have people working, and you want to enhance their work. That’s hard to quantify, and the business model around it often isn’t one-to-one with the benefit. By the way, in a McKinsey survey, I believe the number one concern for CIOs was how to quantify the value of AI solutions.
The other type is an actual saving, usually of human labor, though it doesn’t have to be; it could be other kinds of savings as well, for example replacing a large system that costs a lot of money with a smaller one that costs less. Here you see the ROI very quickly. Many times this is also tied to the business model. If the AI business model isn’t software as a service but “service as software,” meaning you are charged based on success, you can adopt an AI system without bearing the risk, which is usually what we weigh when we see all of those ads.
I would say that if you look at those AI solutions, the first ones to succeed are the ones that show clear ROI. It’s very, very hard, but again, if it’s success-based, it’s much easier to show ROI. So, yeah, I would say this is where I would go first.
Ryan: What’s interesting about that, Tal, is that the companies and organizations with the broadest data for that kind of ROI analysis are also the ones selling AI-enabled products and services, so they have somewhat of a vested interest in showing how amazing it’s going. Not to pick on any company in particular, but here’s an example of why it’s so hard to get that ROI data. A week or two ago, headlines were made during Google’s earnings call when Sundar Pichai announced that 25% of code at Google is now being generated by AI. When I heard that, my reaction was that it doesn’t tell me much. One, 25% measured by what? Lines of code for net new commits? What’s the denominator? Two, if you’ve ever worked as a developer in a large code base, you know that a lot of code is auto-generated, either from templates or by various forms of autocomplete. Saying AI now generates 25% of code doesn’t actually capture the delta it seems to on the surface: how much things have actually changed or improved.
I think, as you said, it’s going to be a while before the consumer side—the big organizations trying to use this—actually has the data on its end to see whether it has made a meaningful impact on efficiency and productivity. I think we’re a ways from that.
Tal: This is so true. If you look at a success-based business model, which is starting to emerge, you don’t need to prove the ROI; you only pay if there was success. Take systems that solve tickets for users, for example; Atera is going to launch this in a couple of months. If you pay me only for auto-resolved tickets, tickets solved with no intervention from a technician, then you have nothing to lose. I don’t need to prove the ROI; just implement it and pay only for what it actually resolved. You know what the value is for you, and if it costs less than that value, it’s a no-brainer.
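As a rough illustration of the math Tal is describing, here is a back-of-the-envelope sketch. The ticket volume, auto-resolution rate, and per-ticket costs are hypothetical placeholders, not Atera pricing.

```python
# Back-of-the-envelope check for a pay-per-auto-resolved-ticket model.
# All figures are hypothetical placeholders; plug in your own numbers.

def monthly_savings(tickets_per_month: int,
                    auto_resolve_rate: float,
                    fee_per_auto_resolved: float,
                    technician_cost_per_ticket: float) -> float:
    """Value of tickets closed without a technician, minus what the vendor charges."""
    auto_resolved = tickets_per_month * auto_resolve_rate
    value = auto_resolved * technician_cost_per_ticket   # labor you did not spend
    fees = auto_resolved * fee_per_auto_resolved         # you pay only on success
    return value - fees

# Example: 2,000 tickets/month, 30% auto-resolved, $4 fee vs. $22 of technician time per ticket.
print(monthly_savings(2000, 0.30, 4.0, 22.0))  # 600 * (22 - 4) = 10,800
```

Because fees accrue only on successfully auto-resolved tickets, the result stays positive as long as the per-ticket fee is below the technician cost you avoid, which is the “nothing to lose” point Tal makes.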
I would say that systems and business models that start this way would probably be the first to succeed because it eliminates the barrier to entry for CIOs that are worried about the ROI.
Ryan: Agreed. It’s a lower-risk way to dip your toes into it and see the effects.
Tal: Yeah, definitely.
Anna: Okay, maybe that’s a nice segue. Tell me, will companies and teams that adopt AI earlier see those gains sooner? Are they going to be able to prove ROI before others, or will that remain elusive? What are the gains for early AI adopters?
Tal: Tough question. Past experience shows that it’s not always the first company to jump on the bandwagon that succeeds in the end; we’ve seen that many times. Ryan, you and I talked about this earlier. During the dot-com boom there were lots of companies; who remembers Pets.com, or the other companies everyone thought were going to be the next big thing? And I don’t know if anyone back then thought a bookseller would become one of the biggest companies in the world today. It’s very hard to know who the winners are going to be. Since then, plenty of companies that started after the dot-com boom, such as Facebook, have succeeded very well. It’s very hard to know.
I would argue that if you look at operations, there’s a kind of Darwinian approach to it. If you manage to lower your operational costs, then you have an edge over your competition in whatever field you are in. If you have that edge, it accumulates over time. If you start with AI and do get a performance boost, that means you have this edge over your competition. This allows you to drive better prices, capture the market better, be more efficient, and ultimately win your market. I would say it’s a risk, but I’m not sure you can afford not to take that risk.
Ryan: I think the answer has to come with the context of where you are as a business. If you’re an early-stage startup, a one or two-person operation where everyone has to wear many hats and you’re going from zero to one in every function, early adoption of AI technologies is such a good force multiplier. At a point where you’re good with draft-stage, prototyping-stage capabilities and functions, the first-mover advantage is very real. It will help you get something meaningful to market quicker and cover roles that you can’t have the luxury of dedicated resources or people for. For established organizations, it tends to be more nuanced. This goes back to first principles around buying vendor-driven solutions in any context. Organizations often struggle with defining technical requirements for build versus buy situations. In an IT org working with business partners and different teams with use cases that are not technology experts, it’s often very hard to resource that well.
When you take AI technologies, it’s the same thing. Teams see their SaaS vendors offering AI-enabled product features, or new startups leading with them, and the product messaging and use cases appeal to them. An important role of IT leaders is to center those teams back on their requirements: the problems they’re trying to solve, the technology they have today, and where it’s falling short. Think about the problem from requirements, not from features. This always steers you in a good direction and gives organizations enough governance to avoid accumulating tech debt and cost from piling up too many SaaS point solutions too quickly, AI or otherwise.
Tal: I can’t agree more. It’s about looking at the need and finding the right solution for that need. Do experiment; don’t say it’s too early and you’re not going in. Experiment with smaller budgets at the beginning to see whether it provides value against your needs, and go all in only once you’ve established that it does.
Practical Considerations for AI Adoption
Ryan: I’m probably jumping ahead to the next topic Anna has queued up, but what does that mean practically if you’re in an IT organization? I’ll take a personal example. In the past year, I’ve had our legal teams ask about evaluating legal AI tools that help process contract data quickly. Procurement looked at AI tools to help with the procurement process and the privacy, security, and legal paperwork that comes with it. Marketing asked about AI tools for content generation. In all of these scenarios, the questions you have to ask are: One, if each of these teams is already using SaaS tools for these functions, are those tools going to bundle AI features anyway? Is it on the roadmap, or do you really need a separate tool? Two, are those needs so distinct and unique that they require a specialized tool, or, if you already have an entitlement to something like Google Gemini or ChatGPT, is the generalized AI product set good enough to cover them without an integrated, purpose-built tool?
The third consideration, which is now emerging and is super interesting, is that device and OS makers are also starting to commoditize this. With the latest version of iOS, Apple has started down this path: things like text generation and summarization are built in, run locally, and are available for free. You can go into the Notes app or email, highlight text, and summarize or generate. That’s something you previously had to pay for or use a third-party service to get. What happens when more and more of those functions move into devices and operating systems and become available at no extra cost?
I think it forces the commercial and SaaS solutions to either improve and specialize or die. A lot of them will consolidate or die on the vine because they just won’t be able to justify their existence. As an IT leader, you have to skate to where the puck is going and think: if this is the trend, then in 6 or 12 months, is buying another product that just covers this one use case plus AI really the right investment? Or should I wait a bit, make sure I have good requirements, and see where the existing products and platforms are going?
Tal: Yeah, definitely. And maybe chiming in on this: what we see today is that there’s an enormous amount of AI capacity being built out. It’s no wonder Nvidia is one of the largest companies today. The cloud data centers are full of H100 and H200 GPUs for AI to run on. You can actually go to ChatGPT and get AI for free, which is amazing considering the amount of investment that went into it; by rights it should be costing us a lot of money.
If I go back again, it’s similar to the broadband revolution. In the early 2000s, a lot of fiber-optic cable was laid, and many big companies thought they would make a lot of money from it. Many of them went bankrupt, but the infrastructure stayed; today we have a huge fiber infrastructure because of that. It’s a similar situation, and as customers we get to enjoy it. We get all of this for free.
I would also chime in on the use-case question with an example from Atera. Let’s say you have a ticket and need to respond to it, and you want a suggested answer. At Atera, we have a button that generates an auto-reply, which is nice. But honestly, you can do much the same thing by copying the ticket into ChatGPT and asking it to draft a response. Your prompt might be a little better or a little worse, but you can do more or less the same thing, and for free.
However, what ChatGPT cannot do is what we do with our agent. When a ticket comes in, we go to our agent sitting on the machine, run several health checks related to the complaint in the ticket, and then our AI engine can create an action to remediate the issue. Closing that loop with ChatGPT alone is nearly impossible. A complete solution like that requires a purpose-built product, not just ChatGPT. It’s the context you get from integrating AI features with the broader data, access, and integrations your platform already has. That’s always the key.
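For readers who want to picture the “close the loop” pattern Tal describes, here is a hypothetical sketch. It is not Atera’s agent; every function is an illustrative stub, and the check results and action names are invented.

```python
# Hypothetical sketch of the "close the loop" pattern: enrich a ticket with
# on-device health checks, then let an AI step propose a remediation.
# Not Atera's implementation; every function here is an illustrative stub.
from dataclasses import dataclass

@dataclass
class Ticket:
    id: str
    summary: str

def run_health_checks(ticket: Ticket) -> dict:
    """Stub for agent-side checks (disk, services, event logs) relevant to the complaint."""
    return {"disk_free_pct": 4, "pending_reboot": True, "failed_service": None}

def suggest_remediation(ticket: Ticket, checks: dict) -> dict:
    """Stub for the AI step: in practice this would call an LLM with ticket + check context."""
    if checks["disk_free_pct"] < 10:
        return {"action": "clean_temp_files", "needs_approval": False}
    return {"action": "escalate_to_technician", "needs_approval": True}

def handle(ticket: Ticket) -> None:
    checks = run_health_checks(ticket)
    plan = suggest_remediation(ticket, checks)
    if plan["needs_approval"]:
        print(f"{ticket.id}: waiting for technician approval of {plan['action']}")
    else:
        print(f"{ticket.id}: auto-running {plan['action']}")

handle(Ticket("T-1042", "Laptop is extremely slow"))
```

The point of the sketch is the flow, not the logic: the remediation step only becomes useful because it sees the device context gathered by the agent, which a standalone chatbot never has.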
Ryan: Exactly. I’ll summarize it as smart adoption versus early adoption, with the right use cases, going back to where we started.
Anna: Thinking about smart adoption, smart hiring, and looking at your day-to-day as IT leaders within your organizations, can you speak a bit about the changes you’re seeing in terms of helping IT teams adopt AI? Do you feel like it brings up challenges in terms of recruiting new talent? Do you feel like AI is creating a whole new skill gap in the market? What are your takes on that?
Ryan: I think it’s interesting. If you’re in an engineering role, it has absolutely created the need to develop some AI-native skills, like prompt engineering. We actually published a blog at Wiz about the under-the-hood details of the natural-language-to-security-graph-query feature we released. We talked about prompt engineering, our mix of zero-shot and few-shot learning, how we use retrieval-augmented generation (RAG), and the architecture of the solution. There’s definitely a lot behind it, and it’s become an interesting new track for engineering development and growth.
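As a minimal, self-contained illustration of the RAG idea Ryan mentions: retrieve the snippets most relevant to a question, then place them in the prompt. The “retriever” below is a crude word-overlap score purely for illustration, and the documents are made up; a real system would use an embedding model and a vector store.

```python
# Toy retrieval-augmented generation flow: retrieve the most relevant snippets,
# then stuff them into the prompt. The scoring here is a crude bag-of-words
# overlap purely for illustration; a real system would use embeddings.

DOCS = [
    "VMs tagged 'public' have an internet-facing load balancer.",
    "Sensitive data findings come from the data classification scanner.",
    "Graph queries join resources, identities, and network exposure.",
]

def score(query: str, doc: str) -> int:
    """Count shared words between query and document (toy relevance signal)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Assemble the augmented prompt that would be sent to an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Which VMs with sensitive data are exposed to the internet?"))
```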
What’s really nice about this is that AI-enabled products and technologies have been developed out in the open and accessible to consumers from day one, even before they were contemplated as enterprise solutions. Access is really broad, and it’s a smart pattern for the platform developers. If you can hook and engage practitioners as individual consumers and get them using and playing with these technologies, you’re building an ecosystem of future developers and people who bring that interest and experience to their job. This serves them well.
If you’re a consumer of a product with an AI-enabled feature, the skill set needed is not necessarily a dedicated specialization but more of a technology foundation: understanding where these systems tend to fall apart, recognizing where a hallucination might happen, or recognizing the type of problem where an AI-enabled product probably won’t help. Knowing the rough edges matters; think of someone who relies on generated text and doesn’t proofread it, or who uses auto-generated code and doesn’t properly check it for correctness. People in experienced roles will learn this through trial and error, but from a user perspective, that’s where you’ll see a lot of development over time.
Tal: I agree with Ryan. The way I look at it, there are two types of roles associated with AI. One is developing AI systems, and this is where you need expertise around things like RAG (retrieval-augmented generation) and prompt engineering. Sometimes, it even involves creating your own LLMs (large language models) through open source or other means and training them. That’s a skill set that is in high demand, and I’m pretty sure there’s a shortage in the market. I haven’t checked it, but I would say that’s probably the case.
On the other side, there are the AI users. For them, I would argue the main barrier is becoming AI-default. What I mean is this: when Google first came out, it wasn’t natural for me to look up everything on Google. It took a while before my default, every time I had a question, was to Google it. Google worked very hard to make that the case, and eventually it happened. I think this is what’s happening right now with AI. It’s there, people are using it, but most people are not yet defaulting to AI solutions.
Once they do, it will entail the learning that Ryan talked about—learning when the results are okay, how to write the prompt, how to use the context window, how to use your own context. These are not complicated things, but you need to use the system to get to know them. I would say that this is the main hurdle for adopting AI right now.
I’ll give another example from my organization. The product managers at Atera write what is called a Product Backlog Item, a description of what we want the dev team to develop. There are a lot of fields that need to be addressed, such as security and data events, to deliver a well-productized feature. When ChatGPT came out, I told my team, “Let’s build the skeleton with AI.” It was very hard to convince them. Only when we wrote the right prompt and showed the result did adoption follow. People started experimenting, adjusting the prompt to their needs, and began working with it. Users do need a little nudge to start using it and defaulting to that solution, but I think we’re going to get there.
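For flavor, here is the kind of reusable prompt a team might start from for a Product Backlog Item skeleton; the section list is an assumed example, not Atera’s actual template.

```python
# Example of a reusable prompt for drafting a Product Backlog Item skeleton.
# The section list is an assumed example, not Atera's actual template.
PBI_PROMPT = """Draft a Product Backlog Item for the feature described below.
Fill in every section, flagging anything you had to guess with [ASSUMPTION].

Sections: Problem statement, Proposed solution, User stories,
Security considerations, Data/analytics events, Acceptance criteria.

Feature description: {description}
"""

print(PBI_PROMPT.format(description="Let technicians snooze low-priority alerts for 24 hours"))
```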
Anna: Great, thank you both. I want to go back to something you mentioned earlier that relates to hiring and people’s sentiment toward AI. Tal, you mentioned some AI capabilities, including potentially resolving a ticket without any technician intervening at all. Are you still finding that people are nervous about AI replacing them and taking away their jobs, especially in junior tech roles? I would love to hear from both of you.
Tal: I would say a couple of things on this. People are nervous, for sure. I don’t think they have a reason to be, but they are. If we look at the IT domain today, the same McKinsey research I mentioned earlier shows that the number of skilled people is only about one-third of the open positions in the field; roughly two-thirds of positions go unfilled, which is a lot. Most of you probably feel it: it’s very hard to recruit people for IT jobs, and wages are rising as well, which is natural. It’s difficult.
If we manage to reduce the need for human intervention in tickets, for example, that’s probably going to alleviate the severe shortage we have today with IT personnel. The second issue is history lessons. History has taught us that every time we introduce a new productivity gain, people don’t lose their jobs. Instead, they go on to do other jobs that they didn’t have time for earlier, increasing overall productivity. IT professionals often say they drown in day-to-day tasks and don’t have time to be proactive. With AI solutions, more time can be invested in avoiding security issues, handling infrastructure, and being preemptive rather than reactive.
Ryan: Really well put, Tal. It certainly varies by industry and role. We’ve seen effects in certain markets around content creation and moderation, where the cost considerations of more entry-level positions versus the good-enough output from LLM-enabled products have started to have some effect. In technology and engineering, it has been more insulated. As Tal said, there’s such a skill shortage across all tiers, both entry-level and more experienced, that even in cases where there are efficiency gains, there’s so much other need and opportunity. It’s more of a shift in where resources are put rather than an outright replacement of people with systems.
Tal: I would also add that some professions might dwindle or change, but those people will do other things. A friend of mine mentioned that in the content world and image generation, we’ve gone from being creators to being curators. You need someone to curate. You go to MidJourney and create images, but you still need to select the right image and change the prompt a bit. The results are better and faster, but you still need that person to do that.
Anna: That’s right. I’m watching the chat and just wanted to shout out Brent, who said he’s been using Co-Pilot to create cleanup scripts and many other things. That’s how you amplify your resources without hurting anyone. If anyone here has been giving Co-Pilot a try, please let us know what you’re doing with it. We’d love to hear from you.
With Co-Pilot, I also saw a question coming in the Q&A. I’m reminding everybody, any questions you have, you can add them to our Q&A tab. One of the questions had to do with hallucinations. Ryan, you touched a bit on it, but maybe let’s move into this conversation about limitations and security risks because obviously, everybody cares and worries about that.
For security leaders managing tech within companies, how do you handle third-party vendor risks when you have team members bringing different tools they may use at home to work? How do you go about that?
Ryan: It’s a great question. I’ll say this very candidly: third-party risk and vendor risk management is probably one of the hardest problems to solve for in enterprise security today. Unfortunately, a lot of the solutions and approaches over the last decade or two have led to a lot of paperwork and unnecessary friction that provides very little value. Some of this is rooted in well-intentioned compliance and legal work that just goes off the rails.
I’ll give an example. A year or so ago, we introduced an AI-powered search feature in our product documentation, provided by the third party we use for our docs solution. The moment that appeared in the release notes, we had compliance teams from enterprise customers sending 200 to 300 questions about AI safety and risk, including questions about physical harm, human harm, and bias. This was triggered because some organizations have built up scaffolding that says AI usage creates risk: the technology has AI, therefore we address it by asking the vendor 300 questions, which then feed into some set of actions and milestones in a giant tracking system.
As a consumer, you can’t introspect and monitor every single one of your vendors and their environments. The downside is it creates friction without benefit. Some friction in security is good, but when you’re a consumer of new technology, the questions you should be asking about AI risk need to be rooted in the same questions you should already be asking about data and access management. You’re handing your data to another vendor, who might have their own subprocessors or vendors. Do you understand who it’s being shared with? Do you understand the effective permissions they have to your data versus what they claim they will use? Do you understand the lifecycle of the data and the security controls on your side of the shared responsibility model? If you’re capturing those things, you can build a clear risk assessment and ask the hard AI safety questions where it’s actually justified, rather than taking a simplified naive approach with huge questionnaires.
Everyone’s working through this now; it’s a real challenge. Organizations are also struggling with the fact that there are no standards for assessing AI-enabled products and solutions in the context of risks. There’s no uniform third-party risk assessment standard to rely on. Until it is developed, a lot of organizations will spend many cycles trying to figure this out.
Tal: Wow, I couldn’t agree more. The compliance requirements around this are crazy, and it’s a breath of fresh air to hear that from a CISO. One of the reasons is that regulators created compliance regimes that require documenting everything and researching every vendor, which wastes time, energy, and money.
A bit on how we handled this at Atera: we went in two directions. One is partnering with Microsoft, knowing they are the enterprise queen with a lot of investment in compliance, and going by their rules to keep things safe. Second, we don’t let the generative AI run free; it’s limited in what it can do. It has a set of predefined actions, such as health checks written by Atera, which are safe. The AI cannot go wild: it’s limited to safe procedures, and a technician must approve anything that requires approval.
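A hypothetical sketch of that guardrail pattern follows: AI suggestions are mapped onto a fixed allowlist of vetted procedures, and everything else is parked for human approval. The action names are invented, and this is not Atera’s code.

```python
# Hypothetical sketch of constraining AI output to an allowlist of vetted actions.
# Not Atera's implementation; action names are illustrative.
SAFE_ACTIONS = {"run_disk_health_check", "restart_print_spooler", "clear_temp_files"}

def dispatch(ai_suggested_action: str) -> str:
    """Execute only vetted actions; anything else waits for a human decision."""
    if ai_suggested_action in SAFE_ACTIONS:
        return f"executing vetted action: {ai_suggested_action}"
    return f"queued for technician approval: {ai_suggested_action}"

print(dispatch("restart_print_spooler"))   # runs automatically
print(dispatch("delete_user_account"))     # requires explicit approval
```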
Ryan: What’s also interesting is that in the earliest days of ChatGPT, there was a lot of hyperbolic media coverage about the risk of data leakage, like the system somehow being trained on one user’s question and then surfacing that sensitive data to another user. The reality, if you look at evidence-based sources, is that yes, academically, leakage is a risk, but the bigger risks lie elsewhere.
For instance, in multi-tenant systems, prompt history and segregation are crucial. The risk of model leakage exposing sensitive data to another user is far lower than the risk of prompt leakage, because prompts often contain sensitive information, so protecting prompt history and access is important. If the system is an agent with access to data and the ability to violate trust boundaries, those are the areas to focus on. A lot of the focus on model leakage is misplaced and stems from early misunderstandings of how these systems work.
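To make the segregation point concrete, here is a minimal sketch of tenant-scoped prompt history, keyed so that one tenant’s prompts can never be returned to another. It illustrates the principle only and is not any vendor’s implementation.

```python
# Minimal illustration of tenant-scoped prompt history: every read and write is
# keyed by tenant ID, so one tenant's prompts are never served to another.
from collections import defaultdict

class PromptHistory:
    def __init__(self):
        self._store: dict[str, list[str]] = defaultdict(list)

    def append(self, tenant_id: str, prompt: str) -> None:
        """Record a prompt under the calling tenant only."""
        self._store[tenant_id].append(prompt)

    def get(self, tenant_id: str) -> list[str]:
        """Return only the calling tenant's own history."""
        return list(self._store[tenant_id])

history = PromptHistory()
history.append("tenant-a", "Summarize contract X (contains customer PII)")
history.append("tenant-b", "Draft a reply to ticket 123")
print(history.get("tenant-a"))  # tenant-b's prompts are not visible here
```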
Anna: Definitely. We’re seeing a comment from Eddie Kin speaking on that, saying the big challenge is how AI and LLMs protect company data. The reason they’re not using ChatGPT at their company is to protect users from leakage. Ryan, you just chimed in on that, and that’s super interesting.
We have another question in our Q&A tab asking you, as a CISO and CIO, how do you explain to your employees and colleagues how to use generative AI from that perspective?
Ryan: I look at this as an extension of what you should already be working on with your employees: acceptable use of third-party services, regardless of whether they’re free or paid. The reality is it doesn’t matter whether a SaaS product your employees decide to use for company business is AI-enabled or not. If I have employees putting company data into an unsanctioned SaaS platform or solution, that’s a data security and privacy risk I need to manage regardless.
What we’ve done is extend the conversation in two ways. One is making sure users understand that, as part of acceptable use and the governance we monitor for, we provide a set of tools that are authorized, reviewed, and managed. That’s what we give you to do your work, and if you need something those don’t cover, we have a process for that.
The flip side is that IT leaders have the responsibility to be proactive in finding those use cases and spotting where existing products are not filling the need. If you don’t do that, people will work around you. They will use unmanaged devices and unsanctioned services where they can get away with it. If you don’t work with people and give them good paved road defaults that you can manage, they will try to bypass you at every turn. It’s a shared responsibility between IT leaders and employees.
Tal: I’m not a CISO, so I’d say: sure, rely on the systems to set limits where something isn’t allowed. Beyond that, I just urge my people to use AI to the greatest extent they can.
Anna: Maybe I’ll jump back into the conversation about productivity, because I’m seeing a great question from Lars asking how companies can ensure that implementing AI not only leads to short-term productivity gains but also promotes long-term innovation without disrupting existing processes.
Tal: I think we touched on this earlier. One of the ways to de-risk the implementation of AI is to look at your needs, see if there’s something that can answer your need, and then maybe look at the business model to see if it’s per success. If it succeeds, you’ve achieved your needs, and if it does not, you pay nothing. That would be a good balance. You start to see these kinds of companies and solutions in the market, and I think that guarantees that if there are productivity gains to be had, you’ll have them, and if there aren’t, at least you didn’t pay anything.
Anna: Great. I want to thank Brian for taking on the question about AI certificates from our Q&A tab. I love it when people help each other in our chat. Maybe you guys want to add to Hudson’s question: are there any certificates available in AI for someone already working in IT, or maybe any other courses you would recommend?
Tal: I’m not actually familiar with any certifications available. I would say, from the usage perspective, just try it. It’s not very complicated, and you gain a lot from hands-on experience. On compliance and other measures, maybe Ryan has a more detailed response.
Ryan: Not for certifications, no. I’d agree that with open models like LLaMA available, it’s easier than ever to get started and experiment on your own time. In security and IT alike, I’ve seen a lot of great engineers and practitioners build up that experience, the way people have done with their home labs for decades. Come up with some project and problem ideas, experiment, and learn by doing. That’s always the best way.
Tal: You can use AI to learn how to do it as well. Ask for some ideas.
Anna: If that’s okay with everyone, we’re going to go to our final question. I want to talk a bit about the future. What do you guys look forward to, or what can you share with others that we can expect in the future in terms of productivity and additional gains when it comes to integrating AI?
Tal: As I said earlier, I’m a big believer in a shift in productivity driven by generative AI, but it’s very hard to know when it’s going to come. It may be that we’re entering the trough of disillusionment, and it would make sense if we are. The slope of the curve is yet to be seen: how steep it will be and how fast we’ll reach maturity. We might start seeing productivity gains a couple of months from now, or it could be years. It’s very hard to know.
Ryan: I would agree. Some experts feel the current approaches are going to hit a wall and that we won’t overcome what’s needed for the true vision of fully autonomous agents and reasoning to reach its potential. But that may not prove to be the case. In the meantime, what I’m looking forward to is more and more AI features fading into the background of how things just work.
Taking the analogy from mobile, I was reflecting on the fact that with the latest iOS update, you have things like summarization of your text messages and notifications as a built-in feature. As a user, you don’t really think about it as an AI feature; it just happens. I think more and more of how these systems and technologies will get integrated will just fade into the background like that. That’s the best outcome for a technology. When it reaches a point of maturity, the selling point is the outcomes it enables and the user experience.
I look forward to seeing more of that because centering on the value, outcomes, and user experience is how you get great technology in security, IT, and any domain.
Anna: Thank you so much. That brings us right up to time. I want to say thank you to everybody who joined. If we missed any questions, don’t worry: we’ll go over them, and you’ll be able to get answers to all the unanswered questions in our community and through our other channels, including social media. Thank you, Tal. Thank you, Ryan. This was super insightful, and I hope everybody had a great time.
Tal: Thank you so much. I had a great time.
Ryan: Thanks, everyone. Take care.