
Autonomous IT is here. Are you ready?

Atera’s CEO and Channelholic’s founder offer an insider perspective on navigating autonomous IT.

60 min


In this webinar you’ll learn about:

  • The evolution from traditional automation to autonomous IT
  • What autonomous IT means for IT leaders—and the industry
  • How Atera's AI Copilot and Autopilot are leading the shift
  • Key industry perspectives on AI adoption, trust, and integration
  • Challenges, opportunities, and more
Autonomous IT is here—are you ready? Join Gil Pekelman, CEO of Atera, and Rich Freeman, founder and executive editor of Channelholic, for a live session on the rise of Agentic AI and its impact on IT management.

Featured next-gen speakers:

Gil Pekelman
Chief Executive Officer, Atera
Rich Freeman
Founder and Executive Editor, Channelholic


Webinar transcript

 

Anna: Hi everybody, I’m Anna. If you don’t know me, I handle content and communications for Atera, and I’m very excited to kick off this webinar with two incredible guests. Gil Pekelman, our co-founder and CEO—I’m sure you’ve seen him around—and today you’re going to hear some very exciting things coming from him and Atera. And Rich Freeman, whom some of you probably know from Channelholic; he’s the founder and the executive editor of Channelholic and also a journalist with many years of writing about managed services. Between these two amazing guests, I’m sure we’re going to have a great conversation. 

# Housekeeping Rules 

Anna: Before we do, I’d really love to go into a few housekeeping rules that we have here. This is all to make it completely accessible. Whenever you need it in terms of content, this webinar is being recorded. I know you guys ask that a lot, so right after the webinar is done, within 24 hours, you’re going to get a recording of the session sent to anyone who registered. We encourage you, as always, to place your questions or comments for Gil and Rich in the Q&A section. It makes it easier for us to follow the questions, so try to keep it in the Q&A tab if you can, please. Per usual, at the end of this webinar, we’re going to host a live Q&A, so get those questions ready. 

# Autonomous IT and Agentic AI 

Anna: I’d like to kick this off with a part of the title, “Autonomous IT.” But before we get into autonomous IT, I’d like to ask about agentic AI. I know everybody has been throwing around the term. I’d really love to know, when being compared to general AI that we commonly see, what is the difference? Can you help us define this to kick the session off? Gil, this is for you. 

Gil: This is for me? Okay. First of all, the term itself is very new; it dates from mid-2024, so about eight months ago. The premise of agentic AI is the understanding that ChatGPT, or LLMs in general, are able not only to talk to you and play back text or voice but to actually do real things in the real world. From that understanding, development started on taking an LLM and, through its reasoning, giving it hands and tools to do actual things in the real world. At Atera specifically, we started working on it in June of 2022, about six months before ChatGPT was released to the world; we were working with what was then the GPT-3 private beta. We’ve been working on AI for a long time, so the first thing we tried to do was make it do things, not only talk back to us but actually take actions. We looked for a name for a very long time, Anna, because we felt we were pioneers, but “agentic” is the name that actually caught on. Today, agentic AI refers to a technical architecture, one that lends itself to AI that does things in the real world and not just replies to you with an answer. Maybe I’ll add one more point on that: you can actually think of agentic AI, or agents, as people. An agent is like a person that’s able to do something; that’s a good way to think of how it works. So you can think of an agent that can fix your computer because it’s slow, reset your password, or add you to an email group. All these actions are created by developing an agent that has these capabilities.

# Perspectives on Agentic AI 

Gil: Rich, from your perspective—because I have this perspective as a developer at a technology company—what do you see in the world at large about the talk about agentic AI? What are people thinking agentic AI is or what they’re doing with agentic AI, etc.? 

Rich: Well, like you say, I do come at this from a slightly different perspective in that, as a journalist, I’m writing about it. We’ll get into this later on in the webinar. The agentic AI market is an emerging one, but I think everybody expects it to be enormous, and therefore everyone and their uncle says that they’re doing it already, and they all define it differently. So, I need to kind of figure out what this really means. I’ve spoken to a lot of analysts and a lot of experts, and I basically arrived at a list of criteria. So, for me, agentic AI is any technology that meets this particular set of criteria. It’s got to be a technology that can remember what it did earlier, plan what it’s going to do next, and interface with your applications. It’s supposed to be doing stuff in the real world for you; therefore, it’s got to be able to connect to your email, your CRM, and whatever else it needs access to in order to get things done for you. It’s got to be able to collaborate with other agents because if your agent is booking travel for you, it’s going to be talking probably to a travel agent’s agent—they’ve got to be able to work together. Because it can remember, it’s got to be able to learn over time and, based on that learning, get better at what it does. The single most important thing that really defines agentic AI is that it’s got to be able to do all of this fully autonomously. I would underscore the word “fully.” Unless you have trained it or instructed it to consult with you at some point in a process or if it runs into some set of conditions, it’s got to be able to do work on your behalf, start to finish, entirely on its own. If it can and it meets these other criteria, then from my perspective, it qualifies as agentic AI. 

# Differences Between Automation and Agentic AI 

Rich: Now, that’s my perspective. Gil, as you know, I’m dealing with all these companies that say they’re doing agentic AI, but they mean different things. You might run into a slightly different one. What I see certainly as well is that when people are talking about agentic AI, they’ll say agentic AI, they’ll say AI, they’ll say automation, and they’ll kind of talk about these things as if they’re all the same thing. Automation is something that has been around in this industry; it’s something that companies like Atera have been doing, something that MSPs have been taking advantage of forever. But it is different from AI or agentic AI. From your perspective, from an Atera perspective, what are those differences? 

Gil: That’s a great question. I’ll share my own experience with the forum for a minute, and then I’ll talk about the technical side of it. If you remember your first interaction with ChatGPT, where you gave it a question and suddenly it answered intelligently, it understood you, you wrote a follow-up question, and you got that same kind of experience. I had the same experience with agentic AI, in this case Atera’s agentic AI, because what we did is give it an IT problem, and it ran all kinds of tests on its own, made its own decisions, and had reasoning.

# Reasoning and Problem Solving 

Gil: So within your list, Rich, one of the things is that it has reasoning. It understood and then it solved the problem. When I saw that for the first time, this was late 2022, I had the same type of experience that a lot of people had when they first interacted with ChatGPT and said, “This thing is smart, it’s like a person.” It’s almost a religious experience because it’s like meeting AI for the first time, for lack of a better word. The difference between automation and agentic AI is huge; they’re actually not really comparable. Automation, when you think about it, is an algorithm that has a threshold, and when that threshold is crossed, a script or something deterministic happens. Agents don’t behave like that. They understand. I’ll talk about, in this case, Atera’s multi-agentic architecture. It gets a problem, understands the problem, and then has all kinds of things it can do, very similar to what an IT professional would do. It can then decide what it wants to check. Let’s say somebody has a problem with their computer being slow; it can run these checks autonomously. The big difference is that automations are fixed; they have a threshold, they run something, and it’s very limited. If the situation falls outside those limits, they do nothing. You can’t make automations for everything; you can only make them for very specific things. Agentic architecture is open. It understands and has, as a base, an enormous amount of data, because we know these LLMs are trained on most of the internet. So it has not only all that data but also reasoning capability. It’s flexible, it learns, it grows, and it’s able to execute things all the time. It’s very different; you should imagine it as an employee that gets an IT problem and is able to solve it. It’s not comparable to automation, which is very straightforward: if this happens, do this, and that’s it.
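The contrast Gil draws between threshold-based automation and an agent that chooses its own diagnostics can be sketched in a few lines of Python. This is a hypothetical illustration only; the function and tool names are invented for the example, not Atera's API.

```python
# Minimal sketch: fixed automation vs. an agent-style loop.
# All names (threshold_automation, check_cpu, etc.) are illustrative.

def threshold_automation(cpu_percent: float) -> str:
    """Classic automation: one fixed threshold, one fixed action."""
    if cpu_percent > 90:
        return "restart_service"   # deterministic script fires
    return "no_action"             # outside the rule, nothing happens

def agent_loop(problem: str, tools: dict) -> list:
    """Agent-style sketch: reason about the problem, then pick which
    diagnostics to run, much as a technician would."""
    findings = []
    if "slow" in problem.lower():
        for check in ("check_cpu", "check_disk", "check_startup_items"):
            if check in tools:
                findings.append(tools[check]())
    return findings

tools = {
    "check_cpu": lambda: "cpu: 35%",
    "check_disk": lambda: "disk: 97% full",
    "check_startup_items": lambda: "14 startup items",
}
print(threshold_automation(95))                    # fires only on its one rule
print(agent_loop("My computer is slow", tools))    # runs several checks
```

The point of the sketch is the asymmetry: the automation does nothing at all outside its single rule, while the agent loop decides which of its available tools to apply to an open-ended complaint.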

# Transformative Impact 

Gil: Rich, from your perspective, this is really transformative. We know we are into it; we have customers running what we call Autopilot, which is our agentic architecture. We are seeing that it’s solving around 30 to 40% of all their tickets autonomously. Just think about the impact—the load that is taken off the shoulders of the IT department and the end users who have a 24/7 technician at their service, answers always immediately, solves these problems in minutes. What are you seeing in the world in terms of adopting these kinds of transformative technologies? 

Rich: Well, let me talk about that a little bit. That transformative potential you’re talking about is something that a lot of people, in contexts well beyond IT management, are very excited about right now. Both in terms of the companies producing the technology and the potential consumers of the technology. Let me just share a few slides. These are lines grabbed off the internet from just within the last few months. I sort of alluded to this before: every technology company that counts out there right now is talking as often and as loudly as possible about agentic AI at all times because this is regarded as a massive market opportunity. Salesforce, Google, Microsoft, Meta—you name it—they are all talking constantly about that right now, and for good reason. Even though the technology is in its infancy and a lot of people are talking about it without actually delivering it, there is enormous interest in it from businesses out there. Boston Consulting Group did some research: 67% of the executives they surveyed are considering autonomous agents as part of their AI transformation. Two-thirds already, before they even really have seen this technology at work. Deloitte predicts that this year in 2025, a quarter of companies that use AI will launch some kind of agentic pilot or proof of concept. By 2027, you’re looking at half of companies that are actually going to be doing that. 

# Strategic Importance for MSPs 

Rich: Now, that’s kind of broad, surveying IT managers. Let’s get a little more zeroed in on our industry here. This is data from a company called OpsRamp, and they went out specifically to MSPs. This is research they did last year, but they were asking them, “What are technologies that are strategic to you this year in 2024?” 

You can see there are a lot of basic, bread-and-butter technologies on this list: infrastructure management, network performance monitoring, application performance monitoring—basic stuff. What is at the top of the list, though? It’s AI and using that to automate IT operations. It’s very top of mind specifically for MSPs right now. I’ll put that in a slightly larger context again right here. This is very recent survey data from a company called Tray.io. They went to technology professionals and asked them, “What, from your perspective, are the top business use cases for AI agents?” They gave them a whole list of options. You’re seeing data processing automation, customer service—Salesforce is talking a lot about AI agents right now. Those are two scenarios that get a ton of attention and mindshare out there right now. But you go to the technology professionals, and far and away, the top use case from their perspective is service desk automation. There’s a lot of excitement out there, a lot of interest in the potential that agentic AI can have for MSPs, for IT departments, for anyone dealing with service desks. It’s for good reason. A few months ago, I was interviewing an analyst from Forrester who specializes in agentic AI. His name is Leslie Joseph. In the course of the interview, I asked him, “What kind of an impact do you expect agentic AI to have specifically on IT management?” He said, “IT management is ground zero for disruption with agentic AI. If you’re looking for the place where it’s going to have the biggest, earliest, most meaningful impact, IT management is the place to look because it is so well set up to do that job very effectively and very cost-effectively.” 

# Challenges and Adoption 

Rich: What are some of the issues that might slow all of that down? First of all, hype. Like I was saying before, everybody says they’ve got it or they’re going to have it tomorrow or this evening. There are going to be a lot of promises made that maybe people don’t deliver on, and that’s going to be a bit of a turnoff for some folks. Data is a big issue. You were talking about having access to all the data in the world. It’s got to be high-quality data, not just a lot of data, but high-quality data. It’s got to be consolidated and organized for an agent to really do good things with it. That’s going to take a little time for a lot of companies to do. I think businesses are going to want to understand, in a clear dollars-and-cents sort of way, what the ROI is. If I invest in this, if I purchase a technology of this kind, where, when, and how big of a return am I going to get on that investment? Then there’s what I call the perception of risk. Obviously, we are in the context of generative AI. We have all not only heard of but experienced on a regular basis the phenomenon of hallucination. If we’re asking MSPs in particular to set an agent loose on an end-user environment autonomously, there’s going to be some trepidation about that—a perception of this being a risky thing. It’s going to take some time for folks to overcome that fear factor, that perception of risk. Last but not least is the perception of threat. By that, I don’t mean a security threat necessarily, although that is something that will be on people’s minds. I mean the idea that if I’m an IT professional, is this thing coming for my job? Is it a threat to me professionally? I’m in Austin attending an event, and I was interviewing the CEO of a holding company that oversees about 80 MSPs. He was kind of saying, “I don’t know what some of my people will be doing five years from now.” There’s some uncertainty there. But if you look near term, here’s some data again from Boston Consulting Group. 
They asked IT professionals, “What kind of an impact do you expect agentic AI to have on your IT workforce?” Some of them, as you can see at the top there, are anticipating they’re actually going to hire more full-time human employees to capitalize on the potential of this technology. 68% of them said that they’re going to capitalize on the productivity that agentic AI produces in other ways. You’ve got maybe a quarter of folks there who are sort of saying it’s all going to kind of wash out the same in terms of staffing and so on. Only 7% of the people who were surveyed here anticipate that they will actually be letting people go. There is a perception of threat that this technology might put technicians out of work. There really isn’t very much evidence that I’ve seen yet to suggest that that’s happening or that it’s imminent.

# Customer Experience and AI Applications 

Gil: Yeah, to add some color to that from the field, what we’re seeing with our customers is that we divide the AI, though the underlying agentic architecture is the same, into two types of applications. One is the Autopilot, the autonomous part, and then the Co-pilot, which is the assistant to our customers. From the field, we are seeing them relieved of all the pain of dealing with mundane and repetitive tasks, actually freeing up time for projects and things they’ve been wanting to do for a long time. So far, we have not seen any case where headcount has gone down; only the opposite, just like your slide shows.

# AI Initiatives at Atera 

Rich: Well, let’s bring the agentic conversation and ground it a little bit more in the real world by talking about your AI initiatives at Atera. You’ve actually got two AI-based technologies: one called Co-pilot and the other called Autopilot that you were talking about before. Co-pilot is more akin to Microsoft Co-pilot or the kind of AI technology we’re seeing from some of the other IT management vendors out there. Autopilot, from my perspective, just writing about the industry, is sort of one of a kind, at least for right now. But from your perspective, talk a little bit about those two technologies, what they do, how they differ from one another, and the pain points and thinking behind each of them. 

# Development and Capabilities 

Gil: Just for perspective for a minute, we started to work on AI in general in 2014. Our first patents were in 2017, and our biggest breakthrough was in 2020 when we got access to the GPT-3 private beta. We’ve been working on it very hard for the last two and a half years, so we’re really into it and very experienced. We’ve built something that has a lot of traction and a lot of experience in the field. The architecture is the same; we have one architecture, this multi-agentic architecture, with all kinds of agents that know how to do all kinds of things. This capability of reasoning, data collection, and running IT tests interacts with two types of groups.

# Co-pilot: The Assistant 

Gil: One group is IT professionals. Using all these tools, that’s the Co-pilot. It’s an assistant to IT professionals that does a very large number of things, but at the end of the process, a person makes the final decision. For example, you can ask it in free language to create a script for some very complex procedure that you want to run every Tuesday or every morning. It will create the script in a second. It’s like having a shared script library with all human knowledge of how to write a script at your fingertips. With another press of the button, you automate it within Atera, and it runs. Co-pilot works as a helper, an assistant to IT professionals. I’ll give a couple more examples of its capabilities. You go through a whole process of interacting on a ticket and on a problem, press a button, and it takes all that dialogue, which could have gone on over a few days, and creates a knowledge base article in a second. That knowledge base article is then added to your system. Not only can your other technicians access it the next time this type of problem happens, but Co-pilot and Autopilot also learn from it. So the next time the same thing happens, they’ve learned how you solved it.

# Autopilot: Full Autonomy 

Gil: Autopilot is what you call the full autonomous system. It interacts with the end user; it doesn’t interact with the IT professional. It actually is like another employee. The interaction is done either through email, Slack, or Teams. The user will open a ticket just like they open a ticket today, and it will communicate with them just like a very sophisticated LLM would. It’s always there, it answers in a second, and it knows its limits. If the problem is within its boundaries, it will solve it then and there. If the problem is outside of its boundaries, it’ll escalate it to an IT professional, to one of your people. But when it escalates it, it’s a smart ticket. A smart ticket is a ticket that includes all the information and context that the agent has gathered, making it easier for the human technician to take over and resolve the issue efficiently.

# Diagnostic and Problem Solving 

Gil: It has already run all the diagnostics that you would have run yourself to try to solve this problem, with the results summarized. It will outline the problem and what it thinks is the solution given the input from the diagnostics. You’ll get a button that it recommends you use in order to solve the problem, but you can keep the dialogue going. It’s a ticket; you make the decision. This is the escalation, and this is the Autopilot. So we have two products: one is the Autopilot, the fully autonomous part that knows how to solve IT problems on its own, and the Co-pilot, which is an assistant to IT professionals. They work hand-in-hand, so it escalates from one to the other. Autopilot at this point is able to solve 30 to 40% of all tickets that come at it, which is, for IT people, a religious experience. It’s not just technical; seeing something like that happen suddenly is something you cannot expect until you really see it. 
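The "smart ticket" escalation Gil describes, where the human inherits the dialogue, the diagnostics already run, and a suggested fix, could look roughly like the following sketch. The class and field names are hypothetical, chosen only to illustrate the idea.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a "smart ticket": everything the agent learned,
# packaged so a human technician can take over quickly.

@dataclass
class SmartTicket:
    summary: str
    dialogue: list = field(default_factory=list)      # full user conversation
    diagnostics: dict = field(default_factory=dict)   # checks already run
    suggested_fix: str = ""                           # agent's recommendation

def escalate(problem: str, results: dict) -> SmartTicket:
    """Escalate with context instead of a bare one-line ticket."""
    return SmartTicket(
        summary=problem,
        diagnostics=results,
        # Toy heuristic standing in for the agent's reasoning step:
        suggested_fix="reinstall_app" if "crash" in problem else "review",
    )

ticket = escalate("Zoom crash on launch", {"event_log": "faulting module found"})
print(ticket.suggested_fix)   # reinstall_app
```

The design point is that escalation transfers state, not just the problem statement, which is why the handoff to a human is faster than a cold ticket.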

# Adoption and Concerns 

Gil: There is an issue, though. We are aware that what we’ve done in the last two and a half years is not really out there in the field yet. How do you see people embracing it? How do you see the process happening? We know people might be a little concerned or hesitant because they have never experienced this before. It’s not like comparing different products and saying, “I’ll try this one now.” This is trying something you’ve never tried before. What are you seeing there, and what is your opinion? I’ll also ask about the whole Co-pilot side. The Autopilot, in a sense, is autonomous; it’s like hiring another three employees to help the team. The Co-pilot is an assistant in the day-to-day decision-making and work. How do you see people actually adopting that as well?

# Co-pilot Adoption 

Rich: Let me start there, because that’s the technology that is most available to MSPs and has been available in the IT management world for a little while. I think there is still a lot of room for adoption and growth. The barrier there is simply that it’s new. Like anything else in AI, you start out by figuring out what it is, how you can use it, and how to put it to work. I think that’s what’s going on with Co-pilot and technologies like Co-pilot. 

# Autopilot Concerns 

Rich: It’s very different on the Autopilot side of things. I’ll answer that by calling out a comment I’m looking at in chat here from Adam Ly. He pointed out that it’s not a perception of risk that’s the issue; it’s the real risk that an autonomous agent does something that causes a serious issue. You are exactly right, and thank you for pointing out that it is not just a perception of risk; this is a real and legitimate concern. This is the biggest question and issue that I encounter with MSPs right now. As an MSP, you are paid to keep end users up and running, and you absolutely do not want to deploy a technology that breaks stuff. If you break stuff for a customer, you have an unhappy customer, maybe an ex-customer. You don’t want to deploy anything that’s going to make your life harder instead of easier. This is a very new kind of technology. That’s the main concern. You are familiar with this issue, Gil, because every time I interview you about Autopilot, I always ask you the same question, and you have to be really tired of it by now. I always ask, “Has it broken anything? Is it hallucinating, or have there been issues?” Even though I write about this all the time and, relatively speaking, I’m kind of bullish about it, I still have these questions and concerns. And it’s not my livelihood on the line if this technology is unleashed on an end-user environment. I think that’s where we’re at now. People are trying to develop the trust to actually start using this stuff and believe that it’s going to make their life easier rather than harder.

# Security and Boundaries 

Gil: Rich, let me address the security issue. The autonomous part does things that are very specific; it doesn’t go and do whatever it wants. There’s a set of things that it can solve and is allowed to solve, and that’s it. You see the list before you install it, and you can tweak the list. None of those things can cause harm. That’s how it works. The worst-case scenario is it doesn’t solve the issue, and the ticket is escalated to a person. But the way it works is that it’s a fixed set of problems it can solve, and that’s it. It can’t tweak or do anything outside of that.
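The boundary Gil describes, a reviewed, tweakable list of permitted actions where anything else escalates to a person, can be sketched as a simple allow-list. The action names below are invented for illustration and are not Atera's actual catalogue.

```python
# Sketch of an action allow-list: the agent can only invoke actions that
# appear on a list the administrator has reviewed and can tweak.
# Everything else escalates; the worst case is an unsolved ticket, not harm.

ALLOWED_ACTIONS = {"reset_password", "unlock_account", "install_approved_app"}

def run_action(action: str) -> str:
    if action not in ALLOWED_ACTIONS:
        # No tool on the tool belt for this request, so it cannot be done;
        # the ticket goes to a human instead.
        return "escalated_to_human"
    return f"executed:{action}"

print(run_action("reset_password"))         # executed:reset_password
print(run_action("delete_virtual_server"))  # escalated_to_human
```

This is the same idea Rich later calls the "tool belt": the safety property comes from the agent structurally lacking any dangerous capability, not from trusting it to decline.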

# Diagnostic and Escalation Process 

Gil: The list and those boundaries: anything outside of that list, it will take, just like John said, and escalate to a human. The human will make all the decisions but will have all the information at their fingertips, already tested and diagnosed, so the time to solve it is much faster. I’ll add one more thing, Rich, if I can. Atera built this on top of Microsoft Azure OpenAI. We could discuss Azure OpenAI at length, but the biggest thing about it is privacy and security. This concern has been one of the biggest things we’ve been dealing with in the two and a half years we’ve been working on it: making sure it cannot cause any harm.

# Tool Belt Analogy 

Rich: Back to a conversation we had a few months ago, you likened this issue to having tools on a tool belt. You’ve equipped Autopilot with a set of tools on its tool belt. If there is no tool on the tool belt for deleting that very important virtual server, it can’t do it and won’t do it. That risk doesn’t exist. It can only do the things it has a tool to do, and you’ve obviously only put tools on the tool belt for safe work. 

# Risk and Reward 

Rich: We’re talking a lot about risk, but what an MSP has to think about is the risk-reward equation here. Let’s talk a little bit about the reward side of that. You’ve got MSPs who are actually using Autopilot now. You’ve got IT departments who are using it. You’re working with design partners. Talk a little bit about some of the real-world use cases and benefits that companies you’re familiar with are experiencing right now. 

# Real-World Use Cases 

Gil: First of all, interaction with the Autopilot is done through email, Slack, Teams, etc. Your users or your customers, if you’re an MSP, interact just like they would with you anyway. The big difference is they can do it 24/7. You don’t have to pay for it extra. You don’t have to have shifts in place to do that. It answers immediately, so the level of service you’re providing, whether as an internal department or an MSP, is fantastic. Immediate answer. We actually see in the data we have from our customers that once they implement Autopilot, the time to first response goes from minutes or hours to zero. You don’t understand the numbers until you see them for the first time. The things it knows to do are the mundane, repetitive tasks—the things you don’t want to do. There’s very little risk there, and we’ve been working on the risk issue all the time. For example, it can handle password resets. It can detect that a user has a problem with their password—forgot it, mistyped it, etc.—and it can communicate with the user through their phone, asking if they need help resetting it. It uses a TFA mechanism to identify the user and can reset the password without any human intervention. It can also handle tasks like unlocking accounts for users who have locked themselves out. It can install software, but only predefined software that you’ve approved. For example, it can install Zoom version X or Teams version Y. If there’s a problem with Zoom, it can conclude that the way to solve it is to uninstall the current version and install a newer version, handling all that automatically. This happens 24/7 with immediate replies and solutions without any of your team having to deal with it. It can also handle tasks like releasing spam from various spam engines. You can decide whether or not to give it that capability. Many of our customers do because it’s a very safe task, and it saves the hassle of dealing with spam-related issues. 
It can also solve problems with slow PCs by performing the necessary actions automatically, just like a technician would. It can handle calendar settings, such as linking shared calendars, and provide Excel support, including Visual Basic in Excel.

# Impact on Workload

Gil: The end game of this whole list is that 30 to 40% of the load is handled by an environment that is available 24/7, with immediate responses and solutions within minutes. This takes the load off your team, allowing them to focus on more important tasks. I hope that’s clear. I’m seeing some questions here that maybe I can address in a second. Adam is asking if it is semi-AI.

# Fully Autonomous Agentic AI 

Gil: No, it’s not semi-AI; it’s fully autonomous. It will solve the problem and understands when it’s not able to solve it or it’s not allowed to solve it. Then it will escalate it to a person. That’s actually Autopilot doing the escalation to Co-pilot. 

# Live Demo Request 

Gil: There’s a question for a live demo. Actually, let’s show an example of the reset password where the AI understands the problem on its own. The user doesn’t even have to say, “I have a problem with my password.” It identifies the issue and runs the whole process to get them out of it without involving the IT department. It can be done at any time. If you’re an MSP and you have a customer outside of your working hours, there is no outside of your working hours anymore. It’ll get them out of this problem, whatever time zone they’re in. 
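The password-reset flow Gil walks through, detect the failed login, verify the user out of band with a second factor, then reset without human involvement, could be sketched as follows. The function names and the flow details are hypothetical, purely to illustrate the TFA gate he mentions.

```python
import secrets

# Hypothetical sketch of a TFA-gated reset: the agent sends a one-time
# code to the user's registered phone and only resets the password if the
# user can echo it back; otherwise the ticket escalates to a human.

def start_reset(user: str) -> str:
    """Generate the 6-digit one-time code sent to the user (simulated)."""
    return f"{secrets.randbelow(10**6):06d}"

def complete_reset(sent_code: str, entered_code: str) -> str:
    """Reset only if the second factor checks out."""
    if entered_code == sent_code:
        return "password_reset"
    return "escalated_to_human"

code = start_reset("jane.doe")
print(complete_reset(code, code))     # password_reset
print(complete_reset(code, "wrong"))  # escalated_to_human
```

The second factor is what makes this one of the "safe" autonomous actions: identity is verified independently of the conversation, so the agent never resets a password on the strength of a chat message alone.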

Rich: I’ll jump in just for a second. Maybe we can run the video; it’s recorded so you all can see what that looks like. Then we’ll go straight into answering the questions. Although I know we have a lot of different things we wanted to cover, I’m seeing the chat being very lively, and I think there are some great ones here for you guys to take on. So, let’s run the video and then jump straight into the questions. 

# Empathy in AI Responses 

Gil: Are we limited to a templated response? No—the responses come out of the dialogue. I’ll give you another example. We have a customer who called me up to tell the story of one of his employees, who opened a ticket and wrote, “Can you please reschedule a meeting I have with a colleague of mine? She’s sick, and I have to move it to next week.” The response from Autopilot was, “No problem, I will reschedule it to next week.” Technically, it’s a relatively simple thing, but it then said, “The most important thing is that your colleague feels better. Don’t worry about rescheduling; I’ll handle that.” So it’s also empathetic—the EQ is there. It’s never tired, never cranky, and it never quits. You never have to retrain it. It also never eats, just so you guys know.

# Q&A Session 

Anna: Do we want to go into some Q&A? 

Gil: Yeah, let’s do that. So first of all, thank you for engaging with the chat. Let’s start with one of the first ones. This was early on. The question was, “Does Autopilot integrate with existing RMMs and other technologies?” 

# Integration with Existing Systems 

Gil: Yes. Autopilot itself has three components. One is the interface with your end users—you can choose whatever you want: email, Slack, or Teams. I don’t think there’s anything else in the universe at this point that isn’t one of those three, or all three. On the ticketing side, it can be Atera’s system—and there are benefits to that—or a different ticketing system; it knows how to interface with that. The third component is the RMM itself. In the case of Atera, those who work with us know it’s an all-in-one solution: RMM, PSA, ticketing. But you can also use only the RMM side. So you can choose between email, Slack, or Teams on one side, and Atera’s RMM, ticketing, and PSA on the other—or, if you want to use a different ticketing system, you can split the RMM and the ticketing, and then you’ll have the three systems in place.
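The three-component split Gil lists—user-facing channel, ticketing backend, RMM—is essentially a pluggable configuration. Atera exposes no such public API; the model below is invented purely to make the architecture concrete:

```python
from dataclasses import dataclass

# Hypothetical model of the three pluggable Autopilot components:
# a user-facing channel, a ticketing backend, and an RMM.
CHANNELS = {"email", "slack", "teams", "portal"}

@dataclass
class AutopilotDeployment:
    channel: str    # how end users reach the agent
    ticketing: str  # "atera" or a third-party system
    rmm: str        # "atera" in the all-in-one case

    def __post_init__(self):
        if self.channel not in CHANNELS:
            raise ValueError(f"unsupported channel: {self.channel}")

# All-in-one: Atera provides ticketing and RMM together.
all_in_one = AutopilotDeployment(channel="teams", ticketing="atera", rmm="atera")

# Split: keep a third-party ticketing system, use Atera only as the RMM.
split = AutopilotDeployment(channel="email", ticketing="other", rmm="atera")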

# Naming of Co-pilot 

Anna: One of the questions is actually about Co-pilot and how we’re using the name Co-pilot because it’s a Microsoft product. Maybe you can touch on that. 

Gil: We were first. That’s a good question. I’m not sure it’s the best choice of name, I’ll say that. But in terms of concept, I think the forum understands how it works. It’s like your Robin: you’re Batman, and this is your Robin. Autopilot is autonomous, so that’s your robot—a robot that takes care of things on its own—and then you have Robin here, who helps and assists you all the time. Maybe we’ll just call it Robin. We work very closely with Microsoft, and they haven’t said anything to us in the last two and a half years, but you guys have a good point there. It’s not the best choice of name, and we haven’t been the only ones since. But I think Co-pilot is very descriptive—that was the idea. Remember, we put the name out there before the concept was even public.

# User Interaction with Autopilot 

Anna: There’s a question here about how users are interacting with Autopilot, through what app. 

Gil: Actually, I forgot one. There’s Slack, Teams, email—and we have a portal. I forgot the portal before. So there are four applications the user can interact through. They’ll interact just as with any normal ticket they open today, and it understands them. They’ll write with spelling mistakes, grammar mistakes—it understands. Not only does it understand, it also knows how to keep the dialogue going. Maybe somebody will write something and it won’t be sure exactly what they mean, so it’ll ask, “Can you explain what is slow? Is it your computer or a specific application?” It continues the dialogue—as anybody who’s chatted with ChatGPT knows, it can carry the process forward. It doesn’t necessarily stop where you stop.

# SMS Communication 

Anna: There was also a question about SMS. Joe said that’s his clients’ preferred way of communicating with him. Are we looking at SMS?

Gil: SMS, as in an actual old-fashioned SMS, not WhatsApp? We haven’t received that request, but it shouldn’t be too complicated.

# Impact on MSP Business Models

Anna: I will jump to a question from Caroline about how agentic AI will shake up the business models for MSPs. Maybe Rich, you can touch on that.

Rich: Yeah, the potential implications. As you said before, I’ve been writing about managed services for a long time. I wrote my first article about managed services almost 20 years ago. A lot has changed in the industry in that time, but one thing that hasn’t is that the central key to profitability is productivity. When you can scale your workforce without actually having to hire and pay people, when you can scale your customer base without having to hire and pay people because you have technology that can handle that workload for you, the implications are enormous for MSPs out there. Another thing from a business model standpoint: if you think about the work that your people, your technicians, are doing today and then imagine AI doing a bigger and bigger share of that—starting at level one and sort of working up from there—you’re now freeing up resources that you can apply in other areas. Something that every MSP has to think about all the time is how to avoid commoditization, how to avoid becoming just another company doing what everybody else is doing. The more you can get into actual outcome-based solutions and strategic consulting, really helping businesses get leaner and more productive by using technology, that’s how you generate profit and sticky customer relationships. You’re going to have more resources on the team now for that kind of engagement with the client that AI won’t be able to do anytime soon. Those are the two big business model shifts that come to mind immediately. 

Gil: That’s awesome. Just to strengthen what Rich is saying: you have the efficiency side, and then, as an MSP, you have the service side—you’re giving a level of service that nobody else can give: 24/7, immediate, etc. Just for the forum, we have customers already using it, and they’re charging extra. They go to their customer and say, “Look, we’re going to add this agent capability. You can open a ticket anytime, and 40% of cases will be solved immediately.” They charge extra for it. So it’s an increase in efficiency, a competitive service nobody else can give, and an increased revenue stream. We’re seeing people do exactly that.

# Compliance and Security 

Anna: There was a question about compliance, so let me grab that one. This is something we’re doing with Microsoft. We’ve spent two and a half years working very closely with Microsoft on the Azure OpenAI platform, and Microsoft provides a very strong framework for privacy and security. For example, there is an LLM, and it learns from your data—but your data, and your customers’ data, stays put. It doesn’t go out, it doesn’t go into the LLM’s training, and it isn’t used by other companies. It’s secure, closed, and doesn’t go anywhere. In terms of privacy, it also never leaves the boundary built around your instance of Autopilot and Co-pilot. The platform is also ISO 27001, SOC 2, HIPAA, and FedRAMP compliant.

# Robust System and Ethical AI 

Gil: So it’s also a system that is very robust in terms of all the certifications around it. Then there’s the whole world of responsible and ethical AI. On hallucinations and so on: these systems are limited; they are not open systems. They are built to solve IT problems. You can’t develop a relationship with one; it won’t go there. It will say, “I’m an IT professional; that’s not what I do.” We already know of people who literally fall in love with ChatGPT, but it won’t do that. So you have those layers of privacy, security, compliance, responsibility, and ethical behavior.

# Customization and Policies 

Rich: There’s a really great question that came in that speaks to some of the customization we’re going to offer around Autopilot. It’s from M. Raymond: in a password reset scenario like the one you showed, his team would have a few extra steps involving identity verification, the reason for the lockout, and forcing the user to change the password. Would he be able to update the workflow and require specific policies and procedures, including documentation?

Gil: Absolutely. What I showed was a demo of one scenario, but there’s a backend to this system where you can define all kinds of processes. For example, people don’t want it to perform this exercise more than once, so that an attacker can’t keep retrying—you can limit it to once, or require other means of verification, etc. Yes, there’s a backend within our RMM where you can define and configure how you want it to work.
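The “not more than once” limit Gil mentions amounts to a rate-limiting policy in front of each automated action. The real configuration lives in Atera’s RMM backend; the sketch below is a hypothetical illustration with invented names:

```python
import time

class ActionPolicy:
    """Hypothetical policy gate: limit how often an automated action
    (e.g. a password reset) may run per user within a time window."""

    def __init__(self, max_per_window, window_seconds):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self._history = {}  # user -> list of timestamps

    def allow(self, user, now=None):
        now = time.time() if now is None else now
        # Keep only attempts still inside the sliding window.
        recent = [t for t in self._history.get(user, [])
                  if now - t < self.window_seconds]
        if len(recent) >= self.max_per_window:
            return False  # over the limit: escalate to a human instead
        recent.append(now)
        self._history[user] = recent
        return True

# e.g. at most one automated reset per user per 24 hours
reset_policy = ActionPolicy(max_per_window=1, window_seconds=86400)
```

A second automated reset inside the window is refused, which is exactly the retry-blocking behavior described for the hacker scenario.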

# Philosophical Question on AI 

Anna: As our time is running out, I think it would be great to look at a slightly philosophical question—I think it was from Adam. He’s asking whether it would be fair to call it semi-AI: in Autopilot, is the AI just about understanding the issue, which then leads to a set of predefined allowed actions? Or do you see it another way?

Gil: First of all, you should come to our labs to see how it works. No, it doesn’t work like that. It thinks, reasons, and decides. It’s iterative, so it can run some tests and get data, then say, “Okay, given the data from these tests, I’ll run two or three others.” It makes its own decisions. In many cases, it makes better decisions than humans, because it has all the information and data: it sees all the historical tickets and understands what fixes you’ve applied in the past. No part of it is a fixed structure or a decision tree; it’s literally like having a person behind it, except with much more data at its fingertips than a person would have. I’ll add another technical, geeky thing: if an agent learns something—not from your data, but some trick or process—all the agents learn it immediately. As opposed to humans, where we’d need to teach each other, here it’s immediate. The power of it is that as it fixes more IT issues over time, it keeps improving. Rich, do you see it the same way?
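The iterative loop Gil describes—run a test, look at the data, let the results drive the next test, escalate if no diagnosis emerges—can be sketched as a plain loop. All function names here are invented for illustration; Atera has not published how Autopilot’s reasoning is implemented:

```python
def diagnose(ticket, pick_next_test, run_test, conclude, max_steps=5):
    """Iterative troubleshooting: gather data, let each result drive
    the choice of the next test, and escalate if no diagnosis emerges
    within a bounded number of steps."""
    evidence = []
    for _ in range(max_steps):
        test = pick_next_test(ticket, evidence)  # reasoning step
        if test is None:
            break
        evidence.append(run_test(test))          # gather more data
        diagnosis = conclude(evidence)
        if diagnosis is not None:
            return diagnosis                     # solved autonomously
    return "escalate_to_technician"
```

Note the bounded `max_steps` and the escalation fallback: the loop is open-ended in what it decides, but closed in how long it may keep deciding.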

Rich: Yeah, and I think I know where you’re coming from, Adam. What maybe feels semi-AI about it is that there are limits and guardrails on what it can and can’t do. But to Gil’s point, specifically in Autopilot, there is an artificial form of intelligence—there is software-based reasoning happening. It is doing what AI does. It’s just that because human beings created the technology, they can constrict what it does and doesn’t do. Listening to what Gil was saying, it’s full-blown AI. It only feels a little “semi” because it can’t go out and do something crazy if left to its own devices.

# Closing Remarks 

Anna: Wrapping up, any last sentences? 

Gil: I want to thank you all so much—first our audience, for joining and asking all these amazing questions. There’s even a question about a trial. Before we wrap up, I’ll offer my last comment for the forum. A lot of the people on this call are customers I know personally. AI is happening, and it’s happening fast. No other technology in history has been adopted so quickly—I’m sure you know that OpenAI now has 500 million monthly active users. Fear is not an option. You need to embrace it, test it, and use it as a tool to elevate yourself, not be afraid of it. Any technology that is not AI at its core—and I’m not just talking about using AI tools—is falling behind.

# Final Thoughts on AI Adoption 

Anna: A core that isn’t AI is actually already obsolete if you think about it. Leaving the final words to Rich, too. We have one minute more.

Rich: The biggest obstacle that autonomous AI in managed services is going to face is the trust issue. But over time, as people become more familiar with it, that trust will develop. To Gil’s point, you don’t want to be the last MSP to climb on the agentic bandwagon, because as the technology gets more mainstream, it is going to define end-user expectations. The great thing about agentic AI for an MSP is that, as Gil was saying, it’s working 24/7. What I want as a consumer of IT services is an immediate resolution to my issue at any hour of any day, and I’m going to start getting that from MSPs that have access to this technology. I totally get the trust issue, and I’m not advising anyone who is concerned to just leap in without thinking about it. But know that fear is not a long-term strategy here, because there will be a competitive deficit if you get stuck there.

# Contact and Additional Resources 

Anna: Great, thank you both. With that—anybody can contact us for more information, demos, trials, etc. There are a bunch of questions in the chat that we’re going to take and follow up with you on. Also, to all the people who asked about trials: we have your details, and we’re going to talk to you. One last thing: we’ve attached the 10 principles of responsible AI for IT, covering the different principles from both Microsoft and Atera. If you want to check that out, it’s in the docs tab. It touches again on compliance, trust, and everything else we’re doing together with Microsoft.

# Upcoming Webinar 

Anna: And a last, last thing: next week we’re hosting another incredible webinar—you can just scan the QR code on this slide right here. It will be with our CTO and co-founder, Oshri, and we would love to see you there. For now, I just want to wish everybody a great rest of your day. Thank you for joining.

Gil: Thank you everyone. 

Rich: Thanks, goodbye. 

Anna: Bye-bye.