While many people seem optimistic about AI, not everyone feels that way, not by a long shot.

In fact, according to a global survey by KPMG and The University of Queensland, 48% of respondents are worried about AI, and 24% are even outraged by it.

Like other technologies that have burst onto the scene, AI is not inherently good or bad. Rather, it comes down to the way it is used and for what purpose. 

Even so, AI is an undeniably powerful tool, and its enormous potential for the IT industry could cut either way. That’s why the concept of “responsible AI in IT” is critical.

What is AI-powered IT?

IT professionals are already using AI to accomplish rote tasks that were once done manually, leaving more time for impactful and innovative work. The Atera platform, developed in partnership with Microsoft, leverages AI to take a range of labor-intensive, routine tasks off the plate of IT departments, including:

  • Password resets
  • Router restarts
  • Setting up “out of office” emails
  • Identifying and solving printer problems

Yes, AI does all that and more.
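
To make that concrete, here’s a minimal Python sketch of what a scripted password reset might look like under the hood. The directory-service client and its methods are hypothetical stand-ins for a real system such as Active Directory, not Atera’s actual API.

```python
import secrets
import string

def generate_temp_password(length: int = 16) -> str:
    """Generate a random temporary password."""
    alphabet = string.ascii_letters + string.digits + "!@#$%"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def reset_user_password(directory, username: str) -> str:
    """Reset a user's password and force a change at next login.

    `directory` is a hypothetical client; a real implementation would
    call Active Directory, Entra ID, or a similar service.
    """
    temp_password = generate_temp_password()
    directory.set_password(username, temp_password)      # hypothetical call
    directory.require_password_change(username)          # hypothetical call
    return temp_password
```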

But how do you incorporate responsible AI in IT, while maintaining the efficiency that AI is famous for?

Defining responsible AI and its importance in IT

Before we can answer that question, we need to understand the ethos behind responsible AI and the role it plays in IT management.

These issues are addressed succinctly by Sarah Bird, chief product officer of responsible AI at Microsoft, in a fireside chat with Yoav Susz, Atera’s US general manager. According to Bird, “We need the technology to work in a way that we can trust, and ensure that it’s not misused.” Generative AI, for example, can create a range of issues, from “fairness and bias to new types of cybersecurity attacks,” said Bird.

But during a discussion with Atera CEO Gil Pekelman, Lee Hickin, Microsoft’s AI technology lead in Asia, emphasized that the technology won’t wreak havoc on its own. What most determines whether AI’s impact is ethical and safe is how responsibly we manage it.

So there are two points to remember: first, it is the responsibility of IT professionals to make sure the technology is not used for harm; second, the way to do that is with a framework of responsible AI in IT.

The role of IT in promoting ethical AI

AI is about “understanding the world, understanding human language, human information better,” said Bird. “Responsible AI,” however, “is about making technology work for humans.”

IT professionals stand at the vanguard of new technology, directing how it is used in our day-to-day lives. Therefore, it is the IT industry that must work toward the responsible dissemination of AI across the organizations and systems that keep our world running.

Promoting responsible AI in IT isn’t easy, but it’s essential. Here are three ways to do so:

Create guidelines

IT departments must collaborate with the rest of the organization to define a set of guidelines that will steer the responsible use of AI (a sketch of how such guidelines might be encoded follows this list). These can include:

  • Definitions of what counts as fair use, and what counts as biased or stereotyping behavior by the system
  • Reliability and consistency in predictions and operations
  • Privacy and data protection
  • The required guardrails for cyber, physical, and emotional safety
  • Compliance with laws and regulations
  • What human supervision looks like
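
One way to keep such guidelines from staying abstract is to encode them in a machine-readable form that tooling can check automatically. Below is a minimal Python sketch of that idea; every field name and value is hypothetical, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsagePolicy:
    """A hypothetical, machine-readable responsible-AI policy."""
    prohibited_uses: list[str] = field(default_factory=lambda: [
        "stereotyping or demeaning content",
        "automated decisions without human review",
    ])
    min_accuracy_per_group: float = 0.95   # reliability and fairness floor
    pii_allowed: bool = False              # privacy and data protection
    regulations: list[str] = field(default_factory=lambda: ["GDPR"])
    human_review_required_for: list[str] = field(default_factory=lambda: [
        "account deletion",
        "security policy changes",
    ])

policy = AIUsagePolicy()
assert not policy.pii_allowed  # tooling can enforce fields like this one
```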

Learn what the system can do and change it accordingly

Here, too, cross-company collaboration is key. For example, when Microsoft started working with GPT-4 technology, it brought in risk experts of all types to assess the model’s capabilities and patterns, both beneficial and harmful.

According to Microsoft’s Sarah Bird, who led the development of responsible AI for Bing Chat, “If there’s a behavior [we] never want the system to do, we fix that in the system… but if it’s something that is contextual… then my team focuses on building controls that make it easy for people to customize.”
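
Bird’s split between hard-coded fixes and customizable controls can be sketched roughly as follows. The behavior categories and settings are invented for illustration, not Microsoft’s actual control surface.

```python
# Behaviors the system must never exhibit are blocked unconditionally;
# contextual behaviors are exposed as customer-adjustable controls.
HARD_BLOCKED = {"malware_generation", "credential_phishing"}  # invented labels

DEFAULT_CONTROLS = {
    "profanity_filter": "strict",    # customers may relax to "moderate"
    "max_autonomy": "suggest_only",  # or "auto_execute" for trusted tasks
}

def effective_controls(customer_overrides: dict) -> dict:
    """Merge customer overrides onto defaults; hard blocks stay enforced."""
    controls = {**DEFAULT_CONTROLS, **customer_overrides}
    controls["hard_blocked"] = HARD_BLOCKED  # never overridable
    return controls

print(effective_controls({"profanity_filter": "moderate"}))
```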

Automate and test

Microsoft has a 20-page hate speech guidelines document, in which expert linguists categorize different forms of hate speech. Based on this, the team was able to build a prompt for GPT-4 that automatically scores Bing Chat conversations, taking the human element out of the testing field. In fact, the team trained the technology “to score almost as well as those experts. Now, every time we make a small change to the system, we can actually test [and tweak] it,” Bird stated. This is what automated testing looks like in a world of responsible AI in IT.
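
Microsoft hasn’t published that prompt, but the general pattern, often called “LLM-as-judge,” is easy to sketch. The example below uses the OpenAI Python SDK; the rubric wording, model choice, and scoring scale are placeholders rather than Microsoft’s actual guidelines.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """You are a content-safety grader. Score the conversation below
for hate speech on a 0-7 scale, where 0 means none and 7 means severe.
Reply with the number only."""

def score_conversation(transcript: str) -> int:
    """Ask a model to grade a transcript against the safety rubric."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": transcript},
        ],
    )
    return int(response.choices[0].message.content.strip())

# Rerun after every system change; rising scores flag regressions.
print(score_conversation("Agent: How can I help with your printer today?"))
```

In production you would also pin the model version and validate the output, but even this skeleton shows how human judgment can be distilled into a repeatable, automated test.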

Challenges at the intersection of AI and IT

During the recent fireside chat with Atera, Microsoft’s Bird discussed some of the challenges and dilemmas that responsible AI in IT presents. For example:

Tension between privacy, safety, and fairness

When you know how data is used, it is easier to ensure that it is not misused. 

Similarly, understanding demographic factors is key to testing system fairness, but collecting that data could compromise customers’ and partners’ privacy. That creates a dilemma: “we usually don’t have that information in our systems for privacy reasons,” Bird said.

System fairness

Bird talked about three types of fairness challenges related to responsible AI in IT.

  • Service fairness: This involves ensuring that “the system is equally accurate for all groups of people…all types of voices, accents, languages, dialects.”
  • Allocation fairness: Imagine an AI model that decides who is approved for loans. Should resource distribution “be based on historical information, on what we aspire for the allocation to look like in the world, or somewhere in between?” Bird asked.
  • Representational fairness: As Bird puts it, “Is the system producing stereotyping or demeaning content? Is it over-representing [or under-representing] one group?” 

And, of course, all of this is shaped by the inevitable biases of the humans who build AI systems.
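
Service fairness, at least, lends itself to direct measurement: evaluate the system separately for each group and compare the results, a technique often called disaggregated evaluation. Here is a minimal Python sketch with invented group labels and data:

```python
from collections import defaultdict

def accuracy_by_group(examples):
    """Compute accuracy separately per demographic or dialect group.

    `examples` is an iterable of (group, prediction, label) tuples.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for group, prediction, label in examples:
        totals[group] += 1
        hits[group] += int(prediction == label)
    return {group: hits[group] / totals[group] for group in totals}

# Invented example data: a voice assistant tested on two dialects.
results = accuracy_by_group([
    ("dialect_a", "reset password", "reset password"),
    ("dialect_a", "restart router", "restart router"),
    ("dialect_b", "reset password", "restart router"),
    ("dialect_b", "restart router", "restart router"),
])
print(results)  # large per-group gaps signal a service-fairness problem
```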

Ensuring cybersecurity

As industry analysts agree, AI brings a whole new set of cybersecurity threats with it:

  • Microsoft and OpenAI found that “threat actors… are looking at AI… to enhance their productivity and… advance their objectives and attack techniques.”
  • 80% of US professionals surveyed told Ernst & Young (EY) in 2024 that they’re “concerned about the use of AI in carrying out cyberattacks.”

Strategies for implementing responsible AI in IT

Let’s get practical. What can your department do to promote responsible AI in IT?

Here are the key guidelines to follow:

1. Set goals and ethical guidelines — then turn them into practice

According to Bird, responsible AI starts with understanding how we want the system to behave, then testing our way toward making that a reality.

However, she added, “a lot of the art and practice of responsible AI is that it’s not a one size fits all answer — we always prioritize privacy or we always prioritize safety — but in a particular situation, for a particular technology, for a particular context, what is the right way to make that trade off?”

Therefore, Bird recommends “build[ing] the technology to have [context-specific] controls” based on your ethical guidelines.
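
To make the “no one-size-fits-all” point concrete, here is a hypothetical Python sketch of per-context trade-off settings; the contexts and priority orderings are invented for illustration only.

```python
# Hypothetical per-context trade-off profiles. In a healthcare deployment
# privacy may outrank everything; in education, safety might come first.
TRADEOFF_PROFILES = {
    "healthcare":  ["privacy", "safety", "helpfulness"],
    "education":   ["safety", "privacy", "helpfulness"],
    "internal_it": ["helpfulness", "safety", "privacy"],
}

def resolve(context: str, conflicting_values: set[str]) -> str:
    """Pick the winning value when two guidelines conflict in a context."""
    for value in TRADEOFF_PROFILES[context]:
        if value in conflicting_values:
            return value
    raise ValueError(f"No profile value covers {conflicting_values}")

print(resolve("healthcare", {"privacy", "helpfulness"}))  # -> "privacy"
```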

2. Get robust data and operations governance at the top level

Responsible AI in IT requires robust data governance and continuous monitoring. In organizations that face greater risk, it’s important to create ownership at the C-suite level.

Lee Hickin, Microsoft’s AI technology lead in Asia, noted that he is seeing “chief AI officers, chief AI risk officers, chief data ethics officers. I’m seeing that in financial services, in healthcare, the education sector — where the risk of AI going wrong has a much bigger impact on society.”

Another option is to establish an office for responsible AI in IT, as Microsoft has done. At Microsoft, it’s “an office that drives governance, policy, education, and mechanisms for us to behave in certain ways, and then they empower the responsible AI [RAI] champs across the business to be the leaders and advocates for that,” Hickin said.

3. Build a reward and purpose mechanism

Hickin recommends rewarding your team for creating responsible AI in IT, especially when it requires them to constantly keep up with new and complex technical developments.

He suggested they quickly apply their new skills to real problems. “They can build something that the business can see. And you create that sense of purpose.”

Responsible AI in IT makes your goals possible

Every IT team lead wants 10X efficiency, happier customers, and more fulfilled technicians. AI enables that, as we’ve seen firsthand with companies that use Atera’s AI-powered IT management platform. But it can all backfire if AI is not handled responsibly, if your staff feels insecure in their work, or if data breaches become more frequent due to AI developments.

To avoid these issues, focus on responsible AI. Set ethical guidelines. Then, test your way toward turning these guidelines into efficient practices. Seek out C-suite ownership, use smart tools, and keep your technicians accountable and rewarded.

To get started with responsible AI, address one challenge or implement one step detailed above first. Gradually add more, until you too are walking the talk of responsible AI in IT. The result will be long-term IT efficiency that helps both your organization and your customers thrive.
