Keith Power spends his time helping companies make sense of artificial intelligence. 

As Partner and Responsible AI Leader at PwC Ireland, he works with organisations that are experimenting with the technology, embedding it into their operations and, increasingly, relying on it to drive their businesses forward. 

But the more that adoption of AI accelerates, the more Power sees a consistent problem emerge: the systems needed to control AI are not keeping pace with how quickly it is being deployed. 

That is where Responsible AI comes in, he says. 

“What is Responsible AI?” Power asks on the latest edition of The Tech Agenda podcast. “The simplest way of looking at it is trying to get the benefits out of AI without losing control. 

“So, AI is useful, but at the same time, it needs to be safe and secure.” 

That gap between adoption and control is borne out in PwC Ireland’s latest research on Responsible AI. 

The survey finds that while most Irish organisations have begun engaging with Responsible AI, far fewer have embedded it as a core part of how they operate. Many understand the principles. Fewer have translated them into real action. 

The survey shows that while 77 per cent of Irish respondents have begun implementing Responsible AI and governance practices (compared with 82 per cent in the US), fewer than one in five (19 per cent) say these efforts are prioritised across the entire organisation – in the US, that number is 28 per cent. 

Part of the issue is timing. As companies move from experimenting with AI to deploying it at scale, governance often follows behind. 

But there is also a more fundamental challenge: how Responsible AI is framed at the leadership level, where Irish businesses again lag their US counterparts. 

“If you take the view of the leaders in an organisation, they get multiple different asks of them for investment,” Power says. “They are being asked to invest in new systems, new markets, new operations and lots of other things, and somebody comes and says, ‘We need to now invest in Responsible AI.’” 

Based on his experience, it is about convincing boards and senior executives of the business case for Responsible AI. 

“Is responsible AI simply used to mitigate risk? Or is it helping us to just comply with the regulations? Or is it giving us something extra?” Power asks. 

If it is the last of these, Power says, the conversation really changes. “That’s when you make real progress,” he says. 

Trust becomes central

At the heart of Responsible AI is trust. 

That word appears repeatedly in PwC’s research and in Power’s own conversations with his clients. 

“Trust is not just for the leaders in the business, but your employees, your customers, your shareholders,” Power says. “You have to think of all of these dimensions when it comes to thinking about trust.” 

The way Power sees it, building that trust requires more than policy documents or compliance frameworks. 

“At the end of the day, humans trust on the basis of reason,” he says. “So, what basis of reason are we giving people to have trust in AI and the outcomes of AI? 

“Are we able to demonstrate what the AI is doing, how it’s doing it, that it has been tested to make sure that it’s going to do the right thing? That’s how you build up that level of trust over time.” 

Trust, however, is not uniform. 

Different stakeholders have different concerns. Employees may worry about job displacement. Customers may worry about fairness or security. Boards may focus on liability or compliance. 

“It’s not a one-size-fits-all approach,” Power says. “You need to think about what those motivations are and how you can try and address them as best you can.” 

AI has been around for decades, but until recently, it wasn’t very visible to the general public or widely impactful in a direct way. What’s changing now is both the speed of technological advancement and how organisations are using it. 

This, according to Power, is why trust is paramount. “When you start moving out of experimentation, that’s when you really need to start having the control come along with that,” he says. 

Ireland versus the US

Keith Power of PwC

PwC’s study compares Irish organisations with their US counterparts. The results suggest Ireland is still earlier in the maturity curve. 

“We are probably just a little bit earlier in our journey in Ireland,” Power says, adding that the gap is particularly visible in execution. 

“One particular question we asked was around the effectiveness of the controls that are in place,” he says. 

While many Irish firms say they have policies or governance structures in place, fewer believe those systems are genuinely effective. 

“About a quarter of Irish entities are saying, ‘Look, what we’re doing is effective when it comes to training our people, having the right policies, applying those appropriately,’” Power says. 

“That’s about half the response rate in the US.” 

The difference is not simply one of ambition. 

“That’s just a maturity curve that we need to kind of move ourselves up along,” Power says. “It’s recognising this is important. We just need to up our game slightly.” 

The resource gap

One of the clearest themes emerging from both the survey and Power’s own advisory work is resourcing. 

“I see it in those who are directly involved in it,” Power says. “They really get it – they get the principles and they understand it.” 

The trouble is getting buy-in from those in the organisation not directly working with AI or not directly seeing its benefits. That manifests itself in a predictable way. 

“They come to me and they say, ‘Look, we know what we need to do, we have the good policy in place, we’re starting to put the frameworks in place but I don’t have enough people, and I don’t have the right people,’” Power says. 

This is reflected in the survey, where a large proportion of organisations say resources allocated to Responsible AI are insufficient. 

A reason for this, according to Power, is that, for many organisations, it is still viewed primarily as a defensive exercise: risk mitigation, regulatory compliance, avoiding reputational damage. 

But Power argues that this is too narrow. Rather than thinking defensively, he says, companies need to go on the offensive to capture the real benefits. 

“It comes back to the value proposition,” he says. “My own view on this is, if you are an organisation that can demonstrate to your customers and to your shareholders that, actually, you’ve got this right, you’ve got it controlled, you’re going to be able to do two things.  

“One, you’ll get a lot more buy-in from those other stakeholders. So, they’ll support what you want to do. But also, the people in your organisation charged with doing that innovation can accelerate because they now know what the guidelines are, and they can move at speed and at pace.  

“They know they are going to stay within those guardrails, and they are not going to get an overly controlled environment that they have to try and innovate within.” 

Responsible AI, in Power’s view, can become an enabler rather than an obstacle, an offensive tool rather than just a defensive play. 

Turning principles into practice

Knowing the principles is one thing. Embedding them is another. This is something Power has witnessed first-hand on many occasions. 

“The biggest challenge is about the scaling,” Power says. “So, we know the principles, but how do we scale it and operationalise it?” 

That challenge often comes back to day-to-day execution. There may be training programmes in place, but Power says that does not always translate into behavioural change. 

“There’s probably a lot less education,” he says, adding that employees may understand what Responsible AI is in theory but not how it applies to their role. 

“What we’re not really doing is helping them understand why it’s important to them in their day-to-day activity,” he says. 

That matters, Power says, because human oversight remains central. 

“A lot of this control actually comes back to what the humans involved here are going to do to monitor the AI solutions,” according to Power. 

“How do they interpret the outcomes? How do they know when they need to intervene?” 

The EU AI Act looms large in discussions around governance. 

Awareness is growing, but preparedness remains mixed. 

“In the survey, we see about 70 per cent of organisations say that they’re partially prepared for the EU AI Act and only 14 per cent are actually fully prepared,” Power says. 

The regulation itself is complex and still evolving. “That doesn’t help organisations’ understanding,” he says. 

But Power is clear that compliance alone is not enough. 

“Going the regulatory compliance route will only deal with part of the problem,” Power says, adding that companies need to understand their own use cases and risk profiles. 

“You need to think about, ‘Well, from my organisation’s perspective, what are my uses, what are my risks?’ And then you have to manage those,” he says. 

Unclear ownership and AI agents

Power argues that governance is often hindered by unclear accountability and by a “siloed approach” in many organisations.  

AI governance may sit in data, technology or engineering teams. But Power says that business teams are often the ones actually deploying and using AI. That can create a disconnect. 

“If we think about the use of AI as sitting very squarely within the business functions… so why are they not more heavily involved in governance?” he says. 

It requires broader ownership, Power says. 

Some organisations are getting this right. 

Power says they typically share two characteristics. 

First, they are using AI strategically. 

“They are using AI to transform their business, and they are using it to transform specific parts of their business,” he says. 

Second, they apply governance proportionately. 

“It is not about saying, ‘Well, let’s have a single policy where we have a single risk appetite and we’re going to make sure we comply with that.’ That just simply doesn’t work,” he says. 

Instead, he says, they focus on where the real risks lie. 

“Identify where your real risks are, the high risk, that’s where you concentrate, low risk stuff, get that out of the way and let innovation take over,” he says. 

Power accepts that governance challenges are likely to intensify as AI systems become more autonomous. 

AI Agents are emerging as the next major focus. 

“There is, again, quite an interesting finding in the survey,” Power says. 

“We asked the question around what impact are AI agents going to have on governance and about 56 per cent of respondents said, ‘Yes, it’s going to have an impact.’” 

In the US, the response was far stronger – nine out of ten respondents said it would have an impact. 

Based on the data, Power believes some Irish organisations may be underestimating the speed of change. 

What companies should do now

Asked what advice he would give organisations, Power returns to two themes: ownership and capability. 

“Have someone in charge, but also make sure they know what they are in charge of,” he says. 

Ownership needs to be practical, he says, not symbolic: “What are the systems I own? What are we using AI for?” 

The second is investment. 

“You need to invest, but make sure you’re investing in the right things,” he says, adding that this means capabilities, not just frameworks. 

“The best policies and the best frameworks in the world will fall apart if you don’t have the right people and tools to support them,” he says. 

The organisations that succeed will not simply be those that adopt AI fastest. Instead, Power says they will be the ones capable of controlling it while still moving at speed. 

“The people and organisations who will win are the ones that lean into that evolution and change, and build the guardrails that allow them to move with pace.” 


The Tech Agenda with Ian Kehoe podcast series is sponsored by PwC.