Round table discussion: Trusted data, trusted AI: Powering smarter decisions across education and government


Version 1 hosted digital leaders from across Northern Ireland’s public and academic sectors for a round table discussion on how AI can power smarter decisions across education and government.
How can AI’s ability to predict demand and allocate resources improve policymaking?
Helen McCarthy
AI does not possess the ability to predict demand. To suggest that it does is to anthropomorphise the technology. It is critical to emphasise that humans develop the AI systems. Humans also produce the data that goes into those systems, and predictions are based on the quality of that data. So there is significant potential to use AI to predict demand and allocate resources, but first, we need to categorise our data across the public sector to a level that can be used in those systems. Basically, the quality of the output depends on the quality of the data and on the quality of the system itself. If all of this is set up correctly, AI can provide evidence to help policymaking.
Paul Grocott
Policymaking is about problem-solving. Data and evidence are integral to understanding the problems facing policymakers. AI can help inform that process, although softer skills, not just the technology, are important too. We still need people who are creative and curious, who think about problems differently, and who can use that data to come up with solutions to a particular problem.
John McShane
There are a lot of things we do that are termed AI but are actually just better uses of data and readier access to it. What AI is really giving us is easier access to data, and if that data can be trusted, we can use it to understand what is going on under the hood and then to simulate different policy decisions and the impact they might have. One example is understanding the reasons why students drop out of university courses. AI is used to generate predictive insights and then put in place interventions to prevent students from dropping out. AI, in policy terms, is just another level of data reporting that previously was not available.
David Crozier
Looking at the projects we do at the AICC with SMEs, two particularly come to mind. Both projects were IoT-focused and addressed similar challenges. One of those companies had two years of very well understood, governed, structured data, while the other took a ‘wing and a prayer’ approach to deploying AI. In terms of outcomes, it was the company with a good understanding of its data and of the business problem to be solved that fared much better. Having a good understanding of the data and the governance around that data is critical to getting predictive outcomes from any AI technology. It is also important in policymaking to take citizens along with you, particularly where AI is being used for decisions affecting them. To build trust, there also needs to be an appropriate means for citizens to challenge any such decisions.
Stephen McCabe
The question is, what can AI do that we are not already doing ourselves, and what value can AI add to the policymaking process? AI systems synthesise large, disparate data sets across administrative records, economic data, satellite imagery, and all sorts of different layers of data, and combine and analyse those in ways we have not been able to do before. With this kind of analysis, we could predict demand for public services. Looking at the issue of creativity, AI should not just be used for scraping whatever is already available on the internet but should be used for hypothesis formulation. This could be of great value to policy formulation. Looking at some practical examples, we could use AI to predict hospital admissions in real time and patient flow through the healthcare system. This would help greatly with staff planning. At Momentum One Zero, we have worked with the health regulator, RQIA, to develop a natural language processing tool to synthesise patient feedback at scale. The charity Care Opinion records patient experiences, and we can synthesise that data at scale and feed it back to hospital trusts at board level to facilitate decisions on the quality of service.
Liam Maguire
The benefit of AI lies in handling complexity and scale. AI is used to create different models and, by doing sensitivity analysis, we can see how these models are affected by external factors. AI can thus provide information models for others to develop policy and make decisions. AI will be able to give politicians the insight to make the actual decisions; it is not the AI system making those decisions.
How can we break down silos between schools, universities, and government agencies?
Liam Maguire
I would ask whether it is as simple as just breaking down silos. There will always be data silos in different organisations. What matters is knowing the format of the data and how it is structured, so you can have federated data. As long as you know how to access each silo, the data can interact. Rather than breaking down silos, we should accept they exist and focus on how datasets can work together. It is about interoperability and access, not removing ownership or control of data from organisations.
John McShane
There will always be data centres and silos because organisations want to own and secure their data. Interoperability is largely solved as modern systems can communicate with each other. The real challenges are data-sharing agreements and policy. This is a mindset issue. Organisations are often trained to lock data down early, but data should be treated as an asset to be held securely and shared widely where appropriate. The question should be how to store data securely while maximising its value through sharing, rather than restricting access unnecessarily.
Stephen McCabe
There are technical and governance answers, but governance is usually more important. This is about getting silos to interact and providing access to the right people at the right time. Ultimately, it requires political will and mature partnerships between organisations that trust one another and share the same societal goals. AI should inform decisions, not replace democratic processes. Technical safeguards like transparency, human oversight, and accountability help, but trust between organisations and sectors is essential to enabling data sharing and collaboration.
David Crozier
Collaboration is essential for the greater good of Northern Ireland. Projects like the AICC show that cooperation between universities, industry, and partners works when governance and sharing agreements are established upfront. Removing competition and enabling free flow of people, knowledge, and data improves outcomes. Shared infrastructure and facilities allow better returns for organisations and citizens. When government is willing to take managed risks and enable collaboration, it creates better outcomes. Success comes from shared learning, shared data sources, and a willingness to work together constructively.
“Baking governance, ethics, and accountability into projects from the start is critical.”
David Crozier
Paul Grocott
Most data-sharing gateways already exist, and where they do not, agreements can be created. The bigger issue is mindset and culture rather than legislation. There is also a major challenge with legacy systems. Making data accessible is rarely prioritised over new policies or initiatives. We need to find ways to deal with legacy data and systems without creating expensive IT projects that fail. Investment is needed, but the focus should be on mindset change and practical solutions that unlock existing data rather than overhauling everything.
Helen McCarthy
For a start, we need to move away from a ‘me, myself, and I’ mindset and focus on what is best for Northern Ireland. Sharing data improves decision-making, leads to better citizen outcomes, and delivers fiscal benefits. That is because data is a valuable resource. We are all highly connected in Northern Ireland, so we can use that to bring those who are not yet on board into that shared vision. Mechanisms such as transformation programmes, innovation boards, and collaboration between government, industry, and academia are all key here. Above all, cultural change, supported by governance and legislation, is essential to breaking down silos.
How do we build trusted data ecosystems that meet data governance and ethical AI standards?
David Crozier
Trust must be earned, not assumed. Citizens are open to transformation but still have concerns about AI making decisions. Trust comes from transparency in how data is captured and used, clear accountability, and ethical principles applied early. Governance and ethics should be considered at the planning stage, not retrofitted later. There must be routes to redress where decisions negatively impact people. If mistakes are not addressed quickly, they become political problems. Baking governance, ethics, and accountability into projects from the start is critical.
Liam Maguire
One of the challenges is that historical data may contain bias that we are not aware of until later. The issue is not just the data itself but how it is interpreted and queried. We no longer have a single source of truth; we have many interpretations. Trust depends on how data is used and how conclusions are drawn from it. Interpretation is as important as data quality, and multiple sources of truth make it harder to ensure consistency and trust across systems.
Helen McCarthy
Trusted ecosystems start with trust between teams working with the data. In the AI strategy, we recommend AI oversight teams rather than single individuals. These teams monitor bias, drift, accountability, and transparency. When issues occur, organisations must be open about them and explain what has happened. That transparency builds trust. The fact that the AI strategy has been co-created across sectors is also important as it means that everyone understands and owns it. In essence, trust begins with collaboration among key players and is reinforced through governance, redress mechanisms, and openness throughout implementation.
Paul Grocott
Trusted systems are built around clear principles that set expectations for behaviour and use. Transparency is critical. Organisations must understand and manage AI’s limitations, including bias, privacy, intellectual property, and misinformation. Building trust also means starting small, piloting, and scaling successful projects. Large failures quickly undermine confidence. Trust grows when people see tangible service improvements and personal benefits from sharing data. Investment in people and education is essential so they understand how to operate responsibly within these ecosystems.
“Innovation should be piloted responsibly, with security by design and ethical by design principles.”
John McShane
John McShane
Trust is built by removing the mystery around AI. Many people already use AI without realising it. Problems arise when the information generated is inaccurate or unexplainable. Data quality remains central. Systems should be designed so outputs can be traced, errors identified, and corrections made. Segregating processes within AI systems allows better auditing and explainability. AI will make mistakes, but trust depends on being able to understand why errors occur and how they are addressed, rather than relying on black-box systems.
Stephen McCabe
There are technical and governance elements to building trust. Technically, this includes data hygiene, security by design, privacy by default, anonymisation, and zero-trust environments. Governance is equally important, including public data panels, standardised data-sharing agreements, and aligned governance frameworks. Federated and privacy-preserving access models allow sharing without direct data transfer. Healthcare provides a strong example through trusted research environments. Knowing who accesses data and why, combined with strong governance and investment, is key to building trusted ecosystems.
How can we balance innovation with ethical responsibility in AI use?
Paul Grocott
At an organisational level, the UK approach is to be ambitious and pro-innovation, underpinned by clear principles. We need to invest in the people using AI, encouraging them to transform public services while understanding how to deploy technology responsibly and ethically. This includes managing bias, respecting privacy laws, understanding limitations, and being able to explain decisions. For example, if AI supports grading or decision-making in education, it must be explainable to those affected. Supporting people to operate responsibly within a risk-hungry, pro-innovation environment is essential.
“We need to invest in the people using AI, encouraging them to transform public services while understanding how to deploy technology responsibly and ethically.”
Paul Grocott
Helen McCarthy
Innovation means different things in different contexts, whether that is advanced research, industry applications, or practical tools for charities, schools, or the public sector. For the public sector, innovation must deliver positive use, positive inclusion, and positive outcomes. Ethical responsibility begins with how AI is used, so it is a false dichotomy to suggest innovation and ethics are opposing forces. Both are needed, with different guidelines for different contexts.
David Crozier
Ethical and moral implications must be considered alongside innovation. A ‘move fast and break things’ approach is not appropriate in public service contexts. Governance, ethics, and regulatory guidelines need to be built into innovation efforts. Policymakers and officials are accountable for delivering public services in an ethical, responsible, sustainable, and repeatable way, not for commercial metrics like funding rounds. Innovation must be balanced with responsibility so that outcomes can be stood over publicly and politically.
Helen McCarthy
For this balance to work, the public sector must have trust in what the private sector provides. Some innovation that is not yet suitable for public deployment can happen in sandboxes, but there are also developments that the public sector needs industry to bring forward. Getting this balance right requires partnership. Ethical standards determine what can be adopted into public services, while innovation continues elsewhere. That balance depends on trust, clear expectations, and shared responsibility between public and private sectors.
Liam Maguire
Innovation naturally pushes boundaries, but not all boundaries are ethical ones. AI can make innovation more accessible, particularly for startups, and accelerate development. That is not inherently an ethical issue. Ethical concerns arise depending on how systems are governed. Guardrails such as governance, audit trails, and controls are needed, but ethical considerations should not stifle innovation. Innovation has to happen, with appropriate guardrails applied to manage risk rather than prevent progress.
Paul Grocott
Using AI to support startups can replicate existing biases if the underlying data discriminates against minorities, women, or other groups. That is where ethical considerations matter. There is nothing inherently unethical about startups or innovation; the issue is who benefits and who is excluded. Ethical responsibility requires recognising and addressing bias in data and systems so innovation does not reinforce inequality.
Stephen McCabe
The key question is whether innovation is anchored in public value. A risk-based approach is needed, applying safeguards where stakes are highest. Ethical responsibility does not preclude experimentation. Sandboxes allow innovation to move quickly and even fail without exposing the public to harm. This enables learning and progress while protecting citizens. Responsibility is shared, and experimentation should be encouraged in controlled environments.
John McShane
We owe it to society in Northern Ireland to innovate because of real challenges such as an ageing population and a skills shortage. AI can help address these issues. Innovation should be piloted responsibly, with security by design and ethical by design principles. While extreme cases of ethical and unethical use are often clear, there is less consensus in the grey areas. These middle-ground cases need further exploration. Over time, as more use cases emerge, there will be greater clarity on ethical boundaries and how to prevent misuse of AI tools.
Helen McCarthy
Tools built in the private sector should not be adopted in the public sector unless they meet required ethical standards. Citizens and services must be brought along on that journey. For example, there are many existing, ethically sound use cases that could already deliver major public sector benefits. Applying those first will help build confidence, and from there we can progress to adopting more advanced innovations. Ethical checkpoints should also be in place throughout the technology readiness lifecycle to ensure harmful uses are filtered out before deployment.
“We need to show that AI works and can be trusted in a meaningful way that the public can understand.”
Stephen McCabe
How can we build public trust in data and AI among teachers, students, and employees?
Stephen McCabe
Through demonstration. We need to show that AI works and can be trusted in a meaningful way that the public can understand. At Momentum One Zero, we bring cybersecurity and AI together and, in doing that, we are increasingly working with companies to verify that their AI systems are trustworthy.
Liam Maguire
Public trust is a real challenge. We need to address the different levels of trust in terms of gender, age, and educational attainment. We have to build out from where the trust is strongest: the younger, digitally enabled generation. From a teaching perspective, the challenge is around how we assess students. AI will change not only how we teach students but, more importantly, how we assess them. It used to be about testing rote learning; now we need to identify students’ understanding of a subject. That will be the big challenge in education.
“Rather than breaking down silos, we should accept they exist and focus on how datasets can work together.”
Liam Maguire
David Crozier
I would put teachers and employees in one group and students in the other. If we want to communicate the benefits of AI to sector leaders, we need to actually deliver benefits that automate much of the administrative parts of their jobs. This will free them up to teach and inspire their students. Children need to be taught how to use AI to solve problems, not just as a shortcut to do their assignments.
John McShane
There is a generational gap between the two groups. Some of the principles we applied when moving people from paper-based systems to digital systems 30 years ago are very relevant. As with that transition, it is key that we bring people along with us. It will also require a multi-faceted approach, starting with debunking some of the myths about AI. The most successful IT projects involve the stakeholders from the start, so that they take ownership of the system. To achieve that, you need to bring people with you right through the process.
“Sharing data improves decision-making, leads to better citizen outcomes, and delivers fiscal benefits.”
Helen McCarthy
Helen McCarthy
Enhancing training and literacy is a key principle, as there are significant gaps between groups in their understanding of AI. That is why we are advocating that, for every AI project, there is a team member tasked with training those using the system. In addition, most senior leaders are in roles that are not focused on the workings of AI, so we need bite-sized training for them too. At the other end of the spectrum, AI literacy for students should be a given and will be a key skill into the future. On building public trust, we are advocating a citizens’ forum.
Paul Grocott
To build trust in using AI, we need to ensure we are not cementing in existing biases that would leave disadvantaged students, employees, and communities behind. We should deploy AI to make public services better for everyone.