Artificial Intelligence (AI) and its many use cases have seen exponential growth in the last few years, making it one of the most popular and debated technologies of the decade. As adoption expands, concerns around ethics, data transparency and regulatory compliance have emerged. Chloé Wade, VP of International Financial Services UK at IDA Ireland, explores the importance of implementing internal guidelines and adhering to new government regulations, arguing that ethical AI must be prioritised.
The latest advancements in AI have captured global attention, creating headlines and sparking discussion around the world. Over 100 million weekly users are flocking to OpenAI's ChatGPT, and new use cases continue to emerge as the technology's potential is explored – from medical diagnosis to manufacturing robotics and self-driving cars. A study conducted by the Office for National Statistics last year found that one in six UK organisations has implemented some form of AI, contributing to a market valued at over £16.8 billion.[1]
This rapid growth is raising questions about the ethical implications of the technology. Another study, by Forbes Advisor, revealed that over half of the UK population is concerned about the use of AI, particularly regarding misinformation, privacy, transparency and displacement effects.[2]
What are these concerns, how are regulatory bodies responding, and what are the three key considerations for ensuring an ethical AI framework?
Regulatory guidance coming from the EU
A recent YouGov survey revealed the two main business concerns with AI: 50% of UK business leaders cited future AI regulation, and 46% the use of invalid or biased data.[3]
New measures are being established to ensure AI is ethically oriented, most notably the EU Artificial Intelligence Act 2024, which officially came into force on 1 August 2024. Rigid as such frameworks can be, several nations are developing rules similar to the European Commission's to safeguard the public while still encouraging organisations to realise AI's many benefits.
The UK has adopted a 'pro-innovation approach' to AI regulation but has yet to introduce a statute of its own; a regulatory bill proposed in March 2024 is still under review. The EU AI Act does, in fact, affect some UK businesses: those that "develop or deploy an AI system that is used in the EU", according to the CBI. Instilling moral and ethical values in these models, especially in significant decision-making contexts, remains a challenge, and company codes of ethics and regulatory frameworks are the two main ways AI ethics can be put into practice.
Thoroughly addressing ethics and responsibility in AI software development can give companies a competitive advantage over those that neglect these matters. Reporting and evaluation are becoming essential as regulations like the EU AI Act come into force, helping companies manage the risks associated with AI. The ethos is to ensure that AI systems aid rather than replace human decision-making. AI lacks the ability to make ethical decisions or understand moral nuance, making human oversight necessary, especially in critical applications that affect well-being and social justice. The use of AI as a tool should be encouraged to improve workers' efficiency and productivity while maintaining alignment with new legislation and ethical codes, such as the BCS Code of Conduct.[4]
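To make that oversight principle concrete, the sketch below shows one minimal human-in-the-loop pattern: the model assists only with clear-cut cases, and anything borderline or consequential is escalated to a person. Everything here – the `Decision` record, the `decide` helper and the 0.9 threshold – is an illustrative assumption, not a prescribed implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject: str        # e.g. a loan application reference (hypothetical)
    model_score: float  # model confidence in the range [0, 1]
    rationale: str      # explanation surfaced to the human reviewer

def decide(d: Decision, human_review: Callable[[Decision], bool],
           auto_threshold: float = 0.9) -> bool:
    """Auto-approve only clear-cut cases; escalate everything else.

    The AI acts as an aid for routine decisions, while a person
    remains accountable for any borderline or consequential outcome.
    """
    if d.model_score >= auto_threshold:
        return True              # routine case: the model may assist directly
    return human_review(d)       # consequential case: a human decides
```

In practice, the escalation path would feed a reviewer queue with the model's rationale attached, so the human can see why the system was uncertain before making the final call.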
Key steps for internal implementation of ethical AI
Ireland is one country that has established a substantial number of foundational processes to prepare for the AI market's expected long-term, exponential growth. With the publication of its National AI Strategy, 'AI – Here for Good' [5], the Irish Government expects civil and public service organisations to embrace AI responsibly and innovatively to enhance the delivery of existing public services and introduce new ones. Ireland has also mandated that all AI applications within the Public Service adhere to the seven requirements for trustworthy AI set out by the European Commission's High-Level Expert Group on AI in its Ethics Guidelines for Trustworthy AI.[6] But what should companies do internally?
- Understanding the role of AI within the company and how data is used
Companies should first recognise that AI works best in cooperation with people, and identify where it can create positive impact. During the preliminary implementation stages, business leaders must address how they process, store and extract data within their value ecosystem. With organisational objectives and corporate strategies differing between firms, the capabilities of specific AI models – including Machine Learning (ML) models and generative models – should be explored to determine the optimal use of the technology within operations.
Several strategies can increase the trustworthiness of AI software. Risk evaluation is fundamental among them, giving developers and prompt engineers a tool to determine whether a use case is high-risk. This measure reinforces ethical considerations and the role of the individuals driving consequential processes. For example, product-specific approaches should be used in firms that internally deploy or sell advanced AI B2B software solutions, as the risks associated with data and technologies vary. A set of responsible AI guidelines is then developed from these assessments, outlining the key steps for mitigating risk and controlling outcomes, specifically in terms of interpretability, bias, validity and reliability (a simple triage of this kind is sketched below). In addition to diverse internal perspectives, companies will benefit greatly from collaboration with peers, researchers and government agencies as they build ethical AI frameworks.
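As a purely illustrative example of such a triage, a first pass might look like the sketch below. The tier names, use cases and required checks are assumptions loosely modelled on the EU AI Act's risk categories, not legal guidance or a specific firm's rubric.

```python
# Toy risk triage for internal AI use cases, loosely modelled on the
# EU AI Act's tiers. All entries are illustrative assumptions.
RISK_TIERS = {
    "unacceptable": {"social_scoring"},
    "high": {"credit_scoring", "recruitment_screening", "medical_diagnosis"},
    "limited": {"customer_chatbot"},
    "minimal": {"spam_filtering", "document_summarisation"},
}

REQUIRED_CHECKS = {
    "high": ["interpretability review", "bias audit", "validity testing",
             "reliability monitoring", "human oversight"],
    "limited": ["transparency notice to users"],
    "minimal": [],
}

def triage(use_case: str) -> tuple[str, list[str]]:
    """Return the risk tier and the mitigations the guidelines attach to it."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            if tier == "unacceptable":
                raise ValueError(f"{use_case}: prohibited, do not deploy")
            return tier, REQUIRED_CHECKS[tier]
    # Unknown use cases are never waved through silently.
    return "unreviewed", ["escalate to the AI governance board"]

print(triage("credit_scoring"))
# -> ('high', ['interpretability review', 'bias audit', 'validity testing',
#              'reliability monitoring', 'human oversight'])
```

The design point is less the specific tiers than the default: any use case not yet classified is escalated rather than assumed to be low-risk.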
- Implementing change management processes and building trust
Trust remains at the centre of all ethical AI challenges. Although few jobs will be fully automated in the near future, a growing range of tasks is being automated. The risk of displacement and the pace of digital transformation have made professionals fearful for their own careers, making trust-building a core principle of any ethical AI programme. Companies may therefore want to provide resources and opportunities to help their workforce familiarise themselves with AI technology, regardless of their specific responsibilities. Identifying new roles, upskilling and retraining are all employee growth and enrichment methods that can be harnessed to lessen potential anxieties in the long term.
Beyond internal attitudes, trust is paramount in the new business environment. Companies that market and sell AI technologies must ensure their clients can trust that models are built responsibly. In the digital economy there are now practical and commercial reasons, alongside ethical and moral ones, for advocating trust during digital transformation. Firms must build trust in their AI products and software, and throughout their organisation, or risk being forced out of the market by failing to embrace radical innovations and their challenges. Ireland's National AI Strategy, for example, is deeply rooted in trust and transparency, with "ensuring good governance to build trust and confidence for innovation to flourish" as a core principle.
- Upskilling and building specialised teams
With new legislation being introduced, companies will need to organise their business functions strategically in response. Commitment and participation from multiple organisational levels – engineering, product management, legal counsel and senior leadership – are needed to carry out essential, continuous practices, such as collaboratively enhancing the company's AI governance framework.
Having a set of responsible, specialised AI experts is a necessity in the digital economy, yet developing young professionals to fuel the talent pipeline is needed more than ever. The changing nature of work, in terms of roles and responsibilities, has highlighted challenges around skills mismatch, education and redeployment. Despite these challenges, the Science Foundation Ireland (SFI) research centres dedicated to AI – ADAPT [7] and Insight [8] – are committed to producing skilled graduates in the field. Ireland was also the first country in the world to develop a postgraduate MSc in Artificial Intelligence in collaboration with industry. These opportunities demonstrate Ireland's European, and potentially global, leadership in ethical AI, having been recognised as the EU Centre for AI Ethics, with organisations such as the Dublin-based Idiro AI Ethics Centre supporting businesses with compliance, innovation and responsible practices.
By Chloé Wade, VP of International Financial Services UK at IDA Ireland.
[3] https://business.yougov.com/content/47618-risks-and-opportunities-around-ai
[4] https://www.bcs.org/media/2211/bcs-code-of-conduct.pdf
[5] https://enterprise.gov.ie/en/publications/publication-files/national-ai-strategy.pdf
[6] https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai