The concept of ‘Intelligent Machinery’ was first introduced by Alan Turing in 1948, and computer programs capable of playing chess followed in the 1950s. Fast forward to the present, and we interact with computer intelligence many times a day. From smart home setups that gradually brighten our rooms to social media algorithms that decide what appears in our feeds, AI programs have become ingrained in the lives of the general population. The release of ChatGPT in 2022 catapulted the concept of AI to the forefront of the public’s awareness. ChatGPT is an example of a chatbot that makes use of generative AI (AI that can create content, e.g. text, images or videos, in response to user prompts). Now our devices can not only help us carry out tasks, but also create new material for us. As a result, interest in and use of AI within daily life continue to grow rapidly.
As AI technology has become ingrained in our lives, it has become increasingly important to ensure these technologies are well governed and do not lead to unintended negative consequences. Our reliance on and trust in these systems are growing, and so are the risks that accompany them. This is where the concept of AI ethics comes into play. The UK government (www.gov.uk) defines AI ethics as ‘A set of values, principles and techniques that employ widely accepted standards to guide moral conduct in the development and use of AI systems.’
Some examples of the dangerous impacts AI can have include cases where these systems have demonstrated inherent biases. Facial recognition software has faced criticism for being racially discriminatory, with its accuracy varying drastically between Caucasian faces and those of other ethnic backgrounds. Beyond providing phone security, this software is used for surveillance and screening, where such errors could result in racial harassment and exacerbate inequality. In the health sector, AI models have been shown to miss diseases in women. As well as these issues of bias, AI produces a great deal of CO2 emissions: whilst the technology is used widely in the fight against climate change, it is also adding to the global carbon footprint.
These ethical issues can broadly be grouped into two categories: carbon emissions and bias/fairness. We’ll now dive into the reasons these issues occur.
Human activity that releases excessive carbon dioxide into the atmosphere is one of the leading causes of climate change. Whilst activities such as travelling by plane often come to mind, all activities produce carbon emissions to a greater or lesser extent, and the creation and maintenance of AI products are no exception. Emissions from AI come from two main sources: the electricity used to train and run the models themselves, and the infrastructure needed to store the data they depend on.
As discussed, a large amount of carbon is produced by the AI technology itself. However, another huge contributor is the data required to build and run it. AI models learn from the data they are trained on, and as models become more sophisticated they require ever larger datasets, which need to be stored somewhere.
Data centres are large buildings containing thousands of servers that store the data. The servers produce vast amounts of heat; therefore, these buildings need cooling to prevent the servers from overheating, which requires both water and electricity (in addition to the electricity that powers the servers themselves). The amount of data generated and stored has increased exponentially over the last decade and continues to do so, thus increasing carbon emissions.
As mentioned above, AI is trained on data, so this data dictates the outputs of the AI. If any biases exist in the data, the AI will inherit them. The facial recognition and disease-detection failures described earlier are examples of this: when a group is under-represented in the training data, the model performs worse for that group.
In addition to the data used, biases can creep into AI during the development of the models. AI models are created by humans, who have innate biases. Data is rarely used in its raw form: it needs to be ‘cleaned’ and manipulated before a model can use it correctly. For example, a person’s age is often calculated from their date of birth. In some systems, if a date is not available it will appear in the data as ‘1900-01-01’. A person handling the data can recognise that this is not a real date of birth and act accordingly, for example by labelling the age for those records as ‘Missing’. Each developer is influenced by their background and perspectives when preparing data for AI models. For example, a developer’s culture and upbringing determine which foods they have been exposed to, so if foods they are unfamiliar with are missing from a dataset, they may not realise the data is unrepresentative. An AI that classifies food groups will then be equally unfamiliar with those foods and misclassify them.
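To make the date-of-birth example concrete, here is a minimal sketch of that cleaning step in Python. The column name date_of_birth and the sentinel value are assumptions for illustration, not a prescribed standard:

```python
from datetime import date

import pandas as pd

SENTINEL = date(1900, 1, 1)  # placeholder some systems store when the date is unknown

def add_age_column(df: pd.DataFrame, today: date) -> pd.DataFrame:
    """Derive age from date of birth, treating the sentinel date as missing."""
    df = df.copy()
    dob = pd.to_datetime(df["date_of_birth"]).dt.date
    age = dob.apply(
        lambda d: today.year - d.year - ((today.month, today.day) < (d.month, d.day))
    )
    # Label ages derived from the sentinel as missing instead of ~124 years old
    df["age"] = age.where(dob != SENTINEL, other=pd.NA)
    return df
```

Without that final step, every record with the placeholder date would quietly be assigned an age of over 120, skewing anything trained on that column.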
Whilst AI can lead to some very serious negative consequences, there are several strategies that can be implemented to mitigate them.
AI products are used and consumed by the public. It is, therefore, crucial that information on the development, outputs and usage of these products is openly available to reduce the risk of ethical violations. Diverse perspectives during development and bias measurement tools can also help prevent bias in models and underlying data.
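As a sketch of what a simple bias measurement might look like, the function below compares a model’s accuracy across demographic groups; large gaps between groups are a warning sign worth investigating. (Dedicated toolkits such as Fairlearn and AIF360 offer far richer metrics; this illustrates only the core idea, with made-up data.)

```python
from collections import defaultdict

def accuracy_by_group(labels, predictions, groups):
    """Accuracy computed separately for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for label, pred, group in zip(labels, predictions, groups):
        total[group] += 1
        correct[group] += int(label == pred)
    return {group: correct[group] / total[group] for group in total}

# Made-up predictions: the model is noticeably worse for group 'b'
print(accuracy_by_group(
    labels=[1, 0, 1, 1, 0, 1],
    predictions=[1, 0, 0, 1, 0, 0],
    groups=["a", "a", "b", "b", "a", "b"],
))
# {'a': 1.0, 'b': 0.333...}
```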
There are also measurement tools to check how eco-friendly AI models are. This helps identify issues and inform business strategies to reduce emissions. For example, understanding how much CO2 is being generated by a company’s AI model can help them allocate an appropriate carbon footprint budget to keep this in check.
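One open-source example of such a tool is CodeCarbon, which estimates the CO2 emitted while a piece of code runs. A minimal sketch, assuming there is a training routine to measure (the train_model function and project name below are hypothetical stand-ins):

```python
from codecarbon import EmissionsTracker

def train_model():
    # Hypothetical stand-in for a real training loop
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="food-classifier")  # hypothetical project name
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent emitted

print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```

Figures like this can then feed directly into the kind of carbon footprint budget described above.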
Ensuring continuous monitoring of AI products is also important. The data a model sees shifts over time, so its fairness and footprint can shift too. Businesses need to be aware of any issues that arise as data evolves so they can mitigate them in time.
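A common way to detect such data shift is to compare the distribution of a feature in live data against the data the model was trained on. Here is a minimal sketch using a two-sample Kolmogorov–Smirnov test from SciPy (the threshold and data are illustrative):

```python
import random

from scipy.stats import ks_2samp

def feature_drifted(reference, live, alpha=0.01):
    """Flag drift in a numeric feature with a two-sample KS test."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha  # small p-value: the distributions likely differ

# Illustrative data: the live feature has drifted to a higher mean
reference = [random.gauss(0.0, 1.0) for _ in range(1_000)]
live = [random.gauss(0.5, 1.0) for _ in range(1_000)]
print(feature_drifted(reference, live))  # almost certainly True
```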
There are also several strategies that AI developers can leverage to reduce the environmental impact of AI products, such as developing the underlying models carefully and ensuring they run as efficiently as possible.
This blog post has outlined some of the ethical considerations of AI, how they come about, and how they can be addressed by those who create these technologies. Unfortunately, AI bias and carbon emissions cannot be completely avoided; however, we can minimise them by employing ethical frameworks when creating these dynamic products. Striving to be as transparent as possible throughout the lifecycle of an AI tool helps identify potential biases and allows us to monitor its carbon footprint. Being vigilant about the way the underlying models are developed, and ensuring they run as efficiently as possible, can reduce carbon emissions. It is important that those creating AI are mindful of its development and of the data used to train it. AI can be extremely powerful, and with great power comes great responsibility!