AI and Energy: What does the future hold?

AI poses many questions for our work at Citizens Advice: how will AI affect the way services are delivered? Will AI break down or reinforce unequal power relations in essential markets? And what does it mean to be a consumer advocate in an AI-enabled world?
Over the last few weeks, we’ve been focusing on what these questions might mean in the energy market. What we quickly realised is that many of these questions don’t yet have clear answers, partly due to the novelty of the technology, and partly because the regulatory environment in the UK is still emerging.
So instead of concrete answers, we want to share some key findings from our recent review of AI in the energy retail sector and some key questions that we’ll be thinking about as we develop this work further.
Finding 1: For many energy consumers, AI is already here
Whether they know it or not, AI is already a feature of many consumers’ experiences of the energy market. Consumers are most likely to interact with AI via customer service. In addition to more obvious uses such as chatbots, AI is also being used for a variety of backroom purposes: for example, analysing call transcripts and indicating where a customer might be vulnerable.
Alongside reducing costs and changing the way customer service works, AI also offers potential solutions to some very thorny energy problems. Chief among these is how consumers can adapt to benefit from using electricity when it’s cheapest in a system dominated by renewable generation.
AI home energy management systems, while marginal at this point, could provide a solution: in homes with many devices that consume and supply energy, they could automate decisions about when each device runs, drawing on a wide variety of inputs.
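To make this concrete, the core decision such a system automates can be sketched in a few lines: given a day of half-hourly electricity prices, pick the cheapest window in which to run a flexible appliance. This is a minimal sketch; the function name and price figures are illustrative, not drawn from any real tariff or product.

```python
def cheapest_window(prices, slots_needed):
    """Return (start index, total cost) of the cheapest contiguous run
    of `slots_needed` half-hour price slots."""
    if slots_needed > len(prices):
        raise ValueError("not enough price slots")
    best_start = 0
    best_cost = sum(prices[:slots_needed])
    window = best_cost
    for i in range(1, len(prices) - slots_needed + 1):
        # Slide the window: drop the slot leaving, add the slot entering.
        window += prices[i + slots_needed - 1] - prices[i - 1]
        if window < best_cost:
            best_cost, best_start = window, i
    return best_start, best_cost

# Illustrative half-hourly prices (pence/kWh) for part of a day.
prices = [28, 26, 22, 15, 12, 11, 13, 19, 25, 30]
start, cost = cheapest_window(prices, 3)  # e.g. a 90-minute appliance cycle
print(f"cheapest 3-slot window starts at slot {start}, costing {cost}")
```

A real home energy management system would weigh many more inputs (forecast generation, household routines, battery state), but the principle of shifting flexible demand to the cheapest slots is the same.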
Finding 2: AI raises some potential risks for consumers
Many potential risks to consumers emerged in our review, including AI’s impact on people’s ability to access ‘traditional’ services like phone lines (if suppliers shift towards more digital methods of contact and reduce the provision of phone lines), privacy, discriminatory pricing, potential for misuse and scams, and competition impacts. However, two major themes stood out:
There are numerous high-profile examples in other sectors of AI’s tendency to work in discriminatory ways. In an energy market already marked by diverging outcomes for different groups of customers, these kinds of risks could be a real issue. For example, AI models have been used to recommend personalised debt repayment plans, and could be used to do this for consumers with energy debt. While there are no official statistics on racially minoritised consumers’ experiences of energy debt collection, our data shows that some BAME groups come to us with energy debt issues at a higher rate than other groups. There is a risk that AI systems could incorporate and amplify these existing biases. A particular focus on ensuring that AI works for people in vulnerable circumstances and with protected characteristics will be needed from both regulators and market actors.
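One basic safeguard against this kind of bias is to monitor whether an AI-driven process produces diverging outcomes across groups. The sketch below shows the simplest version of that check, a disparity ratio between group-level outcome rates. All figures and group names are hypothetical, invented for illustration; they are not real Citizens Advice data.

```python
# Hypothetical counts of clients and recorded energy-debt issues, by group.
# These numbers are illustrative only.
cases = {
    "group_a": {"clients": 1000, "debt_issues": 90},
    "group_b": {"clients": 1000, "debt_issues": 150},
}

def issue_rates(cases):
    """Rate of energy-debt issues per group."""
    return {g: c["debt_issues"] / c["clients"] for g, c in cases.items()}

def disparity_ratio(rates):
    """Ratio of the highest to the lowest group rate; 1.0 means parity."""
    return max(rates.values()) / min(rates.values())

rates = issue_rates(cases)
print(rates)
print(round(disparity_ratio(rates), 2))
```

A ratio well above 1.0 doesn’t prove an AI system is discriminating, but it flags where regulators or suppliers should look more closely before automating decisions like debt repayment plans.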
AI systems can lack transparency, not only in the narrow technical sense of how and why they make decisions, but also in terms of whether and why organisations use them, and who is responsible for the impacts of the decisions made. This opacity, along with the technical complexity of AI, could pose particular problems for consumers seeking redress when something goes wrong. It is also one reason why outcomes-based regulation may be the best approach to regulating AI in the energy market: by defining what outcomes they want the market to achieve, regulators will be in a much stronger position to tell whether or not AI is delivering for people.
It’s also worth remembering that, despite all the claims made for them, AI systems still frequently get things wrong. It remains to be seen whether AI will deliver on promises to improve services, and whether it can be relied upon to make decisions in ‘high-risk’ scenarios (e.g. when dealing with affordability issues) where the health and wellbeing of consumers are at risk.
Finding 3: The UK’s regulatory approach is beginning to take shape
Up to now, many of the more high-profile aspects of the UK’s regulatory approach to AI have been aimed at the potential ‘existential’ challenges that AI poses. But in the background, the government has been setting out its approach, with central government taking up a coordinating role and sectoral regulators doing much of the legwork in their respective areas.
As part of the government’s response to the AI White Paper, the Department for Science, Innovation and Technology has asked regulators including Ofgem to produce an analysis of AI-related risks and an action plan by the end of April.
We’re already engaging with both DSIT and regulators on many of the issues laid out above. Moving forward, we’ll be focusing on developing our approach to AI at Citizens Advice, leveraging the organisation’s unique position and expertise to ensure that AI is used to empower consumers.