AI and battery management systems

AI is with us and it’s being incorporated into battery management systems (BMS), but its implementation faces challenges, writes Peter Donaldson.

Machine learning (ML) is applied to improve the accuracy of state estimation, with neural networks and support vector machines analysing historical data, driving patterns and environmental conditions to estimate, predict and optimise battery states of charge, health and power. It is also used in thermal management for predictive cooling and dynamic control, as well as in fault detection, charging optimisation, energy management and more.
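As a rough illustration of the state-estimation idea, the sketch below trains a tiny neural network to map terminal voltage, current and temperature to state of charge. The data, feature ranges and network size are all invented for the example; a real BMS would train on logged pack telemetry and a far richer feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: features are [terminal voltage, current, temperature].
# The target state of charge (SoC) here is a crude stand-in that tracks voltage.
X = rng.uniform([3.0, -50.0, -10.0], [4.2, 50.0, 45.0], size=(500, 3))
y = (X[:, 0] - 3.0) / 1.2

# Normalise features so gradient descent behaves well
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sigma

# One-hidden-layer network trained with plain full-batch gradient descent
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(Xn @ W1 + b1)          # hidden activations
    pred = (h @ W2 + b2).ravel()       # predicted SoC
    err = pred - y
    # Backpropagation of the mean-squared error
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    gh = err[:, None] @ W2.T * (1 - h ** 2)
    gW1 = Xn.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

def estimate_soc(voltage, current, temp_c):
    x = (np.array([voltage, current, temp_c]) - mu) / sigma
    return float((np.tanh(x @ W1 + b1) @ W2 + b2)[0])
```

The same pattern extends to health and power estimation; only the target variable and the features change.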

ML models rely heavily on high-quality, extensive datasets for training and validation, but obtaining these for batteries is far from easy. Battery performance is affected by many variables, including temperature, charge/discharge cycles, ageing and environmental conditions, and collecting data on these is complex and expensive.

What’s more, inconsistencies in collection methods across the various manufacturers and models can lead to biased or incomplete datasets, limiting the effectiveness of AI models.

Understanding how a battery is used and degrades over time is vital, but the degradation process is highly non-linear and therefore hard to predict. There are also significant differences between battery chemistries, manufacturing batches and operating conditions.

Creating algorithms that can take these variations into account is a huge hurdle for AI developers, while the lack of standardised testing protocols for ageing complicates the construction of reliable predictive models.

While BMSs must be capable of real-time monitoring and decision-making, AI algorithms – especially deep learning models – are computationally intensive and may struggle with the limited processing power and memory available in embedded systems.

Lightweight AI models that can run on edge computers may offer solutions, but with current technology their accuracy and functionality are limited.
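One common way to shrink a model for an embedded target is post-training quantisation, storing weights as 8-bit integers plus a scale factor instead of 32-bit floats. A minimal sketch, not tied to any particular BMS toolchain:

```python
import numpy as np

def quantise_int8(weights):
    """Symmetric post-training quantisation: float32 -> int8 plus one scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    """Recover approximate float weights for inference on the edge device."""
    return q.astype(np.float32) * scale
```

The trade-off the article describes shows up directly here: memory drops fourfold, but every weight now carries a rounding error of up to half the scale step.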

Battery management is a safety-critical function, and most safety-critical software is deterministic, meaning the same inputs always produce the same outputs. However, AI systems are inherently probabilistic, meaning they deal with uncertainty and variability, and can therefore produce uncertain or incorrect outputs.

This uncertainty comes from several sources, including data limitations, model complexity and environmental variability. Training data might not cover every possible scenario, leaving gaps in the knowledge available to the model. Highly complex models, such as deep neural networks, can capture intricate patterns, but may also over-fit to the training data, thereby reducing their ability to generalise to situations they haven't encountered before.

Further, temperature fluctuations and changes in driving habits can introduce variabilities not fully accounted for in the model. This is why AI models often provide confidence levels or probability scores alongside their predictions.
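One simple, model-agnostic way to produce such confidence scores is a bootstrap ensemble: train several models on resampled copies of the data and report the spread of their predictions as uncertainty. A sketch on invented capacity-fade data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fleet data: cycle count vs. remaining capacity (state of health)
cycles = rng.uniform(0, 1000, 200)
soh = 1.0 - 0.0002 * cycles + rng.normal(0, 0.02, 200)  # noisy linear fade

# Bootstrap ensemble of linear fits; each model sees a resampled dataset
models = []
for _ in range(50):
    idx = rng.integers(0, len(cycles), len(cycles))
    models.append(np.polyfit(cycles[idx], soh[idx], 1))

def predict_soh(cycle_count):
    """Return a prediction plus the ensemble spread as a confidence measure."""
    preds = np.array([np.polyval(m, cycle_count) for m in models])
    return preds.mean(), preds.std()
```

A downstream controller can then treat a prediction with a wide spread differently from one the ensemble agrees on.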

AI models can make mistakes, however, especially in situations not well represented in the training data. They may fail to predict a sudden voltage drop in extreme heat or cold if the training data does not cover those conditions adequately. Manufacturing defects or unexpected wear may also affect predictions.

There is also an interpretability problem, particularly where deep learning is concerned, as it can be difficult to understand why a model made a particular prediction.

If an AI-driven BMS predicts an imminent battery failure, it might be tough for engineers to work out whether that prediction is based on good data or on some artefact of the model's training.

One approach taken by researchers and engineers involves the development of methods to measure and communicate the uncertainty in AI predictions, allowing for more informed decision-making. Another aims to make the training of models more robust through the use of more diverse and comprehensive datasets.

They are also exploring combinations of AI and traditional, rule-based systems, a hybridisation intended to act as a safety net against uncertainty and wrong decisions.
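A hybrid of this kind can be as simple as letting deterministic rules bound whatever the AI proposes. The temperature window and current caps below are invented for illustration, not real cell specifications:

```python
def rule_based_limit(temp_c, soc):
    """Deterministic safety envelope: conservative charge-current caps (C-rate)."""
    if temp_c < 0 or temp_c > 45:
        return 0.0   # no charging outside the safe temperature window
    if soc > 0.8:
        return 0.5   # taper near full charge
    return 2.0       # nominal limit

def safe_charge_rate(ai_suggestion, temp_c, soc):
    # The AI layer may propose any rate; the rule layer has the final word
    return min(max(ai_suggestion, 0.0), rule_based_limit(temp_c, soc))
```

Because the rule layer is deterministic, it can be verified with conventional safety-critical methods even where the AI layer cannot.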

Continuous learning complements all these approaches by implementing systems that can update and improve their models as new data becomes available.
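In its simplest form, continuous learning means updating model parameters as each new observation arrives, rather than retraining from scratch. A minimal sketch, assuming a linear state-of-health model and stochastic gradient updates:

```python
import numpy as np

class OnlineSoHModel:
    """Linear state-of-health model updated one observation at a time,
    so the estimate keeps adapting as new telemetry arrives."""

    def __init__(self, lr=0.05):
        self.w = np.array([1.0, 0.0])  # [intercept, slope vs. normalised cycles]
        self.lr = lr

    def predict(self, cycles):
        return self.w[0] + self.w[1] * (cycles / 1000.0)

    def update(self, cycles, measured_soh):
        # One stochastic-gradient step on the squared error for this sample
        x = cycles / 1000.0
        err = self.predict(cycles) - measured_soh
        self.w -= self.lr * err * np.array([1.0, x])
```

Each call to update() nudges the model towards the latest measurement, so gradual drifts in pack behaviour are tracked without a full retraining pass.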
