How do data challenges change as new battery technologies are introduced?

December 8, 2020

A Q&A with Peaxy Chief Scientist John Ervin.

What are some of the more important attributes of batteries that differ widely across battery types? How do these drive changing needs for data collection?

A typical battery degradation curve is often modeled in a semi-empirical manner. There are, of course, efforts to do full physics modeling, starting for instance with transport equations and various models for interfaces, but these models are often very expensive and increasingly irrelevant once you account for things like manufacturing variability, handling, electrical transients and so on. Fully empirical models can perform reasonably well, but they often predict degradation that is too smooth. Real-life batteries often go off a cliff after a certain amount of usage. So we are seeing these semi-empirical models quite often. The part of the model that describes the rapid degradation phase is highly technology-dependent and can usually only be approximated within a range using laboratory data.
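The shape described above can be sketched as a toy semi-empirical model: a smooth power-law fade term plus a sigmoidal "knee" term that switches on near end of life. The functional form and all parameter values here are illustrative assumptions, not a fitted model for any real chemistry.

```python
import math

def capacity_fraction(cycles, a=2e-4, b=0.5, knee=1200, k=0.01):
    """Toy semi-empirical capacity model (illustrative, not fitted).

    smooth_fade: power-law loss, the gradual part of the curve.
    cliff_fade:  sigmoidal loss that turns on near the knee cycle,
                 capturing the rapid "off a cliff" degradation phase.
    """
    smooth_fade = a * cycles**b
    cliff_fade = 0.2 / (1.0 + math.exp(-k * (cycles - knee)))
    return max(0.0, 1.0 - smooth_fade - cliff_fade)
```

In practice the knee parameters are exactly the technology-dependent part: they can only be bracketed within a range from laboratory cycling data for each chemistry.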

What do you see as the #1 pain point that companies face from a data perspective, particularly in deploying different types of batteries?

We see different problems with different organizations. These aren't always technical. We have integrator customers that have enough cachet that OEMs are willing to share their test cell data with them. We have spoken to operators that have now been in business long enough that they are starting to realize keeping all of their field data is expensive. We have some military folks that eschew networked edge data collection entirely because of security concerns. But the number one issue I see is the need for a highly reliable predictive battery lifecycle model, one that can dynamically provide bottom-line metrics for projected battery health by serialized asset. Again, these models are highly dependent on the batteries being used, and to a large extent one has to start from scratch when deploying a new technology.
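One concrete bottom-line metric by serialized asset is remaining useful life: the cycles left before capacity crosses an end-of-life threshold. The sketch below assumes you already have a per-technology degradation model as a callable; the fleet serial numbers and the linear model used here are hypothetical placeholders.

```python
def remaining_useful_life(capacity_model, cycles_used, eol_threshold=0.8,
                          max_cycles=10000):
    """Cycles remaining until capacity_model(cycles) drops below eol_threshold.

    capacity_model: callable mapping cumulative cycles -> capacity fraction.
    Returns None if the threshold is never crossed within max_cycles.
    """
    for c in range(cycles_used, max_cycles):
        if capacity_model(c) < eol_threshold:
            return c - cycles_used
    return None

# Hypothetical fleet: serial number -> cycles already used.
fleet = {"SN-001": 300, "SN-002": 1100}
# Placeholder linear fade model, 0.015% capacity lost per cycle.
rul = {sn: remaining_useful_life(lambda c: 1.0 - 1.5e-4 * c, used)
       for sn, used in fleet.items()}
```

The point of the semi-empirical approach above is that swapping in a new battery technology means refitting `capacity_model`, while the RUL bookkeeping per serial number stays the same.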

As new battery technologies become available, a common scenario will be that they will augment existing battery installations, and over time make up a larger portion of the grid, based on cost and deployment capabilities. How will this impact a company’s ability to normalize their battery data into a format that can drive insights?

The battery technology per se need not affect the data normalization problem, because that is mostly an interaction with the local battery management system (BMS). The issue is creating a degradation curve that interacts with an intelligent dispatch strategy. As an example, if I use my installation for load shifting where I charge up my batteries at one time (say during the day with PV) and discharge later (say at night when there is no sun), I might expect my degradation to be affected by my minimum state of charge after discharge, which is reasonably true. This insight affects the optimal sizing of my installation. Let's say I see the least degradation if I only go down to 50% state of charge (not really true), so having a larger installation dispatched to only 50% might make more economic sense than an otherwise rightsized installation that I dispatch down to 0% SOC at every cycle. These optimization problems, as they are called, can get complicated pretty quickly even with a single technology, let alone with a typical battery grid installation that has different types of technologies at work simultaneously.
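The sizing trade-off in that example can be made concrete with a toy economic comparison: lifetime energy delivered per dollar of capex, for a rightsized pack cycled deeply versus an oversized pack cycled shallowly. All numbers here (capacity, cost, cycle lives) are made up for illustration; a real analysis would pull cycle life at each depth of discharge from the technology-specific degradation model.

```python
def lifetime_energy_per_dollar(capacity_kwh, dod, cycle_life_at_dod,
                               cost_per_kwh=200.0):
    """Toy metric: lifetime energy delivered (kWh) per dollar of capex.

    dod: depth of discharge per cycle (0.5 = cycling from 100% to 50% SOC).
    cycle_life_at_dod: assumed cycles to end of life at that depth of discharge.
    """
    lifetime_energy = capacity_kwh * dod * cycle_life_at_dod
    capex = capacity_kwh * cost_per_kwh
    return lifetime_energy / capex

# Rightsized 100 kWh pack cycled to 0% SOC vs. a 200 kWh pack cycled to 50%;
# both deliver 100 kWh per cycle, but the shallow-cycled pack lasts longer.
deep = lifetime_energy_per_dollar(100, dod=1.0, cycle_life_at_dod=2000)
shallow = lifetime_energy_per_dollar(200, dod=0.5, cycle_life_at_dod=6000)
```

With these assumed cycle lives, the oversized shallow-cycled installation wins, which is the point of the example: the answer hinges entirely on the degradation-versus-SOC curve, and that curve changes with each battery technology.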

How does scalability of the digital infrastructure come into play? What are some of the signposts companies should look out for in order to ensure that their digital capabilities grow along with their business?

Many companies these days are quite comfortable with cloud deployments, microservice stacks and elastic storage and compute. I wouldn't have said that five years ago, but I think it is true today. The details of battery data do come into play as operators scale. When choosing a data partner, battery companies, whether R&D labs, manufacturing houses, system integrators or field operators, should look for experience with battery-specific challenges and ask vendors questions about cost and quality.

  • First regarding cost: Data management for wind has, to a large extent, boiled down to a cost per MWh metric, which can be readily understood. Battery data management isn’t there yet but consumers of these services should push vendors such as Peaxy to speak in these terms, even if we can only provide a range.

  • Then quality: Good data management around batteries impacts important business metrics. It improves warranty compliance, lease pricing, the ability to add metrics to long-term service agreements, and perhaps most importantly, safety. A quality vendor will provide full access to all battery data threaded down to the serial number level, over the entire lifecycle of the asset. Also critical is full traceability over the battery lifecycle, including individual bill-of-materials-level components and lot numbers. So an important quality signpost is whether the company you work with cares about these metrics and can explain not only how the data value chain will impact them, but also how it will ultimately increase your bottom line.


At Peaxy, we provide full access to all battery data threaded by serial number, over the lifecycle of the asset, including dynamic RUL and degradation computations by serialized battery asset. We also have experience with computing near-real-time digital twins for each serialized battery, over the life of the battery.

Our deployments are typically done in 120 days, often with far more speed and efficiency than an in-house development effort or a generalized analytics platform with extensive customizations.

Want to know more?

Contact us to learn more about Peaxy Lifecycle Intelligence for Batteries.