Battery cyclers Q&A: common pitfalls and solutions

August 11, 2020

[Image: detail view of a submarine engine]
A conversation with Manuel Terranova, Peaxy’s CEO, and Joshua Gallagher, Peaxy’s senior director of software engineering.


Q: What do you see as the #1 problem battery companies face in dealing with battery cycler data?
A: Regardless of whether a company uses one or multiple battery cyclers, the biggest problem is that cycler data typically describes the performance of a single battery under test with little other context. Comparing the performance of two batteries is a manual, time-consuming exercise. When you try to do this across battery cycler solutions, it becomes an even bigger headache. While a cycler’s software can describe performance, it doesn’t take into account the rich metadata describing in detail the individual components that went into each battery.

Q: Do you find companies typically rely on a single battery cycler solution, or do they often end up relying on several?
A: We often see companies that have several different solutions in use at the same time. We’ve worked with Neware, BioLogic, MACCOR, Arbin, PEC, MTI and others. This, of course, creates big challenges in correlating the data, since each solution allows viewing and analyzing only its own data. During a project, a fair amount of time is spent normalizing data from the various cycler solutions to create a single source of truth that allows in-depth analysis to take place.

Q: Why is that important?
A: By parameterizing a complete set of inputs, it’s possible to gain important insights into their impacts on battery performance. This isn’t easy though! We’re talking about data across multiple systems that don’t normally talk to each other, stored in different formats such as spreadsheets, databases, and binary files. In the Peaxy Lifecycle Intelligence (PLI) platform, battery manufacturers can bring this data mess into a single normalized view. Only then can they start to correlate all of the inputs including materials used during manufacture such as carbon felt, chemical ingredients, metals and even process values such as sintering temperature, all in a single view.
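To make the normalization idea concrete, here is a minimal sketch of mapping two vendors' cycler exports into one common record. The column names, units, and the `CycleRecord` schema are all illustrative assumptions, not any vendor's or Peaxy's actual format:

```python
from dataclasses import dataclass

# Hypothetical normalized record; real cycler exports differ by vendor
# and often arrive as spreadsheets, databases, or binary files.
@dataclass
class CycleRecord:
    battery_id: str
    cycle: int
    charge_capacity_ah: float
    discharge_capacity_ah: float

def from_vendor_a(row: dict) -> CycleRecord:
    # "Vendor A" column names are made up for illustration.
    return CycleRecord(
        battery_id=row["Battery"],
        cycle=int(row["Cyc#"]),
        charge_capacity_ah=float(row["Chg_Cap(Ah)"]),
        discharge_capacity_ah=float(row["DChg_Cap(Ah)"]),
    )

def from_vendor_b(row: dict) -> CycleRecord:
    # "Vendor B" reports mAh under different headers; convert to Ah.
    return CycleRecord(
        battery_id=row["cell_id"],
        cycle=int(row["cycle_index"]),
        charge_capacity_ah=float(row["charge_mAh"]) / 1000.0,
        discharge_capacity_ah=float(row["discharge_mAh"]) / 1000.0,
    )

# Once both feeds land in the same schema, cross-vendor comparison is trivial.
records = [
    from_vendor_a({"Battery": "B-001", "Cyc#": "1",
                   "Chg_Cap(Ah)": "2.10", "DChg_Cap(Ah)": "2.05"}),
    from_vendor_b({"cell_id": "B-002", "cycle_index": "1",
                   "charge_mAh": "2080", "discharge_mAh": "2031"}),
]
```

The point of the sketch is the shape of the work, not the specifics: each source format gets its own adapter, and everything downstream sees one schema.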

Q: Can you give an example of where this has been useful for a customer?
A: One of our customers bulk ordered 2,500 felts, which are glued onto plates as part of their manufacturing process. The felts were purchased from a number of suppliers to meet their supply and pricing needs. Once the details of that purchase were captured, including original manufacturer, batch or lot number, date received, and the associated purchase order number, they could start to do some interesting things, like compare the performance characteristics of batteries using felt purchased from different manufacturers. After discovering a manufacturing defect in a single batch of felt, they could easily query the entire in-field deployment for batteries built with the felt batch in question. That’s just one example of the magic that starts to happen when your data is all in one place. Think about recall situations for source materials used in manufacturing — how do you know which customers are affected? Well, now you do.
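The recall query described above reduces to a join between purchase metadata and built batteries. A toy sketch with invented records and field names (not the customer's actual schema):

```python
# Hypothetical lot and battery records; all values are made up.
felt_lots = [
    {"lot": "F-1001", "supplier": "SupplierA", "po": "PO-881"},
    {"lot": "F-1002", "supplier": "SupplierB", "po": "PO-882"},
]
batteries = [
    {"serial": "SN-1", "felt_lot": "F-1001", "customer": "AcmeGrid"},
    {"serial": "SN-2", "felt_lot": "F-1002", "customer": "VoltCo"},
    {"serial": "SN-3", "felt_lot": "F-1002", "customer": "AcmeGrid"},
]

def affected_by_lot(lot: str):
    """Return (serial, customer) for every fielded battery built with the lot."""
    return [(b["serial"], b["customer"])
            for b in batteries if b["felt_lot"] == lot]

recall = affected_by_lot("F-1002")  # [("SN-2", "VoltCo"), ("SN-3", "AcmeGrid")]
```

Because lot number was captured at receiving time, the question "which customers are affected?" becomes a one-line filter instead of a forensic exercise.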

Q: Where does machine learning come into play?
A: A rich, normalized dataset enables more advanced outcomes, and one of the biggest areas we’ve seen in the last few years is machine learning. We always encourage our customers to focus on machine learning efforts that deliver a tangible result directly impacting the bottom line or an established business strategy. This helps tie funding back to results and makes it easier to justify the entire effort.

For a battery manufacturer, for example, time to market is key. Machine learning is a powerful tool to shorten the design and test cycle by allowing the Research and Development (R&D) group to quickly analyze prior test results using trained algorithms to understand which features, from sintering temperature to sintering duration to electrolyte formula, are having the biggest impact on battery performance. Since this process needs to be repeated many times, costly delays are possible if the R&D group can’t analyze prior test results quickly enough to determine the next test to perform.
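As a toy stand-in for the feature-analysis step, the sketch below ranks two process parameters by their absolute Pearson correlation with a performance metric. A production pipeline would use a trained model (e.g. tree-based feature importances) over far more data; the parameters and values here are invented:

```python
import math

# Invented test history: (sintering_temp_C, sintering_hours, capacity_ah).
tests = [
    (950, 2.0, 1.90),
    (1000, 2.0, 2.05),
    (1050, 2.5, 2.20),
    (1100, 3.0, 2.31),
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

features = {
    "sintering_temp_C": [t[0] for t in tests],
    "sintering_hours": [t[1] for t in tests],
}
capacity = [t[2] for t in tests]

# Rank features by strength of association with capacity.
ranking = sorted(features, key=lambda f: -abs(pearson(features[f], capacity)))
```

Simple correlations only catch linear, single-variable effects; the reason to bring trained models in is precisely to surface interactions between process values that a ranking like this misses.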

Q: What are some of the benefits of working with a software company that specializes in battery analytics vs. trying to solve these problems in-house or with a generalized analytics or AI solutions provider?
A: It really comes down to deep domain expertise. Peaxy has a lot of experience extracting time-series data from battery cycler systems in a variety of native formats, including cycle statistics and key performance indicators (KPIs) such as energy efficiency, coulombic efficiency, and cycles to short.
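Two of the KPIs named above have standard per-cycle definitions, sketched below; the sample capacity and energy values are invented:

```python
def coulombic_efficiency(discharge_ah: float, charge_ah: float) -> float:
    """Fraction of the charge put in that is recovered on discharge."""
    return discharge_ah / charge_ah

def energy_efficiency(discharge_wh: float, charge_wh: float) -> float:
    """Fraction of energy recovered; lower than coulombic efficiency
    because of voltage hysteresis between charge and discharge."""
    return discharge_wh / charge_wh

# Illustrative single-cycle values, not real measurements.
ce = coulombic_efficiency(1.96, 2.00)  # 0.98
ee = energy_efficiency(6.9, 7.5)       # 0.92
```

Computing these consistently across vendors is only possible once the underlying charge/discharge series have been normalized into a common schema, which is why extraction comes first.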

Our deployments are typically done in 120 days, with far more speed and efficiency than an in-house development effort or a generalized analytics platform with extensive customizations.