The right technical assumptions: An interview with Davion Hill, DNV GL


Editor’s note: Last week testing and certification giant DNV GL released its first Battery Performance Scorecard, looking at characteristics of different anonymized batteries – mostly different lithium-ion designs. pv magazine caught up with DNV GL Energy Storage Leader Davion Hill at the Energy Storage North America (ESNA) 2018 conference to ask a few questions about the scorecard and how we should look at the results.

 

pv magazine: DNV GL’s PV module scorecard is well known in the industry. Can you talk about how your new battery scorecard builds on that work, and what is different about this project?

Davion Hill: We started testing batteries in 2008 with only four channels. Back then it was an R&D effort and we were trying to understand what we could do with the data, and how we could offer a useful service that our customers would want.

In that world batteries were $1,200 per kilowatt-hour, and long-duration energy storage from a battery was still something of a dream – it seemed far away. So we continued to work on our test data, and to test as long as we could. And we were paying for a lot of that ourselves, collecting that data.

And then as the market started to rapidly evolve and change around 2014, we started to see actual financed projects hitting the marketplace. We had done one of the first independent engineering reviews on energy storage for RES Americas in 2014.

We knew right then that we had to be able to verify the warranties on these things. After we acquired PVEL and saw the way the PV Scorecard was received, we knew that we had to do the same for batteries.

So it was just a question of: “do we have the right data for batteries?” We quickly took all the test data that we had collected over the prior six or seven years, and we just needed to tweak our test plan a little bit, and then we made a product qualification program just like we did for PV.

So that’s what we did. And we started going out and selling that to the marketplace, and what you see is the result of that work. 

 

pv magazine: Obviously having your products tested in an apples-to-apples comparison is a benefit to manufacturers. What about the end-consumers? What is the benefit of this scorecard to consumers?

Hill: You are going to have two types of people who are affected by this. Really what the scorecard is aiming at is the owner of the system, who is taking a lot of financial risk – whoever has outlaid the capital to have it built. They are exposed to whether or not the warranties offered by the manufacturers have been calculated correctly and in a commercially responsible way. So we are trying to give them data to verify that.

For a behind-the-meter storage system, where a customer relies on the energy storage system to shave their peak demand charges or give them increased reliability, they want to know that the system is going to be there.

If it wasn’t based on the right technical assumptions, then that function would be at risk. This data helps to inform that, and it helps us back up the reliability or availability guarantees that might be in the contract for the system: performance guarantees tied to whatever function the system is supposed to perform, or guarantees related to the expected savings for behind-the-meter projects. We would then be able to calculate the probability that the system would fail to meet those, and that is how it relates to the end-user.
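As a loose illustration of the kind of calculation Hill describes (estimating the probability that a system falls short of a contractual guarantee), here is a minimal sketch in Python. The guaranteed capacity, fade rate, and uncertainty figures are hypothetical assumptions chosen for illustration, not DNV GL's model or test data.

```python
import numpy as np

# Hypothetical inputs, illustrative only; not DNV GL's model or data.
guaranteed_capacity_pct = 70.0   # contract: at least 70% of nameplate at end of term
term_years = 10
mean_fade_per_year = 2.5         # assumed average capacity fade, % per year
fade_std_per_year = 0.6          # assumed uncertainty in that fade rate

rng = np.random.default_rng(seed=0)
n_trials = 100_000

# Sample a fade rate per trial and project remaining capacity at end of term,
# assuming (simplistically) linear fade over the contract period.
fade_rates = rng.normal(mean_fade_per_year, fade_std_per_year, n_trials)
remaining_capacity = 100.0 - fade_rates * term_years

# Probability of falling below the guaranteed capacity.
p_shortfall = np.mean(remaining_capacity < guaranteed_capacity_pct)
print(f"Estimated probability of missing the guarantee: {p_shortfall:.1%}")
```

In practice a reviewer would replace the assumed fade distribution with measured degradation data for the specific product and duty cycle, which is exactly what the test programs behind the scorecard provide.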

 

pv magazine: In reading the battery scorecard, I noted a range of performance characteristics for the lithium-ion batteries tested. Can you comment on what in your mind causes such different performance in different lithium-ion products – whether it is chemistry or manufacturing – and what this means for use cases?

Hill: That’s a good question. I can say that we see some outliers associated with manufacturing practices. There are differences in manufacturing practices. 

You can manufacture lithium-ion batteries in an open-warehouse kind of environment (a semi-clean environment), or you can manufacture them in a clean-room environment. And we see a definite difference in performance because of that.

But the SOC (state of charge) sensitivities that we show in one of our charts are due to differences in the chemistries. Over the past 15 years of developing NCM (nickel cobalt manganese) chemistries, all of the manufacturers have spent a lot of time on lab testing and tweaking: changing a little bit of cobalt, a little bit of manganese, a little bit of nickel, whatever it might be. And that has led to differences in performance.

And each of them ultimately settled on the chemistry they needed for some function they thought the market would want. Maybe they wanted high-temperature resistance or high-temperature performance. Maybe they wanted cold-temperature performance. Maybe they wanted a better C rate. Whatever it was, that is what has led to the differences between these batteries.

But the thing is, all of them have that engineering compromise built in, and the scorecard simply shows the result of it. It doesn’t mean that a particular battery is good or bad. It just means that they are all a little different.

You can see it in the scorecard ratings: a battery that excels in some areas might not excel in others, and the opposite may be true for a competitor. Where one product falls short, it might out-perform its competitors in another area.

So that’s what we want to show. We are not trying to out any particular manufacturer. We just want to show that there are things to consider when batteries are deployed in a certain application, and whether or not those are correctly matched will determine whether or not it is fit for that service.

 

pv magazine: What was the most surprising thing that you have seen in testing the batteries from a technical perspective?

Hill: It is definitely the throughput sensitivity to SOC conditions. We suspected that was there. We came into this, even five years ago, expecting that there would be an SOC sensitivity at the upper and lower ranges. We weren’t sure whether some batteries would have the lower-range sensitivity or not, and we were really surprised to find out that it was all across the map.

And that has a direct impact on controls. There are two factors that affect how a battery cell gets cycled. Number one is how it is built into the system – whether it is a series or parallel configuration, how many cells are in parallel, and how much of that energy range you are actually cycling through electrochemically.

And then the other one is the controls. The controls are the market-driven function that asks the system to perform in a way that maximizes its revenue for the project, or its value to the project.

So for example, if you have a battery that degrades quickly when it is resting at a higher SOC range, that battery might not be well-suited to an infrequent peak-shaving function, where it would sit at high states of charge waiting to discharge. Being able to identify that, and to verify that the system design is appropriate, is important, and that’s what those results show us.
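To make that matching idea concrete, the short sketch below screens a battery's assumed high-SOC resting sensitivity against how often a peak-shaving duty profile leaves it sitting near full charge. Every figure here is a made-up assumption for illustration, not a scorecard result or a real product's data.

```python
# Illustrative screening of SOC-rest sensitivity against an application profile.
# All figures are hypothetical assumptions, not DNV GL scorecard results.

# Assumed calendar fade (% capacity per year) when resting in a given SOC band;
# some chemistries degrade noticeably faster when parked near full charge.
fade_by_rest_soc = {
    "low (<40%)": 0.8,
    "mid (40-80%)": 1.2,
    "high (>80%)": 3.0,
}

# Infrequent peak shaving: the system waits fully charged most of the time.
peak_shaving_profile = {
    "low (<40%)": 0.05,
    "mid (40-80%)": 0.15,
    "high (>80%)": 0.80,
}

def expected_calendar_fade(fade_rates: dict, time_fractions: dict) -> float:
    """Time-weighted calendar fade (% per year) for a given resting profile."""
    return sum(fade_rates[band] * time_fractions[band] for band in fade_rates)

fade = expected_calendar_fade(fade_by_rest_soc, peak_shaving_profile)
print(f"Expected calendar fade in this application: {fade:.2f}% per year")

# A battery with strong high-SOC sensitivity may fade much faster in this duty
# than its cycle-life rating suggests, a reason to adjust the controls
# (e.g. hold a lower resting SOC) or choose a different cell.
```

A real assessment would use measured, chemistry-specific degradation curves and the project's actual dispatch profile rather than coarse SOC bands, but the matching logic is the same.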

 

pv magazine: How do you expect this information to be used by manufacturers, and what are the next steps for the battery scorecard? 

Hill: What we would like to see is more direct participation from manufacturers. We’ve had a lot of participation in our PQPs – our product qualification programs – by developers who are trying to get this data. They asked the manufacturers for it, and the manufacturers would not share it with them, so the developers initiated the testing.

So what we hope is that more manufacturers will participate. If that happens, then when a buyer comes to us asking for a review of a product for an application, rather than waiting for testing to complete, we can just call up the manufacturer, ask their permission to share the data with the developer and the lender, do a quick review, and close that independent engineering review cycle very quickly.

That means more deals for them, more transactions for everybody, and more growth in the market. We hope this opens the bottleneck in the market and accelerates the rate of transactions. This isn’t meant to put up a red flag or slow anybody down. It is meant to speed everybody up, and that’s really what we hope it will do.

 

This interview was conducted by pv magazine Americas Editor Christian Roselund at the ESNA 2018 trade show.
