The challenge of perfect operating data


The purpose of a solar monitoring system is to characterize the operational performance of a plant’s equipment so we can ensure that the equipment is performing well and in a way that is consistent with the financing model.

Model developers attempt to solve this problem by creating physical, statistical or algorithmic models that estimate how the equipment should perform given the plant’s actual operating conditions. Monitoring systems then feed these equipment performance models with actual operating data from the plant’s sensors, using the models to calculate how the equipment can be expected to perform under real-world conditions.
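As a concrete (and deliberately simplified) illustration, here is a minimal physics-based expected-power sketch in Python, loosely in the spirit of the PVWatts approach. The temperature coefficient and lumped loss factor are assumptions chosen for illustration, not any vendor’s production model:

```python
# A minimal physics-based expected-power model for a PV array.
# Coefficients below are illustrative assumptions, not a vendor model.

def expected_power_kw(poa_irradiance_w_m2: float,
                      cell_temp_c: float,
                      dc_capacity_kw: float,
                      gamma_pdc: float = -0.004,   # temperature coefficient, 1/degC (assumed)
                      system_losses: float = 0.14  # lumped loss factor (assumed)
                      ) -> float:
    """Estimate AC power from plane-of-array irradiance and cell temperature."""
    g_ref = 1000.0   # reference irradiance, W/m^2
    t_ref = 25.0     # reference cell temperature, degC
    dc = dc_capacity_kw * (poa_irradiance_w_m2 / g_ref) \
         * (1.0 + gamma_pdc * (cell_temp_c - t_ref))
    return max(dc * (1.0 - system_losses), 0.0)

# Clean "lab" inputs give a sensible answer:
print(expected_power_kw(850.0, 42.0, 2000.0))  # ~1363 kW
```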

Therein lies the problem. High volumes of noisy plant operating data often cause these theoretical performance models to fail. Models that worked in the lab return inaccurate results when faced with real-world operating data. “Inaccurate” can mean no results at all, or results with such low statistical confidence that they should be ignored.
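To see how quickly this happens, consider feeding the expected_power_kw sketch above a few typical field-data defects. The sensor readings are invented, but each failure mode is common in practice:

```python
import math

# Typical field-data defects, fed to the expected_power_kw sketch above.
# All sensor values are invented for illustration.
noisy_samples = [
    (float("nan"), 38.0),  # irradiance dropout: NaN propagates, no result at all
    (850.0, -180.0),       # failed thermocouple: "expected" power exceeds nameplate
    (0.0, 45.0),           # pyranometer stuck at zero at midday: expectation of 0 kW
]

for poa, temp in noisy_samples:
    p = expected_power_kw(poa, temp, 2000.0)
    label = "no result" if math.isnan(p) else f"{p:.0f} kW expected"
    print(f"POA={poa}, Tcell={temp} -> {label}")
```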

To add to the problem, these inaccurate results can generate operating events or alarms, triggering downstream workflow problems such as creating work orders and rolling trucks to fix equipment that turns out not to be broken.

Why is this? Why is it the norm in the solar industry that operators and performance engineers often lose confidence in the output of their monitoring systems? Are the developers of the performance models incompetent? Didn’t they test their models to ensure they would work?

Yes, they’re competent. Of course they performed extensive testing of the models. But it has been my experience that many well-intentioned developers spend too much time in the “lab” working with high-quality, cleansed operating data and not enough time in the field dealing with the real-world operating data their models will actually consume.

Laboratory-fed performance models usually generate reasonable and predictable results. Unfortunately, real-world operating data often generates inaccurate and misleading results from these same monitoring system applications.

Dealing with real-world data

So, what are we to do? Should we just throw up our hands in defeat because the problem is too difficult to solve? No. We can’t let noisy operating data keep us from reliable and accurate performance monitoring of our solar power fleets. The stakes are too high. Running a power plant without accurate performance data is like driving a car with no dashboard: you can do it for a while, but eventually you’re going to overheat the engine or run out of gas in the middle of nowhere.

In the absence of a detailed analysis of where plant performance losses are occurring, we estimate that the typical owner is forgoing 2-5% of recoverable energy production per year. For a 1 GW portfolio with an average PPA rate of $0.05/kWh, that translates to about $6M USD of additional positive cash flow for the portfolio per year.
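For readers who want to check the arithmetic, here is a back-of-envelope version in Python. The article supplies the 1 GW size, the $0.05/kWh PPA rate and the 2-5% loss range; the capacity factor is our assumption, and at roughly 27% the 5% case lands near the $6M figure:

```python
# Back-of-envelope check of the cash-flow claim. The capacity factor
# is an assumption for illustration; the article states only the 1 GW
# size, the $0.05/kWh PPA rate and the 2-5% recoverable-loss range.
portfolio_kw = 1_000_000          # 1 GW
capacity_factor = 0.27            # assumed; plausible for a sunny-region fleet
ppa_rate_usd_per_kwh = 0.05

annual_kwh = portfolio_kw * 8760 * capacity_factor
annual_revenue_usd = annual_kwh * ppa_rate_usd_per_kwh   # ~$118M

for loss in (0.02, 0.05):
    gain = annual_revenue_usd * loss
    print(f"{loss:.0%} recoverable -> ${gain / 1e6:.1f}M per year")
# Output: 2% -> $2.4M; 5% -> $5.9M, i.e. about $6M at the upper end
```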

So, what should we be doing differently to go after this incremental portfolio performance gain? We think the answer lies in the old adage, “You need to play the cards you’ve been dealt.” Since the cost of delivering laboratory-quality data to our models is too high, we need to let go of the dream of perfect data and deal with the messy data we have. Our performance models need to work well despite the noisy operating data that comes from our solar power facilities.
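What might that look like in code? One common tactic, sketched here under assumed plausibility limits (the thresholds are illustrative, not an industry standard), is to validate every sample against physical limits before it reaches the model, and to record why a sample was rejected rather than silently computing garbage:

```python
import math
from typing import Optional

# Validate each sample against physical plausibility limits *before* it
# reaches the performance model, and record why a sample was rejected.
# The limits below are illustrative assumptions, not an industry standard.

def validate_sample(poa_w_m2: float, cell_temp_c: float) -> Optional[str]:
    """Return a rejection reason, or None if the sample is usable."""
    if math.isnan(poa_w_m2) or math.isnan(cell_temp_c):
        return "missing value"
    if not 0.0 <= poa_w_m2 <= 1500.0:
        return "irradiance outside physical range"
    if not -40.0 <= cell_temp_c <= 100.0:
        return "cell temperature outside physical range"
    return None

# The failed-thermocouple sample from earlier is now caught, not computed:
print(validate_sample(850.0, -180.0))  # "cell temperature outside physical range"
```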

Is this possible? Can we develop performance models that can work with the good, the bad and the ugly of our real-world operating data? We believe the answer is “yes.” But to do so, we need to think differently about how performance monitoring systems should work with the solar power asset class. Solar performance models need to be resilient, and they need to tell their users which plant performance shortfalls they can reliably identify.
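To make “resilient and self-aware” concrete, here is one possible shape for such an output: the model reports how much of its input data it trusted, and downstream alarm logic stays quiet when that coverage is too low. The 80% coverage threshold and the field names are our assumptions for illustration:

```python
from dataclasses import dataclass

# Instead of a bare number, the model reports the fraction of input
# samples it trusted, and alarm logic refuses to open work orders when
# coverage is too low. Thresholds below are assumed for illustration.

@dataclass
class PerformanceResult:
    expected_kwh: float
    actual_kwh: float
    data_coverage: float  # fraction of samples that passed validation

def should_alarm(result: PerformanceResult,
                 shortfall_threshold: float = 0.05,
                 min_coverage: float = 0.80) -> bool:
    if result.data_coverage < min_coverage:
        return False  # not enough trustworthy data to accuse the equipment
    shortfall = 1.0 - result.actual_kwh / result.expected_kwh
    return shortfall > shortfall_threshold

# A 10% apparent shortfall computed from only 40% usable data stays quiet:
print(should_alarm(PerformanceResult(1000.0, 900.0, 0.40)))  # False
# The same shortfall with 95% usable data raises a real event:
print(should_alarm(PerformanceResult(1000.0, 900.0, 0.95)))  # True
```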

Summary

The expectation of perfect operating data has hampered good solar performance analysis throughout the history of the industry. Until this core problem is resolved, users will continue to be frustrated by spotty and sometimes misleading results from their monitoring tools.

***

Steve Hanawalt is an EVP and Co-Founder at Power Factors, a SaaS company that equips solar power owners and operators with a real-time monitoring, event management, reporting and field service management software platform for maximizing the profitability of their solar power portfolios.

The views and opinions expressed in this article are the author’s own, and do not necessarily reflect those held by pv magazine.

