Sandia Labs benchmarks PV software providers in first-ever blind comparison analysis


From pv magazine Global

A group of scientists from the U.S. Department of Energy’s Sandia National Laboratories has conducted a comprehensive assessment of seven PV modeling software tools – 3E SynaptiQ, PlantPredict, PVsyst, RatedPower, SAM, SolarFarmer, and Solargis Evaluate – and has found that their performance diverges as system complexity increases.

“This is the first-ever blind and independent comparison of commercially used PV software, with predictions submitted directly by the software providers,” the research’s corresponding author, Marios Theristis, told pv magazine. “We did not rank tools; instead, we focused on how different modeling features and assumptions affect predictions.”

“We compiled summary tables of software features and then compared the predictions made by the providers,” he went on to say. “We observed that results align closely for simple systems, by which we mean fixed-tilt, flat-terrain, small-scale, monofacial systems, while differences increase as systems become more complex and are linked to specific modeling choices and software features.”

In the study “Feature review of photovoltaic modeling software utilizing blind performance assessment,” published in Solar Energy, the Sandia group explained that, unlike earlier studies that focused on small systems, single locations, or anonymous participants, their work presents a transparent, feature-level comparison of widely used PV modeling software supporting both pre-construction and post-construction activities.

The scientists categorized software features into weather and irradiance, DC system modeling, AC system modeling, and derates. The software tools were tested using one year of data from two fixed-tilt, monofacial, south-facing systems: one in Albuquerque, United States, and one at an undisclosed site in Germany, with capacities of 15.4 kW and 14.5 MW, respectively.

Measurements were performed independently, with instrument details withheld from the software providers, enabling an unbiased blind comparison of PV modeling performance across software platforms, according to the research team. Weather and irradiance data were also filtered before distribution to software providers.

The analysis showed that PV modeling results vary significantly across software due to differences in weather handling, system modeling, inverter assumptions, and user-specified derates, while highlighting the critical influence of both software design and user choices on predicted energy outcomes. Moreover, the blind modeling comparison revealed differences across software in plane of array (POA) irradiance transposition, module temperature, DC/AC power, and derates.

Weather modeling, for example, varied due to different libraries, transposition models, and assumptions about air mass, albedo, and location, with median POA residuals ranging from 14.65 to 6.06. DC system features were generally consistent, but shading and temperature models varied. By contrast, AC system modeling differed in inverter efficiency, clipping handling, and curtailment adjustments. Furthermore, shading approaches were found to vary by stringing, irradiance decomposition, and terrain assumptions, creating uncertainty.
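To illustrate how transposition-model choice alone can shift POA irradiance, the minimal sketch below compares the textbook isotropic-sky model with the Hay-Davies anisotropic model for a single set of hypothetical clear-sky inputs. This is not the method used by any of the tools in the study; commercial software layers horizon brightening, circumsolar treatment, and data quality control on top of formulas like these, and all input values here are invented for illustration.

```python
import math

def poa_irradiance(dni, dhi, ghi, tilt_deg, zenith_deg, aoi_deg,
                   albedo=0.2, dni_extra=1361.0, model="isotropic"):
    """Illustrative plane-of-array (POA) transposition sketch.

    dni/dhi/ghi are direct-normal, diffuse-horizontal, and
    global-horizontal irradiance in W/m^2.
    """
    tilt = math.radians(tilt_deg)
    cos_aoi = max(math.cos(math.radians(aoi_deg)), 0.0)
    cos_zen = math.cos(math.radians(zenith_deg))

    beam = dni * cos_aoi                                  # direct beam on the plane
    ground = ghi * albedo * (1 - math.cos(tilt)) / 2      # ground-reflected component
    iso_sky = (1 + math.cos(tilt)) / 2                    # isotropic sky view factor

    if model == "isotropic":
        diffuse = dhi * iso_sky
    elif model == "haydavies":
        ai = dni / dni_extra                # anisotropy index (circumsolar weight)
        rb = cos_aoi / max(cos_zen, 1e-6)   # beam geometry ratio
        diffuse = dhi * (ai * rb + (1 - ai) * iso_sky)
    else:
        raise ValueError(f"unknown model: {model}")
    return beam + diffuse + ground

# Hypothetical inputs: 30-degree tilt, sun at 40-degree zenith,
# 20-degree angle of incidence, clear-sky irradiance values.
inputs = dict(dni=800.0, dhi=100.0, ghi=712.8,
              tilt_deg=30.0, zenith_deg=40.0, aoi_deg=20.0)
poa_iso = poa_irradiance(**inputs, model="isotropic")
poa_hd = poa_irradiance(**inputs, model="haydavies")
print(f"isotropic: {poa_iso:.1f} W/m^2, Hay-Davies: {poa_hd:.1f} W/m^2")
```

For these made-up inputs the two models differ by roughly 15–20 W/m² at a single timestamp, which compounds over a year of hourly data; that kind of divergence is consistent with the study's finding that transposition choice is one driver of the spread in POA predictions.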

“Our findings underscore the need for continuous, independent, and rigorous validation of modeling methods, comparing software tools against complex, real-world systems,” Theristis said. “Ultimately, the ‘right’ tool depends on project complexity, workflow, and the surrounding software ecosystem.”

