In our opinion, the same code that runs your simulations for energy optimization should also run in production, when you actually control your flexible assets in real time.
This will decrease your time-to-market as well as your chances of failure, and it will give you more strategic options during operations.
Why?
1. After the simulation phase, you don’t want to start a new project (writing and/or integrating real-time operation code). You want to start operating as quickly as possible.
2. Modeling twice invites mismatches. Re-writing the asset and flexibility model for operations can introduce discrepancies with the simulated model ― another possible source of project failure.
3. After operating for a while, you might want to re-run simulations (perhaps to compute current what-ifs, or to design the next steps). Any improvements and adaptations made in the operation code now have to be back-ported to your simulation code.
How?
This is the approach we take with FlexMeasures. It’s designed for real-time operation first. Simulations step through the data chronologically, and our data model makes it possible to feed the algorithm only the data that would have been available at the simulated moment (e.g. weather forecasts, energy market prices & incentives).
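To make the idea concrete, here is a minimal sketch of such a step-wise simulation loop. It is not FlexMeasures’ actual code; the function names (`known_at`, `compute_schedule`) and the DataFrame layout are hypothetical, chosen only to illustrate the principle of restricting each step to data known at that moment.

```python
from datetime import datetime, timedelta

import pandas as pd


def known_at(df: pd.DataFrame, now: datetime) -> pd.DataFrame:
    """Return only the rows that were already known at simulation time `now`."""
    return df[df["belief_time"] <= now]


def compute_schedule(prices: pd.DataFrame, forecasts: pd.DataFrame, now: datetime):
    """Placeholder for the actual flexibility scheduling algorithm."""
    ...


def simulate(prices: pd.DataFrame, forecasts: pd.DataFrame,
             start: datetime, end: datetime,
             step: timedelta = timedelta(hours=1)) -> list:
    """Walk through time step by step, scheduling with only the data
    that real-time operation would have had at each moment."""
    now = start
    schedules = []
    while now < end:
        visible_prices = known_at(prices, now)
        visible_forecasts = known_at(forecasts, now)
        schedules.append(compute_schedule(visible_prices, visible_forecasts, now))
        now += step
    return schedules
```

Because the scheduling call only ever sees the filtered data, the same `compute_schedule` logic can later be pointed at live data in production without changes.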
When the simulation phase leads to a GO decision, we have almost everything in place to move to the pilot phase immediately. Get in touch if you want to learn more.
Below is a visualization from our simulation project with Heijmans Energie. See how we walk through time, and distinguish what is measured from what is forecasted at each step ― this extra effort we made for realism pays off when we move the code into operation!
How does FlexMeasures handle real-time operation and ensure that only relevant data is fed to the algorithm during simulations? Could you provide more details on how the data model works in terms of incorporating factors such as weather forecasts, energy market prices, and incentives at specific simulated moments?
I agree that going deeper would be useful. I don’t have the time right now, but I hope to put together a web page here on exactly this question.
The short answer is that our data model (https://github.com/SeitaBV/timely-beliefs/) keeps track of when each data point became known. Simulations in FlexMeasures can therefore step through the data (e.g. hour by hour) and only ever use the parts of each data set that would have been known at the simulated time (e.g. weather forecasts with the correct horizons, or market prices published by then).
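As a rough illustration of that idea (this is a conceptual sketch in plain pandas, not the timely-beliefs API itself), each value carries both the time the event happens and the time the belief about it was formed, so a simulation can ask “what was the best available belief at this moment?”:

```python
import pandas as pd

# Each row: when the event happens (event_start), what was believed about it
# (event_value), and when that belief was formed (belief_time).
data = pd.DataFrame({
    "event_start": pd.to_datetime(["2021-06-01 12:00"] * 3),
    "belief_time": pd.to_datetime(["2021-05-31 06:00",    # day-ahead forecast
                                   "2021-06-01 06:00",    # intraday update
                                   "2021-06-01 13:00"]),  # ex-post measurement
    "event_value": [3.1, 3.4, 3.3],
})


def beliefs_known_at(df: pd.DataFrame, now: str) -> pd.DataFrame:
    """Keep, per event, only the most recent belief formed at or before `now`."""
    known = df[df["belief_time"] <= pd.Timestamp(now)]
    return (known.sort_values("belief_time")
                 .groupby("event_start")
                 .last()
                 .reset_index())


# At simulated time 2021-06-01 07:00, only the two forecasts are known, so the
# intraday update (3.4) is used; the measurement (3.3) is not yet visible.
print(beliefs_known_at(data, "2021-06-01 07:00"))
```

The timestamps and values here are made up; the point is only that distinguishing event time from belief time is what lets a simulation honestly replay what operations would have known at each step.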