Thermal model updates: feasibility & unit tests #374

Draft: wants to merge 3 commits into master

Conversation

werdnum (Contributor) commented Nov 13, 2024

Three significant changes:

  1. Add an option (to become the default?) to use penalties rather than constraints for thermal conditions. This will replace a bunch of hacks I'm using to avoid asking the solver for something impossible. (A rough sketch of this idea follows right after this list.)
  2. Rework overshoot temperature - instead of "never exceed this temperature", we want "if we do exceed this temperature, it shouldn't be our fault" - implemented as "load must be off if we overshoot"
  3. Added relatively comprehensive unit tests to cover the above behaviours plus more.
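
As a rough illustration of point 1 (not the actual EMHASS code; all names and numbers below are made up for the example), the idea is to replace a hard temperature constraint with a slack variable that is charged in the objective, so the solver still returns the best achievable plan instead of declaring the whole problem infeasible:

    import pulp

    # Minimal single-timestep sketch; heater_power, heating_gain, penalty_factor
    # and the numbers below are illustrative assumptions, not EMHASS variables.
    model = pulp.LpProblem("thermal_penalty_sketch", pulp.LpMinimize)

    heater_power = pulp.LpVariable("heater_power", lowBound=0, upBound=1.0)  # kW cap
    start_temp = 18.0       # deg C at the start of the timestep
    heating_gain = 2.0      # deg C gained per kW of heating
    target_temp = 21.0      # desired temperature
    energy_cost = 0.30      # cost per kW in this timestep
    penalty_factor = 10.0   # cost per deg C of shortfall

    predicted_temp = start_temp + heating_gain * heater_power

    # Constraint mode: with a 1 kW cap the best reachable temperature is 20 deg C,
    # so this hard constraint would make the whole problem infeasible.
    # model += predicted_temp >= target_temp

    # Penalty mode: a slack variable absorbs the shortfall and is penalized in the
    # objective, so the solver still returns the best achievable plan.
    shortfall = pulp.LpVariable("shortfall", lowBound=0)
    model += shortfall >= target_temp - predicted_temp
    model += energy_cost * heater_power + penalty_factor * shortfall  # objective

    model.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.value(heater_power), pulp.value(shortfall))  # 1.0 kW, 1.0 deg C short

The penalty factor trades off comfort against cost; a large value makes the penalty mode behave almost like the hard constraint whenever the target is actually reachable.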

Summary by Sourcery

Enhance thermal management in the optimization model by introducing more flexible temperature constraint handling and comprehensive unit tests.

New Features:

  • Add support for penalty-based thermal constraints instead of hard constraints
  • Implement more nuanced temperature overshoot handling for both heating and cooling scenarios

Enhancements:

  • Refactor thermal load optimization to support different modes of temperature management
  • Improve flexibility in handling thermal load constraints

Tests:

  • Add comprehensive unit tests for thermal management scenarios
  • Implement test cases covering heating and cooling temperature constraints
  • Create tests for both constraint and penalty-based thermal management modes

1. Add option to operate on a penalty basis instead of a constraint basis. This means that we will provide a suboptimal solution rather than fail out and produce nonsense.
2. Rework overshoot constraint such that instead of saying "the temperature shall not exceed this value", we are now saying "if the temperature goes past the overshoot point, the load must be off in the previous timestep". In other words, we can overshoot, but if we do, it can't be because of us. (A sketch of this constraint follows after these commit notes.)
…milar are not specified for all deferred loads (treat them as the default)
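
A rough sketch of how the reworked overshoot rule in commit note 2 above can be written as mixed-integer constraints, assuming a big-M formulation with a binary overshoot indicator and a sign coefficient for heating vs. cooling; the names below are illustrative, and the per-timestep coupling is omitted for brevity:

    import pulp

    # Illustrative fragment (constraints only; dynamics and objective omitted).
    model = pulp.LpProblem("overshoot_sketch", pulp.LpMinimize)

    sense_coeff = 1          # +1 for a heating load, -1 for a cooling load
    overshoot_temp = 24.0    # threshold that should not be crossed "because of us"
    big_m = 100.0            # big-M constant bounding the temperature range

    predicted_temp = pulp.LpVariable("predicted_temp", lowBound=-big_m, upBound=big_m)
    is_overshoot = pulp.LpVariable("is_overshoot", cat="Binary")
    load_on = pulp.LpVariable("load_on", cat="Binary")

    # If the (signed) temperature exceeds the threshold, is_overshoot is forced to 1.
    model += sense_coeff * (predicted_temp - overshoot_temp) <= big_m * is_overshoot

    # If we are overshooting, the load must be off, so any further rise (or fall,
    # for a cooling load) cannot be caused by us.
    model += load_on <= 1 - is_overshoot

In a full multi-timestep model the last constraint would couple the overshoot indicator at step i with the load variable at step i-1, matching the "off in the previous timestep" wording above.
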
davidusb-geek (Owner) commented

Hi @werdnum, I finally found some spare time to look at this. It seems like a very nice contribution; it may help with the unexplainable infeasible results we sometimes get. I should probably think about moving to this type of reasoning and implementation: penalizing instead of constraining. One downside is that if you ask the algorithm for the impossible, it will still converge, but the result may have no real physical meaning.

What about this PR, is it still in a draft state? Do you think you could package it into a mergeable PR?

sonarqubecloud bot commented Feb 1, 2025

Micr0mega commented

Any updates on dropping the draft status? I'd love this, since I'm using the thermal model to control my heat pump, which is connected to a solar boiler. In summer, the temperatures will certainly rise beyond the overshoot temperature without the heat pump itself running, which currently means my model is infeasible very often.

GeoDerp (Contributor) commented Apr 19, 2025

@sourcery-ai review

sourcery-ai bot commented Apr 19, 2025

Hi @GeoDerp! 👋

Only authors and team members can run @sourcery-ai commands on public repos.

davidusb-geek (Owner) commented

@sourcery-ai review

sourcery-ai bot commented Apr 19, 2025

Reviewer's Guide by Sourcery

This pull request introduces significant updates to the thermal model, including the option to use penalties instead of constraints, reworked overshoot temperature handling, and comprehensive unit tests. The test forecast setup was also refactored for better maintainability.

No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes

Added an option to use penalties instead of constraints for thermal conditions (src/emhass/optimization.py):
  • Added a 'mode' option to thermal_config to switch between 'constrain' and 'penalize'.
  • When penalizing, a penalty_factor can be specified to adjust the impact of temperature deviations.
  • Introduced a penalty_var to represent the penalty value, ensuring it is non-positive.
  • Modified the objective function to include the penalty_var, steering the optimization towards the desired temperatures.

Reworked overshoot temperature handling for thermal loads (src/emhass/optimization.py):
  • Introduced an is_overshoot binary variable to indicate whether the temperature exceeds the overshoot_temperature.
  • Added constraints to ensure is_overshoot is correctly set based on the predicted temperature.
  • Added a constraint ensuring that the load must be off if an overshoot occurs, preventing any further load-driven temperature increase/decrease.
  • Added sense_coeff to handle both heating and cooling scenarios.

Added comprehensive unit tests for thermal load behaviors (tests/test_optimization.py; a minimal test sketch follows at the end of this guide):
  • Created a run_thermal_forecast method to streamline thermal forecast testing.
  • Added tests for thermal management with constraints, including overshoot scenarios for both heating and cooling.
  • Added tests for thermal management with penalties, covering both heating and cooling.
  • Added tests verifying that the overshoot temperature is not exceeded and that temperature requirements are met.
  • Added tests verifying that the load is turned off when the overshoot temperature is exceeded.

Refactored the test forecast setup (tests/test_optimization.py):
  • Consolidated forecast setup into a single run_test_forecast method.
  • Made the prediction_horizon configurable.
  • Simplified the passing of data to the forecast.
  • Updated the optimization configuration to use num_def_loads instead of number_of_deferrable_loads.
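
To make the Tests row above more concrete, here is a minimal, self-contained sketch of the kind of assertion described. It deliberately stubs out the optimizer result instead of calling the real run_thermal_forecast helper, whose exact signature is not shown in this guide:

    import unittest

    OVERSHOOT_TEMP = 24.0  # illustrative threshold


    def fake_thermal_forecast():
        """Stand-in for an optimization result: predicted temps and load status per step."""
        return [22.0, 23.5, 24.5, 24.2], [1, 1, 0, 0]


    class TestThermalOvershoot(unittest.TestCase):
        def test_load_off_when_overshooting(self):
            temps, load_on = fake_thermal_forecast()
            for step, temp in enumerate(temps):
                if temp > OVERSHOOT_TEMP:
                    # Whenever the temperature passes the overshoot threshold,
                    # the load must not be running at that step.
                    self.assertEqual(load_on[step], 0, f"load should be off at step {step}")


    if __name__ == "__main__":
        unittest.main()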

sourcery-ai bot left a comment

Hey @werdnum - I've reviewed your changes - here's some feedback:

Overall Comments:

  • The new tests look good, but it's a bit hard to understand what they're testing without more context.
  • Consider adding a docstring to the new run_test_forecast method to explain its purpose and parameters.

Here's what I looked at during the review:
  • 🟡 General issues: 2 issues found
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good

@@ -402,7 +408,7 @@ def create_matrix(input_list, n):
        rhs = 0)
    for i in set_I})

-   elif "def_load_config" in self.optim_conf.keys():
+   elif "def_load_config" in self.optim_conf.keys() and len(self.optim_conf["def_load_config"]) > k:

suggestion (bug_risk): Improved safety check for def_load_config indexing.

The additional check on the length of def_load_config prevents out-of-range errors. Verify that this logic aligns with your intended configuration behavior for deferrable loads.

Suggested implementation:

            # Improved safety check for def_load_config indexing to prevent out-of-range errors.
            elif len(self.optim_conf.get("def_load_config", [])) > k:

This change ensures that if "def_load_config" is not present (or is not a list), the default empty list is used so that the length check reliably prevents indexing errors. Be sure that this behavior aligns with your intended configuration handling for deferrable loads.

@@ -642,6 +688,8 @@
    })
    opt_model.constraints = constraints

+   print(repr(opt_model))

suggestion: Remove or guard debug print statements from production code.

The print(repr(opt_model)) aids debugging during development, but it might be better to remove or conditionally execute it (e.g., under a debug flag) in production.
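
One possible way to guard it, sketched with the standard logging module (whether EMHASS would route this through its own logger object, and under which name, is a design choice not shown here):

    import logging

    logger = logging.getLogger(__name__)
    opt_model = object()  # placeholder for the PuLP model built earlier in optimization.py

    # Emit the model representation only when debug logging is enabled,
    # instead of printing it unconditionally on every optimization run.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Optimization model:\n%r", opt_model)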
