Peak load management heuristic control#641
jaredthomas68 wants to merge 41 commits into NatLabRockies:develop from
Conversation
elenya-grant
left a comment
just left some initial comments/questions - I haven't done a deep dive yet (so some of my questions/comments may be silly, or I may be able to answer them myself during a deep dive) but I plan to do a deeper review by Thursday morning. I only looked at the changes and additions to the control classes but will review the tests in the second review I do.
Overall looks like a great start - most of my comments were small or were questions!
h2integrate/control/control_strategies/storage/openloop_storage_control_base.py
h2integrate/control/control_strategies/storage/plm_openloop_storage_controller.py
# determine demand_profile peaks using defaults of daily peaks inside peak_range
# for the full simulation but respecting the peak range specified in the config
self.secondary_peaks_df = self.get_peaks(
    demand_profile=self.config.demand_profile,
should this be inputs[f"{self.config.commodity}_demand"] instead of the demand from the config?
Some of the reasoning for this is in my comment here: #641 (comment). I guess I can split up demand and timestamp as separate inputs so we can use the input like the other controllers.
I have split up demand and date_time
h2integrate/control/control_strategies/storage/plm_openloop_storage_controller.py
h2integrate/control/control_strategies/storage/plm_openloop_storage_controller.py
h2integrate/control/control_strategies/storage/plm_openloop_storage_controller.py
h2integrate/control/control_strategies/storage/plm_openloop_storage_controller.py
elenya-grant
left a comment
Howdy! I gave this a deeper look! I think I'm a little confused about how this method works (I didn't try too hard to understand it yet), so most of my comments are nitpicks or questions. My only blocking comment is about the error being removed from load_plant_yaml - I don't think that error message should be removed at this time.
I think a visual (or two) would be nice to explain some of the inputs to the controller - a doc page with some visuals and an explanation of the inputs would be super helpful in making it easier for users to understand how to change the control input parameters based on their use case.
h2integrate/core/utilities.py
dt_seconds = int(simulation_cfg["dt"])

# Optional start_time in config; default to a fixed reference timestamp.
start_time = simulation_cfg.get("start_time", "2000-01-01 00:00:00")
the start-time format in the plant config is defined as mm/dd/yyyy HH:MM:SS or mm/dd HH:MM:SS and defaults to 01/01 00:30:00 (it doesn't include a year because it was initially going to be used with resource data, and the year may change based on the resource year). The format here does not match - do you think we could make sure the format is consistent, mm/dd/yyyy instead of yyyy-mm-dd?
I made a similar function when I was starting on the resource models (it never made it in) but it handles whether a year was added or not:
from datetime import datetime, timezone, timedelta

def make_time_profile(
    start_time: str,
    dt: float | int,
    n_timesteps: int,
    time_zone: int | float,
    start_year: int | None = None,
):
    """Generate a time-series profile for a given start time, time step interval, and
    number of timesteps, with a timezone signature.

    Args:
        start_time (str): simulation start time formatted as 'mm/dd/yyyy HH:MM:SS' or
            'mm/dd HH:MM:SS'
        dt (float | int): time step interval in seconds.
        n_timesteps (int): number of timesteps in a simulation.
        time_zone (int | float): timezone offset from UTC in hours.
        start_year (int | None, optional): year to use for start-time. If start-time
            is formatted as 'mm/dd/yyyy HH:MM:SS' then will overwrite original year.
            If None, the year will default to 1900 if start-time is formatted as
            'mm/dd HH:MM:SS'. Defaults to None.

    Returns:
        list[datetime]: list of datetime objects that represents the time profile
    """
    tz_utc_offset = timedelta(hours=time_zone)
    tz = timezone(offset=tz_utc_offset)
    tz_str = str(tz).replace("UTC", "").replace(":", "")
    if tz_str == "":
        tz_str = "+0000"
    # timezone formatted as ±HHMM[SS[.ffffff]]
    start_time_w_tz = f"{start_time} ({tz_str})"
    if len(start_time.split("/")) == 3:
        if start_year is not None:
            start_time_month_day_year, start_time_time = start_time.split(" ")
            start_time_month_day = "/".join(i for i in start_time_month_day_year.split("/")[:-1])
            start_time_w_tz = f"{start_time_month_day}/{start_year} {start_time_time} ({tz_str})"
        t = datetime.strptime(start_time_w_tz, "%m/%d/%Y %H:%M:%S (%z)")
    elif len(start_time.split("/")) == 2:
        if start_year is not None:
            start_time_month_day, start_time_time = start_time.split(" ")
            start_time_w_tz = f"{start_time_month_day}/{start_year} {start_time_time} ({tz_str})"
            t = datetime.strptime(start_time_w_tz, "%m/%d/%Y %H:%M:%S (%z)")
        else:
            # NOTE: year will default to 1900
            t = datetime.strptime(start_time_w_tz, "%m/%d %H:%M:%S (%z)")
    time_profile = [None] * n_timesteps
    time_step = timedelta(seconds=dt)
    for i in range(n_timesteps):
        time_profile[i] = t
        t += time_step
    return time_profile
Thanks for the extended function. I personally much prefer the international format "yyyy-mm-dd", but I understand there being an existing approach. I went ahead and changed the function to handle timezone and added a function to make the time series with direct inputs rather than a config. I do not have much use for the seconds value of a timestamp, but I did adjust it to return Python datetime format. We can continue to discuss the exact desired format, but I would prefer to make the time series in a standard date-time format and allow users and developers to adjust to lists of integers or whatever other format they need from there.
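For reference, a minimal sketch of a direct-input time-series builder in the international format (the function name and signature here are illustrative; the PR's actual helper is build_time_series_from_plant_config, which takes the plant config instead):

```python
from datetime import datetime, timedelta, timezone

def build_time_series(start_time: str, dt_seconds: int, n_timesteps: int,
                      tz_hours: float = 0.0):
    """Build a list of timezone-aware datetimes from direct inputs (sketch)."""
    tz = timezone(timedelta(hours=tz_hours))
    # accepts the international 'yyyy-mm-dd HH:MM:SS' format discussed above
    t0 = datetime.strptime(start_time, "%Y-%m-%d %H:%M:%S").replace(tzinfo=tz)
    step = timedelta(seconds=dt_seconds)
    return [t0 + i * step for i in range(n_timesteps)]
```

Keeping the output as plain datetime objects lets callers convert to integer indices or any other representation downstream, as suggested above.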
h2integrate/core/utilities.py
dt_seconds = int(simulation_cfg["dt"])

# Optional start_time in config; default to a fixed reference timestamp.
start_time = simulation_cfg.get("start_time", "2000-01-01 00:00:00")
if the plant config is loaded using load_plant_yaml(), then the start time should always be included. Aka - I don't think we should have default values in both the modeling schema and this function. But - I'm happy to see a function like this get in!
Good feedback. I have added a default timezone and removed the default start_time in this function
@@ -0,0 +1,11 @@
name: plant_config
description: Demonstrates multivariable streams with a gas combiner
update description in plant config
commodity: electricity
commodity_rate_units: kW
max_charge_rate: 2500.0 # kW/time step, 1, 2.5, or 5 MW
max_capacity: 10000.0 # kWh, 80 MWh
comment for max_capacity is wrong, should say 10 MWh
max_supervisor_events: (int | None, optional): The maximum number of discharge events
    allowed for the supervisor in the period specified in max_supervisor_event_period,
    or across all time steps if max_supervisor_event_period is None.
could you add in the other attributes to the doc string? Like peak_range, advance_discharge_period, delay_charge_period, allow_charge_in_peak_range, and min_peak_proximity?
    },
)

def __attrs_post_init__(self):
should the dictionary inputs be checked in the __attrs_post_init__ method to verify that they have the right keys?
    self.get_allowed_discharge()

@staticmethod
why is this a static method rather than just a normal method? (same with _normalize_peak_range?)
These are static methods because they need to be called with different attributes of the class rather than the exact same attributes each time. This also means the output does not have a consistent target.
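As an illustration of that design choice, a hypothetical, heavily simplified peak finder shows why such a helper is static: it is called with different profiles (primary, supervisor) rather than one fixed self attribute (get_daily_max_index is an invented stand-in, not the PR's get_peaks):

```python
import numpy as np

class PeakFinder:
    @staticmethod
    def get_daily_max_index(demand_profile, steps_per_day=24):
        """Return the global index of each day's maximum demand (sketch)."""
        profile = np.asarray(demand_profile)
        n_days = len(profile) // steps_per_day
        return [
            # argmax within the day's slice, shifted back to a global index
            int(np.argmax(profile[d * steps_per_day:(d + 1) * steps_per_day]))
            + d * steps_per_day
            for d in range(n_days)
        ]
```

Because the method takes the profile as an argument, the same logic can be reused for the demand profile and the supervisory profile without duplicating state on the instance.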
# Dispatch strategy outline:
# - Discharge: Starting when time_to_peak <= advance_discharge_period
#   * Discharge at max rate (or less to reach targets)
#   * Stop discharging only when SOC reaches min_soc
could these inline comments get moved closer to where that logic is represented in the code?
I moved them to the docstring as I think that makes more sense.
This method applies an open-loop storage control strategy to balance the
commodity demand and input flow. When input exceeds demand, excess commodity
is used to charge storage (subject to rate, efficiency, and SOC limits). When
The description of this compute method makes it seem really similar to the DemandOpenLoopStorageController and the HeuristicLoadFollowingControl - could you update the docstring to explain the peak-shaving novelty of this?
Very exciting work @jaredthomas68! Thanks for putting this together in such a short time! I did a full pass through
dispatch_priority_demand_profile: str = field(
    validator=contains(["demand_profile", "demand_profile_supervisor"]),
)
max_supervisor_events: int | None = (field(default=None),)
Is this supposed to be a tuple?
Nope, thanks for the catch. Fixed
charge_efficiency: float | None = field(default=None, validator=range_val_or_none(0, 1))
discharge_efficiency: float | None = field(default=None, validator=range_val_or_none(0, 1))
round_trip_efficiency: float | None = field(default=None, validator=range_val_or_none(0, 1))
demand_profile_supervisor: int | float | list | None = field()
I intentionally left off the default because I want the user to be very aware of how they are using this controller and if they are excluding a supervisory demand profile or not.
self.max_discharge_rate = self.max_charge_rate

# make sure peak_range is in correct format because yaml
Same problem for advance_discharge_period, right?
No, advance_discharge_period uses a unit, val paradigm instead of time stamps.
)

# Store simulation parameters for later use
self.dt = self.options["plant_config"]["plant"]["simulation"]["dt"]
Thanks. Removed
# Store simulation parameters for later use
self.dt = self.options["plant_config"]["plant"]["simulation"]["dt"]
self.time_index = build_time_series_from_plant_config(self.options["plant_config"])
I think it is worth adding a length check against self.n_timesteps somewhere.
I don't think so. The time series builds on the same config that self.n_timesteps comes from, so they should be the same by default.
I went ahead and added a check just in case. Won't hurt.
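A sketch of what such a guard could look like (names assumed from the snippet above, not the PR's exact implementation):

```python
def check_time_index_length(time_index, n_timesteps: int) -> None:
    """Guard against a mismatch between the built time series and the
    simulation's expected number of timesteps (sketch)."""
    if len(time_index) != n_timesteps:
        raise ValueError(
            f"time index has {len(time_index)} entries but the simulation "
            f"expects {n_timesteps} timesteps"
        )
```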
day_df = supervisory_peaks_df[
    supervisory_peaks_df["date_time"].dt.floor("D") == day
]
# If supervisor has peaks on the day, use supervisor's flags for all rows that day
Good to add check for when supervisor is None.
Supervisor should never be None in this function because the first argument is always treated as the most important peaks. I changed the naming and docstring to make this more clear.
I also added the check you suggested
    next_peak_time - self.peaks_df.loc[idx, "date_time"]
)

def get_allowed_discharge(self):
Method name is misleading. It actually computes "allow_charge"?
Great point. I've changed the name.
soc_array[i] = deepcopy(soc)

# stay in discharge mode until the battery is fully discharged
if soc <= soc_min:
Note for future:
discharging is only set to False when soc <= soc_min. If the battery doesn't fully drain during the event duration, discharging will continue to stay True
That is by design. The intended operation is to fully charge and then fully discharge, not try to meet a demand, so the battery should fully discharge. If you have suggestions for catching corner cases on this I'm all ears.
I think this should be documented in the docs page on peak load management; I would have expected the battery to stop discharging once the event is over.
The event period is pretty loose. We find the exact peak, but discharge leading up to and after that peak. I will include more in a docs page.
# start discharging when we approach a peak and have some charge
if time_to_peak <= advance_discharge_period and soc > soc_min:
    discharging = True
Suggest adding charging = False here in case charging hasn't been set to False in the previous timestep.
# Note: discharge_needed is internal (storage view), max_discharge_rate is external
discharge_needed = max_discharge_rate / discharge_eff
discharge = min(
    discharge_needed, available_discharge, max_discharge_rate / discharge_eff
The first and third terms are the same.
Yes, I was trying to lean into code reuse and hoping I could find a good way, but I went ahead and removed the duplicate.
genevievestarke
left a comment
Really great PR, @jaredthomas68!
I agree with other comments that a docs page would be nice for users, but I know that's usually one of the last things in a PR!
The main blocking comment I have is the setting of the charging and discharging variables. Let me know if you want to discuss!
commodity (str): Name of the commodity being controlled
    (e.g., "hydrogen"). Stripped of whitespace.
commodity_rate_units (str): Units of the commodity (e.g., "kg/h").
demand_profile (int | float | list): Demand values for each timestep, in
I believe these first three are included in the base class, so they aren't usually included in the inherited class.
    discharging the storage, represented as a decimal between 0 and 1 (e.g., 0.81 for
    81% efficiency). Optional if `charge_efficiency` and `discharge_efficiency` are
    provided.
commodity_amount_units (str | None, optional): Units of the commodity as an amount
Same comment here (already in base class)
    provided.
commodity_amount_units (str | None, optional): Units of the commodity as an amount
    (i.e., kW*h or kg). If not provided, defaults to commodity_rate_units*h.
demand_profile_supervisor (int | float | list | None, optional): Demand values for
I don't think the name of the variable here matches the doc string. Is it specifically for a supervising entity?
Names have been changed and unified
"""

max_capacity: float = field()
Many of these parameters are the same as those in DemandOpenLoopStorageController. Could this class inherit from that class?
Possibly, but only in terms of parameters. Thoughts on this @johnjasa ?
I added these repeat parameters to the base class as optional based on the value of a class flag that child classes need to set to True for the additional parameters to be required.
# Detect daily peaks in secondary demand profile (always computed)
# Respects the configured peak_range time window for each day
self.secondary_peaks_df = self.get_peaks(
I agree! Secondary peaks being the only peaks object that is always computed is a bit confusing. I think this point is true throughout this section, where secondary_demand_profile is actually the main profile.
demand_df.loc[daily_peak_idx, "is_peak"] = True

# Optional: Limit number of peaks globally or per period
if n_max_events is not None:
can n_max_events be zero, or should it be None if it's zero?
n_max_events set to zero would mean the controller does nothing and the battery will never be used for the case when only one load is provided. When two loads are provided then the supervisory load would have no impact if n_max_events is zero.
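Given that 0 effectively disables the controller, a validator could reject it so "no limit" is always spelled None (a sketch of the idea, not the PR's code):

```python
def validate_max_events(n_max_events):
    """Reject 0 so 'no limit' is unambiguously spelled None (sketch)."""
    if n_max_events is not None and n_max_events < 1:
        raise ValueError(
            "max_supervisor_events must be >= 1 or None; "
            "0 would disable the controller entirely"
        )
    return n_max_events
```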
def merge_peaks(supervisory_peaks_df, secondary_peaks_df):
    """Merge supervisor and secondary peak schedules with supervisor precedence.

    Combines two peak schedules (primary and fallback) using day-level precedence:
I think the wording could be "original and override" maybe? Just trying to head off confusion about secondary being default behavior.
I changed the names in this method to peaks_1 and peaks_2 since they don't always correspond to the outer scope.
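A minimal pure-Python sketch of the day-level precedence being discussed; the PR operates on pandas DataFrames with a date_time column, but a dict keyed by timestamp keeps the example self-contained (assumes both schedules cover the same timestamps):

```python
from datetime import datetime

def merge_peaks(peaks_1, peaks_2):
    """Day-level precedence merge (sketch).

    peaks_1 and peaks_2 map datetime -> is_peak flag over the same timestamps.
    For any day where peaks_1 flags at least one peak, its flags win for the
    whole day; otherwise peaks_2's flags are kept.
    """
    days_with_primary_peaks = {
        t.date() for t, is_peak in peaks_1.items() if is_peak
    }
    return {
        t: (peaks_1[t] if t.date() in days_with_primary_peaks else flag)
        for t, flag in peaks_2.items()
    }
```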
}

@staticmethod
def _normalize_peak_range(peak_range):
How does this normalize the peak range?
It does not. The "normalize" was more about getting the range into a standard format for computations. Bad naming. I changed it to _parse_peak_range.
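A sketch of what such a parsing helper might do, since yaml can hand back either strings or datetime.time values depending on quoting (the function name and accepted formats are assumptions, not the PR's exact code):

```python
from datetime import datetime, time

def parse_peak_range(peak_range):
    """Parse a yaml-loaded peak range like ["07:00", "19:00"] into time objects."""
    # str() handles both quoted strings and yaml-parsed datetime.time values
    start_str, end_str = (str(t) for t in peak_range)
    start = datetime.strptime(start_str[:5], "%H:%M").time()
    end = datetime.strptime(end_str[:5], "%H:%M").time()
    if end <= start:
        raise ValueError("peak_range end must be after its start")
    return start, end
```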
if np.all(inputs[f"{commodity}_demand"] == 0.0):
    msg = "Demand profile is zero, check that demand profile is input"
    raise UserWarning(msg)
if inputs["max_charge_rate"][0] < 0:
Is this repeated code?
Yes, it is the same as in DemandOpenLoopStorageController. I originally wanted to inherit, but I'm not sure there is sufficient code reuse for that. I'll give it another go.
I also wanted to limit inheritance depth
I have moved these checks to StorageOpenLoopControlBase
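Pulled out of the controller, the shared validation might look like this free-function sketch (in the PR it lives as a method on StorageOpenLoopControlBase; the function name here is illustrative):

```python
import numpy as np

def check_storage_inputs(inputs, commodity):
    """Shared input validation for open-loop storage controllers (sketch)."""
    if np.all(inputs[f"{commodity}_demand"] == 0.0):
        msg = "Demand profile is zero, check that demand profile is input"
        raise UserWarning(msg)
    if inputs["max_charge_rate"][0] < 0:
        raise ValueError("max_charge_rate must be non-negative")
```

Moving the checks to the base class keeps them in one place without adding another inheritance level between the concrete controllers.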
if not discharging and soc < soc_max:
    if self.peaks_df["allow_charge"].iloc[i]:
        if (td - last_discharge) > delay_charge_period:
            charging = True
I think discharging = False could also be added here
Can discharging and charging be False at the same time? How is that set?
Good question. The case where neither discharging nor charging is true is handled by the rest of the logic. For example, if the battery is fully charged and we are not at a peak, then the battery does nothing.
The flags indicate what action could be taken, in order to make sure the battery fully discharges before charging and fully charges before discharging.
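The flag rules discussed in this thread can be reduced to a single-step sketch; this is an illustrative simplification of the controller's behavior (full-discharge-before-charge, full-charge-before-discharge, peak approach forces discharge), not the PR's actual loop:

```python
def step_flags(charging, discharging, soc, soc_min, soc_max,
               time_to_peak, advance_discharge_period):
    """One step of the mutually exclusive charge/discharge flag logic (sketch)."""
    if time_to_peak <= advance_discharge_period and soc > soc_min:
        discharging, charging = True, False  # peak approach wins
    if discharging and soc <= soc_min:
        discharging = False  # only leave discharge mode once fully drained
    if charging and soc >= soc_max:
        charging = False  # only leave charge mode once fully charged
    if (not discharging and not charging and soc < soc_max
            and time_to_peak > advance_discharge_period):
        charging = True  # idle away from peaks: recharge
    return charging, discharging
```

Writing the transitions out this way makes the reviewers' suggestions (e.g., explicitly setting charging = False when discharge starts) easy to verify in isolation.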
…ates, default time zone and other changes for PR feedback
…bug statement from test_utilities.py
Peak load management heuristic control
This PR adds a peak load management heuristic controller to H2I. It does not do demand dispatch; rather, it dispatches based on peaks in the provided load and rules defined by the user.
Section 1: Type of Contribution
Section 2: Draft PR Checklist
TODO:
Type of Reviewer Feedback Requested (on Draft PR)
I am primarily looking for high-level structural and implementation feedback at this point
Structural feedback:
Implementation feedback:
Other feedback:
Section 3: General PR Checklist
docs/files are up-to-date, or added when necessary
CHANGELOG.md entry: "A complete thought. [PR XYZ](https://github.com/NatLabRockies/H2Integrate/pull/XYZ)", where XYZ should be replaced with the actual number.
Section 3: Related Issues
Section 4: Impacted Areas of the Software
Section 4.1: New Files
path/to/file.extension
  method1: What and why something was changed in one sentence or less.
Section 4.2: Modified Files
path/to/file.extension
  method1: What and why something was changed in one sentence or less.
Section 5: Additional Supporting Information
Section 6: Test Results, if applicable
Section 7 (Optional): New Model Checklist
docs/developer_guide/coding_guidelines.md
attrs class to define the Config to load in attributes for the model
BaseConfig or CostModelBaseConfig
initialize() method, setup() method, compute() method
CostModelBaseClass
supported_models.py
create_financial_model in h2integrate_model.py
test_all_examples.py
docs/user_guide/model_overview.md
docs/section <model_name>.md is added to the _toc.yml