pypsa-eur/scripts/cluster_network.py


# -*- coding: utf-8 -*-
# SPDX-FileCopyrightText: : 2017-2023 The PyPSA-Eur Authors
#
# SPDX-License-Identifier: MIT
# coding: utf-8
"""
Creates networks clustered to ``{cluster}`` number of zones with aggregated
buses, generators and transmission corridors.
Relevant Settings
-----------------
.. code:: yaml

    clustering:
      cluster_network:
      aggregation_strategies:

    focus_weights:

    solving:
        solver:
            name:

    lines:
        length_factor:
.. seealso::
Documentation of the configuration file ``config/config.yaml`` at
:ref:`toplevel_cf`, :ref:`renewable_cf`, :ref:`solving_cf`, :ref:`lines_cf`
Inputs
------
- ``resources/regions_onshore_elec_s{simpl}.geojson``: confer :ref:`simplify`
- ``resources/regions_offshore_elec_s{simpl}.geojson``: confer :ref:`simplify`
- ``resources/busmap_elec_s{simpl}.csv``: confer :ref:`simplify`
- ``networks/elec_s{simpl}.nc``: confer :ref:`simplify`
- ``data/custom_busmap_elec_s{simpl}_{clusters}.csv``: optional input
Outputs
-------
- ``resources/regions_onshore_elec_s{simpl}_{clusters}.geojson``:
.. image:: img/regions_onshore_elec_s_X.png
:scale: 33 %
- ``resources/regions_offshore_elec_s{simpl}_{clusters}.geojson``:
.. image:: img/regions_offshore_elec_s_X.png
:scale: 33 %
- ``resources/busmap_elec_s{simpl}_{clusters}.csv``: Mapping of buses from ``networks/elec_s{simpl}.nc`` to ``networks/elec_s{simpl}_{clusters}.nc``;
- ``resources/linemap_elec_s{simpl}_{clusters}.csv``: Mapping of lines from ``networks/elec_s{simpl}.nc`` to ``networks/elec_s{simpl}_{clusters}.nc``;
- ``networks/elec_s{simpl}_{clusters}.nc``:
.. image:: img/elec_s_X.png
:scale: 40 %
Description
-----------
.. note::
**Why is clustering used both in** ``simplify_network`` **and** ``cluster_network`` **?**
    Consider for example a network ``networks/elec_s100_50.nc`` in which
    ``simplify_network`` clusters the network to 100 buses and in a second
    step ``cluster_network`` reduces it down to 50 buses.

    In preliminary tests, it turns out that the principal effect of
    changing spatial resolution is only partially due to the transmission
    network. It is more important to differentiate wind generators with
    higher capacity factors from those with lower capacity factors, i.e.
    to have a higher spatial resolution in the renewable generation than
    in the number of buses.

    The two-step clustering makes it possible to study this effect by
    looking at networks like ``networks/elec_s100_50m.nc``. Note the
    additional ``m`` in the ``{cluster}`` wildcard: in this example the
    network still contains up to 100 different wind generators.

    In combination, these two features allow you to study the spatial
    resolution of the transmission network separately from the spatial
    resolution of renewable generators. A minimal interactive sketch of
    this clustering step follows after this note.
**Is it possible to run the model without the** ``simplify_network`` **rule?**
No, the network clustering methods in the PyPSA module
`pypsa.clustering.spatial <https://github.com/PyPSA/PyPSA/blob/master/pypsa/clustering/spatial.py>`_
do not work reliably with multiple voltage levels and transformers.
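
For illustration, a minimal sketch of the clustering step performed by this script
when used interactively on an already simplified network. The file path, cluster
count, carrier set and solver name below are placeholders, and the PyPSA-Eur
``scripts`` directory is assumed to be importable:

.. code:: python

    import pypsa

    from cluster_network import busmap_for_n_clusters, clustering_for_n_clusters

    # placeholder: an output of the simplify_network rule
    n = pypsa.Network("networks/elec_s100.nc")

    # assign every bus to one of 50 clusters (k-means within each country and sub-network)
    busmap = busmap_for_n_clusters(n, 50, solver_name="cbc", algorithm="kmeans")

    # aggregate buses, generators and lines according to this busmap
    clustering = clustering_for_n_clusters(
        n, 50, custom_busmap=busmap, aggregate_carriers={"solar", "onwind"}
    )
    n_clustered = clustering.network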
.. tip::
    The rule :mod:`cluster_networks` runs the rule :mod:`cluster_network`
    for all ``scenario`` s in the configuration file.
Exemplary unsolved network clustered to 512 nodes:
.. image:: img/elec_s_512.png
:scale: 40 %
:align: center
Exemplary unsolved network clustered to 256 nodes:
.. image:: img/elec_s_256.png
:scale: 40 %
:align: center
Exemplary unsolved network clustered to 128 nodes:
.. image:: img/elec_s_128.png
:scale: 40 %
:align: center
Exemplary unsolved network clustered to 37 nodes:
.. image:: img/elec_s_37.png
:scale: 40 %
:align: center
"""
import logging
import warnings
from functools import reduce
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pyomo.environ as po
import pypsa
import seaborn as sns
from _helpers import configure_logging, get_aggregation_strategies, update_p_nom_max
from pypsa.clustering.spatial import (
busmap_by_greedy_modularity,
busmap_by_hac,
busmap_by_kmeans,
get_clustering_from_busmap,
)
warnings.filterwarnings(action="ignore", category=UserWarning)
from add_electricity import load_costs
idx = pd.IndexSlice
logger = logging.getLogger(__name__)
def normed(x):
return (x / x.sum()).fillna(0.0)
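
# Per-bus weights for the k-means clustering within each country: installed
# conventional capacity (OCGT, CCGT, PHS, hydro) plus mean load, normalised
# and scaled to integer weights between 1 and 100.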
def weighting_for_country(n, x):
conv_carriers = {"OCGT", "CCGT", "PHS", "hydro"}
gen = n.generators.loc[n.generators.carrier.isin(conv_carriers)].groupby(
"bus"
).p_nom.sum().reindex(n.buses.index, fill_value=0.0) + n.storage_units.loc[
n.storage_units.carrier.isin(conv_carriers)
].groupby(
"bus"
).p_nom.sum().reindex(
n.buses.index, fill_value=0.0
)
load = n.loads_t.p_set.mean().groupby(n.loads.bus).sum()
b_i = x.index
g = normed(gen.reindex(b_i, fill_value=0))
l = normed(load.reindex(b_i, fill_value=0))
w = g + l
return (w * (100.0 / w.max())).clip(lower=1.0).astype(int)
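
# Build the per-bus feature matrix used by hierarchical agglomerative clustering
# (HAC): depending on the ``feature`` string, either the mean capacity factors
# ("-cap") or the full capacity-factor time series ("-time") of the selected
# carriers, with "offwind" expanded into the individual offwind sub-carriers.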
def get_feature_for_hac(n, buses_i=None, feature=None):
if buses_i is None:
buses_i = n.buses.index
if feature is None:
feature = "solar+onwind-time"
carriers = feature.split("-")[0].split("+")
if "offwind" in carriers:
carriers.remove("offwind")
carriers = np.append(
carriers, n.generators.carrier.filter(like="offwind").unique()
)
if feature.split("-")[1] == "cap":
feature_data = pd.DataFrame(index=buses_i, columns=carriers)
for carrier in carriers:
gen_i = n.generators.query("carrier == @carrier").index
attach = (
n.generators_t.p_max_pu[gen_i]
.mean()
.rename(index=n.generators.loc[gen_i].bus)
)
feature_data[carrier] = attach
if feature.split("-")[1] == "time":
feature_data = pd.DataFrame(columns=buses_i)
for carrier in carriers:
gen_i = n.generators.query("carrier == @carrier").index
attach = n.generators_t.p_max_pu[gen_i].rename(
columns=n.generators.loc[gen_i].bus
)
feature_data = pd.concat([feature_data, attach], axis=0)[buses_i]
feature_data = feature_data.T
# timestamp raises error in sklearn >= v1.2:
feature_data.columns = feature_data.columns.astype(str)
feature_data = feature_data.fillna(0)
return feature_data
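
# Allocate the total number of clusters across (country, sub_network) groups in
# proportion to their share of mean load L: minimise sum_i (n_i - L_i * n_clusters)^2
# subject to sum_i n_i == n_clusters and 1 <= n_i <= (number of buses in group i),
# formulated as an integer program in Pyomo.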
def distribute_clusters(n, n_clusters, focus_weights=None, solver_name="cbc"):
"""
Determine the number of clusters per country.
"""
L = (
n.loads_t.p_set.mean()
.groupby(n.loads.bus)
.sum()
.groupby([n.buses.country, n.buses.sub_network])
.sum()
.pipe(normed)
)
N = n.buses.groupby(["country", "sub_network"]).size()
assert (
n_clusters >= len(N) and n_clusters <= N.sum()
), f"Number of clusters must be {len(N)} <= n_clusters <= {N.sum()} for this selection of countries."
if focus_weights is not None:
total_focus = sum(list(focus_weights.values()))
assert (
total_focus <= 1.0
), "The sum of focus weights must be less than or equal to 1."
for country, weight in focus_weights.items():
L[country] = weight / len(L[country])
remainder = [
c not in focus_weights.keys() for c in L.index.get_level_values("country")
]
L[remainder] = L.loc[remainder].pipe(normed) * (1 - total_focus)
logger.warning("Using custom focus weights for determining number of clusters.")
assert np.isclose(
L.sum(), 1.0, rtol=1e-3
), f"Country weights L must sum up to 1.0 when distributing clusters. Is {L.sum()}."
m = po.ConcreteModel()
def n_bounds(model, *n_id):
return (1, N[n_id])
m.n = po.Var(list(L.index), bounds=n_bounds, domain=po.Integers)
m.tot = po.Constraint(expr=(po.summation(m.n) == n_clusters))
m.objective = po.Objective(
expr=sum((m.n[i] - L.loc[i] * n_clusters) ** 2 for i in L.index),
sense=po.minimize,
)
opt = po.SolverFactory(solver_name)
if not opt.has_capability("quadratic_objective"):
logger.warning(
f"The configured solver `{solver_name}` does not support quadratic objectives. Falling back to `ipopt`."
)
opt = po.SolverFactory("ipopt")
results = opt.solve(m)
assert (
results["Solver"][0]["Status"] == "ok"
), f"Solver returned non-optimally: {results}"
return pd.Series(m.n.get_values(), index=L.index).round().astype(int)
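
# Compute a busmap (bus -> cluster label) by clustering the buses of each
# (country, sub_network) group separately with the selected algorithm
# ("kmeans", "hac" or "modularity"); labels are prefixed with the country
# and sub-network id, e.g. "DE0 5".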
def busmap_for_n_clusters(
n,
n_clusters,
solver_name,
focus_weights=None,
algorithm="kmeans",
feature=None,
**algorithm_kwds,
):
if algorithm == "kmeans":
algorithm_kwds.setdefault("n_init", 1000)
algorithm_kwds.setdefault("max_iter", 30000)
algorithm_kwds.setdefault("tol", 1e-6)
algorithm_kwds.setdefault("random_state", 0)
def fix_country_assignment_for_hac(n):
from scipy.sparse import csgraph
# overwrite country of nodes that are disconnected from their country-topology
for country in n.buses.country.unique():
m = n[n.buses.country == country].copy()
_, labels = csgraph.connected_components(
m.adjacency_matrix(), directed=False
)
component = pd.Series(labels, index=m.buses.index)
component_sizes = component.value_counts()
if len(component_sizes) > 1:
disconnected_bus = component[
component == component_sizes.index[-1]
].index[0]
neighbor_bus = n.lines.query(
"bus0 == @disconnected_bus or bus1 == @disconnected_bus"
).iloc[0][["bus0", "bus1"]]
new_country = list(
set(n.buses.loc[neighbor_bus].country) - set([country])
)[0]
logger.info(
f"overwriting country `{country}` of bus `{disconnected_bus}` "
f"to new country `{new_country}`, because it is disconnected "
"from its initial inter-country transmission grid."
)
n.buses.at[disconnected_bus, "country"] = new_country
return n
if algorithm == "hac":
feature = get_feature_for_hac(n, buses_i=n.buses.index, feature=feature)
n = fix_country_assignment_for_hac(n)
if (algorithm != "hac") and (feature is not None):
logger.warning(
f"Keyword argument feature is only valid for algorithm `hac`. "
f"Given feature `{feature}` will be ignored."
)
n.determine_network_topology()
n_clusters = distribute_clusters(
n, n_clusters, focus_weights=focus_weights, solver_name=solver_name
)
def busmap_for_country(x):
prefix = x.name[0] + x.name[1] + " "
logger.debug(f"Determining busmap for country {prefix[:-1]}")
if len(x) == 1:
return pd.Series(prefix + "0", index=x.index)
weight = weighting_for_country(n, x)
if algorithm == "kmeans":
return prefix + busmap_by_kmeans(
n, weight, n_clusters[x.name], buses_i=x.index, **algorithm_kwds
)
elif algorithm == "hac":
return prefix + busmap_by_hac(
n, n_clusters[x.name], buses_i=x.index, feature=feature.loc[x.index]
)
elif algorithm == "modularity":
return prefix + busmap_by_greedy_modularity(
n, n_clusters[x.name], buses_i=x.index
)
else:
raise ValueError(
f"`algorithm` must be one of 'kmeans' or 'hac'. Is {algorithm}."
)
return (
n.buses.groupby(["country", "sub_network"], group_keys=False)
.apply(busmap_for_country)
.squeeze()
.rename("busmap")
)
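
# Aggregate the network according to the busmap (either computed above or supplied
# as a custom busmap) using PyPSA's get_clustering_from_busmap; for links that grow
# longer through clustering, the added length is priced at ``extended_link_costs``
# and ``underwater_fraction`` is rescaled to the new link length.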
def clustering_for_n_clusters(
n,
n_clusters,
custom_busmap=False,
aggregate_carriers=None,
line_length_factor=1.25,
aggregation_strategies=dict(),
solver_name="cbc",
algorithm="hac",
feature=None,
extended_link_costs=0,
focus_weights=None,
):
bus_strategies, generator_strategies = get_aggregation_strategies(
aggregation_strategies
)
if not isinstance(custom_busmap, pd.Series):
busmap = busmap_for_n_clusters(
n, n_clusters, solver_name, focus_weights, algorithm, feature
)
else:
busmap = custom_busmap
clustering = get_clustering_from_busmap(
n,
busmap,
bus_strategies=bus_strategies,
aggregate_generators_weighted=True,
aggregate_generators_carriers=aggregate_carriers,
aggregate_one_ports=["Load", "StorageUnit"],
line_length_factor=line_length_factor,
generator_strategies=generator_strategies,
scale_link_capital_costs=False,
)
if not n.links.empty:
nc = clustering.network
nc.links["underwater_fraction"] = (
n.links.eval("underwater_fraction * length").div(nc.links.length).dropna()
)
nc.links["capital_cost"] = nc.links["capital_cost"].add(
(nc.links.length - n.links.length)
.clip(lower=0)
.mul(extended_link_costs)
.dropna(),
fill_value=0,
)
return clustering
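
# Dissolve the onshore and offshore Voronoi regions of the original buses into one
# shape per cluster, following the composition of all busmaps applied so far.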
def cluster_regions(busmaps, input=None, output=None):
busmap = reduce(lambda x, y: x.map(y), busmaps[1:], busmaps[0])
for which in ("regions_onshore", "regions_offshore"):
regions = gpd.read_file(getattr(input, which))
regions = regions.reindex(columns=["name", "geometry"]).set_index("name")
regions_c = regions.dissolve(busmap)
regions_c.index.name = "name"
regions_c = regions_c.reset_index()
regions_c.to_file(getattr(output, which))
def plot_busmap_for_n_clusters(n, n_clusters, solver_name="cbc", fn=None):
    busmap = busmap_for_n_clusters(n, n_clusters, solver_name)
cs = busmap.unique()
cr = sns.color_palette("hls", len(cs))
n.plot(bus_colors=busmap.map(dict(zip(cs, cr))))
if fn is not None:
plt.savefig(fn, bbox_inches="tight")
del cs, cr
if __name__ == "__main__":
if "snakemake" not in globals():
from _helpers import mock_snakemake
snakemake = mock_snakemake("cluster_network", simpl="", clusters="37c")
configure_logging(snakemake)
params = snakemake.params
solver_name = snakemake.config["solving"]["solver"]["name"]
n = pypsa.Network(snakemake.input.network)
exclude_carriers = params.cluster_network["exclude_carriers"]
aggregate_carriers = set(n.generators.carrier) - set(exclude_carriers)
conventional_carriers = set(params.conventional_carriers)
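    # The {clusters} wildcard may carry a suffix:
    #   "<n>m": cluster to <n> buses but aggregate only conventional carriers,
    #           keeping renewable generators at the resolution of the simplified network,
    #   "<n>c": cluster to <n> buses but exclude conventional carriers from aggregation,
    #   "all":  keep every bus, i.e. skip clustering.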
if snakemake.wildcards.clusters.endswith("m"):
n_clusters = int(snakemake.wildcards.clusters[:-1])
aggregate_carriers = params.conventional_carriers & aggregate_carriers
elif snakemake.wildcards.clusters.endswith("c"):
n_clusters = int(snakemake.wildcards.clusters[:-1])
aggregate_carriers = aggregate_carriers - conventional_carriers
elif snakemake.wildcards.clusters == "all":
n_clusters = len(n.buses)
else:
n_clusters = int(snakemake.wildcards.clusters)
if n_clusters == len(n.buses):
# Fast-path if no clustering is necessary
busmap = n.buses.index.to_series()
linemap = n.lines.index.to_series()
clustering = pypsa.clustering.spatial.Clustering(
n, busmap, linemap, linemap, pd.Series(dtype="O")
)
else:
Nyears = n.snapshot_weightings.objective.sum() / 8760
hvac_overhead_cost = load_costs(
snakemake.input.tech_costs,
params.costs,
params.max_hours,
Nyears,
).at["HVAC overhead", "capital_cost"]
custom_busmap = params.custom_busmap
if custom_busmap:
custom_busmap = pd.read_csv(
snakemake.input.custom_busmap, index_col=0, squeeze=True
)
custom_busmap.index = custom_busmap.index.astype(str)
logger.info(f"Imported custom busmap from {snakemake.input.custom_busmap}")
clustering = clustering_for_n_clusters(
n,
n_clusters,
custom_busmap,
aggregate_carriers,
params.length_factor,
params.aggregation_strategies,
solver_name,
params.cluster_network["algorithm"],
params.cluster_network["feature"],
hvac_overhead_cost,
params.focus_weights,
)
update_p_nom_max(clustering.network)
clustering.network.meta = dict(
snakemake.config, **dict(wildcards=dict(snakemake.wildcards))
)
clustering.network.export_to_netcdf(snakemake.output.network)
for attr in (
"busmap",
"linemap",
): # also available: linemap_positive, linemap_negative
getattr(clustering, attr).to_csv(snakemake.output[attr])
cluster_regions((clustering.busmap,), snakemake.input, snakemake.output)