merge master

Author: Fabian Neumann
Date:   2021-04-27 14:47:15 +02:00
Commit: 4ec06f54bc

85 changed files with 1836 additions and 1350 deletions


@@ -11,7 +11,7 @@ assignees: ''
 ## Checklist
 - [ ] I am using the current [`master`](https://github.com/PyPSA/pypsa-eur/tree/master) branch or the latest [release](https://github.com/PyPSA/pypsa-eur/releases). Please indicate.
-- [ ] I am running on an up-to-date [`pypsa-eur` environment](https://github.com/PyPSA/pypsa-eur/blob/master/environment.yaml). Update via `conda env update -f environment.yaml`.
+- [ ] I am running on an up-to-date [`pypsa-eur` environment](https://github.com/PyPSA/pypsa-eur/blob/master/envs/environment.yaml). Update via `conda env update -f envs/environment.yaml`.
 ## Describe the Bug


@@ -7,7 +7,7 @@ Closes # (if applicable).
 - [ ] I tested my contribution locally and it seems to work fine.
 - [ ] Code and workflow changes are sufficiently documented.
-- [ ] Newly introduced dependencies are added to `environment.yaml` and `environment.docs.yaml`.
+- [ ] Newly introduced dependencies are added to `envs/environment.yaml` and `envs/environment.docs.yaml`.
 - [ ] Changes in configuration options are added in all of `config.default.yaml`, `config.tutorial.yaml`, and `test/config.test1.yaml`.
 - [ ] Changes in configuration options are also documented in `doc/configtables/*.csv` and line references are adjusted in `doc/configuration.rst` and `doc/tutorial.rst`.
 - [ ] A note for the release notes `doc/release_notes.rst` is amended in the format of previous release notes.

.gitignore

@@ -7,6 +7,7 @@
 __pycache__
 *dconf
 gurobi.log
+.vscode
 /bak
 /resources


@@ -5,4 +5,4 @@
 version: 2
 conda:
-  environment: environment.docs.yaml
+  environment: envs/environment.docs.yaml


@@ -2,6 +2,10 @@
 #
 # SPDX-License-Identifier: GPL-3.0-or-later

+branches:
+  only:
+    - master
+
 os:
 - windows
 - linux
@@ -15,14 +19,18 @@ before_install:
 - source conda4travis.sh

 # install conda environment
-- conda env create -f ./environment.yaml
+- conda install -c conda-forge mamba
+- mamba env create -f ./envs/environment.yaml
 - conda activate pypsa-eur

 # install open-source solver
-- conda install -c conda-forge ipopt glpk
+- mamba install -c conda-forge glpk ipopt'<3.13.3'
+
+# list packages for easier debugging
+- conda list

 script:
 - cp ./test/config.test1.yaml ./config.yaml
-- snakemake -j all solve_all_elec_networks
+- snakemake -j all solve_all_networks
 - rm -rf resources/*.nc resources/*.geojson resources/*.h5 networks results
 # could repeat for more configurations in future


@@ -7,7 +7,7 @@ SPDX-License-Identifier: CC-BY-4.0
 [![Build Status](https://travis-ci.org/PyPSA/pypsa-eur.svg?branch=master)](https://travis-ci.org/PyPSA/pypsa-eur)
 [![Documentation](https://readthedocs.org/projects/pypsa-eur/badge/?version=latest)](https://pypsa-eur.readthedocs.io/en/latest/?badge=latest)
 ![Size](https://img.shields.io/github/repo-size/pypsa/pypsa-eur)
-[![Zenodo](https://zenodo.org/badge/DOI/10.5281/zenodo.3520875.svg)](https://doi.org/10.5281/zenodo.3520875)
+[![Zenodo](https://zenodo.org/badge/DOI/10.5281/zenodo.3520874.svg)](https://doi.org/10.5281/zenodo.3520874)
 [![Gitter](https://badges.gitter.im/PyPSA/community.svg)](https://gitter.im/PyPSA/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
 [![Snakemake](https://img.shields.io/badge/snakemake-≥5.0.0-brightgreen.svg?style=flat)](https://snakemake.readthedocs.io)
 [![REUSE status](https://api.reuse.software/badge/github.com/pypsa/pypsa-eur)](https://api.reuse.software/info/github.com/pypsa/pypsa-eur)
@@ -42,7 +42,7 @@ discussion in Section 3.4 "Model validation" of the paper.
 ![PyPSA-Eur Grid Model Simplified](doc/img/elec_s_X.png)

-The model is designed to be imported into the open toolbox
+The model building routines are defined through a snakemake workflow. The model is designed to be imported into the open toolbox
 [PyPSA](https://github.com/PyPSA/PyPSA) for operational studies as
 well as generation and transmission expansion planning studies.
@@ -61,7 +61,7 @@ The dataset consists of:
 - Geographical potentials for wind and solar generators based on land use (CORINE) and excluding nature reserves (Natura2000) are computed with the [vresutils library](https://github.com/FRESNA/vresutils) and the [glaes library](https://github.com/FZJ-IEK3-VSA/glaes).

 Already-built versions of the model can be found in the accompanying [Zenodo
-repository](https://doi.org/10.5281/zenodo.3601882).
+repository](https://doi.org/10.5281/zenodo.3601881).

 A version of the model that adds building heating, transport and
 industry sectors to the model, as well as gas networks, can be found

Snakefile

@@ -11,33 +11,30 @@ if not exists("config.yaml"):
 configfile: "config.yaml"

 COSTS="resources/costs.csv"
+ATLITE_NPROCESSES = config['atlite'].get('nprocesses', 4)

 wildcard_constraints:
-    ll="(v|c)([0-9\.]+|opt|all)|all", # line limit, can be volume or cost
     simpl="[a-zA-Z0-9]*|all",
     clusters="[0-9]+m?|all",
-    sectors="[+a-zA-Z0-9]+",
+    ll="(v|c)([0-9\.]+|opt|all)|all",
    opts="[-+a-zA-Z0-9\.]*"
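The `ll` wildcard constraint packs the line-limit syntax into one regular expression: a volume (`v`) or cost (`c`) prefix followed by a numeric factor, `opt`, or `all`, or the bare keyword `all`. A minimal sketch of what it accepts, checked with Python's `re` module (the example strings are illustrative):

```python
import re

# the {ll} wildcard constraint as it appears in the Snakefile
ll_pattern = re.compile(r"(v|c)([0-9\.]+|opt|all)|all")

# accepted: prefix plus factor, 'opt', or 'all'
assert ll_pattern.fullmatch("v1.25")  # line volume limited to a 1.25x factor
assert ll_pattern.fullmatch("copt")   # line cost optimised
assert ll_pattern.fullmatch("all")    # no limit

# rejected: unknown prefix or missing factor
assert ll_pattern.fullmatch("x2") is None
assert ll_pattern.fullmatch("v") is None
```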
-rule cluster_all_elec_networks:
-    input:
-        expand("networks/elec_s{simpl}_{clusters}.nc",
-               **config['scenario'])
+rule cluster_all_networks:
+    input: expand("networks/elec_s{simpl}_{clusters}.nc", **config['scenario'])

-rule extra_components_all_elec_networks:
-    input:
-        expand("networks/elec_s{simpl}_{clusters}_ec.nc",
-               **config['scenario'])
+rule extra_components_all_networks:
+    input: expand("networks/elec_s{simpl}_{clusters}_ec.nc", **config['scenario'])

-rule prepare_all_elec_networks:
-    input:
-        expand("networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc",
-               **config['scenario'])
+rule prepare_all_networks:
+    input: expand("networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc", **config['scenario'])

-rule solve_all_elec_networks:
-    input:
-        expand("results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc",
-               **config['scenario'])
+rule solve_all_networks:
+    input: expand("results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc", **config['scenario'])
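These collector rules rely on snakemake's `expand`, which formats the target template with the Cartesian product of the wildcard values in `config['scenario']`. A rough stand-in illustrating that behaviour — the scenario values below are invented for illustration:

```python
from itertools import product

def expand_sketch(template, **wildcards):
    # simplified stand-in for snakemake.io.expand: format the template with
    # every combination of the supplied wildcard value lists
    keys = list(wildcards)
    return [template.format(**dict(zip(keys, combo)))
            for combo in product(*wildcards.values())]

# hypothetical scenario config: two clusterings, one line limit, one opts string
scenario = {"simpl": [""], "clusters": [37, 128], "ll": ["copt"], "opts": ["Co2L-3H"]}
targets = expand_sketch("results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc",
                        **scenario)
```

Each combination becomes one solve target, so two cluster counts yield two result files.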
 if config['enable'].get('prepare_links_p_nom', False):

     rule prepare_links_p_nom:
@@ -45,7 +42,6 @@ if config['enable'].get('prepare_links_p_nom', False):
         log: 'logs/prepare_links_p_nom.log'
         threads: 1
         resources: mem=500
-        # group: 'nonfeedin_preparation'
         script: 'scripts/prepare_links_p_nom.py'
@@ -53,12 +49,13 @@ datafiles = ['ch_cantons.csv', 'je-e-21.03.02.xls',
              'eez/World_EEZ_v8_2014.shp', 'EIA_hydro_generation_2000_2014.csv',
              'hydro_capacities.csv', 'naturalearth/ne_10m_admin_0_countries.shp',
              'NUTS_2013_60M_SH/data/NUTS_RG_60M_2013.shp', 'nama_10r_3popgdp.tsv.gz',
-             'nama_10r_3gdp.tsv.gz', 'time_series_60min_singleindex_filtered.csv',
-             'corine/g250_clc06_V18_5.tif']
+             'nama_10r_3gdp.tsv.gz', 'corine/g250_clc06_V18_5.tif']

 if not config.get('tutorial', False):
     datafiles.extend(["natura/Natura2000_end2015.shp", "GEBCO_2014_2D.nc"])

 if config['enable'].get('retrieve_databundle', True):
     rule retrieve_databundle:
         output: expand('data/bundle/{file}', file=datafiles)
@@ -66,6 +63,10 @@ if config['enable'].get('retrieve_databundle', True):
         script: 'scripts/retrieve_databundle.py'

+rule build_load_data:
+    output: "resources/load.csv"
+    log: "logs/build_load_data.log"
+    script: 'scripts/build_load_data.py'

 rule build_powerplants:
     input:
@@ -75,9 +76,9 @@ rule build_powerplants:
     log: "logs/build_powerplants.log"
     threads: 1
     resources: mem=500
-    # group: 'nonfeedin_preparation'
     script: "scripts/build_powerplants.py"

 rule base_network:
     input:
         eg_buses='data/entsoegridkit/buses.csv',
@@ -96,9 +97,9 @@ rule base_network:
     benchmark: "benchmarks/base_network"
     threads: 1
     resources: mem=500
-    # group: 'nonfeedin_preparation'
     script: "scripts/base_network.py"

 rule build_shapes:
     input:
         naturalearth='data/bundle/naturalearth/ne_10m_admin_0_countries.shp',
@@ -116,9 +117,9 @@ rule build_shapes:
     log: "logs/build_shapes.log"
     threads: 1
     resources: mem=500
-    # group: 'nonfeedin_preparation'
     script: "scripts/build_shapes.py"

 rule build_bus_regions:
     input:
         country_shapes='resources/country_shapes.geojson',
@@ -128,20 +129,21 @@ rule build_bus_regions:
         regions_onshore="resources/regions_onshore.geojson",
         regions_offshore="resources/regions_offshore.geojson"
     log: "logs/build_bus_regions.log"
+    threads: 1
     resources: mem=1000
-    # group: 'nonfeedin_preparation'
     script: "scripts/build_bus_regions.py"

 if config['enable'].get('build_cutout', False):
     rule build_cutout:
         output: directory("cutouts/{cutout}")
         log: "logs/build_cutout/{cutout}.log"
-        resources: mem=config['atlite'].get('nprocesses', 4) * 1000
-        threads: config['atlite'].get('nprocesses', 4)
         benchmark: "benchmarks/build_cutout_{cutout}"
-        # group: 'feedin_preparation'
+        threads: ATLITE_NPROCESSES
+        resources: mem=ATLITE_NPROCESSES * 1000
         script: "scripts/build_cutout.py"

 if config['enable'].get('retrieve_cutout', True):
     rule retrieve_cutout:
         output: directory(expand("cutouts/{cutouts}", **config['atlite'])),
@@ -158,6 +160,7 @@ if config['enable'].get('build_natura_raster', False):
         log: "logs/build_natura_raster.log"
         script: "scripts/build_natura_raster.py"

 if config['enable'].get('retrieve_natura_raster', True):
     rule retrieve_natura_raster:
         output: "resources/natura.tiff"
@@ -177,23 +180,24 @@ rule build_renewable_profiles:
         base_network="networks/base.nc",
         corine="data/bundle/corine/g250_clc06_V18_5.tif",
         natura="resources/natura.tiff",
-        gebco=lambda wildcards: ("data/bundle/GEBCO_2014_2D.nc"
-                                 if "max_depth" in config["renewable"][wildcards.technology].keys()
-                                 else []),
+        gebco=lambda w: ("data/bundle/GEBCO_2014_2D.nc"
+                         if "max_depth" in config["renewable"][w.technology].keys()
+                         else []),
         country_shapes='resources/country_shapes.geojson',
         offshore_shapes='resources/offshore_shapes.geojson',
-        regions=lambda wildcards: ("resources/regions_onshore.geojson"
-                                   if wildcards.technology in ('onwind', 'solar')
-                                   else "resources/regions_offshore.geojson"),
-        cutout=lambda wildcards: "cutouts/" + config["renewable"][wildcards.technology]['cutout']
-    output: profile="resources/profile_{technology}.nc",
+        regions=lambda w: ("resources/regions_onshore.geojson"
+                           if w.technology in ('onwind', 'solar')
+                           else "resources/regions_offshore.geojson"),
+        cutout=lambda w: "cutouts/" + config["renewable"][w.technology]['cutout']
+    output:
+        profile="resources/profile_{technology}.nc",
     log: "logs/build_renewable_profile_{technology}.log"
-    resources: mem=config['atlite'].get('nprocesses', 2) * 5000
-    threads: config['atlite'].get('nprocesses', 2)
     benchmark: "benchmarks/build_renewable_profiles_{technology}"
-    # group: 'feedin_preparation'
+    threads: ATLITE_NPROCESSES
+    resources: mem=ATLITE_NPROCESSES * 5000
     script: "scripts/build_renewable_profiles.py"
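The `lambda w:` inputs above make a rule's dependencies conditional on the wildcard value, e.g. the GEBCO bathymetry file is only pulled in for technologies with a depth limit. A small sketch of that pattern — the config fragment is invented for illustration:

```python
from types import SimpleNamespace

# hypothetical config fragment: only offshore wind carries a max_depth limit
config = {"renewable": {"offwind-ac": {"max_depth": 50.}, "solar": {}}}

def gebco(w):
    # bathymetry data is only needed when a depth limit must be enforced;
    # returning [] means "no extra input" to snakemake
    return ("data/bundle/GEBCO_2014_2D.nc"
            if "max_depth" in config["renewable"][w.technology] else [])

assert gebco(SimpleNamespace(technology="offwind-ac")) == "data/bundle/GEBCO_2014_2D.nc"
assert gebco(SimpleNamespace(technology="solar")) == []
```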
 if 'hydro' in config['renewable'].keys():
     rule build_hydro_profile:
         input:
@@ -203,9 +207,9 @@ if 'hydro' in config['renewable'].keys():
         output: 'resources/profile_hydro.nc'
         log: "logs/build_hydro_profile.log"
         resources: mem=5000
-        # group: 'feedin_preparation'
         script: 'scripts/build_hydro_profile.py'

 rule add_electricity:
     input:
         base_network='networks/base.nc',
@@ -214,78 +218,80 @@ rule add_electricity:
         powerplants='resources/powerplants.csv',
         hydro_capacities='data/bundle/hydro_capacities.csv',
         geth_hydro_capacities='data/geth2015_hydro_capacities.csv',
-        opsd_load='data/bundle/time_series_60min_singleindex_filtered.csv',
+        load='resources/load.csv',
         nuts3_shapes='resources/nuts3_shapes.geojson',
-        **{'profile_' + t: "resources/profile_" + t + ".nc"
-           for t in config['renewable']}
+        **{f"profile_{tech}": f"resources/profile_{tech}.nc"
+           for tech in config['renewable']}
     output: "networks/elec.nc"
     log: "logs/add_electricity.log"
     benchmark: "benchmarks/add_electricity"
     threads: 1
     resources: mem=3000
-    # group: 'build_pypsa_networks'
     script: "scripts/add_electricity.py"
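The `**{f"profile_{tech}": ...}` unpacking turns every configured renewable carrier into a named rule input. A sketch with an invented carrier list:

```python
# hypothetical carrier list; the real one comes from config['renewable']
renewable = ["onwind", "offwind-ac", "solar"]

# one named input per carrier, e.g. profile_solar -> resources/profile_solar.nc
profiles = {f"profile_{tech}": f"resources/profile_{tech}.nc" for tech in renewable}
```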
 rule simplify_network:
     input:
-        network='networks/{network}.nc',
+        network='networks/elec.nc',
         tech_costs=COSTS,
         regions_onshore="resources/regions_onshore.geojson",
         regions_offshore="resources/regions_offshore.geojson"
     output:
-        network='networks/{network}_s{simpl}.nc',
-        regions_onshore="resources/regions_onshore_{network}_s{simpl}.geojson",
-        regions_offshore="resources/regions_offshore_{network}_s{simpl}.geojson",
-        clustermaps='resources/clustermaps_{network}_s{simpl}.h5'
-    log: "logs/simplify_network/{network}_s{simpl}.log"
-    benchmark: "benchmarks/simplify_network/{network}_s{simpl}"
+        network='networks/elec_s{simpl}.nc',
+        regions_onshore="resources/regions_onshore_elec_s{simpl}.geojson",
+        regions_offshore="resources/regions_offshore_elec_s{simpl}.geojson",
+        busmap='resources/busmap_elec_s{simpl}.csv'
+    log: "logs/simplify_network/elec_s{simpl}.log"
+    benchmark: "benchmarks/simplify_network/elec_s{simpl}"
     threads: 1
     resources: mem=4000
-    # group: 'build_pypsa_networks'
     script: "scripts/simplify_network.py"
 rule cluster_network:
     input:
-        network='networks/{network}_s{simpl}.nc',
-        regions_onshore="resources/regions_onshore_{network}_s{simpl}.geojson",
-        regions_offshore="resources/regions_offshore_{network}_s{simpl}.geojson",
-        clustermaps=ancient('resources/clustermaps_{network}_s{simpl}.h5'),
+        network='networks/elec_s{simpl}.nc',
+        regions_onshore="resources/regions_onshore_elec_s{simpl}.geojson",
+        regions_offshore="resources/regions_offshore_elec_s{simpl}.geojson",
+        busmap=ancient('resources/busmap_elec_s{simpl}.csv'),
+        custom_busmap=("data/custom_busmap_elec_s{simpl}_{clusters}.csv"
+                       if config["enable"].get("custom_busmap", False) else []),
         tech_costs=COSTS
     output:
-        network='networks/{network}_s{simpl}_{clusters}.nc',
-        regions_onshore="resources/regions_onshore_{network}_s{simpl}_{clusters}.geojson",
-        regions_offshore="resources/regions_offshore_{network}_s{simpl}_{clusters}.geojson",
-        clustermaps='resources/clustermaps_{network}_s{simpl}_{clusters}.h5'
-    log: "logs/cluster_network/{network}_s{simpl}_{clusters}.log"
-    benchmark: "benchmarks/cluster_network/{network}_s{simpl}_{clusters}"
+        network='networks/elec_s{simpl}_{clusters}.nc',
+        regions_onshore="resources/regions_onshore_elec_s{simpl}_{clusters}.geojson",
+        regions_offshore="resources/regions_offshore_elec_s{simpl}_{clusters}.geojson",
+        busmap="resources/busmap_elec_s{simpl}_{clusters}.csv",
+        linemap="resources/linemap_elec_s{simpl}_{clusters}.csv"
+    log: "logs/cluster_network/elec_s{simpl}_{clusters}.log"
+    benchmark: "benchmarks/cluster_network/elec_s{simpl}_{clusters}"
     threads: 1
     resources: mem=3000
-    # group: 'build_pypsa_networks'
     script: "scripts/cluster_network.py"
 rule add_extra_components:
     input:
-        network='networks/{network}_s{simpl}_{clusters}.nc',
+        network='networks/elec_s{simpl}_{clusters}.nc',
         tech_costs=COSTS,
-    output: 'networks/{network}_s{simpl}_{clusters}_ec.nc'
-    log: "logs/add_extra_components/{network}_s{simpl}_{clusters}.log"
-    benchmark: "benchmarks/add_extra_components/{network}_s{simpl}_{clusters}_ec"
+    output: 'networks/elec_s{simpl}_{clusters}_ec.nc'
+    log: "logs/add_extra_components/elec_s{simpl}_{clusters}.log"
+    benchmark: "benchmarks/add_extra_components/elec_s{simpl}_{clusters}_ec"
     threads: 1
     resources: mem=3000
-    # group: 'build_pypsa_networks'
     script: "scripts/add_extra_components.py"
 rule prepare_network:
-    input: 'networks/{network}_s{simpl}_{clusters}_ec.nc', tech_costs=COSTS
-    output: 'networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc'
-    log: "logs/prepare_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.log"
+    input: 'networks/elec_s{simpl}_{clusters}_ec.nc', tech_costs=COSTS
+    output: 'networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc'
+    log: "logs/prepare_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.log"
+    benchmark: "benchmarks/prepare_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}"
     threads: 1
-    resources: mem=1000
-    # benchmark: "benchmarks/prepare_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}"
+    resources: mem=4000
     script: "scripts/prepare_network.py"
 def memory(w):
     factor = 3.
     for o in w.opts.split('-'):
@@ -293,52 +299,58 @@ def memory(w):
         if m is not None:
             factor /= int(m.group(1))
             break
+    for o in w.opts.split('-'):
+        m = re.match(r'^(\d+)seg$', o, re.IGNORECASE)
+        if m is not None:
+            factor *= int(m.group(1)) / 8760
+            break
     if w.clusters.endswith('m'):
         return int(factor * (18000 + 180 * int(w.clusters[:-1])))
     else:
         return int(factor * (10000 + 195 * int(w.clusters)))
-    # return 4890+310 * int(w.clusters)
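With the added loop, `memory` now also scales the estimate for time-series segmentation (`…seg` options), mirroring the existing scaling for hourly resampling. A self-contained sketch of the full heuristic — the `^(\d+)h$` pattern for the resampling branch is not shown in this hunk and is reconstructed from context, so treat it as an assumption:

```python
import re
from types import SimpleNamespace

def memory(w):
    factor = 3.
    for o in w.opts.split('-'):
        # hourly resampling, e.g. '3H': a third of the snapshots, a third of the memory
        m = re.match(r'^(\d+)h$', o, re.IGNORECASE)
        if m is not None:
            factor /= int(m.group(1))
            break
    for o in w.opts.split('-'):
        # segmentation, e.g. '4380seg': scale by segments relative to 8760 hours
        m = re.match(r'^(\d+)seg$', o, re.IGNORECASE)
        if m is not None:
            factor *= int(m.group(1)) / 8760
            break
    if w.clusters.endswith('m'):
        # clusterings that keep existing buses ('m' suffix) need a larger base
        return int(factor * (18000 + 180 * int(w.clusters[:-1])))
    else:
        return int(factor * (10000 + 195 * int(w.clusters)))

assert memory(SimpleNamespace(opts="Co2L-3H", clusters="37")) == 17215
assert memory(SimpleNamespace(opts="4380seg", clusters="37")) == 25822
```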
 rule solve_network:
-    input: "networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc"
-    output: "results/networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc"
-    shadow: "shallow"
+    input: "networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc"
+    output: "results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc"
     log:
-        solver=normpath("logs/solve_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_solver.log"),
-        python="logs/solve_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_python.log",
-        memory="logs/solve_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_memory.log"
-    benchmark: "benchmarks/solve_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}"
+        solver=normpath("logs/solve_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_solver.log"),
+        python="logs/solve_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_python.log",
+        memory="logs/solve_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_memory.log"
+    benchmark: "benchmarks/solve_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}"
     threads: 4
     resources: mem=memory
-    # group: "solve" # with group, threads is ignored https://bitbucket.org/snakemake/snakemake/issues/971/group-job-description-does-not-contain
+    shadow: "shallow"
     script: "scripts/solve_network.py"
 rule solve_operations_network:
     input:
-        unprepared="networks/{network}_s{simpl}_{clusters}_ec.nc",
-        optimized="results/networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc"
-    output: "results/networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_op.nc"
-    shadow: "shallow"
+        unprepared="networks/elec_s{simpl}_{clusters}_ec.nc",
+        optimized="results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc"
+    output: "results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_op.nc"
     log:
-        solver=normpath("logs/solve_operations_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_op_solver.log"),
-        python="logs/solve_operations_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_op_python.log",
-        memory="logs/solve_operations_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_op_memory.log"
-    benchmark: "benchmarks/solve_operations_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}"
+        solver=normpath("logs/solve_operations_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_op_solver.log"),
+        python="logs/solve_operations_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_op_python.log",
+        memory="logs/solve_operations_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_op_memory.log"
+    benchmark: "benchmarks/solve_operations_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}"
     threads: 4
     resources: mem=(lambda w: 5000 + 372 * int(w.clusters))
-    # group: "solve_operations"
+    shadow: "shallow"
     script: "scripts/solve_operations_network.py"
 rule plot_network:
     input:
-        network="results/networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc",
+        network="results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc",
         tech_costs=COSTS
     output:
-        only_map="results/plots/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_{attr}.{ext}",
-        ext="results/plots/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_{attr}_ext.{ext}"
-    log: "logs/plot_network/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_{attr}_{ext}.log"
+        only_map="results/plots/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_{attr}.{ext}",
+        ext="results/plots/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_{attr}_ext.{ext}"
+    log: "logs/plot_network/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_{attr}_{ext}.log"
     script: "scripts/plot_network.py"
 def input_make_summary(w):
     # It's mildly hacky to include the separate costs input as first entry
     if w.ll.endswith("all"):
@@ -348,41 +360,47 @@ def input_make_summary(w):
     else:
         ll = w.ll
     return ([COSTS] +
-            expand("results/networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc",
+            expand("results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc",
                    network=w.network,
                    ll=ll,
                    **{k: config["scenario"][k] if getattr(w, k) == "all" else getattr(w, k)
                       for k in ["simpl", "clusters", "opts"]}))

 rule make_summary:
     input: input_make_summary
-    output: directory("results/summaries/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}")
-    log: "logs/make_summary/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}.log",
+    output: directory("results/summaries/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}")
+    log: "logs/make_summary/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}.log",
     script: "scripts/make_summary.py"

 rule plot_summary:
-    input: "results/summaries/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}"
-    output: "results/plots/summary_{summary}_{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}.{ext}"
-    log: "logs/plot_summary/{summary}_{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}_{ext}.log"
+    input: "results/summaries/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}"
+    output: "results/plots/summary_{summary}_elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}.{ext}"
+    log: "logs/plot_summary/{summary}_elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_{country}_{ext}.log"
     script: "scripts/plot_summary.py"
-def input_plot_p_nom_max(wildcards):
-    return [('networks/{network}_s{simpl}{maybe_cluster}.nc'
-             .format(maybe_cluster=('' if c == 'full' else ('_' + c)), **wildcards))
-            for c in wildcards.clusts.split(",")]
+def input_plot_p_nom_max(w):
+    return [("networks/elec_s{simpl}{maybe_cluster}.nc"
+             .format(maybe_cluster=('' if c == 'full' else ('_' + c)), **w))
+            for c in w.clusts.split(",")]
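The rewritten helper hardcodes the `elec` prefix and shortens `wildcards` to `w`. A sketch of what it returns for a comma-separated `clusts` wildcard — here `w` is modelled as a plain dict and the wildcard values are invented:

```python
def input_plot_p_nom_max(w):
    # 'full' stands for the unclustered simplified network;
    # any other entry is a cluster count appended with an underscore
    return [("networks/elec_s{simpl}{maybe_cluster}.nc"
             .format(maybe_cluster=('' if c == 'full' else ('_' + c)), **w))
            for c in w["clusts"].split(",")]

files = input_plot_p_nom_max({"simpl": "", "clusts": "full,37,128"})
assert files == ["networks/elec_s.nc", "networks/elec_s_37.nc", "networks/elec_s_128.nc"]
```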
 rule plot_p_nom_max:
     input: input_plot_p_nom_max
-    output: "results/plots/{network}_s{simpl}_cum_p_nom_max_{clusts}_{techs}_{country}.{ext}"
-    log: "logs/plot_p_nom_max/{network}_s{simpl}_{clusts}_{techs}_{country}_{ext}.log"
+    output: "results/plots/elec_s{simpl}_cum_p_nom_max_{clusts}_{techs}_{country}.{ext}"
+    log: "logs/plot_p_nom_max/elec_s{simpl}_{clusts}_{techs}_{country}_{ext}.log"
     script: "scripts/plot_p_nom_max.py"
 rule build_country_flh:
     input:
         base_network="networks/base.nc",
         corine="data/bundle/corine/g250_clc06_V18_5.tif",
         natura="resources/natura.tiff",
-        gebco=lambda wildcards: ("data/bundle/GEBCO_2014_2D.nc"
-                                 if "max_depth" in config["renewable"][wildcards.technology].keys()
-                                 else []),
+        gebco=lambda w: ("data/bundle/GEBCO_2014_2D.nc"
+                         if "max_depth" in config["renewable"][w.technology].keys()
+                         else []),
         country_shapes='resources/country_shapes.geojson',
         offshore_shapes='resources/offshore_shapes.geojson',
@@ -400,9 +418,4 @@ rule build_country_flh:
     log: "logs/build_country_flh_{technology}.log"
     resources: mem=10000
     benchmark: "benchmarks/build_country_flh_{technology}"
-    # group: 'feedin_preparation'
     script: "scripts/build_country_flh.py"
-
-# Local Variables:
-# mode: python
-# End:


@@ -1,22 +0,0 @@
-# SPDX-FileCopyrightText: : 2017-2020 The PyPSA-Eur Authors
-#
-# SPDX-License-Identifier: GPL-3.0-or-later
-
-__default__:
-  log: "logs/cluster/{{name}}.log"
-
-feedin_preparation:
-  walltime: "12:00:00"
-
-solve_network:
-  walltime: "05:00:00:00"
-
-trace_solve_network:
-  walltime: "05:00:00:00"
-
-solve:
-  walltime: "05:00:00:00"
-  threads: 4 # Group threads are not aggregated
-
-solve_operations:
-  walltime: "01:00:00:00"


@@ -2,7 +2,7 @@
 #
 # SPDX-License-Identifier: CC0-1.0

-version: 0.2.0
+version: 0.3.0
 tutorial: false

 logging:
@@ -12,7 +12,6 @@ logging:
 summary_dir: results

 scenario:
-  sectors: [E]
   simpl: ['']
   ll: ['copt']
   clusters: [37, 128, 256, 512, 1024]
@@ -33,17 +32,18 @@ enable:
   retrieve_cutout: true
   build_natura_raster: false
   retrieve_natura_raster: true
+  custom_busmap: false

 electricity:
   voltages: [220., 300., 380.]
   co2limit: 7.75e+7 # 0.05 * 3.1e9*0.5
-  co2base: 3.1e+9 # 1 * 3.1e9*0.5
+  co2base: 1.487e9
   agg_p_nom_limits: data/agg_p_nom_minmax.csv

   extendable_carriers:
     Generator: []
-    StorageUnit: [battery, H2]
-    Store: [] # battery, H2
+    StorageUnit: [] # battery, H2
+    Store: [battery, H2]
     Link: []

   max_hours:
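The inline comment on `co2limit` encodes its derivation: 5% of an assumed 3.1 Gt CO2 base-year total, halved for the electricity sector's share. A quick check of that arithmetic — the interpretation of the three factors is an assumption read off the comment, not stated elsewhere in the config:

```python
emissions_base = 3.1e9      # t CO2, assumed base-year total per the comment
electricity_share = 0.5     # assumed share attributed to the electricity sector
remaining_fraction = 0.05   # assumed remaining fraction of base-year emissions

co2limit = remaining_fraction * emissions_base * electricity_share
# matches the configured value up to floating-point rounding
assert abs(co2limit - 7.75e7) < 1e-3
```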
@@ -53,6 +53,7 @@ electricity:
   powerplants_filter: false # use pandas query strings here, e.g. Country not in ['Germany']
   custom_powerplants: false # use pandas query strings here, e.g. Country in ['Germany']
   conventional_carriers: [nuclear, oil, OCGT, CCGT, coal, lignite, geothermal, biomass]
+  renewable_capacities_from_OPSD: [] # onwind, offwind, solar

   # estimate_renewable_capacities_from_capacity_stats:
   #   # Wind is the Fueltype in ppm.data.Capacity_stats, onwind, offwind-{ac,dc} the carrier in PyPSA-Eur
@ -143,8 +144,7 @@ renewable:
cutout: europe-2013-era5 cutout: europe-2013-era5
carriers: [ror, PHS, hydro] carriers: [ror, PHS, hydro]
PHS_max_hours: 6 PHS_max_hours: 6
hydro_max_hours: "energy_capacity_totals_by_country" # one of energy_capacity_totals_by_country, hydro_max_hours: "energy_capacity_totals_by_country" # one of energy_capacity_totals_by_country, estimate_by_large_installations or a float
# estimate_by_large_installations or a float
clip_min_inflow: 1.0 clip_min_inflow: 1.0
lines: lines:
@ -153,11 +153,13 @@ lines:
300.: "Al/St 240/40 3-bundle 300.0" 300.: "Al/St 240/40 3-bundle 300.0"
380.: "Al/St 240/40 4-bundle 380.0" 380.: "Al/St 240/40 4-bundle 380.0"
s_max_pu: 0.7 s_max_pu: 0.7
s_nom_max: .inf
length_factor: 1.25 length_factor: 1.25
under_construction: 'zero' # 'zero': set capacity to zero, 'remove': remove, 'keep': with full capacity under_construction: 'zero' # 'zero': set capacity to zero, 'remove': remove, 'keep': with full capacity
links: links:
p_max_pu: 1.0 p_max_pu: 1.0
p_nom_max: .inf
include_tyndp: true include_tyndp: true
under_construction: 'zero' # 'zero': set capacity to zero, 'remove': remove, 'keep': with full capacity under_construction: 'zero' # 'zero': set capacity to zero, 'remove': remove, 'keep': with full capacity
@ -167,6 +169,11 @@ transformers:
type: '' type: ''
load: load:
url: https://data.open-power-system-data.org/time_series/2019-06-05/time_series_60min_singleindex.csv
power_statistics: True # only for files from <2019; set false in order to get ENTSOE transparency data
interpolate_limit: 3 # data gaps up until this size are interpolated linearly
time_shift_for_large_gaps: 1w # data gaps up until this size are copied by copying from
manual_adjustments: true # false
scaling_factor: 1.0 scaling_factor: 1.0
costs: costs:
@ -188,7 +195,10 @@ costs:
offwind: 0.015 offwind: 0.015
hydro: 0. hydro: 0.
H2: 0. H2: 0.
electrolysis: 0.
fuel cell: 0.
battery: 0. battery: 0.
battery inverter: 0.
emission_prices: # in currency per tonne emission, only used with the option Ep emission_prices: # in currency per tonne emission, only used with the option Ep
co2: 0. co2: 0.
@ -267,67 +277,18 @@ plotting:
'waste' : '#68896b' 'waste' : '#68896b'
'geothermal' : '#ba91b1' 'geothermal' : '#ba91b1'
"OCGT" : "#d35050" "OCGT" : "#d35050"
"OCGT marginal" : "#d35050"
"OCGT-heat" : "#d35050"
"gas boiler" : "#d35050"
"gas boilers" : "#d35050"
"gas boiler marginal" : "#d35050"
"gas-to-power/heat" : "#d35050"
"gas" : "#d35050" "gas" : "#d35050"
"natural gas" : "#d35050" "natural gas" : "#d35050"
"CCGT" : "#b20101" "CCGT" : "#b20101"
"CCGT marginal" : "#b20101"
"Nuclear" : "#ff9000"
"Nuclear marginal" : "#ff9000"
"nuclear" : "#ff9000" "nuclear" : "#ff9000"
"coal" : "#707070" "coal" : "#707070"
"Coal" : "#707070"
"Coal marginal" : "#707070"
"lignite" : "#9e5a01" "lignite" : "#9e5a01"
"Lignite" : "#9e5a01"
"Lignite marginal" : "#9e5a01"
"Oil" : "#262626"
"oil" : "#262626" "oil" : "#262626"
"H2" : "#ea048a" "H2" : "#ea048a"
"hydrogen storage" : "#ea048a" "hydrogen storage" : "#ea048a"
"Sabatier" : "#a31597"
"methanation" : "#a31597"
"helmeth" : "#a31597"
"DAC" : "#d284ff"
"co2 stored" : "#e5e5e5"
"CO2 sequestration" : "#e5e5e5"
"battery" : "#b8ea04" "battery" : "#b8ea04"
"battery storage" : "#b8ea04"
"Li ion" : "#b8ea04"
"BEV charger" : "#e2ff7c"
"V2G" : "#7a9618"
"transport fuel cell" : "#e884be"
"retrofitting" : "#e0d6a8"
"building retrofitting" : "#e0d6a8"
"heat pumps" : "#ff9768"
"heat pump" : "#ff9768"
"air heat pump" : "#ffbea0"
"ground heat pump" : "#ff7a3d"
"power-to-heat" : "#a59e7c"
"power-to-gas" : "#db8585"
"power-to-liquid" : "#a9acd1"
"Fischer-Tropsch" : "#a9acd1"
"resistive heater" : "#aa4925"
"water tanks" : "#401f75"
"hot water storage" : "#401f75"
"hot water charging" : "#351c5e"
"hot water discharging" : "#683ab2"
"CHP" : "#d80a56"
"CHP heat" : "#d80a56"
"CHP electric" : "#d80a56"
"district heating" : "#93864b"
"Ambient" : "#262626"
"Electric load" : "#f9d002" "Electric load" : "#f9d002"
"electricity" : "#f9d002" "electricity" : "#f9d002"
"Heat load" : "#d35050"
"heat" : "#d35050"
"Transport load" : "#235ebc"
"transport" : "#235ebc"
"lines" : "#70af1d" "lines" : "#70af1d"
"transmission lines" : "#70af1d" "transmission lines" : "#70af1d"
"AC-AC" : "#70af1d" "AC-AC" : "#70af1d"
@ -347,18 +308,5 @@ plotting:
hydro: "Reservoir & Dam" hydro: "Reservoir & Dam"
battery: "Battery Storage" battery: "Battery Storage"
H2: "Hydrogen Storage" H2: "Hydrogen Storage"
lines: "Transmission lines" lines: "Transmission Lines"
ror: "Run of river" ror: "Run of River"
nice_names_n:
OCGT: "Open-Cycle\nGas"
CCGT: "Combined-Cycle\nGas"
offwind-ac: "Offshore\nWind (AC)"
offwind-dc: "Offshore\nWind (DC)"
onwind: "Onshore\nWind"
battery: "Battery\nStorage"
H2: "Hydrogen\nStorage"
lines: "Transmission\nlines"
ror: "Run of\nriver"
PHS: "Pumped Hydro\nStorage"
hydro: "Reservoir\n& Dam"
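The newly added `load:` options above configure how gaps in the load time series are repaired. As a hedged illustration of the assumed semantics of `interpolate_limit` (this is not the project's own code, which lives in the workflow scripts), only gaps no longer than the limit are filled linearly, while longer gaps are left for the time-shift fallback:

```python
# Illustrative sketch (assumed semantics of `interpolate_limit`, not project code):
# linearly interpolate only gaps of at most `limit` consecutive missing values.
import numpy as np
import pandas as pd

def interpolate_short_gaps(s: pd.Series, limit: int) -> pd.Series:
    """Fill gaps of at most `limit` NaNs by linear interpolation; keep longer gaps."""
    gap_id = s.notna().cumsum()                          # label each run between valid points
    gap_len = s.isna().groupby(gap_id).transform("sum")  # length of the gap each NaN sits in
    filled = s.interpolate(limit_area="inside")          # interpolate all interior gaps
    return s.where(s.notna() | (gap_len > limit), filled)

idx = pd.date_range("2013-01-01", periods=9, freq="h")
load = pd.Series([50, np.nan, np.nan, 53, np.nan, np.nan, np.nan, np.nan, 58],
                 index=idx, dtype=float)

result = interpolate_short_gaps(load, limit=3)
# the 2-hour gap becomes 51.0, 52.0; the 4-hour gap stays NaN
```

Gaps longer than `interpolate_limit` would then be handled by copying values shifted by `time_shift_for_large_gaps` (e.g. one week), which this sketch omits.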

View File

@@ -2,8 +2,9 @@
 #
 # SPDX-License-Identifier: CC0-1.0

-version: 0.2.0
+version: 0.3.0
 tutorial: true

 logging:
   level: INFO
   format: '%(levelname)s:%(name)s:%(message)s'
@@ -11,7 +12,6 @@ logging:
 summary_dir: results

 scenario:
-  sectors: [E]
   simpl: ['']
   ll: ['copt']
   clusters: [5]
@@ -32,6 +32,7 @@ enable:
   retrieve_cutout: true
   build_natura_raster: false
   retrieve_natura_raster: true
+  custom_busmap: false

 electricity:
   voltages: [220., 300., 380.]
@@ -39,8 +40,8 @@ electricity:
   extendable_carriers:
     Generator: [OCGT]
-    StorageUnit: [battery, H2]
-    Store: [] #battery, H2
+    StorageUnit: [] #battery, H2
+    Store: [battery, H2]
     Link: []

   max_hours:
@@ -131,11 +132,13 @@ lines:
     300.: "Al/St 240/40 3-bundle 300.0"
     380.: "Al/St 240/40 4-bundle 380.0"
   s_max_pu: 0.7
+  s_nom_max: .inf
   length_factor: 1.25
   under_construction: 'zero' # 'zero': set capacity to zero, 'remove': remove, 'keep': with full capacity

 links:
   p_max_pu: 1.0
+  p_nom_max: .inf
   include_tyndp: true
   under_construction: 'zero' # 'zero': set capacity to zero, 'remove': remove, 'keep': with full capacity
@@ -145,6 +148,11 @@ transformers:
   type: ''

 load:
+  url: https://data.open-power-system-data.org/time_series/2019-06-05/time_series_60min_singleindex.csv
+  power_statistics: True # only for files from <2019; set false in order to get ENTSOE transparency data
+  interpolate_limit: 3 # data gaps up until this size are interpolated linearly
+  time_shift_for_large_gaps: 1w # data gaps up until this size are copied by copying from
+  manual_adjustments: true # false
   scaling_factor: 1.0

 costs:
@@ -179,26 +187,8 @@ solving:
     clip_p_max_pu: 0.01
     skip_iterations: false
     track_iterations: false
-    #nhours: 10

   solver:
     name: cbc
-  # solver:
-  #   name: gurobi
-  #   threads: 4
-  #   method: 2 # barrier
-  #   crossover: 0
-  #   BarConvTol: 1.e-5
-  #   FeasibilityTol: 1.e-6
-  #   AggFill: 0
-  #   PreDual: 0
-  #   GURO_PAR_BARDENSETHRESH: 200
-  # solver:
-  #   name: cplex
-  #   threads: 4
-  #   lpmethod: 4 # barrier
-  #   solutiontype: 2 # non basic solution, ie no crossover
-  #   barrier_convergetol: 1.e-5
-  #   feasopt_tolerance: 1.e-6

 plotting:
   map:
@@ -246,67 +236,18 @@ plotting:
     'waste' : '#68896b'
     'geothermal' : '#ba91b1'
     "OCGT" : "#d35050"
-    "OCGT marginal" : "#d35050"
-    "OCGT-heat" : "#d35050"
-    "gas boiler" : "#d35050"
-    "gas boilers" : "#d35050"
-    "gas boiler marginal" : "#d35050"
-    "gas-to-power/heat" : "#d35050"
     "gas" : "#d35050"
     "natural gas" : "#d35050"
     "CCGT" : "#b20101"
-    "CCGT marginal" : "#b20101"
-    "Nuclear" : "#ff9000"
-    "Nuclear marginal" : "#ff9000"
     "nuclear" : "#ff9000"
     "coal" : "#707070"
-    "Coal" : "#707070"
-    "Coal marginal" : "#707070"
     "lignite" : "#9e5a01"
-    "Lignite" : "#9e5a01"
-    "Lignite marginal" : "#9e5a01"
-    "Oil" : "#262626"
     "oil" : "#262626"
     "H2" : "#ea048a"
     "hydrogen storage" : "#ea048a"
-    "Sabatier" : "#a31597"
-    "methanation" : "#a31597"
-    "helmeth" : "#a31597"
-    "DAC" : "#d284ff"
-    "co2 stored" : "#e5e5e5"
-    "CO2 sequestration" : "#e5e5e5"
     "battery" : "#b8ea04"
-    "battery storage" : "#b8ea04"
-    "Li ion" : "#b8ea04"
-    "BEV charger" : "#e2ff7c"
-    "V2G" : "#7a9618"
-    "transport fuel cell" : "#e884be"
-    "retrofitting" : "#e0d6a8"
-    "building retrofitting" : "#e0d6a8"
-    "heat pumps" : "#ff9768"
-    "heat pump" : "#ff9768"
-    "air heat pump" : "#ffbea0"
-    "ground heat pump" : "#ff7a3d"
-    "power-to-heat" : "#a59e7c"
-    "power-to-gas" : "#db8585"
-    "power-to-liquid" : "#a9acd1"
-    "Fischer-Tropsch" : "#a9acd1"
-    "resistive heater" : "#aa4925"
-    "water tanks" : "#401f75"
-    "hot water storage" : "#401f75"
-    "hot water charging" : "#351c5e"
-    "hot water discharging" : "#683ab2"
-    "CHP" : "#d80a56"
-    "CHP heat" : "#d80a56"
-    "CHP electric" : "#d80a56"
-    "district heating" : "#93864b"
-    "Ambient" : "#262626"
     "Electric load" : "#f9d002"
     "electricity" : "#f9d002"
-    "Heat load" : "#d35050"
-    "heat" : "#d35050"
-    "Transport load" : "#235ebc"
-    "transport" : "#235ebc"
     "lines" : "#70af1d"
     "transmission lines" : "#70af1d"
     "AC-AC" : "#70af1d"
@@ -326,17 +267,5 @@ plotting:
     hydro: "Reservoir & Dam"
     battery: "Battery Storage"
     H2: "Hydrogen Storage"
-    lines: "Transmission lines"
-    ror: "Run of river"
-  nice_names_n:
-    OCGT: "Open-Cycle\nGas"
-    CCGT: "Combined-Cycle\nGas"
-    offwind-ac: "Offshore\nWind (AC)"
-    offwind-dc: "Offshore\nWind (DC)"
-    onwind: "Onshore\nWind"
-    battery: "Battery\nStorage"
-    H2: "Hydrogen\nStorage"
-    lines: "Transmission\nlines"
-    ror: "Run of\nriver"
-    PHS: "Pumped Hydro\nStorage"
-    hydro: "Reservoir\n& Dam"
+    lines: "Transmission Lines"
+    ror: "Run of River"
data/costs.csv (new file, 195 lines)

@@ -0,0 +1,195 @@
technology,year,parameter,value,unit,source
solar-rooftop,2030,discount rate,0.04,per unit,standard for decentral
onwind,2030,lifetime,30,years,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
offwind,2030,lifetime,30,years,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
solar,2030,lifetime,25,years,IEA2010
solar-rooftop,2030,lifetime,25,years,IEA2010
solar-utility,2030,lifetime,25,years,IEA2010
PHS,2030,lifetime,80,years,IEA2010
hydro,2030,lifetime,80,years,IEA2010
ror,2030,lifetime,80,years,IEA2010
OCGT,2030,lifetime,30,years,IEA2010
nuclear,2030,lifetime,45,years,ECF2010 in DIW DataDoc http://hdl.handle.net/10419/80348
CCGT,2030,lifetime,30,years,IEA2010
coal,2030,lifetime,40,years,IEA2010
lignite,2030,lifetime,40,years,IEA2010
geothermal,2030,lifetime,40,years,IEA2010
biomass,2030,lifetime,30,years,ECF2010 in DIW DataDoc http://hdl.handle.net/10419/80348
oil,2030,lifetime,30,years,ECF2010 in DIW DataDoc http://hdl.handle.net/10419/80348
onwind,2030,investment,1040,EUR/kWel,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
offwind,2030,investment,1640,EUR/kWel,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
offwind-ac-station,2030,investment,250,EUR/kWel,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
offwind-ac-connection-submarine,2030,investment,2685,EUR/MW/km,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
offwind-ac-connection-underground,2030,investment,1342,EUR/MW/km,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
offwind-dc-station,2030,investment,400,EUR/kWel,Haertel 2017; assuming one onshore and one offshore node + 13% learning reduction
offwind-dc-connection-submarine,2030,investment,2000,EUR/MW/km,DTU report based on Fig 34 of https://ec.europa.eu/energy/sites/ener/files/documents/2014_nsog_report.pdf
offwind-dc-connection-underground,2030,investment,1000,EUR/MW/km,Haertel 2017; average + 13% learning reduction
solar,2030,investment,600,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
biomass,2030,investment,2209,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
geothermal,2030,investment,3392,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
coal,2030,investment,1300,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348 PC (Advanced/SuperC)
lignite,2030,investment,1500,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
solar-rooftop,2030,investment,725,EUR/kWel,ETIP PV
solar-utility,2030,investment,425,EUR/kWel,ETIP PV
PHS,2030,investment,2000,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
hydro,2030,investment,2000,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
ror,2030,investment,3000,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
OCGT,2030,investment,400,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
nuclear,2030,investment,6000,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
CCGT,2030,investment,800,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
oil,2030,investment,400,EUR/kWel,DIW DataDoc http://hdl.handle.net/10419/80348
onwind,2030,FOM,2.450549,%/year,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
offwind,2030,FOM,2.304878,%/year,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
solar,2030,FOM,4.166667,%/year,DIW DataDoc http://hdl.handle.net/10419/80348
solar-rooftop,2030,FOM,2,%/year,ETIP PV
solar-utility,2030,FOM,3,%/year,ETIP PV
biomass,2030,FOM,4.526935,%/year,DIW DataDoc http://hdl.handle.net/10419/80348
geothermal,2030,FOM,2.358491,%/year,DIW DataDoc http://hdl.handle.net/10419/80348
coal,2030,FOM,1.923076,%/year,DIW DataDoc http://hdl.handle.net/10419/80348 PC (Advanced/SuperC)
lignite,2030,FOM,2.0,%/year,DIW DataDoc http://hdl.handle.net/10419/80348 PC (Advanced/SuperC)
oil,2030,FOM,1.5,%/year,DIW DataDoc http://hdl.handle.net/10419/80348
PHS,2030,FOM,1,%/year,DIW DataDoc http://hdl.handle.net/10419/80348
hydro,2030,FOM,1,%/year,DIW DataDoc http://hdl.handle.net/10419/80348
ror,2030,FOM,2,%/year,DIW DataDoc http://hdl.handle.net/10419/80348
CCGT,2030,FOM,2.5,%/year,DIW DataDoc http://hdl.handle.net/10419/80348
OCGT,2030,FOM,3.75,%/year,DIW DataDoc http://hdl.handle.net/10419/80348
onwind,2030,VOM,2.3,EUR/MWhel,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
offwind,2030,VOM,2.7,EUR/MWhel,DEA https://ens.dk/en/our-services/projections-and-models/technology-data
solar,2030,VOM,0.01,EUR/MWhel,RES costs made up to fix curtailment order
coal,2030,VOM,6,EUR/MWhel,DIW DataDoc http://hdl.handle.net/10419/80348 PC (Advanced/SuperC)
lignite,2030,VOM,7,EUR/MWhel,DIW DataDoc http://hdl.handle.net/10419/80348
CCGT,2030,VOM,4,EUR/MWhel,DIW DataDoc http://hdl.handle.net/10419/80348
OCGT,2030,VOM,3,EUR/MWhel,DIW DataDoc http://hdl.handle.net/10419/80348
nuclear,2030,VOM,8,EUR/MWhel,DIW DataDoc http://hdl.handle.net/10419/80348
gas,2030,fuel,21.6,EUR/MWhth,IEA2011b
uranium,2030,fuel,3,EUR/MWhth,DIW DataDoc http://hdl.handle.net/10419/80348
oil,2030,VOM,3,EUR/MWhel,DIW DataDoc http://hdl.handle.net/10419/80348
nuclear,2030,fuel,3,EUR/MWhth,IEA2011b
biomass,2030,fuel,7,EUR/MWhth,IEA2011b
coal,2030,fuel,8.4,EUR/MWhth,IEA2011b
lignite,2030,fuel,2.9,EUR/MWhth,IEA2011b
oil,2030,fuel,50,EUR/MWhth,IEA WEM2017 97USD/boe = http://www.iea.org/media/weowebsite/2017/WEM_Documentation_WEO2017.pdf
PHS,2030,efficiency,0.75,per unit,DIW DataDoc http://hdl.handle.net/10419/80348
hydro,2030,efficiency,0.9,per unit,DIW DataDoc http://hdl.handle.net/10419/80348
ror,2030,efficiency,0.9,per unit,DIW DataDoc http://hdl.handle.net/10419/80348
OCGT,2030,efficiency,0.39,per unit,DIW DataDoc http://hdl.handle.net/10419/80348
CCGT,2030,efficiency,0.5,per unit,DIW DataDoc http://hdl.handle.net/10419/80348
biomass,2030,efficiency,0.468,per unit,DIW DataDoc http://hdl.handle.net/10419/80348
geothermal,2030,efficiency,0.239,per unit,DIW DataDoc http://hdl.handle.net/10419/80348
nuclear,2030,efficiency,0.337,per unit,DIW DataDoc http://hdl.handle.net/10419/80348
gas,2030,CO2 intensity,0.187,tCO2/MWth,https://www.eia.gov/environment/emissions/co2_vol_mass.php
coal,2030,efficiency,0.464,per unit,DIW DataDoc http://hdl.handle.net/10419/80348 PC (Advanced/SuperC)
lignite,2030,efficiency,0.447,per unit,DIW DataDoc http://hdl.handle.net/10419/80348
oil,2030,efficiency,0.393,per unit,DIW DataDoc http://hdl.handle.net/10419/80348 CT
coal,2030,CO2 intensity,0.354,tCO2/MWth,https://www.eia.gov/environment/emissions/co2_vol_mass.php
lignite,2030,CO2 intensity,0.334,tCO2/MWth,https://www.eia.gov/environment/emissions/co2_vol_mass.php
oil,2030,CO2 intensity,0.248,tCO2/MWth,https://www.eia.gov/environment/emissions/co2_vol_mass.php
geothermal,2030,CO2 intensity,0.026,tCO2/MWth,https://www.eia.gov/environment/emissions/co2_vol_mass.php
electrolysis,2030,investment,350,EUR/kWel,Palzer Thesis
electrolysis,2030,FOM,4,%/year,NREL http://www.nrel.gov/docs/fy09osti/45873.pdf; budischak2013
electrolysis,2030,lifetime,18,years,NREL http://www.nrel.gov/docs/fy09osti/45873.pdf; budischak2013
electrolysis,2030,efficiency,0.8,per unit,NREL http://www.nrel.gov/docs/fy09osti/45873.pdf; budischak2013
fuel cell,2030,investment,339,EUR/kWel,NREL http://www.nrel.gov/docs/fy09osti/45873.pdf; budischak2013
fuel cell,2030,FOM,3,%/year,NREL http://www.nrel.gov/docs/fy09osti/45873.pdf; budischak2013
fuel cell,2030,lifetime,20,years,NREL http://www.nrel.gov/docs/fy09osti/45873.pdf; budischak2013
fuel cell,2030,efficiency,0.58,per unit,NREL http://www.nrel.gov/docs/fy09osti/45873.pdf; budischak2013 conservative 2020
hydrogen storage,2030,investment,11.2,USD/kWh,budischak2013
hydrogen storage,2030,lifetime,20,years,budischak2013
hydrogen underground storage,2030,investment,0.5,EUR/kWh,maximum from https://www.nrel.gov/docs/fy10osti/46719.pdf
hydrogen underground storage,2030,lifetime,40,years,http://www.acatech.de/fileadmin/user_upload/Baumstruktur_nach_Website/Acatech/root/de/Publikationen/Materialien/ESYS_Technologiesteckbrief_Energiespeicher.pdf
H2 pipeline,2030,investment,267,EUR/MW/km,Welder et al https://doi.org/10.1016/j.ijhydene.2018.12.156
H2 pipeline,2030,lifetime,40,years,Krieg2012 http://juser.fz-juelich.de/record/136392/files/Energie%26Umwelt_144.pdf
H2 pipeline,2030,FOM,5,%/year,Krieg2012 http://juser.fz-juelich.de/record/136392/files/Energie%26Umwelt_144.pdf
H2 pipeline,2030,efficiency,0.98,per unit,Krieg2012 http://juser.fz-juelich.de/record/136392/files/Energie%26Umwelt_144.pdf
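A table shaped like the new `data/costs.csv` can be read straightforwardly with pandas. The following is an illustrative sketch only (not the workflow's own cost handling): it annualises an investment cost using a standard annuity formula, with the 7% discount rate as an assumption rather than a value from the file.

```python
# Hypothetical sketch: load a cost table shaped like data/costs.csv and
# annualise an investment cost (discount rate of 7% is an assumption).
import io
import pandas as pd

csv = io.StringIO(
    "technology,year,parameter,value,unit,source\n"
    "onwind,2030,lifetime,30,years,DEA\n"
    "onwind,2030,investment,1040,EUR/kWel,DEA\n"
    "onwind,2030,FOM,2.450549,%/year,DEA\n"
)
costs = pd.read_csv(csv, index_col=["technology", "parameter"])["value"]

def annuity(n, r=0.07):
    """Annuity factor for a lifetime of n years at discount rate r."""
    return r / (1.0 - 1.0 / (1.0 + r) ** n) if r > 0 else 1.0 / n

lifetime = costs.loc[("onwind", "lifetime")]
invest = costs.loc[("onwind", "investment")]   # EUR/kW
fom = costs.loc[("onwind", "FOM")]             # %/year
annual_cost = (annuity(lifetime) + fom / 100.0) * invest  # EUR/kW/a
print(round(annual_cost, 1))  # prints 109.3
```

The long-format layout (one row per technology/parameter pair) makes it easy to index by a `(technology, parameter)` MultiIndex, as shown above.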
methanation,2030,investment,1000,EUR/kWH2,Schaber thesis
methanation,2030,lifetime,25,years,Schaber thesis
methanation,2030,FOM,3,%/year,Schaber thesis
methanation,2030,efficiency,0.6,per unit,Palzer; Breyer for DAC
helmeth,2030,investment,1000,EUR/kW,no source
helmeth,2030,lifetime,25,years,no source
helmeth,2030,FOM,3,%/year,no source
helmeth,2030,efficiency,0.8,per unit,HELMETH press release
DAC,2030,investment,250,EUR/(tCO2/a),Fasihi/Climeworks
DAC,2030,lifetime,30,years,Fasihi
DAC,2030,FOM,4,%/year,Fasihi
battery inverter,2030,investment,411,USD/kWel,budischak2013
battery inverter,2030,lifetime,20,years,budischak2013
battery inverter,2030,efficiency,0.9,per unit charge/discharge,budischak2013; Lund and Kempton (2008) http://dx.doi.org/10.1016/j.enpol.2008.06.007
battery inverter,2030,FOM,3,%/year,budischak2013
battery storage,2030,investment,192,USD/kWh,budischak2013
battery storage,2030,lifetime,15,years,budischak2013
decentral air-sourced heat pump,2030,investment,1050,EUR/kWth,HP; Palzer thesis
decentral air-sourced heat pump,2030,lifetime,20,years,HP; Palzer thesis
decentral air-sourced heat pump,2030,FOM,3.5,%/year,Palzer thesis
decentral air-sourced heat pump,2030,efficiency,3,per unit,default for costs
decentral air-sourced heat pump,2030,discount rate,0.04,per unit,Palzer thesis
decentral ground-sourced heat pump,2030,investment,1400,EUR/kWth,Palzer thesis
decentral ground-sourced heat pump,2030,lifetime,20,years,Palzer thesis
decentral ground-sourced heat pump,2030,FOM,3.5,%/year,Palzer thesis
decentral ground-sourced heat pump,2030,efficiency,4,per unit,default for costs
decentral ground-sourced heat pump,2030,discount rate,0.04,per unit,Palzer thesis
central air-sourced heat pump,2030,investment,700,EUR/kWth,Palzer thesis
central air-sourced heat pump,2030,lifetime,20,years,Palzer thesis
central air-sourced heat pump,2030,FOM,3.5,%/year,Palzer thesis
central air-sourced heat pump,2030,efficiency,3,per unit,default for costs
retrofitting I,2030,discount rate,0.04,per unit,Palzer thesis
retrofitting I,2030,lifetime,50,years,Palzer thesis
retrofitting I,2030,FOM,1,%/year,Palzer thesis
retrofitting I,2030,investment,50,EUR/m2/fraction reduction,Palzer thesis
retrofitting II,2030,discount rate,0.04,per unit,Palzer thesis
retrofitting II,2030,lifetime,50,years,Palzer thesis
retrofitting II,2030,FOM,1,%/year,Palzer thesis
retrofitting II,2030,investment,250,EUR/m2/fraction reduction,Palzer thesis
water tank charger,2030,efficiency,0.9,per unit,HP
water tank discharger,2030,efficiency,0.9,per unit,HP
decentral water tank storage,2030,investment,860,EUR/m3,IWES Interaktion
decentral water tank storage,2030,FOM,1,%/year,HP
decentral water tank storage,2030,lifetime,20,years,HP
decentral water tank storage,2030,discount rate,0.04,per unit,Palzer thesis
central water tank storage,2030,investment,30,EUR/m3,IWES Interaktion
central water tank storage,2030,FOM,1,%/year,HP
central water tank storage,2030,lifetime,40,years,HP
decentral resistive heater,2030,investment,100,EUR/kWhth,Schaber thesis
decentral resistive heater,2030,lifetime,20,years,Schaber thesis
decentral resistive heater,2030,FOM,2,%/year,Schaber thesis
decentral resistive heater,2030,efficiency,0.9,per unit,Schaber thesis
decentral resistive heater,2030,discount rate,0.04,per unit,Palzer thesis
central resistive heater,2030,investment,100,EUR/kWhth,Schaber thesis
central resistive heater,2030,lifetime,20,years,Schaber thesis
central resistive heater,2030,FOM,2,%/year,Schaber thesis
central resistive heater,2030,efficiency,0.9,per unit,Schaber thesis
decentral gas boiler,2030,investment,175,EUR/kWhth,Palzer thesis
decentral gas boiler,2030,lifetime,20,years,Palzer thesis
decentral gas boiler,2030,FOM,2,%/year,Palzer thesis
decentral gas boiler,2030,efficiency,0.9,per unit,Palzer thesis
decentral gas boiler,2030,discount rate,0.04,per unit,Palzer thesis
central gas boiler,2030,investment,63,EUR/kWhth,Palzer thesis
central gas boiler,2030,lifetime,22,years,Palzer thesis
central gas boiler,2030,FOM,1,%/year,Palzer thesis
central gas boiler,2030,efficiency,0.9,per unit,Palzer thesis
decentral CHP,2030,lifetime,25,years,HP
decentral CHP,2030,investment,1400,EUR/kWel,HP
decentral CHP,2030,FOM,3,%/year,HP
decentral CHP,2030,discount rate,0.04,per unit,Palzer thesis
central CHP,2030,lifetime,25,years,HP
central CHP,2030,investment,650,EUR/kWel,HP
central CHP,2030,FOM,3,%/year,HP
decentral solar thermal,2030,discount rate,0.04,per unit,Palzer thesis
decentral solar thermal,2030,FOM,1.3,%/year,HP
decentral solar thermal,2030,investment,270000,EUR/1000m2,HP
decentral solar thermal,2030,lifetime,20,years,HP
central solar thermal,2030,FOM,1.4,%/year,HP
central solar thermal,2030,investment,140000,EUR/1000m2,HP
central solar thermal,2030,lifetime,20,years,HP
HVAC overhead,2030,investment,400,EUR/MW/km,Hagspiel
HVAC overhead,2030,lifetime,40,years,Hagspiel
HVAC overhead,2030,FOM,2,%/year,Hagspiel
HVDC overhead,2030,investment,400,EUR/MW/km,Hagspiel
HVDC overhead,2030,lifetime,40,years,Hagspiel
HVDC overhead,2030,FOM,2,%/year,Hagspiel
HVDC submarine,2030,investment,2000,EUR/MW/km,DTU report based on Fig 34 of https://ec.europa.eu/energy/sites/ener/files/documents/2014_nsog_report.pdf
HVDC submarine,2030,lifetime,40,years,Hagspiel
HVDC submarine,2030,FOM,2,%/year,Hagspiel
HVDC inverter pair,2030,investment,150000,EUR/MW,Hagspiel
HVDC inverter pair,2030,lifetime,40,years,Hagspiel
HVDC inverter pair,2030,FOM,2,%/year,Hagspiel
104 methanation 2030 investment 1000 EUR/kWH2 Schaber thesis
105 methanation 2030 lifetime 25 years Schaber thesis
106 methanation 2030 FOM 3 %/year Schaber thesis
107 methanation 2030 efficiency 0.6 per unit Palzer; Breyer for DAC
108 helmeth 2030 investment 1000 EUR/kW no source
109 helmeth 2030 lifetime 25 years no source
110 helmeth 2030 FOM 3 %/year no source
111 helmeth 2030 efficiency 0.8 per unit HELMETH press release
112 DAC 2030 investment 250 EUR/(tCO2/a) Fasihi/Climeworks
113 DAC 2030 lifetime 30 years Fasihi
114 DAC 2030 FOM 4 %/year Fasihi
115 battery inverter 2030 investment 411 USD/kWel budischak2013
116 battery inverter 2030 lifetime 20 years budischak2013
117 battery inverter 2030 efficiency 0.9 per unit charge/discharge budischak2013; Lund and Kempton (2008) http://dx.doi.org/10.1016/j.enpol.2008.06.007
118 battery inverter 2030 FOM 3 %/year budischak2013
119 battery storage 2030 investment 192 USD/kWh budischak2013
120 battery storage 2030 lifetime 15 years budischak2013
121 decentral air-sourced heat pump 2030 investment 1050 EUR/kWth HP; Palzer thesis
122 decentral air-sourced heat pump 2030 lifetime 20 years HP; Palzer thesis
123 decentral air-sourced heat pump 2030 FOM 3.5 %/year Palzer thesis
124 decentral air-sourced heat pump 2030 efficiency 3 per unit default for costs
125 decentral air-sourced heat pump 2030 discount rate 0.04 per unit Palzer thesis
126 decentral ground-sourced heat pump 2030 investment 1400 EUR/kWth Palzer thesis
127 decentral ground-sourced heat pump 2030 lifetime 20 years Palzer thesis
128 decentral ground-sourced heat pump 2030 FOM 3.5 %/year Palzer thesis
129 decentral ground-sourced heat pump 2030 efficiency 4 per unit default for costs
130 decentral ground-sourced heat pump 2030 discount rate 0.04 per unit Palzer thesis
131 central air-sourced heat pump 2030 investment 700 EUR/kWth Palzer thesis
132 central air-sourced heat pump 2030 lifetime 20 years Palzer thesis
133 central air-sourced heat pump 2030 FOM 3.5 %/year Palzer thesis
134 central air-sourced heat pump 2030 efficiency 3 per unit default for costs
135 retrofitting I 2030 discount rate 0.04 per unit Palzer thesis
136 retrofitting I 2030 lifetime 50 years Palzer thesis
137 retrofitting I 2030 FOM 1 %/year Palzer thesis
138 retrofitting I 2030 investment 50 EUR/m2/fraction reduction Palzer thesis
139 retrofitting II 2030 discount rate 0.04 per unit Palzer thesis
140 retrofitting II 2030 lifetime 50 years Palzer thesis
141 retrofitting II 2030 FOM 1 %/year Palzer thesis
142 retrofitting II 2030 investment 250 EUR/m2/fraction reduction Palzer thesis
143 water tank charger 2030 efficiency 0.9 per unit HP
144 water tank discharger 2030 efficiency 0.9 per unit HP
145 decentral water tank storage 2030 investment 860 EUR/m3 IWES Interaktion
146 decentral water tank storage 2030 FOM 1 %/year HP
147 decentral water tank storage 2030 lifetime 20 years HP
148 decentral water tank storage 2030 discount rate 0.04 per unit Palzer thesis
149 central water tank storage 2030 investment 30 EUR/m3 IWES Interaktion
150 central water tank storage 2030 FOM 1 %/year HP
151 central water tank storage 2030 lifetime 40 years HP
152 decentral resistive heater 2030 investment 100 EUR/kWhth Schaber thesis
153 decentral resistive heater 2030 lifetime 20 years Schaber thesis
154 decentral resistive heater 2030 FOM 2 %/year Schaber thesis
155 decentral resistive heater 2030 efficiency 0.9 per unit Schaber thesis
156 decentral resistive heater 2030 discount rate 0.04 per unit Palzer thesis
157 central resistive heater 2030 investment 100 EUR/kWhth Schaber thesis
158 central resistive heater 2030 lifetime 20 years Schaber thesis
159 central resistive heater 2030 FOM 2 %/year Schaber thesis
160 central resistive heater 2030 efficiency 0.9 per unit Schaber thesis
161 decentral gas boiler 2030 investment 175 EUR/kWhth Palzer thesis
162 decentral gas boiler 2030 lifetime 20 years Palzer thesis
163 decentral gas boiler 2030 FOM 2 %/year Palzer thesis
164 decentral gas boiler 2030 efficiency 0.9 per unit Palzer thesis
165 decentral gas boiler 2030 discount rate 0.04 per unit Palzer thesis
166 central gas boiler 2030 investment 63 EUR/kWhth Palzer thesis
167 central gas boiler 2030 lifetime 22 years Palzer thesis
168 central gas boiler 2030 FOM 1 %/year Palzer thesis
169 central gas boiler 2030 efficiency 0.9 per unit Palzer thesis
170 decentral CHP 2030 lifetime 25 years HP
171 decentral CHP 2030 investment 1400 EUR/kWel HP
172 decentral CHP 2030 FOM 3 %/year HP
173 decentral CHP 2030 discount rate 0.04 per unit Palzer thesis
174 central CHP 2030 lifetime 25 years HP
175 central CHP 2030 investment 650 EUR/kWel HP
176 central CHP 2030 FOM 3 %/year HP
177 decentral solar thermal 2030 discount rate 0.04 per unit Palzer thesis
178 decentral solar thermal 2030 FOM 1.3 %/year HP
179 decentral solar thermal 2030 investment 270000 EUR/1000m2 HP
180 decentral solar thermal 2030 lifetime 20 years HP
181 central solar thermal 2030 FOM 1.4 %/year HP
182 central solar thermal 2030 investment 140000 EUR/1000m2 HP
183 central solar thermal 2030 lifetime 20 years HP
184 HVAC overhead 2030 investment 400 EUR/MW/km Hagspiel
185 HVAC overhead 2030 lifetime 40 years Hagspiel
186 HVAC overhead 2030 FOM 2 %/year Hagspiel
187 HVDC overhead 2030 investment 400 EUR/MW/km Hagspiel
188 HVDC overhead 2030 lifetime 40 years Hagspiel
189 HVDC overhead 2030 FOM 2 %/year Hagspiel
190 HVDC submarine 2030 investment 2000 EUR/MW/km DTU report based on Fig 34 of https://ec.europa.eu/energy/sites/ener/files/documents/2014_nsog_report.pdf
191 HVDC submarine 2030 lifetime 40 years Hagspiel
192 HVDC submarine 2030 FOM 2 %/year Hagspiel
193 HVDC inverter pair 2030 investment 150000 EUR/MW Hagspiel
194 HVDC inverter pair 2030 lifetime 40 years Hagspiel
195 HVDC inverter pair 2030 FOM 2 %/year Hagspiel
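For context, the workflow turns the overnight investment figures above into annualised capital costs via a standard annuity factor plus the FOM percentage. A minimal sketch of that calculation; the 7% default discount rate and the `annuity` helper name are illustrative assumptions, not part of this commit:

```python
def annuity(lifetime, rate):
    # Fraction of the overnight investment due per year over `lifetime`
    # years at discount `rate` (assumed 7% here for illustration).
    if rate == 0:
        return 1 / lifetime
    return rate / (1 - (1 + rate) ** -lifetime)

# Annualised cost of electrolysis using the table's assumptions:
# 350 EUR/kWel investment, 18 year lifetime, 4 %/year FOM.
investment = 350  # EUR/kWel
capital_cost = investment * (annuity(18, 0.07) + 0.04)  # EUR/kWel/a
```

This yields a capital cost of roughly 49 EUR/kWel/a for electrolysis under the assumed discount rate.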

View File

@@ -6,8 +6,8 @@ Italy-Montenegro,Villanova (IT),Latsva (MT),445,,1200,under construction,Link.14
 NordLink,Tonstad (NO),Wilster (DE),514,,1400,under construction,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/37,6.716948,58.662631,9.373979,53.922479
 COBRA cable,Endrup (DK),Eemshaven (NL),325,,700,under construction,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/71,8.718392,55.523115,6.835494,53.438589
 Thames Estuary Cluster (NEMO-Link),Richborough (GB),Gezelle (BE),140,,1000,under construction,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/74,1.324854,51.295891,3.23043,51.24902
-Anglo-Scottish -1,Hunterston (UK),Deeside (UK),422,,2400,under construction,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/77,-4.898329,55.723331,-3.032972,53.199735
-ALEGrO,Lixhe (BE),Oberzier (DE),100,,1000,in permitting,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/92,5.67933,50.7567965,6.474704,50.867532
+Anglo-Scottish -1,Hunterston (UK),Deeside (UK),422,,2400,built,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/77,-4.898329,55.723331,-3.032972,53.199735
+ALEGrO,Lixhe (BE),Oberzier (DE),100,,1000,built,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/92,5.67933,50.7567965,6.474704,50.867532
 North Sea Link,Kvilldal (NO),Blythe (GB),720,,1400,under construction,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/110,6.637527,59.515096,-1.510277,55.126957
 HVDC SuedOstLink,Wolmirstedt (DE),Isar (DE),,557,2000,in permitting,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/130,11.629014,52.252137,12.091596,48.080837
 HVDC Line A-North,Emden East (DE),Osterath (DE),,284,2000,in permitting,,https://tyndp.entsoe.eu/tyndp2018/projects/projects/132,7.206009,53.359403,6.619451,51.272935

View File

@@ -33,12 +33,13 @@ Link:
       "14559": "6240" # fix wrong bus allocation from 6241
       "12998": "1333" # combine link 12998 + 12997 in 12998
       "5627": '2309' # combine link 5627 + 5628 in 5627
+      "8068": "5819" # fix GB location of Anglo-Scottish interconnector
   length:
     index:
       "12998": 409.0
       "5627": 26.39
   bus0:
     index:
-      # set bus0 == bus1 for removing the link in remove_unconnected_components
-      "5628": "7276"
-      "12997": "7276"
+      "14552": "5819" # fix GB location of GB-IE interconnector
+      "5628": "7276" # bus0 == bus1 to remove link in remove_unconnected_components
+      "12997": "7276" # bus0 == bus1 to remove link in remove_unconnected_components
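The YAML above patches attributes of individual links after the raw grid extraction. A hedged sketch of how such a nested `attribute -> index -> value` mapping could be applied to a pandas DataFrame; the `links` and `corrections` names and the sample data are illustrative, not the actual `base_network.py` implementation:

```python
import pandas as pd

# Toy links table indexed by link id (values are placeholders).
links = pd.DataFrame(
    {"bus1": {"14559": "6241", "12998": "999"},
     "length": {"14559": 10.0, "12998": 400.0}}
)

# Same shape as the YAML: attribute -> {"index": {link_id: new_value}}.
corrections = {
    "bus1": {"index": {"14559": "6240", "12998": "1333"}},
    "length": {"index": {"12998": 409.0}},
}

# Overwrite each listed attribute for each listed link id.
for attr, repls in corrections.items():
    for link_id, value in repls["index"].items():
        links.loc[link_id, attr] = value
```

After the loop, link 14559 points at bus 6240 and link 12998 carries the corrected length of 409 km.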

View File

@@ -2,21 +2,72 @@
 SPDX-License-Identifier: GPL-3.0-or-later
 */
-/* override table width restrictions */
-@media screen and (min-width: 767px) {
+.wy-side-nav-search {
+    background-color: #eeeeee;
+}
+
+.wy-side-nav-search .wy-dropdown>a,
+.wy-side-nav-search>a {
+    color: rgb(34, 97, 156)
+}
+
+.wy-side-nav-search>div.version {
+    color: rgb(34, 97, 156)
+}
+
+.wy-menu-vertical header,
+.wy-menu-vertical p.caption,
+.rst-versions a {
+    color: #999999;
+}
+
+.wy-menu-vertical a.reference:hover,
+.wy-menu-vertical a.reference.internal:hover {
+    background: #dddddd;
+    color: #fff;
+}
+
+.wy-nav-side {
+    background: #efefef;
+}
+
+.wy-menu-vertical a.reference {
+    color: #000;
+}
+
+.rst-versions .rst-current-version,
+.wy-nav-top,
+.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a:hover {
+    background: #002221;
+}
+
+.wy-nav-content .highlight {
+    background: #ffffff;
+}
+
+.rst-content code.literal,
+.rst-content tt.literal {
+    color: rgb(34, 97, 156)
+}
+
+.wy-nav-content a.reference {
+    color: rgb(34, 97, 156);
+}
+
+/* override table width restrictions */
+@media screen and (min-width: 767px) {
     .wy-table-responsive table td {
         /* !important prevents the common CSS stylesheets from overriding
            this as on RTD they are loaded after this stylesheet */
         white-space: normal !important;
-        /* background: #eeeeee !important; */
+        background: rgb(250, 250, 250) !important;
     }
     .wy-table-responsive {
         max-width: 100%;
         overflow: visible !important;
     }
     .wy-nav-content {
         max-width: 910px !important;
     }

View File

@@ -60,7 +60,7 @@ Now a window with the machine details will open. You have to configure the follo
 You can edit your machine configuration later. So use a cheap machine type configuration to transfer data and
 only when everything is ready and tested, your expensive machine type, for instance a custom 8 vCPU with 160 GB memory.
 Solvers do not parallelise well, so we recommend not to choose more than 8 vCPU.
-Check ``snakemake -j -n 1 solve_all_elec_networks`` as a dry run to see how much memory is required.
+Check ``snakemake -n -j 1 solve_all_networks`` as a dry run to see how much memory is required.
 The memory requirements will vary depending on the spatial and temporal resolution of your optimisation.
 Example: for an hourly, 181 node full European network, set 8 vCPU and 150 GB memory since the dry-run calculated a 135 GB memory requirement.
 - Boot disk: As default, your VM is created with 10 GB. Depending on how much you want to handle on one VM you should increase the disk size.
@@ -85,7 +85,7 @@ Step 3 - Installation of Cloud SDK
     sudo apt-get update
     sudo apt-get install bzip2 libxml2-dev
     sudo apt-get install wget
-    wget https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh (Check the link. To be up to date with anaconda, check the Anaconda website https://www.anaconda.com/products/individual )
+    wget https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh
     ls (to see what anaconda file to bash)
     bash Anaconda3-2020.07-Linux-x86_64.sh
     source ~/.bashrc

View File

@@ -74,9 +74,9 @@ author = u'Jonas Hoersch (KIT, FIAS), Fabian Hofmann (FIAS), David Schlachtberge
 # built documents.
 #
 # The short X.Y version.
-version = u'0.2'
+version = u'0.3'
 # The full version, including alpha/beta/rc tags.
-release = u'0.2.0'
+release = u'0.3.0'
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.

View File

@@ -1,16 +1,19 @@
-,Unit,Values,Description
-voltages,kV,"Any subset of {220., 300., 380.}","Voltage levels to consider when"
-co2limit,:math:`t_{CO_2-eq}/a`,float,"Cap on total annual system carbon dioxide emissions"
-co2base,:math:`t_{CO_2-eq}/a`,float,"Reference value of total annual system carbon dioxide emissions if relative emission reduction target is specified in ``{opts}`` wildcard."
-agg_p_nom_limits,--,file path,"Reference to ``.csv`` file specifying per carrier generator nominal capacity constraints for individual countries if ``'CCL'`` is in ``{opts}`` wildcard. Defaults to ``data/agg_p_nom_minmax.csv``."
-extendable_carriers,,,
--- Generator,--,"Any subset of {'OCGT','CCGT'}","Places extendable conventional power plants (OCGT and/or CCGT) where gas power plants are located today without capacity limits."
--- StorageUnit,--,"Any subset of {'battery','H2'}","Adds extendable storage units (battery and/or hydrogen) at every node/bus after clustering without capacity limits and with zero initial capacity."
--- Store,--,"Any subset of {'battery','H2'}","Adds extendable storage units (battery and/or hydrogen) at every node/bus after clustering without capacity limits and with zero initial capacity."
--- Link,--,"Any subset of {'H2 pipeline'}","Adds extendable links (H2 pipelines only) at every connection where there are lines or HVDC links without capacity limits and with zero initial capacity. Hydrogen pipelines require hydrogen storage to be modelled as ``Store``."
-max_hours,,,
--- battery,h,float,"Maximum state of charge capacity of the battery in terms of hours at full output capacity ``p_nom``. Cf. `PyPSA documentation <https://pypsa.readthedocs.io/en/latest/components.html#storage-unit>`_."
--- H2,h,float,"Maximum state of charge capacity of the hydrogen storage in terms of hours at full output capacity ``p_nom``. Cf. `PyPSA documentation <https://pypsa.readthedocs.io/en/latest/components.html#storage-unit>`_."
-powerplants_filter,--,"use `pandas.query <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html>`_ strings here, e.g. Country not in ['Germany']","Filter query for the default powerplant database."
-custom_powerplants,--,"use `pandas.query <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html>`_ strings here, e.g. Country in ['Germany']","Filter query for the custom powerplant database."
-conventional_carriers,--,"Any subset of {nuclear, oil, OCGT, CCGT, coal, lignite, geothermal, biomass}","List of conventional power plants to include in the model from ``resources/powerplants.csv``."
+,Unit,Values,Description,
+voltages,kV,"Any subset of {220., 300., 380.}",Voltage levels to consider when,
+co2limit,:math:`t_{CO_2-eq}/a`,float,Cap on total annual system carbon dioxide emissions,
+co2base,:math:`t_{CO_2-eq}/a`,float,Reference value of total annual system carbon dioxide emissions if relative emission reduction target is specified in ``{opts}`` wildcard.,
+agg_p_nom_limits,file,path,Reference to ``.csv`` file specifying per carrier generator nominal capacity constraints for individual countries if ``'CCL'`` is in ``{opts}`` wildcard. Defaults to ``data/agg_p_nom_minmax.csv``.
+extendable_carriers,,,,
+-- Generator,--,"Any subset of {'OCGT','CCGT'}",Places extendable conventional power plants (OCGT and/or CCGT) where gas power plants are located today without capacity limits.
+-- StorageUnit,--,"Any subset of {'battery','H2'}",Adds extendable storage units (battery and/or hydrogen) at every node/bus after clustering without capacity limits and with zero initial capacity.
+-- Store,--,"Any subset of {'battery','H2'}",Adds extendable storage units (battery and/or hydrogen) at every node/bus after clustering without capacity limits and with zero initial capacity.
+-- Link,--,Any subset of {'H2 pipeline'},Adds extendable links (H2 pipelines only) at every connection where there are lines or HVDC links without capacity limits and with zero initial capacity. Hydrogen pipelines require hydrogen storage to be modelled as ``Store``.
+max_hours,,,,
+-- battery,h,float,Maximum state of charge capacity of the battery in terms of hours at full output capacity ``p_nom``. Cf. `PyPSA documentation <https://pypsa.readthedocs.io/en/latest/components.html#storage-unit>`_.
+-- H2,h,float,Maximum state of charge capacity of the hydrogen storage in terms of hours at full output capacity ``p_nom``. Cf. `PyPSA documentation <https://pypsa.readthedocs.io/en/latest/components.html#storage-unit>`_.
+powerplants_filter,--,"use `pandas.query <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html>`_ strings here, e.g. Country not in ['Germany']",Filter query for the default powerplant database.,
+custom_powerplants,--,"use `pandas.query <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html>`_ strings here, e.g. Country in ['Germany']",Filter query for the custom powerplant database.,
+conventional_carriers,--,"Any subset of {nuclear, oil, OCGT, CCGT, coal, lignite, geothermal, biomass}",List of conventional power plants to include in the model from ``resources/powerplants.csv``.,
+renewable_capacities_from_OPSD,,"[solar, onwind, offwind]",List of carriers (offwind-ac and offwind-dc are included in offwind) whose capacities 'p_nom' are aligned to the `OPSD renewable power plant list <https://data.open-power-system-data.org/renewable_power_plants/>`_,
+estimate_renewable_capacities_from_capacitiy_stats,,,,
+"-- Fueltype [ppm], e.g. Wind",,"list of fueltypes strings in PyPSA-Eur, e.g. [onwind, offwind-ac, offwind-dc]",converts ppm Fueltype to PyPSA-EUR Fueltype,


View File

@@ -1,5 +1,6 @@
 ,Unit,Values,Description
 types,--,"Values should specify a `line type in PyPSA <https://pypsa.readthedocs.io/en/latest/components.html#line-types>`_. Keys should specify the corresponding voltage level (e.g. 220., 300. and 380. kV)","Specifies line types to assume for the different voltage levels of the ENTSO-E grid extraction. Should normally handle voltage levels 220, 300, and 380 kV"
 s_max_pu,--,"Value in [0.,1.]","Correction factor for line capacities (``s_nom``) to approximate :math:`N-1` security and reserve capacity for reactive power flows"
+s_nom_max,MW,"float","Global upper limit for the maximum capacity of each extendable line."
 length_factor,--,float,"Correction factor to account for the fact that buses are *not* connected by lines through air-line distance."
 under_construction,--,"One of {'zero': set capacity to zero, 'remove': remove completely, 'keep': keep with full capacity}","Specifies how to handle lines which are currently under construction."

View File

@@ -1,4 +1,5 @@
 ,Unit,Values,Description
 p_max_pu,--,"Value in [0.,1.]","Correction factor for link capacities ``p_nom``."
+p_nom_max,MW,"float","Global upper limit for the maximum capacity of each extendable DC link."
 include_tyndp,bool,"{'true', 'false'}","Specifies whether to add HVDC link projects from the `TYNDP 2018 <https://tyndp.entsoe.eu/tyndp2018/projects/>`_ which are at least in permitting."
 under_construction,--,"One of {'zero': set capacity to zero, 'remove': remove completely, 'keep': keep with full capacity}","Specifies how to handle lines which are currently under construction."

View File

@@ -1,2 +1,7 @@
 ,Unit,Values,Description
+url,--,string,"Link to open power system data time series data."
+power_statistics,bool,"{true, false}",Whether to load the electricity consumption data of the ENTSOE power statistics (only for files from 2019 and before) or from the ENTSOE transparency data (only has load data from 2015 onwards).
+interpolate_limit,hours,integer,"Maximum gap size (consecutive nans) which is interpolated linearly."
+time_shift_for_large_gaps,string,string,"Periods which are used for copying time-slices in order to fill large gaps of nans. Have to be valid ``pandas`` period strings."
+manual_adjustments,bool,"{true, false}","Whether to adjust the load data manually according to the function in :func:`manual_adjustment`."
 scaling_factor,--,float,"Global correction factor for the load time series."
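The new load options distinguish short gaps, which are interpolated linearly up to ``interpolate_limit`` hours, from long gaps, which are filled from time-shifted slices. A minimal pandas sketch of the short-gap logic, assuming gaps longer than the limit are deliberately left untouched (illustrative only, not the actual load-building code):

```python
import numpy as np
import pandas as pd

# Toy hourly load series with one short and one long gap.
idx = pd.date_range("2019-01-01", periods=24, freq="h")
load = pd.Series(np.arange(24, dtype=float), index=idx)
load.iloc[4:6] = np.nan    # 2-hour gap: short enough to interpolate
load.iloc[10:18] = np.nan  # 8-hour gap: left for the time-shift strategy

interpolate_limit = 3

# Label each run of consecutive NaNs and measure its length.
is_na = load.isna()
gap_id = (is_na != is_na.shift()).cumsum()
gap_len = is_na.groupby(gap_id).transform("sum")
small = is_na & (gap_len <= interpolate_limit)

# Fill only the short gaps linearly; long gaps stay NaN.
load = load.mask(small, load.interpolate())
```

The 2-hour gap is filled linearly while the 8-hour gap remains NaN, ready for the copy-from-previous-period step.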

View File

@@ -1,8 +1,11 @@
 Trigger, Description, Definition, Status
 ``nH``; i.e. ``2H``-``6H``, Resample the time-resolution by averaging over every ``n`` snapshots, ``prepare_network``: `average_every_nhours() <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/prepare_network.py#L110>`_ and its `caller <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/prepare_network.py#L146>`_, In active use
+``nSEG``; e.g. ``4380SEG``, "Apply time series segmentation with `tsam <https://tsam.readthedocs.io/en/latest/index.html>`_ package to ``n`` adjacent snapshots of varying lengths based on capacity factors of varying renewables, hydro inflow and load.", ``prepare_network``: apply_time_segmentation(), In active use
 ``Co2L``, Add an overall absolute carbon-dioxide emissions limit configured in ``electricity: co2limit``. If a float is appended an overall emission limit relative to the emission level given in ``electricity: co2base`` is added (e.g. ``Co2L0.05`` limits emissions to 5% of what is given in ``electricity: co2base``), ``prepare_network``: `add_co2limit() <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/prepare_network.py#L19>`_ and its `caller <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/prepare_network.py#L154>`_, In active use
 ``Ep``, Add cost for a carbon-dioxide price configured in ``costs: emission_prices: co2`` to ``marginal_cost`` of generators (other emission types listed in ``network.carriers`` possible as well), ``prepare_network``: `add_emission_prices() <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/prepare_network.py#L24>`_ and its `caller <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/prepare_network.py#L158>`_, In active use
 ``CCL``, Add minimum and maximum levels of generator nominal capacity per carrier for individual countries. These can be specified in the file linked at ``electricity: agg_p_nom_limits`` in the configuration. File defaults to ``data/agg_p_nom_minmax.csv``., ``solve_network``, In active use
+``EQ``, "Require each country or node to on average produce a minimal share of its total consumption itself. Example: ``EQ0.5c`` demands each country to produce on average at least 50% of its consumption; ``EQ0.5`` demands each node to produce on average at least 50% of its consumption.", ``solve_network``, In active use
+``ATK``, "Require each node to be autarkic. Example: ``ATK`` removes all lines and links. ``ATKc`` removes all cross-border lines and links.", ``prepare_network``, In active use
``BAU``, Add a per-``carrier`` minimal overall capacity; i.e. at least ``40GW`` of ``OCGT`` in Europe; configured in ``electricity: BAU_mincapacities``, ``solve_network``: `add_opts_constraints() <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/solve_network.py#L66>`_, Untested ``BAU``, Add a per-``carrier`` minimal overall capacity; i.e. at least ``40GW`` of ``OCGT`` in Europe; configured in ``electricity: BAU_mincapacities``, ``solve_network``: `add_opts_constraints() <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/solve_network.py#L66>`_, Untested
``SAFE``, Add a capacity reserve margin of a certain fraction above the peak demand to which renewable generators and storage do *not* contribute. Ignores network., ``solve_network`` `add_opts_constraints() <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/solve_network.py#L73>`_, Untested ``SAFE``, Add a capacity reserve margin of a certain fraction above the peak demand to which renewable generators and storage do *not* contribute. Ignores network., ``solve_network`` `add_opts_constraints() <https://github.com/PyPSA/pypsa-eur/blob/6b964540ed39d44079cdabddee8333f486d0cd63/scripts/solve_network.py#L73>`_, Untested
``carrier+factor``, "Alter the capital cost of a carrier by a factor. Example: ``solar+0.5`` reduces the capital cost of solar to 50\% of original values.", ``prepare_network``, In active use ``carrier+{c|p}factor``, "Alter the capital cost (``c``) or installable potential (``p``) of a carrier by a factor. Example: ``solar+c0.5`` reduces the capital cost of solar to 50\% of original values.", ``prepare_network``, In active use

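As a hypothetical illustration of how such ``{opts}`` trigger tokens could be parsed (the actual PyPSA-Eur scripts handle each trigger separately in ``prepare_network.py`` and ``solve_network.py``; this sketch only mirrors the grammar documented above):

```python
import re

def parse_opt(token):
    """Classify one ``{opts}`` token into (kind, payload) -- illustrative only."""
    m = re.fullmatch(r"(\d+)(H|SEG)", token)
    if m:  # temporal resolution triggers, e.g. "3H" or "4380SEG"
        return ("resolution", (int(m.group(1)), m.group(2)))
    m = re.fullmatch(r"Co2L(\d*\.?\d*)", token)
    if m:  # bare "Co2L" -> absolute limit; appended float -> share of co2base
        return ("co2", float(m.group(1)) if m.group(1) else None)
    m = re.fullmatch(r"(\w+)\+(c|p)(\d*\.?\d+)", token)
    if m:  # "carrier+{c|p}factor", e.g. "solar+c0.5"
        return ("scale", (m.group(1), m.group(2), float(m.group(3))))
    return ("other", token)

# Tokens of an {opts} wildcard string are separated by dashes:
parsed = [parse_opt(t) for t in "Co2L0.05-3H-solar+c0.5".split("-")]
```
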

View File

@ -12,4 +12,3 @@ energy_min,TWh,float,"Lower y-axis limit in energy bar plots."
energy_threshold,TWh,float,"Threshold below which technologies will not be shown in energy bar plots."
tech_colors,--,"carrier -> HEX colour code","Mapping from network ``carrier`` to a colour (`HEX colour code <https://en.wikipedia.org/wiki/Web_colors#Hex_triplet>`_)."
nice_names,--,"str -> str","Mapping from network ``carrier`` to a more readable name."

View File

@ -1,6 +1,5 @@
,Unit,Values,Description
simpl,--,cf. :ref:`simpl`,"List of ``{simpl}`` wildcards to run."
clusters,--,cf. :ref:`clusters`,"List of ``{clusters}`` wildcards to run."
ll,--,cf. :ref:`ll`,"List of ``{ll}`` wildcards to run."
opts,--,cf. :ref:`opts`,"List of ``{opts}`` wildcards to run."

View File

@ -3,7 +3,7 @@ version,--,0.x.x,"Version of PyPSA-Eur"
tutorial,bool,"{true, false}","Switch to retrieve the tutorial data set instead of the full data set."
logging,,,
-- level,--,"Any of {'INFO', 'WARNING', 'ERROR'}","Restrict console outputs to all infos, warning or errors only"
-- format,--,"","Custom format for log messages. See `LogRecord <https://docs.python.org/3/library/logging.html#logging.LogRecord>`_ attributes."
summary_dir,--,"e.g. 'results'","Directory into which results are written."
countries,--,"Subset of {'AL', 'AT', 'BA', 'BE', 'BG', 'CH', 'CZ', 'DE', 'DK', 'EE', 'ES', 'FI', 'FR', 'GB', 'GR', 'HR', 'HU', 'IE', 'IT', 'LT', 'LU', 'LV', 'ME', 'MK', 'NL', 'NO', 'PL', 'PT', 'RO', 'RS', 'SE', 'SI', 'SK'}","European countries defined by their `Two-letter country codes (ISO 3166-1) <https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2>`_ which should be included in the energy system model."
focus_weights,--,"Keys should be two-digit country codes (e.g. DE) and values should range between 0 and 1","Ratio of total clusters for particular countries. The remaining weight is distributed according to mean load. An example: ``focus_weights: DE: 0.6 FR: 0.2``."
@ -14,3 +14,4 @@ enable,,,
-- retrieve_cutout,bool,"{true, false}","Switch to enable the retrieval of cutouts from zenodo with :mod:`retrieve_cutout`."
-- build_natura_raster,bool,"{true, false}","Switch to enable the creation of the raster ``natura.tiff`` via the rule :mod:`build_natura_raster`."
-- retrieve_natura_raster,bool,"{true, false}","Switch to enable the retrieval of ``natura.tiff`` from zenodo with :mod:`retrieve_natura_raster`."
-- custom_busmap,bool,"{true, false}","Switch to enable the use of custom busmaps in rule :mod:`cluster_network`. If activated the rule looks for provided busmaps at ``data/custom_busmap_elec_s{simpl}_{clusters}.csv`` which should have the same format as ``resources/busmap_elec_s{simpl}_{clusters}.csv``, i.e. the index should contain the buses of ``networks/elec_s{simpl}.nc``."

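A custom busmap is, in essence, a two-column mapping from original bus IDs to cluster names. A minimal standard-library sketch of producing such a file follows — the bus IDs and cluster names are invented for illustration, and the exact header convention should be checked against a generated ``resources/busmap_elec_s{simpl}_{clusters}.csv``:

```python
import csv
import io

# Hypothetical mapping from bus IDs of networks/elec_s{simpl}.nc
# to the names of the clustered buses they should be assigned to.
busmap = {"5231": "DE0 0", "5232": "DE0 0", "6120": "FR0 1"}

buf = io.StringIO()
writer = csv.writer(buf)
for bus, cluster in busmap.items():
    writer.writerow([bus, cluster])

# In practice this would be written to
# data/custom_busmap_elec_s{simpl}_{clusters}.csv
contents = buf.getvalue()
```
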

View File

@ -18,7 +18,7 @@ Top-level configuration
.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 5-12,20,27-34

.. csv-table::
   :header-rows: 1
@ -40,9 +40,9 @@ facilitate running multiple scenarios through a single command
.. code:: bash

    snakemake -j 1 solve_all_networks

For each wildcard, a **list of values** is provided. The rule ``solve_all_networks`` will trigger the rules for creating ``results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc`` for **all combinations** of the provided wildcard values as defined by Python's `itertools.product(...) <https://docs.python.org/2/library/itertools.html#itertools.product>`_ function that snakemake's `expand(...) function <https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#targets>`_ uses.
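The combinatorics can be sketched in plain Python — the wildcard value lists below are invented examples, not the defaults:

```python
from itertools import product

# Wildcard value lists as they might appear under `scenario:` in a config file
simpl, clusters, ll, opts = [""], ["37", "128"], ["copt"], ["Co2L-3H"]

# Mimics what snakemake's expand(...) does internally with itertools.product
targets = [
    f"results/networks/elec_s{s}_{c}_ec_l{l}_{o}.nc"
    for s, c, l, o in product(simpl, clusters, ll, opts)
]
# Two values for {clusters} and one for each other wildcard -> 2 target files
```
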
An exemplary dependency graph (starting from the simplification rules) then looks like this:

@ -50,7 +50,7 @@ An exemplary dependency graph (starting from the simplification rules) then look

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 14-18

.. csv-table::
   :header-rows: 1

@ -66,7 +66,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 22-25

.. csv-table::
   :header-rows: 1

@ -80,7 +80,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 36-60

.. csv-table::
   :header-rows: 1

@ -97,7 +97,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 62-75

.. csv-table::
   :header-rows: 1

@ -114,7 +114,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 77-94

.. csv-table::
   :header-rows: 1

@ -126,7 +126,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 77,95-107

.. csv-table::
   :header-rows: 1

@ -138,7 +138,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 77,108-121

.. csv-table::
   :header-rows: 1

@ -150,7 +150,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 77,122-141

.. csv-table::
   :header-rows: 1

@ -162,7 +162,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 77,142-147

.. csv-table::
   :header-rows: 1

@ -176,7 +176,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 149-157

.. csv-table::
   :header-rows: 1

@ -190,7 +190,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 159-163

.. csv-table::
   :header-rows: 1

@ -204,7 +204,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 165-168

.. csv-table::
   :header-rows: 1

@ -218,7 +218,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 170-176

.. csv-table::
   :header-rows: 1

@ -232,7 +232,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 178-190

.. csv-table::
   :header-rows: 1

@ -241,7 +241,6 @@ Specifies the temporal range to build an energy system model for as arguments to

.. note::

   To change cost assumptions in more detail (i.e. other than ``marginal_cost`` and ``capital_cost``), consider modifying cost assumptions directly in ``resources/costs.csv`` as this is not yet supported through the config file.
   You can also build multiple different cost databases. Make a renamed copy of ``resources/costs.csv`` (e.g. ``data/costs-optimistic.csv``) and set the variable ``COSTS=data/costs-optimistic.csv`` in the ``Snakefile``.

.. _solving_cf:

@ -254,7 +253,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 192-202

.. csv-table::
   :header-rows: 1

@ -266,7 +265,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 192,203-219

.. csv-table::
   :header-rows: 1

@ -280,7 +279,7 @@ Specifies the temporal range to build an energy system model for as arguments to

.. literalinclude:: ../config.default.yaml
   :language: yaml
   :lines: 221-299

.. csv-table::
   :header-rows: 1

View File

@ -12,7 +12,7 @@ be it with new ideas, suggestions, by filing bug reports or contributing code
to our `GitHub repository <https://github.com/PyPSA/PyPSA-Eur>`_.

* If you already have some code changes, you can submit them directly as a `pull request <https://github.com/PyPSA/pypsa-eur/pulls>`_.
* If you are wondering where we would greatly appreciate your efforts, check out the ``help wanted`` tag in the `issues list <https://github.com/PyPSA/pypsa-eur/issues>`_ and initiate a discussion there.
* If you start working on a feature in the code, let us know by opening an issue or a draft pull request.
  This helps all of us to keep an overview on what is being done and helps to avoid a situation where we
  are doing the same work twice in parallel.

View File

@ -34,7 +34,7 @@ Based on the parameters above the ``marginal_cost`` and ``capital_cost`` of the
.. note::

   Another great resource for cost assumptions is the `cost database from the Danish Energy Agency <https://ens.dk/en/our-services/projections-and-models/technology-data>`_.

Modifying Cost Assumptions
==========================
@ -43,4 +43,11 @@ Some cost assumptions (e.g. marginal cost and capital cost) can be directly over
To change cost assumptions in more detail, modify cost assumptions directly in ``resources/costs.csv`` as this is not yet supported through the config file.
You can also build multiple different cost databases. Make a renamed copy of ``resources/costs.csv`` (e.g. ``data/costs-optimistic.csv``) and set the variable ``COSTS=data/costs-optimistic.csv`` in the ``Snakefile``.

.. csv-table::
   :header-rows: 1
   :widths: 10,3,5,4,6,8
   :file: ../data/costs.csv
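A sketch of deriving such an alternative cost database programmatically — the rows below are a made-up miniature stand-in, and the column names (technology, year, parameter, value, unit, source) are assumed rather than taken from the real ``resources/costs.csv``:

```python
import csv
import io

# Miniature stand-in for resources/costs.csv (columns assumed)
original = """technology,year,parameter,value,unit,source
solar,2030,investment,600,EUR/kWel,example
onwind,2030,investment,1040,EUR/kWel,example
"""

rows = list(csv.DictReader(io.StringIO(original)))
for row in rows:
    # e.g. an "optimistic" database: cut solar investment costs by 25%
    if row["technology"] == "solar" and row["parameter"] == "investment":
        row["value"] = str(float(row["value"]) * 0.75)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
# out.getvalue() would be saved as e.g. data/costs-optimistic.csv
```
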

View File

@ -19,8 +19,8 @@ PyPSA-Eur: An Open Optimisation Model of the European Transmission System
.. image:: https://img.shields.io/github/repo-size/pypsa/pypsa-eur
   :alt: GitHub repo size

.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.3520874.svg
   :target: https://doi.org/10.5281/zenodo.3520874

.. image:: https://badges.gitter.im/PyPSA/community.svg
   :target: https://gitter.im/PyPSA/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge
@ -42,6 +42,8 @@ It contains alternating current lines at and above 220 kV voltage level and all
The model is suitable both for operational studies and generation and transmission expansion planning studies. The continental scope and highly resolved spatial scale enables a proper description of the long-range smoothing effects for renewable power generation and their varying resource availability.

.. image:: img/base.png
   :width: 50%
   :align: center

The restriction to freely available and open data encourages the open exchange of model data developments and eases the comparison of model results. It provides a full, automated software pipeline to assemble the load-flow-ready model from the original datasets, which enables easy replacement and improvement of the individual parts.
@ -169,16 +171,16 @@ Please use the following BibTeX: ::
If you want to cite a specific PyPSA-Eur version, each release of PyPSA-Eur is stored on Zenodo with a release-specific DOI.
This can be found linked from the overall PyPSA-Eur Zenodo DOI:

.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.3520874.svg
   :target: https://doi.org/10.5281/zenodo.3520874

Pre-Built Networks as a Dataset
===============================

There are pre-built networks available as a dataset on Zenodo as well for every release of PyPSA-Eur.

.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.3601881.svg
   :target: https://doi.org/10.5281/zenodo.3601881

The included ``.nc`` files are PyPSA network files which can be imported with PyPSA via:
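The snippet this sentence introduces is cut off here; the standard PyPSA usage is along these lines (the filename is invented for illustration):

```python
import pypsa

# Load a pre-built network from one of the dataset's .nc files
network = pypsa.Network("elec_s_1024_ec.nc")
```
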

View File

@ -17,6 +17,7 @@ Clone the Repository
First of all, clone the `PyPSA-Eur repository <https://github.com/PyPSA/pypsa-eur>`_ using the version control system ``git``.
The path to the directory into which the ``git repository`` is cloned, must **not** have any spaces!
If you do not have ``git`` installed, follow installation instructions `here <https://git-scm.com/book/en/v2/Getting-Started-Installing-Git>`_.

.. code:: bash

@ -24,8 +25,6 @@ The path to the directory into which the ``git repository`` is cloned, must **no

    /some/path/without/spaces % git clone https://github.com/PyPSA/pypsa-eur.git

.. _deps:
@ -37,16 +36,15 @@ We recommend using the package manager and environment management system ``conda
Install `miniconda <https://docs.conda.io/en/latest/miniconda.html>`_, which is a mini version of `Anaconda <https://www.anaconda.com/>`_ that includes only ``conda`` and its dependencies, or make sure ``conda`` is already installed on your system.
For instructions for your operating system follow the ``conda`` `installation guide <https://docs.conda.io/projects/conda/en/latest/user-guide/install/>`_.
The Python package requirements are curated in the `envs/environment.yaml <https://github.com/PyPSA/pypsa-eur/blob/master/envs/environment.yaml>`_ file.
The environment can be installed and activated using
.. code:: bash
.../pypsa-eur % conda env create -f envs/environment.yaml
.../pypsa-eur % conda activate pypsa-eur
.. note::
Note that activation is local to the currently open shell!
After opening a new terminal window, one needs to reissue the second command!
@ -62,7 +60,7 @@ The environment can be installed and activated using
.. code:: bash
mamba env create -f envs/environment.yaml
Install a Solver
================
@ -74,25 +72,23 @@ PyPSA is known to work with the free software
- `Cbc <https://projects.coin-or.org/Cbc#DownloadandInstall>`_
- `GLPK <https://www.gnu.org/software/glpk/>`_ (`WinGLPK <http://winglpk.sourceforge.net/>`_)
and the non-free, commercial software (for some of which free academic licenses are available)
- `Gurobi <https://www.gurobi.com/documentation/quickstart.html>`_
- `CPLEX <https://www.ibm.com/products/ilog-cplex-optimization-studio>`_
- `FICO® Xpress Solver <https://www.fico.com/de/products/fico-xpress-solver>`_
and any other solver that works with the underlying modelling framework `Pyomo <http://www.pyomo.org/>`_.
For installation instructions of these solvers for your operating system, follow the links above.
Commercial solvers such as Gurobi and CPLEX currently significantly outperform open-source solvers for large-scale problems.
It might be the case that you can only retrieve solutions by using a commercial solver.
.. seealso::
`Getting a solver in the PyPSA documentation <https://pypsa.readthedocs.io/en/latest/installation.html#getting-a-solver-for-linear-optimisation>`_
.. note::
Commercial solvers such as Gurobi and CPLEX currently significantly outperform open-source solvers for large-scale problems.
It might be the case that you can only retrieve solutions by using a commercial solver.
.. note::
The rules :mod:`cluster_network` and :mod:`simplify_network` solve a quadratic optimisation problem for clustering.
The open-source solvers Cbc and GLPK cannot handle this. A fallback to Ipopt is implemented in this case, but this
requires Ipopt to be installed as well. For an open-source solver setup install in your ``conda`` environment on OSX/Linux
.. code:: bash
@ -64,4 +64,6 @@ Folder Structure
System Requirements
===================
Building the model with the scripts in this repository runs on a normal computer.
However, computing optimal investment and operation scenarios requires a strong interior-point solver
like `Gurobi <http://www.gurobi.com/>`_ or `CPLEX <https://www.ibm.com/analytics/cplex-optimizer>`_ with more memory.
@ -56,4 +56,3 @@ improving the approximations.
Belarus, Ukraine, Turkey and Morocco have not been taken into account;
islands which are not connected to the main European system, such as Malta,
Crete and Cyprus, are also excluded from the model.
@ -39,6 +39,7 @@ together into a detailed PyPSA network stored in ``networks/elec.nc``.
preparation/retrieve
preparation/build_shapes
preparation/build_load_data
preparation/build_cutout
preparation/build_natura_raster
preparation/prepare_links_p_nom
@ -0,0 +1,12 @@
..
SPDX-FileCopyrightText: 2020-2021 The PyPSA-Eur Authors
SPDX-License-Identifier: CC-BY-4.0
.. _load_data:
Rule ``build_load_data``
=============================
.. automodule:: build_load_data
@ -11,31 +11,123 @@ Release Notes
Upcoming Release
================
* Fix: Value for ``co2base`` in ``config.yaml`` adjusted to 1.487e9 t CO2-eq (from 3.1e9 t CO2-eq). The new value represents emissions related to the electricity sector for EU+UK. The previous value was roughly twice too high and was used when the emission limit wildcard in ``{opts}`` was applied.
* Add option to include marginal costs of links representing fuel cells, electrolysis, and battery inverters
[`#232 <https://github.com/PyPSA/pypsa-eur/pull/232>`_].
* Raise a warning if `tech_colors` in the config are not defined for all carriers.
* Corrected HVDC link connections (a) between Norway and Denmark and (b) mainland Italy, Corsica (FR) and Sardinia (IT) (`#181 <https://github.com/PyPSA/pypsa-eur/pull/181>`_)
* Added Google Cloud Platform tutorial (for Windows users).
* Corrected setting of exogenous emission price (in ``cost: emission price:``). This was not weighted by the efficiency and effective emission of the generators (`#171 <https://github.com/PyPSA/pypsa-eur/pull/171>`_).
* Techno-economic parameters of technologies (e.g. costs and efficiencies) will now be retrieved from a separate repository `PyPSA/technology-data <https://github.com/pypsa/technology-data>`_
that collects assumptions from a variety of sources. It is activated by default with ``enable: retrieve_cost_data: true`` and controlled with ``costs: year:`` and ``costs: version:``.
The location of this data changed from ``data/costs.csv`` to ``resources/costs.csv``
[`#184 <https://github.com/PyPSA/pypsa-eur/pull/184>`_].
PyPSA-Eur 0.3.0 (7th December 2020)
===================================
**New Features**
Using the ``{opts}`` wildcard for scenarios:
* An option is introduced which adds constraints such that each country or node produces on average a minimal share of its total consumption itself.
For example ``EQ0.5c`` set in the ``{opts}`` wildcard requires each country to produce on average at least 50% of its consumption. Additionally,
the option ``ATK`` requires autarky at each node and removes all means of power transmission through lines and links. ``ATKc`` only removes
cross-border transfer capacities.
[`#166 <https://github.com/PyPSA/pypsa-eur/pull/166>`_].
* Added an option to alter the capital cost (``c``) or installable potentials (``p``) of carriers by a factor via ``carrier+{c,p}factor`` in the ``{opts}`` wildcard.
This can be useful for exploring uncertain cost parameters.
Example: ``solar+c0.5`` reduces the capital cost of solar to 50% of original values
[`#167 <https://github.com/PyPSA/pypsa-eur/pull/167>`_, `#207 <https://github.com/PyPSA/pypsa-eur/pull/207>`_].
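As an illustration of this wildcard, the following sketch shows what ``solar+c0.5`` amounts to; the cost values are made up and this is not the actual implementation in the workflow:

```python
# Illustrative sketch (not the actual PyPSA-Eur code) of what a wildcard
# like "solar+c0.5" does: scale the capital cost of one carrier by a factor.
import pandas as pd

costs = pd.DataFrame(
    {"capital_cost": [60000.0, 100000.0]},  # EUR/MW/a, made-up values
    index=["solar", "onwind"],
)

opt = "solar+c0.5"  # {opts} wildcard entry: carrier, "+c", factor
carrier, factor = opt.split("+c")
costs.loc[carrier, "capital_cost"] *= float(factor)

print(costs.loc["solar", "capital_cost"])  # 30000.0
```

Other carriers, such as ``onwind`` here, are left untouched by the option.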
* Added an option to the ``{opts}`` wildcard that applies a time series segmentation algorithm based on renewables, hydro inflow and load time series
to produce a given total number of adjacent snapshots of varying lengths.
This feature is an alternative to downsampling the temporal resolution by simply averaging and
uses the `tsam <https://tsam.readthedocs.io/en/latest/index.html>`_ package
[`#186 <https://github.com/PyPSA/pypsa-eur/pull/186>`_].
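For contrast, the plain averaging that segmentation is an alternative to can be sketched with ``pandas`` alone; the segmentation itself is performed by the external ``tsam`` package and is not reproduced here:

```python
# Downsampling the temporal resolution by simple averaging: every block of
# four hourly snapshots is collapsed into one snapshot of uniform length.
import pandas as pd

snapshots = pd.date_range("2013-01-01", periods=8, freq="h")
load = pd.Series([40, 42, 45, 50, 70, 75, 72, 60], index=snapshots, dtype=float)

load_4h = load.resample("4h").mean()
print(load_4h.tolist())  # [44.25, 69.25]
```

Segmentation instead chooses snapshot lengths adaptively, keeping short snapshots where the time series vary quickly.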
More OPSD integration:
* Add renewable power plants from `OPSD <https://data.open-power-system-data.org/renewable_power_plants/2020-08-25>`_ to the network for specified technologies.
This will overwrite the capacities calculated from the heuristic approach in :func:`estimate_renewable_capacities()`
[`#212 <https://github.com/PyPSA/pypsa-eur/pull/212>`_].
* Electricity consumption data is now retrieved directly from the `OPSD website <https://data.open-power-system-data.org/time_series/2019-06-05>`_ using the rule :mod:`build_load_data`.
The user can decide whether to take the ENTSO-E power statistics data (default) or the ENTSO-E transparency data
[`#211 <https://github.com/PyPSA/pypsa-eur/pull/211>`_].
Other:
* Added an option to use custom busmaps in rule :mod:`cluster_network`. To use this feature set ``enable: custom_busmap: true``.
Then, the rule looks for custom busmaps at ``data/custom_busmap_elec_s{simpl}_{clusters}.csv``,
which should have the same format as ``resources/busmap_elec_s{simpl}_{clusters}.csv``,
i.e. the index should contain the buses of ``networks/elec_s{simpl}.nc``
[`#193 <https://github.com/PyPSA/pypsa-eur/pull/193>`_].
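A minimal sketch of what such a busmap could contain, with made-up bus and cluster names: the index holds the original buses and the values the clusters they are assigned to.

```python
# Hypothetical busmap: maps each original bus (index) to a cluster (value).
# Names are invented for illustration only.
import pandas as pd

busmap = pd.Series(
    {"DE0 0": "DE0 0", "DE0 1": "DE0 0", "DE0 2": "DE0 1"},
    name="busmap",
)
busmap.index.name = "Bus"

csv_text = busmap.to_csv()
print(csv_text)
```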
* Line and link capacities can be capped in the ``config.yaml`` at ``lines: s_nom_max:`` and ``links: p_nom_max:``
[`#166 <https://github.com/PyPSA/pypsa-eur/pull/166>`_].
* Added Google Cloud Platform tutorial (for Windows users)
[`#177 <https://github.com/PyPSA/pypsa-eur/pull/177>`_].
**Changes**
* Don't remove capital costs from lines and links when imposing a line volume limit (``lv``) or a line cost limit (``lc``).
Previously, these were removed to move the expansion in the direction of the limit
[`#183 <https://github.com/PyPSA/pypsa-eur/pull/183>`_].
* The mappings for clustered lines and buses produced by the :mod:`simplify_network` and :mod:`cluster_network` rules
changed from Hierarchical Data Format (``.h5``) to Comma-Separated Values format (``.csv``) for ease of use
[`#198 <https://github.com/PyPSA/pypsa-eur/pull/198>`_].
* The N-1 security margin for transmission lines is now fixed to a provided value in ``config.yaml``,
removing an undocumented linear interpolation between 0.5 and 0.7 in the range between 37 and 200 nodes.
[`#199 <https://github.com/PyPSA/pypsa-eur/pull/199>`_].
* Modelling hydrogen and battery storage with Store and Link components is now the default,
rather than using StorageUnit components with fixed power-to-energy ratio
[`#205 <https://github.com/PyPSA/pypsa-eur/pull/205>`_].
* Use ``mamba`` (https://github.com/mamba-org/mamba) for faster Travis CI builds
[`#196 <https://github.com/PyPSA/pypsa-eur/pull/196>`_].
* Multiple smaller changes: Removed unused ``{network}`` wildcard, moved environment files to dedicated ``envs`` folder,
removed sector-coupling components from configuration files, updated documentation colors, minor refactoring and code cleaning
[`#190 <https://github.com/PyPSA/pypsa-eur/pull/190>`_].
**Bugs and Compatibility**
* Add compatibility for pyomo 5.7.0 in :mod:`cluster_network` and :mod:`simplify_network`
[`#172 <https://github.com/PyPSA/pypsa-eur/pull/172>`_].
* Fixed a bug for storage units such that individual store and dispatch efficiencies are correctly taken into account rather than only their round-trip efficiencies.
In the cost database (``data/costs.csv``) the efficiency of battery inverters should be stated as per discharge/charge rather than per roundtrip
[`#202 <https://github.com/PyPSA/pypsa-eur/pull/202>`_].
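The relationship between the two conventions can be illustrated under a symmetric charge/discharge assumption; the efficiency value is purely illustrative, not taken from the cost database:

```python
# A store (charge) efficiency and a dispatch (discharge) efficiency multiply
# to the round-trip efficiency, so a value stated per round trip must not be
# applied to each leg separately.
import math

eta_roundtrip = 0.81                     # illustrative round-trip efficiency
eta_per_leg = math.sqrt(eta_roundtrip)   # symmetric charge/discharge split

print(round(eta_per_leg, 2))                 # 0.9
print(round(eta_per_leg * eta_per_leg, 2))   # 0.81
```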
* Corrected exogenous emission price setting (in ``config: cost: emission price:``),
which now correctly accounts for the efficiency and effective emission of the generators
[`#171 <https://github.com/PyPSA/pypsa-eur/pull/171>`_].
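The corrected weighting can be sketched as follows, with purely illustrative numbers rather than PyPSA-Eur's actual cost data:

```python
# An exogenous CO2 price enters a generator's marginal cost via its specific
# fuel emissions divided by its electrical efficiency (all values made up).
co2_price = 25.0   # EUR per tonne of CO2
emissions = 0.25   # tCO2 per MWh of thermal fuel input
efficiency = 0.5   # MWh electric per MWh thermal

marginal_cost_adder = co2_price * emissions / efficiency  # EUR per MWh_el
print(marginal_cost_adder)  # 12.5
```

Omitting the division by efficiency, as before the fix, would understate the cost for less efficient plants.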
* Corrected HVDC link connections (a) between Norway and Denmark and (b) mainland Italy, Corsica (FR) and Sardinia (IT)
as well as for East-Western and Anglo-Scottish interconnectors
[`#181 <https://github.com/PyPSA/pypsa-eur/pull/181>`_, `#206 <https://github.com/PyPSA/pypsa-eur/pull/206>`_].
* Fixed a bug in the clustering of ``offwind-{ac,dc}`` generators when using the option for high-resolution renewable generators.
Now, there are more sites for ``offwind-{ac,dc}`` available than network nodes.
Before, they were clustered to the resolution of the network (``elec_s1024_37m.nc``: 37 network nodes, 1024 generators)
[`#191 <https://github.com/PyPSA/pypsa-eur/pull/191>`_].
* Raise a warning if ``tech_colors`` in the config are not defined for all carriers
[`#178 <https://github.com/PyPSA/pypsa-eur/pull/178>`_].
PyPSA-Eur 0.2.0 (8th June 2020)
==================================
* The optimization is now performed using the ``pyomo=False`` setting in :func:`pypsa.lopf.network_lopf`. This speeds up the solving process significantly and consumes much less memory. The inclusion of additional constraints was adjusted to the new implementation. They are all passed to the :func:`network_lopf` function via the ``extra_functionality`` argument. The rule ``trace_solve_network`` was integrated into the rule :mod:`solve_network` and can be activated via configuration with ``solving: options: track_iterations: true``. The charging and discharging capacities of batteries modelled as store-link combinations are now coupled [`#116 <https://github.com/PyPSA/pypsa-eur/pull/116>`_].
* An updated extract of the `ENTSO-E Transmission System Map <https://www.entsoe.eu/data/map/>`_ (including Malta) was added to the repository using the `GridKit <https://github.com/PyPSA/GridKit>`_ tool. This tool has been updated to retrieve up-to-date map extracts using a single `script <https://github.com/PyPSA/GridKit/blob/master/entsoe/runall_in_docker.sh>`_. The updated extract features 5322 buses, 6574 lines and 46 links [`#118 <https://github.com/PyPSA/pypsa-eur/pull/118>`_].
* Added `FSFE REUSE <https://reuse.software>`_ compliant license information. Documentation now licensed under CC-BY-4.0 [`#160 <https://github.com/PyPSA/pypsa-eur/pull/160>`_].
* Added a 30 minute `video introduction <https://pypsa-eur.readthedocs.io/en/latest/introduction.html>`_ and a 20 minute `video tutorial <https://pypsa-eur.readthedocs.io/en/latest/tutorial.html>`_
@ -43,55 +135,54 @@ PyPSA-Eur 0.2.0 (8th June 2020)
* Added an option to skip iterative solving usually performed to update the line impedances of expanded lines at ``solving: options: skip_iterations:``.
* ``snakemake`` rules for retrieving cutouts and the natura raster can now be disabled independently from their respective rules to build them via ``config.*yaml`` [`#136 <https://github.com/PyPSA/pypsa-eur/pull/136>`_].
* Removed the ``id`` column for custom power plants in ``data/custom_powerplants.csv`` to avoid custom power plants with conflicting ids getting attached to the wrong bus [`#131 <https://github.com/PyPSA/pypsa-eur/pull/131>`_].
* Add option ``renewables: {carrier}: keep_all_available_areas:`` to use all available weather cells for renewable profile and potential generation. The default ignores weather cells where less than 1 MW can be installed [`#150 <https://github.com/PyPSA/pypsa-eur/pull/150>`_].
* Added a function ``_helpers.load_network()`` which loads a network with overridden components specified in ``snakemake.config['override_components']`` [`#128 <https://github.com/PyPSA/pypsa-eur/pull/128>`_].
* Bugfix in :mod:`base_network` which now finds all closest links, not only the first entry [`#143 <https://github.com/PyPSA/pypsa-eur/pull/143>`_].
* Bugfix in :mod:`cluster_network` which now skips recalculation of link parameters if there are no links [`#149 <https://github.com/PyPSA/pypsa-eur/pull/149>`_].
* Added information on pull requests to contribution guidelines [`#151 <https://github.com/PyPSA/pypsa-eur/pull/151>`_].
* Improved documentation on open-source solver setup and added usage warnings.
* Updated ``conda`` environment regarding ``pypsa``, ``pyproj``, ``gurobi``, ``lxml``. This release requires PyPSA v0.17.0.
PyPSA-Eur 0.1.0 (9th January 2020)
==================================
This is the first release of PyPSA-Eur, a model of the European power system at the transmission network level. Recent changes include:
* Documentation on installation, workflows and configuration settings is now available online at `pypsa-eur.readthedocs.io <https://pypsa-eur.readthedocs.io>`_ [`#65 <https://github.com/PyPSA/pypsa-eur/pull/65>`_].
* The ``conda`` environment files were updated and extended [`#81 <https://github.com/PyPSA/pypsa-eur/pull/81>`_].
* The power plant database was updated with extensive filtering options via ``pandas.query`` functionality [`#84 <https://github.com/PyPSA/pypsa-eur/pull/84>`_ and `#94 <https://github.com/PyPSA/pypsa-eur/pull/94>`_].
* Continuous integration testing with `Travis CI <https://travis-ci.org>`_ is now included for Linux, Mac and Windows [`#82 <https://github.com/PyPSA/pypsa-eur/pull/82>`_].
* Data dependencies were moved to `zenodo <https://zenodo.org/>`_ and are now versioned [`#60 <https://github.com/PyPSA/pypsa-eur/issues/60>`_].
* Data dependencies are now retrieved directly from within the snakemake workflow [`#86 <https://github.com/PyPSA/pypsa-eur/pull/86>`_].
* Emission prices can be added to the marginal costs of generators through the keyword ``Ep`` in the ``{opts}`` wildcard [`#100 <https://github.com/PyPSA/pypsa-eur/pull/100>`_].
* An option is introduced to add extendable nuclear power plants to the network [`#98 <https://github.com/PyPSA/pypsa-eur/pull/98>`_].
* Focus weights can now be specified for particular countries for the network clustering, which allows setting a proportion of the total number of clusters for particular countries [`#87 <https://github.com/PyPSA/pypsa-eur/pull/87>`_].
* A new rule :mod:`add_extra_components` allows adding additional components to the network only after clustering. It is thereby possible to model storage units (e.g. battery and hydrogen) in more detail via a combination of ``Store``, ``Link`` and ``Bus`` elements [`#97 <https://github.com/PyPSA/pypsa-eur/pull/97>`_].
* Hydrogen pipelines (including cost assumptions) can now be added alongside clustered network connections in the rule :mod:`add_extra_components`. Set ``electricity: extendable_carriers: Link: [H2 pipeline]`` and ensure hydrogen storage is modelled as a ``Store``. This is a first simplified stage [`#108 <https://github.com/PyPSA/pypsa-eur/pull/108>`_].
* Logfiles for all rules of the ``snakemake`` workflow are now written in the folder ``log/`` [`#102 <https://github.com/PyPSA/pypsa-eur/pull/102>`_].
* The new function ``_helpers.mock_snakemake`` creates a ``snakemake`` object which mimics the actual ``snakemake`` object produced by the workflow by parsing the ``Snakefile`` and setting all paths for inputs, outputs, and logs. This allows running all scripts within an (I)Python terminal (or just by calling ``python <script-name>``) and thereby facilitates developing and debugging scripts significantly [`#107 <https://github.com/PyPSA/pypsa-eur/pull/107>`_].
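Conceptually, the object that scripts receive can be pictured as a plain namespace; this is a toy stand-in with made-up paths and wildcards, whereas the real helper derives everything by parsing the ``Snakefile``:

```python
# Toy stand-in for the mocked ``snakemake`` object: the attributes scripts
# typically access (wildcards, input, output, config). All names invented.
from types import SimpleNamespace

snakemake = SimpleNamespace(
    wildcards=SimpleNamespace(simpl="", clusters="37"),
    input=["networks/elec.nc"],
    output=["networks/elec_s_37.nc"],
    config={"solving": {"options": {"track_iterations": True}}},
)

# A script body can now be stepped through in an interactive session:
print(snakemake.wildcards.clusters)  # 37
```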
Release Process
===============
@ -100,8 +191,8 @@ Release Process
* Finalise release notes at ``doc/release_notes.rst``.
* Update ``envs/environment.fixed.yaml`` via
``conda env export -n pypsa-eur -f envs/environment.fixed.yaml --no-builds``
from an up-to-date ``pypsa-eur`` environment.
* Update version number in ``doc/conf.py`` and ``*config.*.yaml``.
@ -111,10 +202,10 @@ Release Process
* Tag a release on GitHub via ``git tag v0.x.x``, ``git push``, ``git push --tags``. Include release notes in the tag message.
* Upload code to `zenodo code repository <https://doi.org/10.5281/zenodo.3520874>`_ with `GNU GPL 3.0 <https://www.gnu.org/licenses/gpl-3.0.en.html>`_ license.
* Create pre-built networks for ``config.default.yaml`` by running ``snakemake -j 1 extra_components_all_networks``.
* Upload pre-built networks to `zenodo data repository <https://doi.org/10.5281/zenodo.3601881>`_ with `CC BY 4.0 <https://creativecommons.org/licenses/by/4.0/>`_ license.
* Send announcement on the `PyPSA and PyPSA-Eur mailing list <https://groups.google.com/forum/#!forum/pypsa>`_.
@ -7,7 +7,7 @@
Solving Networks
##########################################
After generating and simplifying the networks, they can be solved through the rule :mod:`solve_network` by using the collection rule :mod:`solve_all_networks`. Moreover, networks can be solved for another focus with the derivative rule :mod:`solve_operations_network` for dispatch-only analyses on an already solved network.
.. toctree::
:caption: Overview
@ -47,47 +47,47 @@ The model can be adapted to only include selected countries (e.g. Germany) inste
.. literalinclude:: ../config.tutorial.yaml .. literalinclude:: ../config.tutorial.yaml
:language: yaml :language: yaml
:lines: 16 :lines: 20
Likewise, the example's temporal scope can be restricted (e.g. to a single month). Likewise, the example's temporal scope can be restricted (e.g. to a single month).
.. literalinclude:: ../config.tutorial.yaml .. literalinclude:: ../config.tutorial.yaml
:language: yaml :language: yaml
:lines: 18-21 :lines: 22-25
It is also possible to allow less or more carbon-dioxide emissions. Here, we limit the emissions of Germany 100 Megatonnes per year. It is also possible to allow less or more carbon-dioxide emissions. Here, we limit the emissions of Germany 100 Megatonnes per year.
.. literalinclude:: ../config.tutorial.yaml .. literalinclude:: ../config.tutorial.yaml
:language: yaml :language: yaml
:lines: 33 :lines: 36,38
PyPSA-Eur also includes a database of existing conventional powerplants. PyPSA-Eur also includes a database of existing conventional powerplants.
We can select which types of powerplants we like to be included with fixed capacities: We can select which types of powerplants we like to be included with fixed capacities:
.. literalinclude:: ../config.tutorial.yaml .. literalinclude:: ../config.tutorial.yaml
:language: yaml :language: yaml
:lines: 47 :lines: 36,52
To accurately model the temporal and spatial availability of renewables such as wind and solar energy, we rely on historical weather data.
It is advisable to adapt the required range of coordinates to the selection of countries.

.. literalinclude:: ../config.tutorial.yaml
   :language: yaml
   :lines: 54-62

We can also decide which weather data source should be used to calculate potentials and capacity factor time-series for each carrier.
For example, we may want to use the ERA-5 dataset for solar and not the default SARAH-2 dataset.

.. literalinclude:: ../config.tutorial.yaml
   :language: yaml
   :lines: 64,107-108
Finally, it is possible to pick a solver. For instance, this tutorial uses the open-source solvers CBC and Ipopt and does not rely
on the commercial solvers Gurobi or CPLEX (for which free academic licenses are available).

.. literalinclude:: ../config.tutorial.yaml
   :language: yaml
   :lines: 170,180-181

.. note::
@ -119,8 +119,8 @@ orders ``snakemake`` to run the script ``solve_network`` that produces the solve
.. code::

    rule solve_network:
        input: "networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc"
        output: "results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc"
        [...]
        script: "scripts/solve_network.py"
@ -129,7 +129,7 @@ orders ``snakemake`` to run the script ``solve_network`` that produces the solve
.. warning::
    On Windows the previous command may currently cause a ``MissingRuleException`` due to problems with output files in subfolders.
    This is an `open issue <https://github.com/snakemake/snakemake/issues/46>`_ at `snakemake <https://snakemake.readthedocs.io/>`_.
    Windows users should add the option ``--keep-target-files`` to the command or instead run ``snakemake -j 1 solve_all_networks``.

This triggers a workflow of multiple preceding jobs that depend on each rule's inputs and outputs:
@ -271,7 +271,7 @@ the wildcards given in ``scenario`` in the configuration file ``config.yaml`` ar
.. literalinclude:: ../config.tutorial.yaml
   :language: yaml
   :lines: 14-18

In this example we would not only solve a 6-node model of Germany but also a 2-node model.
@ -286,12 +286,4 @@ The solved networks can be analysed just like any other PyPSA network (e.g. in J
    network = pypsa.Network("results/networks/elec_s_6_ec_lcopt_Co2L-24H.nc")
    ...

For inspiration, read the `examples section in the PyPSA documentation <https://pypsa.readthedocs.io/en/latest/examples.html>`_.
.. note::
There are rules for summaries and plotting available in the repository of PyPSA-Eur.
They are currently under revision and therefore not yet documented.


@ -18,16 +18,6 @@ what data to retrieve and what files to produce.
Detailed explanations of how wildcards work in ``snakemake`` can be found in the
`relevant section of the documentation <https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#wildcards>`_.
.. _network:
The ``{network}`` wildcard
==========================
The ``{network}`` wildcard specifies the considered energy sector(s)
and, as currently only ``elec`` (for electricity) is included,
it is for now a placeholder to facilitate
future extensions covering multiple energy sectors at once.
.. _simpl:

The ``{simpl}`` wildcard
@ -37,9 +27,6 @@ The ``{simpl}`` wildcard specifies number of buses a detailed
network model should be pre-clustered to in the rule
:mod:`simplify_network` (before :mod:`cluster_network`).
.. seealso::
:mod:`simplify_network`
.. _clusters:

The ``{clusters}`` wildcard
@ -55,9 +42,6 @@ If an `m` is placed behind the number of clusters (e.g. ``100m``),
generators are only moved to the clustered buses but not aggregated
by carrier; i.e. the clustered bus may have more than one e.g. wind generator.
.. seealso::
:mod:`cluster_network`
.. _ll:

The ``{ll}`` wildcard
@ -89,9 +73,6 @@ The wildcard, in general, consists of two parts:
(c) ``c1.25`` will allow to build a transmission network that
    costs no more than 25 % more than the current system.
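The two-part structure of the wildcard can be sketched with a small parser; this is an illustrative helper (not from the code base), assuming ``opt`` stands for a co-optimized limit:

```python
def parse_ll(wildcard):
    """Split a {ll} value such as 'v1.25', 'c1.25' or 'vopt' into the
    limit type ('v' = volume, 'c' = cost) and the limit factor
    (None meaning the limit is co-optimized)."""
    kind, factor = wildcard[0], wildcard[1:]
    if kind not in ("v", "c"):
        raise ValueError(f"unknown limit type {kind!r}")
    return kind, (None if factor == "opt" else float(factor))

print(parse_ll("v1.25"))  # ('v', 1.25)
print(parse_ll("copt"))   # ('c', None)
```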
.. seealso::
:mod:`prepare_network`
.. _opts:

The ``{opts}`` wildcard
@ -108,16 +89,13 @@ It may hold multiple triggers separated by ``-``, i.e. ``Co2L-3H`` contains the
   :widths: 10,20,10,10
   :file: configtables/opts.csv
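Splitting an ``{opts}`` value into its individual triggers is a plain string operation; as an illustrative sketch:

```python
def parse_opts(opts):
    """Split an {opts} value such as 'Co2L-3H' into individual triggers."""
    return [trigger for trigger in opts.split("-") if trigger]

print(parse_opts("Co2L-3H"))       # ['Co2L', '3H']
print(parse_opts("Co2L0.05-24H"))  # ['Co2L0.05', '24H']
```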
.. seealso::
:mod:`prepare_network`, :mod:`solve_network`
.. _country:

The ``{country}`` wildcard
==========================
The rules :mod:`make_summary` and :mod:`plot_summary` (generating summaries of all or a subselection
of the solved networks) as well as :mod:`plot_p_nom_map` (for plotting the cumulative
generation potentials for renewable technologies) can be narrowed to
individual countries using the ``{country}`` wildcard.
@ -131,9 +109,6 @@ in Germany (in the solution for Europe) use:
    snakemake -j 1 results/summaries/elec_s_all_lall_Co2L-3H_DE
.. seealso::
:mod:`make_summary`, :mod:`plot_summary`, :mod:`plot_p_nom_max`
.. _cutout_wc:

The ``{cutout}`` wildcard
@ -143,9 +118,6 @@ The ``{cutout}`` wildcard facilitates running the rule :mod:`build_cutout`
for all cutout configurations specified under ``atlite: cutouts:``.
These cutouts will be stored in a folder specified by ``{cutout}``.
.. seealso::
:mod:`build_cutout`, :ref:`atlite_cf`
.. _technology:

The ``{technology}`` wildcard
@ -161,22 +133,16 @@ For instance ``{technology}`` can be used to plot regionally disaggregated poten
with the rule :mod:`plot_p_nom_max` or to summarize a particular technology's
full load hours in various countries with the rule :mod:`build_country_flh`.
.. seealso::
:mod:`build_renewable_profiles`, :mod:`plot_p_nom_max`, :mod:`build_country_flh`
.. _attr:

The ``{attr}`` wildcard
=======================
The ``{attr}`` wildcard specifies which attribute is used for size
representations of network components on a map plot produced by the rule
:mod:`plot_network`. While it might be extended in the future, ``{attr}``
currently only supports plotting of ``p_nom``.
.. seealso::
:mod:`plot_network`
.. _ext:

The ``{ext}`` wildcard
@ -191,6 +157,3 @@ formats depends on the used backend. To query the supported file types on your s
    import matplotlib.pyplot as plt
    plt.gcf().canvas.get_supported_filetypes()
.. seealso::
:mod:`plot_network`, :mod:`plot_summary`, :mod:`plot_p_nom_max`


@ -1,241 +0,0 @@
# SPDX-FileCopyrightText: : 2017-2020 The PyPSA-Eur Authors
#
# SPDX-License-Identifier: GPL-3.0-or-later
name: pypsa-eur
channels:
- bioconda
- gurobi
- conda-forge
- defaults
dependencies:
- _libgcc_mutex=0.1
- affine=2.3.0
- appdirs=1.4.3
- atlite=0.0.3
- attrs=19.3.0
- backcall=0.1.0
- beautifulsoup4=4.9.1
- blas=1.0
- blosc=1.16.3
- bokeh=2.0.2
- bottleneck=1.3.2
- bzip2=1.0.8
- ca-certificates=2020.1.1
- cairo=1.14.12
- cartopy=0.17.0
- certifi=2020.4.5.1
- cffi=1.14.0
- cfitsio=3.470
- cftime=1.1.2
- chardet=3.0.4
- click=7.1.2
- click-plugins=1.1.1
- cligj=0.5.0
- cloudpickle=1.4.1
- coincbc=2.10.5
- configargparse=1.1
- cryptography=2.9.2
- curl=7.67.0
- cycler=0.10.0
- cytoolz=0.10.1
- dask=2.17.2
- dask-core=2.17.2
- datrie=0.8.2
- dbus=1.13.14
- decorator=4.4.2
- distributed=2.17.0
- docutils=0.16
- entsoe-py=0.2.10
- expat=2.2.6
- fiona=1.8.11
- fontconfig=2.13.0
- freetype=2.9.1
- freexl=1.0.5
- fsspec=0.7.4
- gdal=3.0.2
- geographiclib=1.50
- geopandas=0.6.1
- geopy=1.22.0
- geos=3.8.0
- geotiff=1.5.1
- giflib=5.1.4
- gitdb=4.0.2
- gitpython=3.1.1
- glib=2.63.1
- gst-plugins-base=1.14.0
- gstreamer=1.14.0
- gurobi=9.0.2
- hdf4=4.2.13
- hdf5=1.10.4
- heapdict=1.0.1
- icu=58.2
- idna=2.9
- importlib-metadata=1.6.0
- importlib_metadata=1.6.0
- intel-openmp=2020.1
- ipopt=3.13.2
- ipython=7.13.0
- ipython_genutils=0.2.0
- jedi=0.17.0
- jinja2=2.11.2
- joblib=0.15.1
- jpeg=9b
- json-c=0.13.1
- jsonschema=3.2.0
- jupyter_core=4.6.3
- kealib=1.4.7
- kiwisolver=1.2.0
- krb5=1.16.4
- ld_impl_linux-64=2.33.1
- libblas=3.8.0
- libboost=1.67.0
- libcblas=3.8.0
- libcurl=7.67.0
- libdap4=3.19.1
- libedit=3.1.20181209
- libffi=3.3
- libgcc-ng=9.1.0
- libgdal=3.0.2
- libgfortran-ng=7.3.0
- libkml=1.3.0
- liblapack=3.8.0
- libnetcdf=4.6.1
- libpng=1.6.37
- libpq=11.5
- libspatialindex=1.9.3
- libspatialite=4.3.0a
- libssh2=1.9.0
- libstdcxx-ng=9.1.0
- libtiff=4.1.0
- libuuid=1.0.3
- libxcb=1.13
- libxml2=2.9.9
- libxslt=1.1.33
- locket=0.2.0
- lxml=4.5.0
- lz4-c=1.8.1.2
- lzo=2.10
- markupsafe=1.1.1
- matplotlib=3.1.3
- matplotlib-base=3.1.3
- memory_profiler=0.55.0
- metis=5.1.0
- mkl=2020.1
- mkl-service=2.3.0
- mkl_fft=1.0.15
- mkl_random=1.1.1
- mock=4.0.2
- more-itertools=8.3.0
- msgpack-python=1.0.0
- munch=2.5.0
- nbformat=5.0.6
- ncurses=6.2
- netcdf4=1.4.2
- networkx=2.4
- nose=1.3.7
- numexpr=2.7.1
- numpy=1.18.1
- numpy-base=1.18.1
- olefile=0.46
- openjpeg=2.3.0
- openssl=1.1.1g
- owslib=0.19.2
- packaging=20.3
- pandas=1.0.3
- parso=0.7.0
- partd=1.1.0
- pcre=8.43
- pexpect=4.8.0
- pickleshare=0.7.5
- pillow=7.1.2
- pip=20.0.2
- pixman=0.38.0
- pluggy=0.13.1
- ply=3.11
- poppler=0.65.0
- poppler-data=0.4.9
- postgresql=11.5
- powerplantmatching=0.4.5
- progressbar2=3.37.1
- proj=6.2.1
- prompt-toolkit=3.0.5
- prompt_toolkit=3.0.5
- psutil=5.7.0
- ptyprocess=0.6.0
- py=1.8.1
- pycountry=19.8.18
- pycparser=2.20
- pyepsg=0.4.0
- pygments=2.6.1
- pykdtree=1.3.1
- pyomo=5.6.9
- pyopenssl=19.1.0
- pyparsing=2.4.7
- pyproj=2.6.1.post1
- pypsa=0.17.0
- pyqt=5.9.2
- pyrsistent=0.16.0
- pyshp=2.1.0
- pysocks=1.7.1
- pytables=3.6.1
- pytest=5.4.2
- pytest-runner=5.2
- python=3.7.7
- python-dateutil=2.8.1
- python-utils=2.3.0
- python_abi=3.7
- pytz=2020.1
- pyutilib=5.8.0
- pyyaml=5.3.1
- qt=5.9.7
- rasterio=1.1.0
- ratelimiter=1.2.0
- readline=8.0
- requests=2.23.0
- rtree=0.9.4
- scikit-learn=0.22.1
- scipy=1.4.1
- seaborn=0.10.1
- setuptools=47.1.1
- shapely=1.7.0
- sip=4.19.8
- six=1.15.0
- smmap=3.0.2
- snakemake-minimal=5.19.2
- snappy=1.1.7
- snuggs=1.4.7
- sortedcontainers=2.1.0
- soupsieve=2.0.1
- sqlite=3.31.1
- tbb=2018.0.5
- tblib=1.6.0
- tiledb=1.6.3
- tk=8.6.8
- toolz=0.10.0
- toposort=1.5
- tornado=6.0.4
- traitlets=4.3.3
- typing_extensions=3.7.4.1
- tzcode=2020a
- urllib3=1.25.8
- wcwidth=0.1.9
- wheel=0.34.2
- wrapt=1.12.1
- xarray=0.15.1
- xerces-c=3.2.2
- xlrd=1.2.0
- xz=5.2.5
- yaml=0.1.7
- zict=2.0.0
- zipp=3.1.0
- zlib=1.2.11
- zstd=1.3.7
- pip:
- cdsapi==0.2.7
- countrycode==0.2
- descartes==1.1.0
- geokit==1.1.2
- glaes==1.1.2
- tqdm==4.46.1
- vresutils==0.3.1


@ -5,19 +5,17 @@
name: pypsa-eur-docs
channels:
- conda-forge
#- bioconda
dependencies:
- python<=3.7
- pip
- pypsa>=0.17.1
- atlite=0.0.3
- pre-commit

# Dependencies of the workflow itself
#- xlrd
- scikit-learn
- pycountry
- seaborn
#- snakemake-minimal
- memory_profiler
- yaml
- pytables
@ -25,28 +23,19 @@ dependencies:
# Second order dependencies which should really be deps of atlite
- xarray
#- netcdf4
#- bottleneck
#- toolz
#- dask
- progressbar2
- pyyaml>=5.1.0

# Include ipython so that one does not inadvertently drop out of the conda
# environment by calling ipython
# - ipython

# GIS dependencies have to come all from conda-forge
- cartopy
- fiona
- proj
- pyshp
- geopandas
- rasterio
- shapely
- libgdal

# The FRESNA/KIT stuff is not packaged for conda yet
- pip:
  - vresutils==0.3.1
  - git+https://github.com/PyPSA/glaes.git#egg=glaes

envs/environment.fixed.yaml Normal file

@ -0,0 +1,265 @@
# SPDX-FileCopyrightText: : 2017-2020 The PyPSA-Eur Authors
#
# SPDX-License-Identifier: GPL-3.0-or-later
name: pypsa-eur
channels:
- bioconda
- conda-forge
- defaults
dependencies:
- _libgcc_mutex=0.1
- _openmp_mutex=4.5
- affine=2.3.0
- amply=0.1.4
- appdirs=1.4.4
- atlite=0.0.3
- attrs=20.3.0
- backcall=0.2.0
- backports=1.0
- backports.functools_lru_cache=1.6.1
- beautifulsoup4=4.9.3
- blosc=1.20.1
- bokeh=2.2.3
- boost-cpp=1.72.0
- bottleneck=1.3.2
- brotlipy=0.7.0
- bzip2=1.0.8
- c-ares=1.17.1
- ca-certificates=2020.11.8
- cairo=1.16.0
- cartopy=0.17.0
- certifi=2020.11.8
- cffi=1.14.4
- cfitsio=3.470
- cftime=1.3.0
- chardet=3.0.4
- click=7.1.2
- click-plugins=1.1.1
- cligj=0.7.1
- cloudpickle=1.6.0
- coincbc=2.10.5
- conda=4.9.2
- conda-package-handling=1.7.2
- configargparse=1.2.3
- cryptography=3.2.1
- curl=7.71.1
- cycler=0.10.0
- cytoolz=0.11.0
- dask=2.30.0
- dask-core=2.30.0
- datrie=0.8.2
- decorator=4.4.2
- descartes=1.1.0
- distributed=2.30.1
- docutils=0.16
- entsoe-py=0.2.10
- expat=2.2.9
- fiona=1.8.13
- fontconfig=2.13.1
- freetype=2.10.4
- freexl=1.0.5
- fsspec=0.8.4
- gdal=3.0.4
- geographiclib=1.50
- geopandas=0.8.1
- geopy=2.0.0
- geos=3.8.1
- geotiff=1.6.0
- gettext=0.19.8.1
- giflib=5.2.1
- gitdb=4.0.5
- gitpython=3.1.11
- glib=2.66.3
- glpk=4.65
- gmp=6.2.1
- hdf4=4.2.13
- hdf5=1.10.6
- heapdict=1.0.1
- icu=64.2
- idna=2.10
- importlib-metadata=3.1.1
- importlib_metadata=3.1.1
- ipopt=3.13.2
- ipython=7.19.0
- ipython_genutils=0.2.0
- jedi=0.17.2
- jinja2=2.11.2
- joblib=0.17.0
- jpeg=9d
- json-c=0.13.1
- jsonschema=3.2.0
- jupyter_core=4.7.0
- kealib=1.4.14
- kiwisolver=1.3.1
- krb5=1.17.2
- lcms2=2.11
- ld_impl_linux-64=2.35.1
- libarchive=3.3.3
- libblas=3.9.0
- libcblas=3.9.0
- libcurl=7.71.1
- libdap4=3.20.6
- libedit=3.1.20191231
- libev=4.33
- libffi=3.3
- libgcc-ng=9.3.0
- libgdal=3.0.4
- libgfortran-ng=7.5.0
- libgfortran4=7.5.0
- libgfortran5=9.3.0
- libglib=2.66.3
- libgomp=9.3.0
- libiconv=1.16
- libkml=1.3.0
- liblapack=3.9.0
- libnetcdf=4.7.4
- libnghttp2=1.41.0
- libopenblas=0.3.12
- libpng=1.6.37
- libpq=12.3
- libsolv=0.7.16
- libspatialindex=1.9.3
- libspatialite=4.3.0a
- libssh2=1.9.0
- libstdcxx-ng=9.3.0
- libtiff=4.1.0
- libuuid=2.32.1
- libwebp-base=1.1.0
- libxcb=1.13
- libxml2=2.9.10
- libxslt=1.1.33
- locket=0.2.0
- lxml=4.6.2
- lz4-c=1.9.2
- lzo=2.10
- mamba=0.7.3
- markupsafe=1.1.1
- matplotlib-base=3.3.3
- memory_profiler=0.58.0
- metis=5.1.0
- mock=4.0.2
- msgpack-python=1.0.0
- munch=2.5.0
- nbformat=5.0.8
- ncurses=6.2
- netcdf4=1.5.4
- networkx=2.5
- nose=1.3.7
- numexpr=2.7.1
- numpy=1.19.0
- olefile=0.46
- openjpeg=2.3.1
- openssl=1.1.1h
- owslib=0.20.0
- packaging=20.7
- pandas=1.1.4
- parso=0.7.1
- partd=1.1.0
- patsy=0.5.1
- pcre=8.44
- pexpect=4.8.0
- pickleshare=0.7.5
- pillow=8.0.1
- pip=20.3.1
- pixman=0.38.0
- ply=3.11
- poppler=0.87.0
- poppler-data=0.4.10
- postgresql=12.3
- powerplantmatching=0.4.8
- progressbar2=3.53.1
- proj=7.0.0
- prompt-toolkit=3.0.8
- psutil=5.7.3
- pthread-stubs=0.4
- ptyprocess=0.6.0
- pulp=2.3.1
- pycosat=0.6.3
- pycountry=20.7.3
- pycparser=2.20
- pyepsg=0.4.0
- pygments=2.7.2
- pykdtree=1.3.4
- pyomo=5.7.1
- pyopenssl=20.0.0
- pyparsing=2.4.7
- pyproj=2.6.1.post1
- pypsa=0.17.1
- pyrsistent=0.17.3
- pyshp=2.1.2
- pysocks=1.7.1
- pytables=3.6.1
- python=3.8.6
- python-dateutil=2.8.1
- python-utils=2.4.0
- python_abi=3.8
- pytz=2020.4
- pyutilib=6.0.0
- pyyaml=5.3.1
- rasterio=1.1.5
- ratelimiter=1.2.0
- readline=8.0
- reproc=14.2.1
- reproc-cpp=14.2.1
- requests=2.25.0
- rtree=0.9.4
- ruamel_yaml=0.15.80
- scikit-learn=0.23.2
- scipy=1.5.3
- seaborn=0.11.0
- seaborn-base=0.11.0
- setuptools=49.6.0
- shapely=1.7.1
- six=1.15.0
- smmap=3.0.4
- snakemake-minimal=5.30.1
- snuggs=1.4.7
- sortedcontainers=2.3.0
- soupsieve=2.0.1
- sqlite=3.34.0
- statsmodels=0.12.1
- tbb=2020.2
- tblib=1.6.0
- threadpoolctl=2.1.0
- tiledb=1.7.7
- tk=8.6.10
- toolz=0.11.1
- toposort=1.5
- tornado=6.1
- tqdm=4.54.1
- traitlets=5.0.5
- typing_extensions=3.7.4.3
- tzcode=2020a
- urllib3=1.25.11
- wcwidth=0.2.5
- wheel=0.36.1
- wrapt=1.12.1
- xarray=0.16.2
- xerces-c=3.2.2
- xlrd=1.2.0
- xorg-kbproto=1.0.7
- xorg-libice=1.0.10
- xorg-libsm=1.2.3
- xorg-libx11=1.6.12
- xorg-libxau=1.0.9
- xorg-libxdmcp=1.1.3
- xorg-libxext=1.3.4
- xorg-libxrender=0.9.10
- xorg-renderproto=0.11.1
- xorg-xextproto=7.3.0
- xorg-xproto=7.0.31
- xz=5.2.5
- yaml=0.2.5
- zict=2.0.0
- zipp=3.4.0
- zlib=1.2.11
- zstd=1.4.5
- pip:
- cdsapi==0.4.0
- countrycode==0.2
- geokit==1.1.2
- glaes==1.1.2
- sklearn==0.0
- tsam==1.1.0
- vresutils==0.3.1


@ -4,19 +4,20 @@
name: pypsa-eur
channels:
- defaults
- conda-forge
- bioconda
- http://conda.anaconda.org/gurobi
dependencies:
- python
- pip
- mamba # esp for windows build
- pypsa>=0.17.1
- atlite=0.0.3

# Dependencies of the workflow itself
- xlrd
- openpyxl
- scikit-learn
- pycountry
- seaborn
@ -25,7 +26,8 @@ dependencies:
- yaml
- pytables
- lxml
- powerplantmatching>=0.4.8
- numpy<=1.19.0 # otherwise macos fails
# Second order dependencies which should really be deps of atlite
- xarray
@ -36,8 +38,7 @@ dependencies:
- progressbar2
- pyyaml>=5.1.0

# Keep in conda environment when calling ipython
- ipython

# GIS dependencies:
@ -48,13 +49,12 @@ dependencies:
- geopandas
- rasterio
- shapely
- libgdal<=3.0.4
- descartes
# Solvers
- gurobi:gurobi # until https://github.com/conda-forge/pypsa-feedstock/issues/4 closed
- pip:
  - vresutils==0.3.1
  - tsam>=1.1.0
  - git+https://github.com/PyPSA/glaes.git#egg=glaes
  - git+https://github.com/PyPSA/geokit.git#egg=geokit
  - cdsapi


@ -44,6 +44,7 @@ def configure_logging(snakemake, skip_handlers=False):
    })
    logging.basicConfig(**kwargs)


def load_network(import_name=None, custom_components=None):
    """
    Helper for importing a pypsa.Network with additional custom components.
@ -70,7 +71,6 @@ def load_network(import_name=None, custom_components=None):
    -------
    pypsa.Network
    """
    import pypsa
    from pypsa.descriptors import Dict
@ -90,10 +90,12 @@ def load_network(import_name=None, custom_components=None):
                         override_components=override_components,
                         override_component_attrs=override_component_attrs)


def pdbcast(v, h):
    return pd.DataFrame(v.values.reshape((-1, 1)) * h.values,
                        index=v.index, columns=h.index)
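``pdbcast`` builds a DataFrame from the outer product of a per-bus vector ``v`` and a per-snapshot vector ``h``. The pure-Python sketch below mirrors that arithmetic without the pandas machinery:

```python
def outer_product(v, h):
    """Mirror pdbcast's broadcasting: result[i][j] = v[i] * h[j]."""
    return [[vi * hj for hj in h] for vi in v]

print(outer_product([1, 2], [10, 20, 30]))  # [[10, 20, 30], [20, 40, 60]]
```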
def load_network_for_plots(fn, tech_costs, config, combine_hydro_ps=True):
    import pypsa
    from add_electricity import update_transmission_costs, load_costs
@ -113,7 +115,7 @@ def load_network_for_plots(fn, tech_costs, config, combine_hydro_ps=True):
    if combine_hydro_ps:
        n.storage_units.loc[n.storage_units.carrier.isin({'PHS', 'hydro'}), 'carrier'] = 'hydro+PHS'

    # if the carrier was not set on the heat storage units
    # bus_carrier = n.storage_units.bus.map(n.buses.carrier)
    # n.storage_units.loc[bus_carrier == "heat","carrier"] = "water tanks"
@ -168,6 +170,7 @@ def aggregate_costs(n, flatten=False, opts=None, existing_only=False):
            n.iterate_components(iterkeys(components), skip_empty=False),
            itervalues(components)
    ):
        if c.df.empty: continue
        if not existing_only: p_nom += "_opt"
        costs[(c.list_name, 'capital')] = (c.df[p_nom] * c.df.capital_cost).groupby(c.df.carrier).sum()
        if p_attr is not None:


@ -24,13 +24,13 @@ Relevant Settings
conventional_carriers:
co2limit:
extendable_carriers:
include_renewable_capacities_from_OPSD:
estimate_renewable_capacities_from_capacity_stats:
load:
scaling_factor:
renewable:
hydro:
carriers:
hydro_max_hours:
@ -52,15 +52,8 @@ Inputs
.. image:: ../img/hydrocapacities.png
   :scale: 34 %

- ``data/geth2015_hydro_capacities.csv``: alternative to capacities above; not currently used!
- ``resources/opsd_load.csv``: Hourly per-country load profiles.
.. image:: ../img/load-box.png
:scale: 33 %
.. image:: ../img/load-ts.png
:scale: 33 %
- ``resources/regions_onshore.geojson``: confer :ref:`busregions`
- ``resources/nuts3_shapes.geojson``: confer :ref:`shapes`
- ``resources/powerplants.csv``: confer :ref:`powerplants`
@ -90,25 +83,28 @@ It further adds extendable ``generators`` with **zero** capacity for
- additional open- and combined-cycle gas turbines (if ``OCGT`` and/or ``CCGT`` is listed in the config setting ``electricity: extendable_carriers``)
"""
import logging
from _helpers import configure_logging

import pypsa
import pandas as pd
import numpy as np
import xarray as xr
import geopandas as gpd
import powerplantmatching as pm
from powerplantmatching.export import map_country_bus

from vresutils.load import timeseries_opsd
from vresutils import transfer as vtransfer

idx = pd.IndexSlice

logger = logging.getLogger(__name__)
def normed(s): return s/s.sum()


def _add_missing_carriers_from_costs(n, costs, carriers):
    missing_carriers = pd.Index(carriers).difference(n.carriers.index)
    if missing_carriers.empty: return
@ -169,13 +165,10 @@ def load_costs(Nyears=1., tech_costs=None, config=None, elec_config=None):
    def costs_for_storage(store, link1, link2=None, max_hours=1.):
        capital_cost = link1['capital_cost'] + max_hours * store['capital_cost']
        efficiency = link1['efficiency']**0.5
        if link2 is not None:
            capital_cost += link2['capital_cost']
            efficiency *= link2['efficiency']**0.5
        return pd.Series(dict(capital_cost=capital_cost,
                              marginal_cost=0.,
                              efficiency=efficiency,
                              co2_emissions=0.))
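The cost arithmetic of ``costs_for_storage`` can be illustrated in isolation; the helper below is a sketch using plain floats instead of cost-table rows:

```python
def storage_capital_cost(store_cost, link1_cost, link2_cost=None, max_hours=1.0):
    """Capital cost of energy storage: charging link plus max_hours of
    store capacity, optionally plus a separate discharging link."""
    capital_cost = link1_cost + max_hours * store_cost
    if link2_cost is not None:
        capital_cost += link2_cost
    return capital_cost

print(storage_capital_cost(10.0, 100.0, max_hours=6.0))        # 160.0
print(storage_capital_cost(10.0, 100.0, 50.0, max_hours=6.0))  # 210.0
```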
    if elec_config is None:
@ -196,6 +189,7 @@ def load_costs(Nyears=1., tech_costs=None, config=None, elec_config=None):
    return costs


def load_powerplants(ppl_fn=None):
    if ppl_fn is None:
        ppl_fn = snakemake.input.powerplants
@ -207,27 +201,19 @@ def load_powerplants(ppl_fn=None):
            .replace({'carrier': carrier_dict}))
# =============================================================================
# Attach components
# =============================================================================
# ### Load
def attach_load(n):
    substation_lv_i = n.buses.index[n.buses['substation_lv']]
    regions = (gpd.read_file(snakemake.input.regions).set_index('name')
               .reindex(substation_lv_i))
    opsd_load = (pd.read_csv(snakemake.input.load, index_col=0, parse_dates=True)
                 .filter(items=snakemake.config['countries']))

    scaling = snakemake.config.get('load', {}).get('scaling_factor', 1.0)
    logger.info(f"Load data scaled with scaling factor {scaling}.")
    opsd_load *= scaling
    nuts3 = gpd.read_file(snakemake.input.nuts3_shapes).set_index('index')

    def normed(x): return x.divide(x.sum())

    def upsample(cntry, group):
        l = opsd_load[cntry]
        if len(group) == 1:
@ -242,7 +228,8 @@ def attach_load(n):
                             index=group.index)
        # relative factors 0.6 and 0.4 have been determined from a linear
        # regression on the country to continent load data
        # (refer to vresutils.load._upsampling_weights)
        factors = normed(0.6 * normed(gdp_n) + 0.4 * normed(pop_n))
        return pd.DataFrame(factors.values * l.values[:,np.newaxis],
                            index=l.index, columns=factors.index)
@ -252,7 +239,6 @@ def attach_load(n):
    n.madd("Load", substation_lv_i, bus=substation_lv_i, p_set=load)
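The 0.6/0.4 GDP–population weighting used in ``upsample`` can be sketched without pandas; this illustrative helper distributes a national load time series over regions:

```python
def disaggregate_load(country_load, gdp, pop):
    """Distribute each national load value over regions with
    weights normed(0.6 * normed(gdp) + 0.4 * normed(pop))."""
    def normed(xs):
        total = sum(xs)
        return [x / total for x in xs]
    factors = normed([0.6 * g + 0.4 * p
                      for g, p in zip(normed(gdp), normed(pop))])
    return [[f * load for f in factors] for load in country_load]

print(disaggregate_load([100.0], gdp=[1.0, 1.0], pop=[1.0, 1.0]))  # [[50.0, 50.0]]
```

With equal GDP and population weights each region receives half the national load, and the regional values always sum back to the original series.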
### Set line costs
def update_transmission_costs(n, costs, length_factor=1.0, simple_hvdc_costs=False):
    n.lines['capital_cost'] = (n.lines['length'] * length_factor *
@ -261,6 +247,11 @@ def update_transmission_costs(n, costs, length_factor=1.0, simple_hvdc_costs=Fal
    if n.links.empty: return

    dc_b = n.links.carrier == 'DC'
# If there are no dc links, then the 'underwater_fraction' column
# may be missing. Therefore we have to return here.
if n.links.loc[dc_b].empty: return
    if simple_hvdc_costs:
        costs = (n.links.loc[dc_b, 'length'] * length_factor *
                 costs.at['HVDC overhead', 'capital_cost'])
@ -273,7 +264,6 @@ def update_transmission_costs(n, costs, length_factor=1.0, simple_hvdc_costs=Fal
                 costs.at['HVDC inverter pair', 'capital_cost'])
    n.links.loc[dc_b, 'capital_cost'] = costs
### Generators
def attach_wind_and_solar(n, costs): def attach_wind_and_solar(n, costs):
for tech in snakemake.config['renewable']: for tech in snakemake.config['renewable']:
@ -312,15 +302,17 @@ def attach_wind_and_solar(n, costs):
p_max_pu=ds['profile'].transpose('time', 'bus').to_pandas()) p_max_pu=ds['profile'].transpose('time', 'bus').to_pandas())
def attach_conventional_generators(n, costs, ppl): def attach_conventional_generators(n, costs, ppl):
carriers = snakemake.config['electricity']['conventional_carriers'] carriers = snakemake.config['electricity']['conventional_carriers']
_add_missing_carriers_from_costs(n, costs, carriers) _add_missing_carriers_from_costs(n, costs, carriers)
ppl = (ppl.query('carrier in @carriers').join(costs, on='carrier') ppl = (ppl.query('carrier in @carriers').join(costs, on='carrier')
.rename(index=lambda s: 'C' + str(s))) .rename(index=lambda s: 'C' + str(s)))
logger.info('Adding {} generators with capacities\n{}' logger.info('Adding {} generators with capacities [MW] \n{}'
.format(len(ppl), ppl.groupby('carrier').p_nom.sum())) .format(len(ppl), ppl.groupby('carrier').p_nom.sum()))
n.madd("Generator", ppl.index, n.madd("Generator", ppl.index,
carrier=ppl.carrier, carrier=ppl.carrier,
bus=ppl.bus, bus=ppl.bus,
@ -328,6 +320,7 @@ def attach_conventional_generators(n, costs, ppl):
efficiency=ppl.efficiency, efficiency=ppl.efficiency,
marginal_cost=ppl.marginal_cost, marginal_cost=ppl.marginal_cost,
capital_cost=0) capital_cost=0)
logger.warning(f'Capital costs for conventional generators put to 0 EUR/MW.') logger.warning(f'Capital costs for conventional generators put to 0 EUR/MW.')
@ -346,7 +339,7 @@ def attach_hydro(n, costs, ppl):
country = ppl['bus'].map(n.buses.country).rename("country") country = ppl['bus'].map(n.buses.country).rename("country")
inflow_idx = ror.index | hydro.index inflow_idx = ror.index.union(hydro.index)
if not inflow_idx.empty: if not inflow_idx.empty:
dist_key = ppl.loc[inflow_idx, 'p_nom'].groupby(country).transform(normed) dist_key = ppl.loc[inflow_idx, 'p_nom'].groupby(country).transform(normed)
@ -377,8 +370,8 @@ def attach_hydro(n, costs, ppl):
.where(lambda df: df<=1., other=1.))) .where(lambda df: df<=1., other=1.)))
if 'PHS' in carriers and not phs.empty: if 'PHS' in carriers and not phs.empty:
# fill missing max hours to config value and assume no natural inflow # fill missing max hours to config value and
# due to lack of data # assume no natural inflow due to lack of data
phs = phs.replace({'max_hours': {0: c['PHS_max_hours']}}) phs = phs.replace({'max_hours': {0: c['PHS_max_hours']}})
n.madd('StorageUnit', phs.index, n.madd('StorageUnit', phs.index,
carrier='PHS', carrier='PHS',
@ -416,7 +409,6 @@ def attach_hydro(n, costs, ppl):
hydro_max_hours = hydro.max_hours.where(hydro.max_hours > 0, hydro_max_hours = hydro.max_hours.where(hydro.max_hours > 0,
hydro.country.map(max_hours_country)).fillna(6) hydro.country.map(max_hours_country)).fillna(6)
n.madd('StorageUnit', hydro.index, carrier='hydro', n.madd('StorageUnit', hydro.index, carrier='hydro',
bus=hydro['bus'], bus=hydro['bus'],
p_nom=hydro['p_nom'], p_nom=hydro['p_nom'],
@ -435,6 +427,7 @@ def attach_hydro(n, costs, ppl):
def attach_extendable_generators(n, costs, ppl): def attach_extendable_generators(n, costs, ppl):
elec_opts = snakemake.config['electricity'] elec_opts = snakemake.config['electricity']
carriers = pd.Index(elec_opts['extendable_carriers']['Generator']) carriers = pd.Index(elec_opts['extendable_carriers']['Generator'])
_add_missing_carriers_from_costs(n, costs, carriers) _add_missing_carriers_from_costs(n, costs, carriers)
for tech in carriers: for tech in carriers:
@ -480,6 +473,39 @@ def attach_extendable_generators(n, costs, ppl):
"Only OCGT, CCGT and nuclear are allowed at the moment.") "Only OCGT, CCGT and nuclear are allowed at the moment.")
def attach_OPSD_renewables(n):
available = ['DE', 'FR', 'PL', 'CH', 'DK', 'CZ', 'SE', 'GB']
tech_map = {'Onshore': 'onwind', 'Offshore': 'offwind', 'Solar': 'solar'}
countries = set(available) & set(n.buses.country)
techs = snakemake.config['electricity'].get('renewable_capacities_from_OPSD', [])
tech_map = {k: v for k, v in tech_map.items() if v in techs}
if not tech_map:
return
logger.info(f'Using OPSD renewable capacities in {", ".join(countries)} '
f'for technologies {", ".join(tech_map.values())}.')
df = pd.concat([pm.data.OPSD_VRE_country(c) for c in countries])
technology_b = ~df.Technology.isin(['Onshore', 'Offshore'])
df['Fueltype'] = df.Fueltype.where(technology_b, df.Technology)
df = df.query('Fueltype in @tech_map').powerplant.convert_country_to_alpha2()
for fueltype, carrier_like in tech_map.items():
gens = n.generators[lambda df: df.carrier.str.contains(carrier_like)]
buses = n.buses.loc[gens.bus.unique()]
gens_per_bus = gens.groupby('bus').p_nom.count()
caps = map_country_bus(df.query('Fueltype == @fueltype'), buses)
caps = caps.groupby(['bus']).Capacity.sum()
caps = caps / gens_per_bus.reindex(caps.index, fill_value=1)
n.generators.p_nom.update(gens.bus.map(caps).dropna())
def estimate_renewable_capacities(n, tech_map=None): def estimate_renewable_capacities(n, tech_map=None):
if tech_map is None: if tech_map is None:
tech_map = (snakemake.config['electricity'] tech_map = (snakemake.config['electricity']
@ -487,23 +513,33 @@ def estimate_renewable_capacities(n, tech_map=None):
if len(tech_map) == 0: return if len(tech_map) == 0: return
capacities = (ppm.data.Capacity_stats().powerplant.convert_country_to_alpha2() capacities = (pm.data.Capacity_stats().powerplant.convert_country_to_alpha2()
[lambda df: df.Energy_Source_Level_2] [lambda df: df.Energy_Source_Level_2]
.set_index(['Fueltype', 'Country']).sort_index()) .set_index(['Fueltype', 'Country']).sort_index())
countries = n.buses.country.unique() countries = n.buses.country.unique()
if len(countries) == 0: return
logger.info('heuristics applied to distribute renewable capacities [MW] \n{}'
.format(capacities.query('Fueltype in @tech_map.keys() and Capacity >= 0.1')
.groupby('Country').agg({'Capacity': 'sum'})))
for ppm_fueltype, techs in tech_map.items(): for ppm_fueltype, techs in tech_map.items():
tech_capacities = capacities.loc[ppm_fueltype, 'Capacity']\ tech_capacities = capacities.loc[ppm_fueltype, 'Capacity']\
.reindex(countries, fill_value=0.) .reindex(countries, fill_value=0.)
tech_i = n.generators.query('carrier in @techs').index #tech_i = n.generators.query('carrier in @techs').index
tech_i = (n.generators.query('carrier in @techs')
[n.generators.query('carrier in @techs')
.bus.map(n.buses.country).isin(countries)].index)
n.generators.loc[tech_i, 'p_nom'] = ( n.generators.loc[tech_i, 'p_nom'] = (
(n.generators_t.p_max_pu[tech_i].mean() * (n.generators_t.p_max_pu[tech_i].mean() *
n.generators.loc[tech_i, 'p_nom_max']) # maximal yearly generation n.generators.loc[tech_i, 'p_nom_max']) # maximal yearly generation
.groupby(n.generators.bus.map(n.buses.country)) # for each country .groupby(n.generators.bus.map(n.buses.country))
.transform(lambda s: normed(s) * tech_capacities.at[s.name]) .transform(lambda s: normed(s) * tech_capacities.at[s.name])
.where(lambda s: s>0.1, 0.)) # only capacities above 100kW .where(lambda s: s>0.1, 0.)) # only capacities above 100kW
def add_nice_carrier_names(n, config=None): def add_nice_carrier_names(n, config=None):
if config is None: config = snakemake.config if config is None: config = snakemake.config
carrier_i = n.carriers.index carrier_i = n.carriers.index
@ -540,6 +576,8 @@ if __name__ == "__main__":
attach_extendable_generators(n, costs, ppl) attach_extendable_generators(n, costs, ppl)
estimate_renewable_capacities(n) estimate_renewable_capacities(n)
attach_OPSD_renewables(n)
add_nice_carrier_names(n) add_nice_carrier_names(n)
n.export_to_netcdf(snakemake.output[0]) n.export_to_netcdf(snakemake.output[0])
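The reworked `estimate_renewable_capacities` scales each generator's maximal yearly generation so that the resulting capacities sum to the national statistic per country. A minimal sketch of that `groupby`/`transform` pattern, using made-up bus names, potentials, and a made-up national total:

```python
import pandas as pd

def normed(s):
    # scale a series so its entries sum to one (as in add_electricity.py)
    return s / s.sum()

# hypothetical per-generator yearly generation potential and bus->country mapping
potential = pd.Series({'bus1 solar': 30.0, 'bus2 solar': 10.0})
country = pd.Series({'bus1 solar': 'DE', 'bus2 solar': 'DE'})
national_total = {'DE': 100.0}  # illustrative reported capacity for the country

# distribute the national total proportionally to each generator's potential
p_nom = (potential.groupby(country)
         .transform(lambda s: normed(s) * national_total[s.name]))
print(p_nom)  # bus1 solar -> 75.0, bus2 solar -> 25.0
```

The `.where(lambda s: s > 0.1, 0.)` step in the diff then zeroes out allocations below 100 kW.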
View File
@@ -37,30 +37,33 @@ Inputs
Outputs
-------
-- ``networks/{network}_s{simpl}_{clusters}_ec.nc``:
+- ``networks/elec_s{simpl}_{clusters}_ec.nc``:
Description
-----------
-The rule :mod:`add_extra_components` attaches additional extendable components to the clustered and simplified network. These can be configured in the ``config.yaml`` at ``electricity: extendable_carriers: ``. It processes ``networks/{network}_s{simpl}_{clusters}.nc`` to build ``networks/{network}_s{simpl}_{clusters}_ec.nc``, which in contrast to the former (depending on the configuration) contain with **zero** initial capacity
+The rule :mod:`add_extra_components` attaches additional extendable components to the clustered and simplified network. These can be configured in the ``config.yaml`` at ``electricity: extendable_carriers:``. It processes ``networks/elec_s{simpl}_{clusters}.nc`` to build ``networks/elec_s{simpl}_{clusters}_ec.nc``, which in contrast to the former (depending on the configuration) contain with **zero** initial capacity
- ``StorageUnits`` of carrier 'H2' and/or 'battery'. If this option is chosen, every bus is given an extendable ``StorageUnit`` of the corresponding carrier. The energy and power capacities are linked through a parameter that specifies the energy capacity as maximum hours at full dispatch power and is configured in ``electricity: max_hours:``. This linkage leads to one investment variable per storage unit. The default ``max_hours`` lead to long-term hydrogen and short-term battery storage units.
- ``Stores`` of carrier 'H2' and/or 'battery' in combination with ``Links``. If this option is chosen, the script adds extra buses with corresponding carrier where energy ``Stores`` are attached and which are connected to the corresponding power buses via two links, one each for charging and discharging. This leads to three investment variables for the energy capacity, charging and discharging capacity of the storage unit.
"""
import logging
-logger = logging.getLogger(__name__)
from _helpers import configure_logging
+import pypsa
import pandas as pd
import numpy as np
-import pypsa
from add_electricity import (load_costs, add_nice_carrier_names,
_add_missing_carriers_from_costs)
idx = pd.IndexSlice
+logger = logging.getLogger(__name__)
def attach_storageunits(n, costs):
elec_opts = snakemake.config['electricity']
carriers = elec_opts['extendable_carriers']['StorageUnit']
@@ -70,6 +73,9 @@ def attach_storageunits(n, costs):
buses_i = n.buses.index
+lookup_store = {"H2": "electrolysis", "battery": "battery inverter"}
+lookup_dispatch = {"H2": "fuel cell", "battery": "battery inverter"}
for carrier in carriers:
n.madd("StorageUnit", buses_i, ' ' + carrier,
bus=buses_i,
@@ -77,11 +83,12 @@ def attach_storageunits(n, costs):
p_nom_extendable=True,
capital_cost=costs.at[carrier, 'capital_cost'],
marginal_cost=costs.at[carrier, 'marginal_cost'],
-efficiency_store=costs.at[carrier, 'efficiency'],
-efficiency_dispatch=costs.at[carrier, 'efficiency'],
+efficiency_store=costs.at[lookup_store[carrier], 'efficiency'],
+efficiency_dispatch=costs.at[lookup_dispatch[carrier], 'efficiency'],
max_hours=max_hours[carrier],
cyclic_state_of_charge=True)
def attach_stores(n, costs):
elec_opts = snakemake.config['electricity']
carriers = elec_opts['extendable_carriers']['Store']
@@ -107,7 +114,8 @@ def attach_stores(n, costs):
carrier='H2 electrolysis',
p_nom_extendable=True,
efficiency=costs.at["electrolysis", "efficiency"],
-capital_cost=costs.at["electrolysis", "capital_cost"])
+capital_cost=costs.at["electrolysis", "capital_cost"],
+marginal_cost=costs.at["electrolysis", "marginal_cost"])
n.madd("Link", h2_buses_i + " Fuel Cell",
bus0=h2_buses_i,
@@ -116,7 +124,8 @@ def attach_stores(n, costs):
p_nom_extendable=True,
efficiency=costs.at["fuel cell", "efficiency"],
#NB: fixed cost is per MWel
-capital_cost=costs.at["fuel cell", "capital_cost"] * costs.at["fuel cell", "efficiency"])
+capital_cost=costs.at["fuel cell", "capital_cost"] * costs.at["fuel cell", "efficiency"],
+marginal_cost=costs.at["fuel cell", "marginal_cost"])
if 'battery' in carriers:
b_buses_i = n.madd("Bus", buses_i + " battery", carrier="battery", **bus_sub_dict)
@@ -126,23 +135,27 @@ def attach_stores(n, costs):
carrier='battery',
e_cyclic=True,
e_nom_extendable=True,
-capital_cost=costs.at['battery storage', 'capital_cost'])
+capital_cost=costs.at['battery storage', 'capital_cost'],
+marginal_cost=costs.at["battery", "marginal_cost"])
n.madd("Link", b_buses_i + " charger",
bus0=buses_i,
bus1=b_buses_i,
carrier='battery charger',
-efficiency=costs.at['battery inverter', 'efficiency']**0.5,
+efficiency=costs.at['battery inverter', 'efficiency'],
capital_cost=costs.at['battery inverter', 'capital_cost'],
-p_nom_extendable=True)
+p_nom_extendable=True,
+marginal_cost=costs.at["battery inverter", "marginal_cost"])
n.madd("Link", b_buses_i + " discharger",
bus0=b_buses_i,
bus1=buses_i,
carrier='battery discharger',
-efficiency=costs.at['battery inverter','efficiency']**0.5,
+efficiency=costs.at['battery inverter','efficiency'],
capital_cost=costs.at['battery inverter', 'capital_cost'],
-p_nom_extendable=True)
+p_nom_extendable=True,
+marginal_cost=costs.at["battery inverter", "marginal_cost"])
def attach_hydrogen_pipelines(n, costs):
elec_opts = snakemake.config['electricity']
@@ -176,6 +189,7 @@ def attach_hydrogen_pipelines(n, costs):
efficiency=costs.at['H2 pipeline','efficiency'],
carrier="H2 pipeline")
if __name__ == "__main__":
if 'snakemake' not in globals():
from _helpers import mock_snakemake
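One arithmetic consequence of dropping the `**0.5` split on the battery links: the inverter efficiency now applies in full on each of the charging and discharging conversions, so the modelled round-trip efficiency of the two-link battery changes from η to η². A quick sketch with an illustrative (made-up) one-way efficiency:

```python
# illustrative one-way battery-inverter efficiency, not a value from the cost data
eta = 0.9

# before: sqrt(eta) applied on each link -> round trip of eta
round_trip_before = eta**0.5 * eta**0.5

# after: full eta applied on both charging and discharging -> eta squared
round_trip_after = eta * eta

print(round_trip_before, round_trip_after)
```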
View File
@@ -63,14 +63,16 @@ Description
"""
import logging
-logger = logging.getLogger(__name__)
from _helpers import configure_logging
+import pypsa
import yaml
import pandas as pd
import geopandas as gpd
import numpy as np
import scipy as sp
+import networkx as nx
from scipy.sparse import csgraph
from six import iteritems
from itertools import product
@@ -78,9 +80,8 @@ from itertools import product
from shapely.geometry import Point, LineString
import shapely, shapely.prepared, shapely.wkt
-import networkx as nx
-import pypsa
+logger = logging.getLogger(__name__)
def _get_oid(df):
if "tags" in df.columns:
@@ -88,12 +89,14 @@ def _get_oid(df):
else:
return pd.Series(np.nan, df.index)
def _get_country(df):
if "tags" in df.columns:
return df.tags.str.extract('"country"=>"([A-Z]{2})"', expand=False)
else:
return pd.Series(np.nan, df.index)
def _find_closest_links(links, new_links, distance_upper_bound=1.5):
treecoords = np.asarray([np.asarray(shapely.wkt.loads(s))[[0, -1]].flatten()
for s in links.geometry])
@@ -109,6 +112,7 @@ def _find_closest_links(links, new_links, distance_upper_bound=1.5):
[lambda ds: ~ds.index.duplicated(keep='first')]\
.sort_index()['i']
def _load_buses_from_eg():
buses = (pd.read_csv(snakemake.input.eg_buses, quotechar="'",
true_values='t', false_values='f',
@@ -130,6 +134,7 @@ def _load_buses_from_eg():
return pd.DataFrame(buses.loc[buses_in_europe_b & buses_with_v_nom_to_keep_b])
def _load_transformers_from_eg(buses):
transformers = (pd.read_csv(snakemake.input.eg_transformers, quotechar="'",
true_values='t', false_values='f',
@@ -140,6 +145,7 @@ def _load_transformers_from_eg(buses):
return transformers
def _load_converters_from_eg(buses):
converters = (pd.read_csv(snakemake.input.eg_converters, quotechar="'",
true_values='t', false_values='f',
@@ -201,8 +207,8 @@ def _add_links_from_tyndp(buses, links):
buses = buses.loc[keep_b['Bus']]
links = links.loc[keep_b['Link']]
-links_tyndp["j"] = _find_closest_links(links, links_tyndp, distance_upper_bound=0.15)
-# Corresponds approximately to 15km tolerances
+links_tyndp["j"] = _find_closest_links(links, links_tyndp, distance_upper_bound=0.20)
+# Corresponds approximately to 20km tolerances
if links_tyndp["j"].notnull().any():
logger.info("TYNDP links already in the dataset (skipping): " + ", ".join(links_tyndp.loc[links_tyndp["j"].notnull(), "Name"]))
@@ -241,6 +247,7 @@ def _add_links_from_tyndp(buses, links):
return buses, links.append(links_tyndp, sort=True)
def _load_lines_from_eg(buses):
lines = (pd.read_csv(snakemake.input.eg_lines, quotechar="'", true_values='t', false_values='f',
dtype=dict(line_id='str', bus0='str', bus1='str',
@@ -254,11 +261,13 @@ def _load_lines_from_eg(buses):
return lines
def _apply_parameter_corrections(n):
with open(snakemake.input.parameter_corrections) as f:
corrections = yaml.safe_load(f)
if corrections is None: return
for component, attrs in iteritems(corrections):
df = n.df(component)
oid = _get_oid(df)
@@ -275,6 +284,7 @@ def _apply_parameter_corrections(n):
inds = r.index.intersection(df.index)
df.loc[inds, attr] = r[inds].astype(df[attr].dtype)
def _set_electrical_parameters_lines(lines):
v_noms = snakemake.config['electricity']['voltages']
linetypes = snakemake.config['lines']['types']
@@ -286,12 +296,14 @@ def _set_electrical_parameters_lines(lines):
return lines
def _set_lines_s_nom_from_linetypes(n):
n.lines['s_nom'] = (
np.sqrt(3) * n.lines['type'].map(n.line_types.i_nom) *
n.lines['v_nom'] * n.lines.num_parallel
)
def _set_electrical_parameters_links(links):
if links.empty: return links
@@ -301,7 +313,7 @@ def _set_electrical_parameters_links(links):
links_p_nom = pd.read_csv(snakemake.input.links_p_nom)
-#Filter links that are not in operation anymore
+# filter links that are not in operation anymore
removed_b = links_p_nom.Remarks.str.contains('Shut down|Replaced', na=False)
links_p_nom = links_p_nom[~removed_b]
@@ -318,6 +330,7 @@ def _set_electrical_parameters_links(links):
return links
def _set_electrical_parameters_converters(converters):
p_max_pu = snakemake.config['links'].get('p_max_pu', 1.)
converters['p_max_pu'] = p_max_pu
@@ -331,6 +344,7 @@ def _set_electrical_parameters_converters(converters):
return converters
def _set_electrical_parameters_transformers(transformers):
config = snakemake.config['transformers']
@@ -341,9 +355,11 @@ def _set_electrical_parameters_transformers(transformers):
return transformers
def _remove_dangling_branches(branches, buses):
return pd.DataFrame(branches.loc[branches.bus0.isin(buses.index) & branches.bus1.isin(buses.index)])
def _remove_unconnected_components(network):
_, labels = csgraph.connected_components(network.adjacency_matrix(), directed=False)
component = pd.Series(labels, index=network.buses.index)
@@ -356,6 +372,7 @@ def _remove_unconnected_components(network):
return network[component == component_sizes.index[0]]
def _set_countries_and_substations(n):
buses = n.buses
@@ -442,6 +459,7 @@ def _set_countries_and_substations(n):
return buses
def _replace_b2b_converter_at_country_border_by_link(n):
# Affects only the B2B converter in Lithuania at the Polish border at the moment
buscntry = n.buses.country
@@ -479,6 +497,7 @@ def _replace_b2b_converter_at_country_border_by_link(n):
logger.info("Replacing B2B converter `{}` together with bus `{}` and line `{}` by an HVDC tie-line {}-{}"
.format(i, b0, line, linkcntry.at[i], buscntry.at[b1]))
def _set_links_underwater_fraction(n):
if n.links.empty: return
@@ -489,6 +508,7 @@ def _set_links_underwater_fraction(n):
links = gpd.GeoSeries(n.links.geometry.dropna().map(shapely.wkt.loads))
n.links['underwater_fraction'] = links.intersection(offshore_shape).length / links.length
def _adjust_capacities_of_under_construction_branches(n):
lines_mode = snakemake.config['lines'].get('under_construction', 'undef')
if lines_mode == 'zero':
@@ -513,6 +533,7 @@ def _adjust_capacities_of_under_construction_branches(n):
return n
def base_network():
buses = _load_buses_from_eg()
@@ -565,4 +586,5 @@ if __name__ == "__main__":
configure_logging(snakemake)
n = base_network()
n.export_to_netcdf(snakemake.output[0])
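`_find_closest_links` matches TYNDP links to existing ones with a KD-tree nearest-neighbour query capped by `distance_upper_bound` (raised in this diff from 0.15 to 0.20 degrees, roughly 20 km). A toy sketch of such a capped query with made-up coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree as KDTree

existing = np.array([[0.0, 0.0], [1.0, 1.0]])    # endpoint coordinates of known links
candidates = np.array([[0.1, 0.1], [5.0, 5.0]])  # new links to match

tree = KDTree(existing)
dist, ind = tree.query(candidates, distance_upper_bound=0.2)

# queries within the bound get a valid index; misses get index len(existing)
# and an infinite distance
matched = ind < len(existing)
print(matched)  # [ True False]
```

Raising the bound therefore only changes which candidates count as "already in the dataset", not the matching logic itself.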
View File
@@ -42,17 +42,24 @@ Description
"""
import logging
-logger = logging.getLogger(__name__)
from _helpers import configure_logging
-from vresutils.graph import voronoi_partition_pts
+import pypsa
import os
import pandas as pd
import geopandas as gpd
-import pypsa
+from vresutils.graph import voronoi_partition_pts
+logger = logging.getLogger(__name__)
+def save_to_geojson(s, fn):
+if os.path.exists(fn):
+os.unlink(fn)
+schema = {**gpd.io.file.infer_schema(s), 'geometry': 'Unknown'}
+s.to_file(fn, driver='GeoJSON', schema=schema)
if __name__ == "__main__":
if 'snakemake' not in globals():
@@ -96,12 +103,6 @@ if __name__ == "__main__":
offshore_regions_c = offshore_regions_c.loc[offshore_regions_c.area > 1e-2]
offshore_regions.append(offshore_regions_c)
-def save_to_geojson(s, fn):
-if os.path.exists(fn):
-os.unlink(fn)
-schema = {**gpd.io.file.infer_schema(s), 'geometry': 'Unknown'}
-s.to_file(fn, driver='GeoJSON', schema=schema)
save_to_geojson(pd.concat(onshore_regions, ignore_index=True), snakemake.output.regions_onshore)
save_to_geojson(pd.concat(offshore_regions, ignore_index=True), snakemake.output.regions_offshore)
View File
@@ -63,7 +63,6 @@ Description
"""
import logging
-logger = logging.getLogger(__name__)
from _helpers import configure_logging
import os
@@ -84,6 +83,9 @@ import progressbar as pgb
from build_renewable_profiles import init_globals, calculate_potential
+logger = logging.getLogger(__name__)
def build_area(flh, countries, areamatrix, breaks, fn):
area_unbinned = xr.DataArray(areamatrix.todense(), [countries, capacity_factor.coords['spatial']])
bins = xr.DataArray(pd.cut(flh.to_series(), bins=breaks), flh.coords, name="bins")
@@ -92,6 +94,7 @@ def build_area(flh, countries, areamatrix, breaks, fn):
area.columns = area.columns.map(lambda s: s.left)
return area
def plot_area_not_solar(area, countries):
# onshore wind/offshore wind
a = area.T
View File
@@ -92,12 +92,13 @@ Description
"""
import logging
-logger = logging.getLogger(__name__)
from _helpers import configure_logging
import os
import atlite
+logger = logging.getLogger(__name__)
if __name__ == "__main__":
if 'snakemake' not in globals():
from _helpers import mock_snakemake
@@ -113,4 +114,6 @@ if __name__ == "__main__":
cutout_dir=os.path.dirname(snakemake.output[0]),
**cutout_params)
-cutout.prepare(nprocesses=snakemake.config['atlite'].get('nprocesses', 4))
+nprocesses = snakemake.config['atlite'].get('nprocesses', 4)
+cutout.prepare(nprocesses=nprocesses)
View File
@@ -60,7 +60,6 @@ Description
"""
import logging
-logger = logging.getLogger(__name__)
from _helpers import configure_logging
import os
@@ -68,6 +67,8 @@ import atlite
import geopandas as gpd
from vresutils import hydro as vhydro
+logger = logging.getLogger(__name__)
if __name__ == "__main__":
if 'snakemake' not in globals():
from _helpers import mock_snakemake
@@ -75,8 +76,8 @@ if __name__ == "__main__":
configure_logging(snakemake)
config = snakemake.config['renewable']['hydro']
-cutout = atlite.Cutout(config['cutout'],
-cutout_dir=os.path.dirname(snakemake.input.cutout))
+cutout_dir = os.path.dirname(snakemake.input.cutout)
+cutout = atlite.Cutout(config['cutout'], cutout_dir=cutout_dir)
countries = snakemake.config['countries']
country_shapes = gpd.read_file(snakemake.input.country_shapes).set_index('name')['geometry'].reindex(countries)
scripts/build_load_data.py Executable file (227 lines)
View File
@@ -0,0 +1,227 @@
# SPDX-FileCopyrightText: : 2020 @JanFrederickUnnewehr, The PyPSA-Eur Authors
#
# SPDX-License-Identifier: GPL-3.0-or-later
"""
This rule downloads the load data from `Open Power System Data Time series <https://data.open-power-system-data.org/time_series/>`_. For all countries in the network, the per-country load time series with suffix ``_load_actual_entsoe_transparency`` are extracted from the dataset. After filling small gaps by linear interpolation and large gaps by copying the time-slice of a given period, the load data is exported to a ``.csv`` file.
Relevant Settings
-----------------
.. code:: yaml
snapshots:
load:
url:
interpolate_limit:
time_shift_for_large_gaps:
manual_adjustments:
.. seealso::
Documentation of the configuration file ``config.yaml`` at
:ref:`load_cf`
Inputs
------
Outputs
-------
- ``resource/time_series_60min_singleindex_filtered.csv``:
"""
import logging
logger = logging.getLogger(__name__)
from _helpers import configure_logging
import pandas as pd
import numpy as np
import dateutil
from pandas import Timedelta as Delta
def load_timeseries(fn, years, countries, powerstatistics=True):
"""
Read load data from OPSD time-series package version 2020-10-06.
Parameters
----------
years : None or slice()
Years for which to read load data (defaults to
slice("2018","2019"))
fn : str
File name or url location (file format .csv)
countries : listlike
Countries for which to read load data.
powerstatistics: bool
Whether the electricity consumption data of the ENTSOE power
statistics (if true) or of the ENTSOE transparency map (if false)
should be parsed.
Returns
-------
load : pd.DataFrame
Load time-series with UTC timestamps x ISO-2 countries
"""
logger.info(f"Retrieving load data from '{fn}'.")
pattern = 'power_statistics' if powerstatistics else '_transparency'
pattern = f'_load_actual_entsoe_{pattern}'
rename = lambda s: s[:-len(pattern)]
date_parser = lambda x: dateutil.parser.parse(x, ignoretz=True)
return (pd.read_csv(fn, index_col=0, parse_dates=[0], date_parser=date_parser)
.filter(like=pattern)
.rename(columns=rename)
.dropna(how="all", axis=0)
.rename(columns={'GB_UKM' : 'GB'})
.filter(items=countries)
.loc[years])
def consecutive_nans(ds):
return (ds.isnull().astype(int)
.groupby(ds.notnull().astype(int).cumsum()[ds.isnull()])
.transform('sum').fillna(0))
def fill_large_gaps(ds, shift):
"""
Fill up large gaps with load data from the previous week.
This function fills gaps ragning from 3 to 168 hours (one week).
"""
shift = Delta(shift)
nhours = shift / np.timedelta64(1, 'h')
if (consecutive_nans(ds) > nhours).any():
logger.warning('There exist gaps larger then the time shift used for '
'copying time slices.')
time_shift = pd.Series(ds.values, ds.index + shift)
return ds.where(ds.notnull(), time_shift.reindex_like(ds))
def nan_statistics(df):
def max_consecutive_nans(ds):
return (ds.isnull().astype(int)
.groupby(ds.notnull().astype(int).cumsum())
.sum().max())
consecutive = df.apply(max_consecutive_nans)
total = df.isnull().sum()
max_total_per_month = df.isnull().resample('m').sum().max()
return pd.concat([total, consecutive, max_total_per_month],
keys=['total', 'consecutive', 'max_total_per_month'], axis=1)
def copy_timeslice(load, cntry, start, stop, delta):
start = pd.Timestamp(start)
stop = pd.Timestamp(stop)
if start-delta in load.index and stop in load.index and cntry in load:
load.loc[start:stop, cntry] = load.loc[start-delta:stop-delta, cntry].values
def manual_adjustment(load, powerstatistics):
"""
Adjust gaps manual for load data from OPSD time-series package.
1. For the ENTSOE power statistics load data (if powerstatistics is True)
Kosovo (KV) and Albania (AL) do not exist in the data set. Kosovo gets the
same load curve as Serbia and Albania the same as Macdedonia, both scaled
by the corresponding ratio of total energy consumptions reported by
IEA Data browser [0] for the year 2013.
2. For the ENTSOE transparency load data (if powerstatistics is False)
Albania (AL) and Macedonia (MK) do not exist in the data set. Both get the
same load curve as Montenegro, scaled by the corresponding ratio of total energy
consumptions reported by IEA Data browser [0] for the year 2016.
[0] https://www.iea.org/data-and-statistics?country=WORLD&fuel=Electricity%20and%20heat&indicator=TotElecCons
Parameters
----------
load : pd.DataFrame
Load time-series with UTC timestamps x ISO-2 countries
powerstatistics: bool
Whether argument load comprises the electricity consumption data of
the ENTSOE power statistics or of the ENTSOE transparency map
Returns
-------
load : pd.DataFrame
Manual adjusted and interpolated load time-series with UTC
timestamps x ISO-2 countries
"""
if powerstatistics:
if 'MK' in load.columns:
if 'AL' not in load.columns or load.AL.isnull().values.all():
load['AL'] = load['MK'] * (4.1 / 7.4)
if 'RS' in load.columns:
if 'KV' not in load.columns or load.KV.isnull().values.all():
load['KV'] = load['RS'] * (4.8 / 27.)
copy_timeslice(load, 'GR', '2015-08-11 21:00', '2015-08-15 20:00', Delta(weeks=1))
copy_timeslice(load, 'AT', '2018-12-31 22:00', '2019-01-01 22:00', Delta(days=2))
copy_timeslice(load, 'CH', '2010-01-19 07:00', '2010-01-19 22:00', Delta(days=1))
copy_timeslice(load, 'CH', '2010-03-28 00:00', '2010-03-28 21:00', Delta(days=1))
# is a WE, so take WE before
copy_timeslice(load, 'CH', '2010-10-08 13:00', '2010-10-10 21:00', Delta(weeks=1))
copy_timeslice(load, 'CH', '2010-11-04 04:00', '2010-11-04 22:00', Delta(days=1))
copy_timeslice(load, 'NO', '2010-12-09 11:00', '2010-12-09 18:00', Delta(days=1))
# whole january missing
copy_timeslice(load, 'GB', '2009-12-31 23:00', '2010-01-31 23:00', Delta(days=-364))
else:
if 'ME' in load:
if 'AL' not in load and 'AL' in countries:
load['AL'] = load.ME * (5.7/2.9)
if 'MK' not in load and 'MK' in countries:
load['MK'] = load.ME * (6.7/2.9)
copy_timeslice(load, 'BG', '2018-10-27 21:00', '2018-10-28 22:00', Delta(weeks=1))
return load
if __name__ == "__main__":
if 'snakemake' not in globals():
from _helpers import mock_snakemake
snakemake = mock_snakemake('build_load_data')
configure_logging(snakemake)
config = snakemake.config
powerstatistics = config['load']['power_statistics']
url = config['load']['url']
interpolate_limit = config['load']['interpolate_limit']
countries = config['countries']
snapshots = pd.date_range(freq='h', **config['snapshots'])
years = slice(snapshots[0], snapshots[-1])
time_shift = config['load']['time_shift_for_large_gaps']
load = load_timeseries(url, years, countries, powerstatistics)
if config['load']['manual_adjustments']:
load = manual_adjustment(load, powerstatistics)
logger.info(f"Linearly interpolate gaps of size {interpolate_limit} and less.")
load = load.interpolate(method='linear', limit=interpolate_limit)
logger.info("Filling larger gaps by copying time-slices of period "
f"'{time_shift}'.")
load = load.apply(fill_large_gaps, shift=time_shift)
assert not load.isna().any().any(), (
'Load data contains nans. Adjust the parameters '
'`time_shift_for_large_gaps` or modify the `manual_adjustment` function '
'for implementing the needed load data modifications.')
load.to_csv(snakemake.output[0])
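The two-stage gap-filling strategy of this new script (linear interpolation for short gaps, then week-shifted copies for long gaps) can be sketched in isolation with plain pandas. The synthetic series and gap positions below are invented for illustration, and the `limit` and shift values only mirror the script's `interpolate_limit` and `time_shift_for_large_gaps` settings by assumption:

```python
import pandas as pd
import numpy as np

# Synthetic hourly "load" with a short gap (2 h) and a long gap (30 h).
idx = pd.date_range("2019-01-01", periods=24 * 21, freq="h")
load = pd.Series(np.sin(np.arange(len(idx)) * 2 * np.pi / 24) + 2, index=idx)
load.iloc[50:52] = np.nan        # short gap -> closed by interpolation
load.iloc[400:430] = np.nan      # long gap -> filled from one week earlier

# Stage 1: close small gaps linearly (mirrors `interpolate_limit`).
filled = load.interpolate(method="linear", limit=3)

# Stage 2: fill the remaining gaps with the value one week before
# (mirrors `fill_large_gaps` with an assumed shift of one week).
shifted = pd.Series(filled.values, index=filled.index + pd.Timedelta("1w"))
filled = filled.where(filled.notnull(), shifted.reindex_like(filled))
```

After stage 2, every timestamp inside the long gap carries the value observed 168 hours earlier, which is exactly what the script's `fill_large_gaps` does via `ds.where(ds.notnull(), time_shift.reindex_like(ds))`.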


@@ -41,6 +41,7 @@ Description
 import logging
 from _helpers import configure_logging

 import atlite
 import geokit as gk
 from pathlib import Path
@@ -58,7 +59,7 @@ def determine_cutout_xXyY(cutout_name):
 if __name__ == "__main__":
     if 'snakemake' not in globals():
         from _helpers import mock_snakemake
-        snakemake = mock_snakemake('build_natura_raster') #has to be enabled
+        snakemake = mock_snakemake('build_natura_raster')
     configure_logging(snakemake)
     cutout_dir = Path(snakemake.input.cutouts[0]).parent.resolve()


@@ -72,16 +72,18 @@ The configuration options ``electricity: powerplants_filter`` and ``electricity:
 """
 import logging
-logger = logging.getLogger(__name__)
 from _helpers import configure_logging
-from scipy.spatial import cKDTree as KDTree
 import pypsa
 import powerplantmatching as pm
 import pandas as pd
 import numpy as np

+from scipy.spatial import cKDTree as KDTree
+
+logger = logging.getLogger(__name__)

 def add_custom_powerplants(ppl):
     custom_ppl_query = snakemake.config['electricity']['custom_powerplants']
     if not custom_ppl_query:
@@ -94,7 +96,6 @@ def add_custom_powerplants(ppl):
 if __name__ == "__main__":
     if 'snakemake' not in globals():
         from _helpers import mock_snakemake
         snakemake = mock_snakemake('build_powerplants')


@@ -181,27 +181,28 @@ node (`p_nom_max`): ``simple`` and ``conservative``:
 """
 import logging
-logger = logging.getLogger(__name__)
 from _helpers import configure_logging

-import matplotlib.pyplot as plt
 import os
 import atlite
 import numpy as np
 import xarray as xr
 import pandas as pd
 import multiprocessing as mp
+import matplotlib.pyplot as plt
+import progressbar as pgb

 from scipy.sparse import csr_matrix, vstack
 from pypsa.geo import haversine
 from vresutils import landuse as vlanduse
 from vresutils.array import spdiag
-import progressbar as pgb
+
+logger = logging.getLogger(__name__)

 bounds = dx = dy = config = paths = gebco = clc = natura = None

 def init_globals(bounds_xXyY, n_dx, n_dy, n_config, n_paths):
     # Late import so that the GDAL Context is only created in the new processes
     global gl, gk, gdal
@@ -227,6 +228,7 @@ def init_globals(bounds_xXyY, n_dx, n_dy, n_config, n_paths):
     natura = gk.raster.loadRaster(paths["natura"])

 def downsample_to_coarse_grid(bounds, dx, dy, mask, data):
     # The GDAL warp function with the 'average' resample algorithm needs a band of zero values of at least
     # the size of one coarse cell around the original raster or it produces erroneous results
@@ -238,6 +240,7 @@ def downsample_to_coarse_grid(bounds, dx, dy, mask, data):
     assert gdal.Warp(average, padded, resampleAlg='average') == 1, "gdal warp failed: %s" % gdal.GetLastErrorMsg()
     return average

 def calculate_potential(gid, save_map=None):
     feature = gk.vector.extractFeature(paths["regions"], where=gid)
     ec = gl.ExclusionCalculator(feature.geom)


@@ -92,6 +92,7 @@ def _get_country(target, **keys):
     except (KeyError, AttributeError):
         return np.nan

+
 def _simplify_polys(polys, minarea=0.1, tolerance=0.01, filterremote=True):
     if isinstance(polys, MultiPolygon):
         polys = sorted(polys, key=attrgetter('area'), reverse=True)
@@ -105,6 +106,7 @@ def _simplify_polys(polys, minarea=0.1, tolerance=0.01, filterremote=True):
         polys = mainpoly
     return polys.simplify(tolerance=tolerance)

+
 def countries():
     cntries = snakemake.config['countries']
     if 'RS' in cntries: cntries.append('KV')
@@ -121,6 +123,7 @@ def countries():
     return s

+
 def eez(country_shapes):
     df = gpd.read_file(snakemake.input.eez)
     df = df.loc[df['ISO_3digit'].isin([_get_country('alpha_3', alpha_2=c) for c in snakemake.config['countries']])]
@@ -130,6 +133,7 @@ def eez(country_shapes):
     s.index.name = "name"
     return s

+
 def country_cover(country_shapes, eez_shapes=None):
     shapes = list(country_shapes)
     if eez_shapes is not None:
@@ -140,6 +144,7 @@ def country_cover(country_shapes, eez_shapes=None):
     europe_shape = max(europe_shape, key=attrgetter('area'))
     return Polygon(shell=europe_shape.exterior)

+
 def nuts3(country_shapes):
     df = gpd.read_file(snakemake.input.nuts3)
     df = df.loc[df['STAT_LEVL_'] == 3]
@@ -158,7 +163,6 @@ def nuts3(country_shapes):
                .applymap(lambda x: pd.to_numeric(x, errors='coerce'))
                .fillna(method='bfill', axis=1))['2014']
-    # Swiss data
     cantons = pd.read_csv(snakemake.input.ch_cantons)
     cantons = cantons.set_index(cantons['HASC'].str[3:])['NUTS']
     cantons = cantons.str.pad(5, side='right', fillchar='0')
@@ -197,6 +201,7 @@ def nuts3(country_shapes):
     return df

+
 def save_to_geojson(df, fn):
     if os.path.exists(fn):
         os.unlink(fn)
@@ -206,20 +211,23 @@ def save_to_geojson(df, fn):
     schema = {**gpd.io.file.infer_schema(df), 'geometry': 'Unknown'}
     df.to_file(fn, driver='GeoJSON', schema=schema)

+
 if __name__ == "__main__":
     if 'snakemake' not in globals():
         from _helpers import mock_snakemake
         snakemake = mock_snakemake('build_shapes')
     configure_logging(snakemake)

+    out = snakemake.output
+
     country_shapes = countries()
-    save_to_geojson(country_shapes, snakemake.output.country_shapes)
+    save_to_geojson(country_shapes, out.country_shapes)

     offshore_shapes = eez(country_shapes)
-    save_to_geojson(offshore_shapes, snakemake.output.offshore_shapes)
+    save_to_geojson(offshore_shapes, out.offshore_shapes)

     europe_shape = country_cover(country_shapes, offshore_shapes)
-    save_to_geojson(gpd.GeoSeries(europe_shape), snakemake.output.europe_shape)
+    save_to_geojson(gpd.GeoSeries(europe_shape), out.europe_shape)

     nuts3_shapes = nuts3(country_shapes)
-    save_to_geojson(nuts3_shapes, snakemake.output.nuts3_shapes)
+    save_to_geojson(nuts3_shapes, out.nuts3_shapes)


@@ -31,26 +31,28 @@ Relevant Settings
 Inputs
 ------

-- ``resources/regions_onshore_{network}_s{simpl}.geojson``: confer :ref:`simplify`
-- ``resources/regions_offshore_{network}_s{simpl}.geojson``: confer :ref:`simplify`
-- ``resources/clustermaps_{network}_s{simpl}.h5``: confer :ref:`simplify`
-- ``networks/{network}_s{simpl}.nc``: confer :ref:`simplify`
+- ``resources/regions_onshore_elec_s{simpl}.geojson``: confer :ref:`simplify`
+- ``resources/regions_offshore_elec_s{simpl}.geojson``: confer :ref:`simplify`
+- ``resources/busmap_elec_s{simpl}.csv``: confer :ref:`simplify`
+- ``networks/elec_s{simpl}.nc``: confer :ref:`simplify`
+- ``data/custom_busmap_elec_s{simpl}_{clusters}.csv``: optional input

 Outputs
 -------

-- ``resources/regions_onshore_{network}_s{simpl}_{clusters}.geojson``:
+- ``resources/regions_onshore_elec_s{simpl}_{clusters}.geojson``:

     .. image:: ../img/regions_onshore_elec_s_X.png
         :scale: 33 %

-- ``resources/regions_offshore_{network}_s{simpl}_{clusters}.geojson``:
+- ``resources/regions_offshore_elec_s{simpl}_{clusters}.geojson``:

     .. image:: ../img/regions_offshore_elec_s_X.png
         :scale: 33 %

-- ``resources/clustermaps_{network}_s{simpl}_{clusters}.h5``: Mapping of buses and lines from ``networks/elec_s{simpl}.nc`` to ``networks/elec_s{simpl}_{clusters}.nc``; has keys ['/busmap', '/busmap_s', '/linemap', '/linemap_negative', '/linemap_positive']
-- ``networks/{network}_s{simpl}_{clusters}.nc``:
+- ``resources/busmap_elec_s{simpl}_{clusters}.csv``: Mapping of buses from ``networks/elec_s{simpl}.nc`` to ``networks/elec_s{simpl}_{clusters}.nc``;
+- ``resources/linemap_elec_s{simpl}_{clusters}.csv``: Mapping of lines from ``networks/elec_s{simpl}.nc`` to ``networks/elec_s{simpl}_{clusters}.nc``;
+- ``networks/elec_s{simpl}_{clusters}.nc``:

     .. image:: ../img/elec_s_X.png
         :scale: 40 %
@@ -120,31 +122,33 @@ Exemplary unsolved network clustered to 37 nodes:
 """
 import logging
-logger = logging.getLogger(__name__)
 from _helpers import configure_logging

-import pandas as pd
-idx = pd.IndexSlice
+import pypsa
 import os
+import shapely
+import pandas as pd
 import numpy as np
 import geopandas as gpd
-import shapely
+import pyomo.environ as po
 import matplotlib.pyplot as plt
 import seaborn as sns

 from six.moves import reduce
-import pyomo.environ as po
-import pypsa
 from pypsa.networkclustering import (busmap_by_kmeans, busmap_by_spectral_clustering,
                                      _make_consense, get_clustering_from_busmap)

 from add_electricity import load_costs

-def normed(x):
-    return (x/x.sum()).fillna(0.)
+idx = pd.IndexSlice
+
+logger = logging.getLogger(__name__)
+
+def normed(x): return (x/x.sum()).fillna(0.)

 def weighting_for_country(n, x):
     conv_carriers = {'OCGT','CCGT','PHS', 'hydro'}
@@ -166,18 +170,9 @@ def weighting_for_country(n, x):
     return (w * (100. / w.max())).clip(lower=1.).astype(int)

-## Plot weighting for Germany
-def plot_weighting(n, country, country_shape=None):
-    n.plot(bus_sizes=(2*weighting_for_country(n.buses.loc[n.buses.country == country])).reindex(n.buses.index, fill_value=1))
-    if country_shape is not None:
-        plt.xlim(country_shape.bounds[0], country_shape.bounds[2])
-        plt.ylim(country_shape.bounds[1], country_shape.bounds[3])
-
-# # Determining the number of clusters per country

 def distribute_clusters(n, n_clusters, focus_weights=None, solver_name=None):
+    """Determine the number of clusters per country"""
     if solver_name is None:
         solver_name = snakemake.config['solving']['solver']['name']
@@ -189,7 +184,7 @@ def distribute_clusters(n, n_clusters, focus_weights=None, solver_name=None):
     N = n.buses.groupby(['country', 'sub_network']).size()

     assert n_clusters >= len(N) and n_clusters <= N.sum(), \
-        "Number of clusters must be {} <= n_clusters <= {} for this selection of countries.".format(len(N), N.sum())
+        f"Number of clusters must be {len(N)} <= n_clusters <= {N.sum()} for this selection of countries."

     if focus_weights is not None:
@@ -205,7 +200,7 @@ def distribute_clusters(n, n_clusters, focus_weights=None, solver_name=None):
         logger.warning('Using custom focus weights for determining number of clusters.')

-    assert np.isclose(L.sum(), 1.0, rtol=1e-3), "Country weights L must sum up to 1.0 when distributing clusters. Is {}.".format(L.sum())
+    assert np.isclose(L.sum(), 1.0, rtol=1e-3), f"Country weights L must sum up to 1.0 when distributing clusters. Is {L.sum()}."

     m = po.ConcreteModel()
     def n_bounds(model, *n_id):
@@ -221,10 +216,11 @@ def distribute_clusters(n, n_clusters, focus_weights=None, solver_name=None):
     opt = po.SolverFactory('ipopt')
     results = opt.solve(m)
-    assert results['Solver'][0]['Status'] == 'ok', "Solver returned non-optimally: {}".format(results)
+    assert results['Solver'][0]['Status'] == 'ok', f"Solver returned non-optimally: {results}"

     return pd.Series(m.n.get_values(), index=L.index).astype(int)

 def busmap_for_n_clusters(n, n_clusters, solver_name, focus_weights=None, algorithm="kmeans", **algorithm_kwds):
     if algorithm == "kmeans":
         algorithm_kwds.setdefault('n_init', 1000)
@@ -243,7 +239,7 @@ def busmap_for_n_clusters(n, n_clusters, solver_name, focus_weights=None, algori
     def busmap_for_country(x):
         prefix = x.name[0] + x.name[1] + ' '
-        logger.debug("Determining busmap for country {}".format(prefix[:-1]))
+        logger.debug(f"Determining busmap for country {prefix[:-1]}")
         if len(x) == 1:
             return pd.Series(prefix + '0', index=x.index)
         weight = weighting_for_country(n, x)
@@ -257,31 +253,30 @@ def busmap_for_n_clusters(n, n_clusters, solver_name, focus_weights=None, algori
     else:
         raise ValueError(f"`algorithm` must be one of 'kmeans', 'spectral' or 'louvain'. Is {algorithm}.")

-    return (n.buses.groupby(['country', 'sub_network'], group_keys=False, squeeze=True)
-            .apply(busmap_for_country).rename('busmap'))
+    return (n.buses.groupby(['country', 'sub_network'], group_keys=False)
+            .apply(busmap_for_country).squeeze().rename('busmap'))

-def plot_busmap_for_n_clusters(n, n_clusters=50):
-    busmap = busmap_for_n_clusters(n, n_clusters)
-    cs = busmap.unique()
-    cr = sns.color_palette("hls", len(cs))
-    n.plot(bus_colors=busmap.map(dict(zip(cs, cr))))
-    del cs, cr

-def clustering_for_n_clusters(n, n_clusters, aggregate_carriers=None,
-                              line_length_factor=1.25, potential_mode='simple',
-                              solver_name="cbc", algorithm="kmeans",
-                              extended_link_costs=0, focus_weights=None):
+def clustering_for_n_clusters(n, n_clusters, custom_busmap=False, aggregate_carriers=None,
+                              line_length_factor=1.25, potential_mode='simple', solver_name="cbc",
+                              algorithm="kmeans", extended_link_costs=0, focus_weights=None):

     if potential_mode == 'simple':
         p_nom_max_strategy = np.sum
     elif potential_mode == 'conservative':
         p_nom_max_strategy = np.min
     else:
-        raise AttributeError("potential_mode should be one of 'simple' or 'conservative', "
-                             "but is '{}'".format(potential_mode))
+        raise AttributeError(f"potential_mode should be one of 'simple' or 'conservative' but is '{potential_mode}'")

+    if custom_busmap:
+        busmap = pd.read_csv(snakemake.input.custom_busmap, index_col=0, squeeze=True)
+        busmap.index = busmap.index.astype(str)
+        logger.info(f"Imported custom busmap from {snakemake.input.custom_busmap}")
+    else:
+        busmap = busmap_for_n_clusters(n, n_clusters, solver_name, focus_weights, algorithm)

     clustering = get_clustering_from_busmap(
-        n, busmap_for_n_clusters(n, n_clusters, solver_name, focus_weights, algorithm),
+        n, busmap,
         bus_strategies=dict(country=_make_consense("Bus", "country")),
         aggregate_generators_weighted=True,
         aggregate_generators_carriers=aggregate_carriers,
@@ -301,6 +296,7 @@ def clustering_for_n_clusters(n, n_clusters, aggregate_carriers=None,
     return clustering

+
 def save_to_geojson(s, fn):
     if os.path.exists(fn):
         os.unlink(fn)
@@ -308,6 +304,7 @@ def save_to_geojson(s, fn):
     schema = {**gpd.io.file.infer_schema(df), 'geometry': 'Unknown'}
     df.to_file(fn, driver='GeoJSON', schema=schema)

+
 def cluster_regions(busmaps, input=None, output=None):
     if input is None: input = snakemake.input
     if output is None: output = snakemake.output
@@ -321,6 +318,17 @@ def cluster_regions(busmaps, input=None, output=None):
         regions_c.index.name = 'name'
         save_to_geojson(regions_c, getattr(output, which))

+
+def plot_busmap_for_n_clusters(n, n_clusters, fn=None):
+    busmap = busmap_for_n_clusters(n, n_clusters)
+    cs = busmap.unique()
+    cr = sns.color_palette("hls", len(cs))
+    n.plot(bus_colors=busmap.map(dict(zip(cs, cr))))
+    if fn is not None:
+        plt.savefig(fn, bbox_inches='tight')
+    del cs, cr
+
+
 if __name__ == "__main__":
     if 'snakemake' not in globals():
         from _helpers import mock_snakemake
@@ -333,7 +341,7 @@ if __name__ == "__main__":
     renewable_carriers = pd.Index([tech
                                    for tech in n.generators.carrier.unique()
-                                   if tech.split('-', 2)[0] in snakemake.config['renewable']])
+                                   if tech in snakemake.config['renewable']])

     if snakemake.wildcards.clusters.endswith('m'):
         n_clusters = int(snakemake.wildcards.clusters[:-1])
@@ -363,7 +371,8 @@ if __name__ == "__main__":
                 return v
         potential_mode = consense(pd.Series([snakemake.config['renewable'][tech]['potential']
                                              for tech in renewable_carriers]))
-        clustering = clustering_for_n_clusters(n, n_clusters, aggregate_carriers,
+        custom_busmap = snakemake.config["enable"].get("custom_busmap", False)
+        clustering = clustering_for_n_clusters(n, n_clusters, custom_busmap, aggregate_carriers,
                                                line_length_factor=line_length_factor,
                                                potential_mode=potential_mode,
                                                solver_name=snakemake.config['solving']['solver']['name'],
@@ -371,11 +380,7 @@ if __name__ == "__main__":
                                                focus_weights=focus_weights)

     clustering.network.export_to_netcdf(snakemake.output.network)
-    with pd.HDFStore(snakemake.output.clustermaps, mode='w') as store:
-        with pd.HDFStore(snakemake.input.clustermaps, mode='r') as clustermaps:
-            for attr in clustermaps.keys():
-                store.put(attr, clustermaps[attr], format="table", index=False)
-        for attr in ('busmap', 'linemap', 'linemap_positive', 'linemap_negative'):
-            store.put(attr, getattr(clustering, attr), format="table", index=False)
+    for attr in ('busmap', 'linemap'): #also available: linemap_positive, linemap_negative
+        getattr(clustering, attr).to_csv(snakemake.output[attr])

     cluster_regions((clustering.busmap,))
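The switch from a multi-key HDF5 store to plain CSV files changes how downstream code consumes the mappings. A minimal sketch of reading such a busmap as a Series and inspecting cluster sizes; the bus names and the in-memory CSV below are hypothetical, and the file would normally be the `busmap_elec_s{simpl}_{clusters}.csv` output:

```python
import io
import pandas as pd

# Hypothetical excerpt of a busmap CSV: the index holds the original bus
# names, the single column the clustered bus each one maps to.
csv = io.StringIO("name,busmap\nDE0 1,DE0 0\nDE0 2,DE0 0\nFR0 5,FR0 1\n")

# .squeeze("columns") turns the one-column DataFrame into a Series
# (the equivalent of the older read_csv(..., squeeze=True) idiom).
busmap = pd.read_csv(csv, index_col=0).squeeze("columns")

# Count how many original buses were merged into each clustered bus.
cluster_sizes = busmap.value_counts()
```

Here `cluster_sizes` reports that the clustered bus "DE0 0" collects two original buses and "FR0 1" one.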


@@ -55,29 +55,30 @@ Replacing '/summaries/' with '/plots/' creates nice colored maps of the results.
 """
 import logging
-logger = logging.getLogger(__name__)
 from _helpers import configure_logging

 import os
-
-from six import iteritems
+import pypsa
 import pandas as pd

-import pypsa
+from six import iteritems
 from add_electricity import load_costs, update_transmission_costs

 idx = pd.IndexSlice

+logger = logging.getLogger(__name__)

 opt_name = {"Store": "e", "Line" : "s", "Transformer" : "s"}

 def _add_indexed_rows(df, raw_index):
-    new_index = df.index|pd.MultiIndex.from_product(raw_index)
+    new_index = df.index.union(pd.MultiIndex.from_product(raw_index))
     if isinstance(new_index, pd.Index):
         new_index = pd.MultiIndex.from_tuples(new_index)
     return df.reindex(new_index)

 def assign_carriers(n):
     if "carrier" not in n.loads:
@ -98,6 +99,7 @@ def assign_carriers(n):
if "EU gas store" in n.stores.index and n.stores.loc["EU gas Store","carrier"] == "": if "EU gas store" in n.stores.index and n.stores.loc["EU gas Store","carrier"] == "":
n.stores.loc["EU gas Store","carrier"] = "gas Store" n.stores.loc["EU gas Store","carrier"] = "gas Store"
def calculate_costs(n, label, costs): def calculate_costs(n, label, costs):
for c in n.iterate_components(n.branch_components|n.controllable_one_port_components^{"Load"}): for c in n.iterate_components(n.branch_components|n.controllable_one_port_components^{"Load"}):
@ -125,7 +127,7 @@ def calculate_costs(n,label,costs):
marginal_costs_grouped = marginal_costs.groupby(c.df.carrier).sum() marginal_costs_grouped = marginal_costs.groupby(c.df.carrier).sum()
costs = costs.reindex(costs.index|pd.MultiIndex.from_product([[c.list_name],["marginal"],marginal_costs_grouped.index])) costs = costs.reindex(costs.index.union(pd.MultiIndex.from_product([[c.list_name],["marginal"],marginal_costs_grouped.index])))
costs.loc[idx[c.list_name,"marginal",list(marginal_costs_grouped.index)],label] = marginal_costs_grouped.values costs.loc[idx[c.list_name,"marginal",list(marginal_costs_grouped.index)],label] = marginal_costs_grouped.values
@ -160,6 +162,7 @@ def include_in_summary(summary, multiindexprefix, label, item):
summary = _add_indexed_rows(summary, raw_index) summary = _add_indexed_rows(summary, raw_index)
summary.loc[idx[raw_index], label] = item.values summary.loc[idx[raw_index], label] = item.values
return summary return summary
def calculate_capacity(n,label,capacity): def calculate_capacity(n,label,capacity):
@@ -220,11 +223,12 @@ def calculate_supply(n,label,supply):
                #lots of sign compensation for direction and to do maximums
                s = (-1)**(1-int(end))*((-1)**int(end)*c.pnl["p"+end][items]).max().groupby(c.df.loc[items,'carrier']).sum()
                supply = supply.reindex(supply.index|pd.MultiIndex.from_product([[i],[c.list_name],s.index]))
                supply = supply.reindex(supply.index.union(pd.MultiIndex.from_product([[i],[c.list_name],s.index])))
                supply.loc[idx[i,c.list_name,list(s.index)],label] = s.values
    return supply
def calculate_supply_energy(n, label, supply_energy):
    """calculate the total dispatch of each component at the buses where the loads are attached"""
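The sign compensation in `calculate_supply` is dense; a minimal sketch of the same expression on a toy series (hypothetical values, no PyPSA network needed) shows how the two powers of (-1) orient the branch power before and after taking the maximum:

```python
import pandas as pd

# Branch power columns p0/p1 are oriented into the branch at either end;
# the expression flips the series, takes the max, and flips the result back.
p = pd.Series([2.0, -4.0, 3.0])  # hypothetical p0 (or p1) time series

def oriented_max(p, end):
    return (-1) ** (1 - int(end)) * ((-1) ** int(end) * p).max()

print(oriented_max(p, "0"), oriented_max(p, "1"))  # -3.0 4.0
```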
@@ -265,14 +269,15 @@ def calculate_supply_energy(n,label,supply_energy):
                s = (-1)*c.pnl["p"+end][items].sum().groupby(c.df.loc[items,'carrier']).sum()
                supply_energy = supply_energy.reindex(supply_energy.index|pd.MultiIndex.from_product([[i],[c.list_name],s.index]))
                supply_energy = supply_energy.reindex(supply_energy.index.union(pd.MultiIndex.from_product([[i],[c.list_name],s.index])))
                supply_energy.loc[idx[i,c.list_name,list(s.index)],label] = s.values
    return supply_energy
def calculate_metrics(n,label,metrics):
    metrics = metrics.reindex(metrics.index|pd.Index(["line_volume","line_volume_limit","line_volume_AC","line_volume_DC","line_volume_shadow","co2_shadow"]))
    metrics = metrics.reindex(metrics.index.union(pd.Index(["line_volume","line_volume_limit","line_volume_AC","line_volume_DC","line_volume_shadow","co2_shadow"])))
    metrics.at["line_volume_DC",label] = (n.links.length*n.links.p_nom_opt)[n.links.carrier == "DC"].sum()
    metrics.at["line_volume_AC",label] = (n.lines.length*n.lines.s_nom_opt).sum()
@@ -294,18 +299,17 @@ def calculate_prices(n,label,prices):
    bus_type = pd.Series(n.buses.index.str[3:],n.buses.index).replace("","electricity")
    prices = prices.reindex(prices.index|bus_type.value_counts().index)
    prices = prices.reindex(prices.index.union(bus_type.value_counts().index))
    #WARNING: this is time-averaged, should really be load-weighted average
    logger.warning("Prices are time-averaged, not load-weighted")
    prices[label] = n.buses_t.marginal_price.mean().groupby(bus_type).mean()
    return prices
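The new log message replaces the old inline warning; the difference it flags can be shown on toy data (hypothetical prices and loads):

```python
import pandas as pd

# Time-averaged vs. load-weighted mean marginal price over two snapshots.
price = pd.Series([10.0, 50.0])   # EUR/MWh per snapshot (hypothetical)
load = pd.Series([100.0, 300.0])  # MW per snapshot (hypothetical)

time_avg = price.mean()                            # what the summary reports
load_weighted = (price * load).sum() / load.sum()  # what the warning alludes to
print(time_avg, load_weighted)  # 30.0 40.0
```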
def calculate_weighted_prices(n,label,weighted_prices):
    # Warning: doesn't include storage units as loads
    logger.warning("Weighted prices don't include storage units as loads")
    weighted_prices = weighted_prices.reindex(pd.Index(["electricity","heat","space heat","urban heat","space urban heat","gas","H2"]))
@@ -362,62 +366,6 @@ def calculate_weighted_prices(n,label,weighted_prices):
    return weighted_prices
# BROKEN don't use
#
# def calculate_market_values(n, label, market_values):
# # Warning: doesn't include storage units
# n.buses["suffix"] = n.buses.index.str[2:]
# suffix = ""
# buses = n.buses.index[n.buses.suffix == suffix]
# ## First do market value of generators ##
# generators = n.generators.index[n.buses.loc[n.generators.bus,"suffix"] == suffix]
# techs = n.generators.loc[generators,"carrier"].value_counts().index
# market_values = market_values.reindex(market_values.index | techs)
# for tech in techs:
# gens = generators[n.generators.loc[generators,"carrier"] == tech]
# dispatch = n.generators_t.p[gens].groupby(n.generators.loc[gens,"bus"],axis=1).sum().reindex(columns=buses,fill_value=0.)
# revenue = dispatch*n.buses_t.marginal_price[buses]
# market_values.at[tech,label] = revenue.sum().sum()/dispatch.sum().sum()
# ## Now do market value of links ##
# for i in ["0","1"]:
# all_links = n.links.index[n.buses.loc[n.links["bus"+i],"suffix"] == suffix]
# techs = n.links.loc[all_links,"carrier"].value_counts().index
# market_values = market_values.reindex(market_values.index | techs)
# for tech in techs:
# links = all_links[n.links.loc[all_links,"carrier"] == tech]
# dispatch = n.links_t["p"+i][links].groupby(n.links.loc[links,"bus"+i],axis=1).sum().reindex(columns=buses,fill_value=0.)
# revenue = dispatch*n.buses_t.marginal_price[buses]
# market_values.at[tech,label] = revenue.sum().sum()/dispatch.sum().sum()
# return market_values
# OLD CODE must be adapted
# def calculate_price_statistics(n, label, price_statistics):
# price_statistics = price_statistics.reindex(price_statistics.index|pd.Index(["zero_hours","mean","standard_deviation"]))
# n.buses["suffix"] = n.buses.index.str[2:]
# suffix = ""
# buses = n.buses.index[n.buses.suffix == suffix]
# threshold = 0.1 #higher than phoney marginal_cost of wind/solar
# df = pd.DataFrame(data=0.,columns=buses,index=n.snapshots)
# df[n.buses_t.marginal_price[buses] < threshold] = 1.
# price_statistics.at["zero_hours", label] = df.sum().sum()/(df.shape[0]*df.shape[1])
# price_statistics.at["mean", label] = n.buses_t.marginal_price[buses].unstack().mean()
# price_statistics.at["standard_deviation", label] = n.buses_t.marginal_price[buses].unstack().std()
# return price_statistics
outputs = ["costs",
           "curtailment",
           "energy",
@@ -426,11 +374,10 @@ outputs = ["costs",
           "supply_energy",
           "prices",
           "weighted_prices",
           # "price_statistics",
           # "market_values",
           "metrics",
           ]
def make_summaries(networks_dict, country='all'):
    columns = pd.MultiIndex.from_tuples(networks_dict.keys(),names=["simpl","clusters","ll","opts"])
@@ -485,7 +432,6 @@ if __name__ == "__main__":
        network_dir = os.path.join('results', 'networks')
    configure_logging(snakemake)
    def expand_from_wildcard(key):
        w = getattr(snakemake.wildcards, key)
        return snakemake.config["scenario"][key] if w == "all" else [w]
@@ -505,8 +451,6 @@ if __name__ == "__main__":
                     for l in ll
                     for opts in expand_from_wildcard("opts")}
    print(networks_dict)
    dfs = make_summaries(networks_dict, country=snakemake.wildcards.country)
    to_csv(dfs)
@@ -20,7 +20,6 @@ Description
"""
import logging
logger = logging.getLogger(__name__)
from _helpers import (load_network_for_plots, aggregate_p, aggregate_costs,
                      configure_logging)
@@ -35,6 +34,9 @@ from matplotlib.patches import Circle, Ellipse
from matplotlib.legend_handler import HandlerPatch
to_rgba = mpl.colors.colorConverter.to_rgba
logger = logging.getLogger(__name__)
def make_handler_map_to_scale_circles_as_in(ax, dont_resize_actively=False):
    fig = ax.get_figure()
    def axes2pt():
@@ -57,9 +59,11 @@ def make_handler_map_to_scale_circles_as_in(ax, dont_resize_actively=False):
        return e
    return {Circle: HandlerPatch(patch_func=legend_circle_handler)}
def make_legend_circles_for(sizes, scale=1.0, **kw):
    return [Circle((0,0), radius=(s/scale)**0.5, **kw) for s in sizes]
def set_plot_style():
    plt.style.use(['classic', 'seaborn-white',
                   {'axes.grid': False, 'grid.linestyle': '--', 'grid.color': u'0.6',
@@ -69,9 +73,9 @@ def set_plot_style():
                    'legend.fontsize': 'medium',
                    'lines.linewidth': 1.5,
                    'pdf.fonttype': 42,
                    # 'font.family': 'Times New Roman'
                    }])
def plot_map(n, ax=None, attribute='p_nom', opts={}):
    if ax is None:
        ax = plt.gca()
@@ -114,16 +118,11 @@ def plot_map(n, ax=None, attribute='p_nom', opts={}):
           bus_sizes=0,
           bus_colors=tech_colors,
           boundaries=map_boundaries,
           geomap=True, # TODO : Turn to False, after the release of PyPSA 0.14.2 (refer to https://github.com/PyPSA/PyPSA/issues/75)
           geomap=False,
           ax=ax)
    ax.set_aspect('equal')
    ax.axis('off')
    # x1, y1, x2, y2 = map_boundaries
    # ax.set_xlim(x1, x2)
    # ax.set_ylim(y1, y2)
    # Rasterize basemap
    # TODO : Check if this also works with cartopy
    for c in ax.collections[:2]: c.set_rasterized(True)
@@ -165,7 +164,7 @@ def plot_map(n, ax=None, attribute='p_nom', opts={}):
                   handler_map=make_handler_map_to_scale_circles_as_in(ax))
    ax.add_artist(l2)
    techs = (bus_sizes.index.levels[1]) & pd.Index(opts['vre_techs'] + opts['conv_techs'] + opts['storage_techs'])
    techs = (bus_sizes.index.levels[1]).intersection(pd.Index(opts['vre_techs'] + opts['conv_techs'] + opts['storage_techs']))
    handles = []
    labels = []
    for t in techs:
@@ -176,13 +175,9 @@ def plot_map(n, ax=None, attribute='p_nom', opts={}):
    return fig
#n = load_network_for_plots(snakemake.input.network, opts, combine_hydro_ps=False)
def plot_total_energy_pie(n, ax=None):
    """Add total energy pie plot"""
    if ax is None: ax = plt.gca()
    if ax is None:
        ax = plt.gca()
    ax.set_title('Energy per technology', fontdict=dict(fontsize="medium"))
@@ -190,7 +185,7 @@ def plot_total_energy_pie(n, ax=None):
    patches, texts, autotexts = ax.pie(e_primary,
                                       startangle=90,
                                       labels = e_primary.rename(opts['nice_names_n']).index,
                                       labels = e_primary.rename(opts['nice_names']).index,
                                       autopct='%.0f%%',
                                       shadow=False,
                                       colors = [opts['tech_colors'][tech] for tech in e_primary.index])
@@ -200,9 +195,7 @@ def plot_total_energy_pie(n, ax=None):
        t2.remove()
def plot_total_cost_bar(n, ax=None):
    """Add average system cost bar plot"""
    if ax is None: ax = plt.gca()
    if ax is None:
        ax = plt.gca()
    total_load = (n.snapshot_weightings * n.loads_t.p.sum(axis=1)).sum()
    tech_colors = opts['tech_colors']
@@ -240,14 +233,13 @@ def plot_total_cost_bar(n, ax=None):
        if abs(data[-1]) < 5:
            continue
        text = ax.text(1.1,(bottom-0.5*data)[-1]-3,opts['nice_names_n'].get(ind,ind))
        text = ax.text(1.1,(bottom-0.5*data)[-1]-3,opts['nice_names'].get(ind,ind))
        texts.append(text)
    ax.set_ylabel("Average system cost [Eur/MWh]")
    ax.set_ylim([0, 80]) # opts['costs_max']])
    ax.set_ylim([0, opts.get('costs_max', 80)])
    ax.set_xlim([0, 1])
    #ax.set_xticks([0.5])
    ax.set_xticklabels([])
    ax.set_xticklabels([]) #["w/o\nEp", "w/\nEp"])
    ax.grid(True, axis="y", color='k', linestyle='dotted')
@@ -280,8 +272,6 @@ if __name__ == "__main__":
    ax2 = fig.add_axes([-0.075, 0.1, 0.1, 0.45])
    plot_total_cost_bar(n, ax2)
    #fig.tight_layout()
    ll = snakemake.wildcards.ll
    ll_type = ll[0]
    ll_factor = ll[1:]
@@ -19,19 +19,19 @@ Description
"""
import logging
logger = logging.getLogger(__name__)
from _helpers import configure_logging
import pypsa
import pandas as pd
import matplotlib.pyplot as plt
logger = logging.getLogger(__name__)
def cum_p_nom_max(net, tech, country=None):
    carrier_b = net.generators.carrier == tech
    generators = \
    generators = pd.DataFrame(dict(
        pd.DataFrame(dict(
            p_nom_max=net.generators.loc[carrier_b, 'p_nom_max'],
            p_max_pu=net.generators_t.p_max_pu.loc[:,carrier_b].mean(),
            country=net.generators.loc[carrier_b, 'bus'].map(net.buses.country)
@@ -21,41 +21,19 @@ Description
import os
import logging
logger = logging.getLogger(__name__)
from _helpers import configure_logging
import pandas as pd
import matplotlib.pyplot as plt
#consolidate and rename
logger = logging.getLogger(__name__)
def rename_techs(label):
    if label.startswith("central "):
        label = label[len("central "):]
    elif label.startswith("urban "):
        label = label[len("urban "):]
    if "retrofitting" in label:
        label = "building retrofitting"
def rename_techs(label):
    elif "H2" in label:
    if "H2" in label:
        label = "hydrogen storage"
    elif "CHP" in label:
        label = "CHP"
    elif "water tank" in label:
        label = "water tanks"
    elif label == "water tanks":
        label = "hot water storage"
    elif "gas" in label and label != "gas boiler":
        label = "natural gas"
    elif "solar thermal" in label:
        label = "solar thermal"
    elif label == "solar":
        label = "solar PV"
    elif label == "heat pump":
        label = "air heat pump"
    elif label == "Sabatier":
        label = "methanation"
    elif label == "offwind":
        label = "offshore wind"
    elif label == "offwind-ac":
        label = "offshore wind ac"
    elif label == "offwind-dc":
@@ -68,15 +46,14 @@ def rename_techs(label):
        label = "hydroelectricity"
    elif label == "PHS":
        label = "hydroelectricity"
    elif label == "co2 Store":
        label = "DAC"
    elif "battery" in label:
        label = "battery storage"
    return label
preferred_order = pd.Index(["transmission lines","hydroelectricity","hydro reservoir","run of river","pumped hydro storage","onshore wind","offshore wind ac", "offshore wind dc","solar PV","solar thermal","building retrofitting","ground heat pump","air heat pump","resistive heater","CHP","OCGT","gas boiler","gas","natural gas","methanation","hydrogen storage","battery storage","hot water storage"])
preferred_order = pd.Index(["transmission lines","hydroelectricity","hydro reservoir","run of river","pumped hydro storage","onshore wind","offshore wind ac", "offshore wind dc","solar PV","solar thermal","OCGT","hydrogen storage","battery storage"])
def plot_costs(infn, fn=None):
@@ -37,18 +37,16 @@ Description
"""
import logging
logger = logging.getLogger(__name__)
from _helpers import configure_logging
import pandas as pd
if __name__ == "__main__":
logger = logging.getLogger(__name__)
    if 'snakemake' not in globals():
        from _helpers import mock_snakemake #rule must be enabled in config
        snakemake = mock_snakemake('prepare_links_p_nom', simpl='', network='elec')
def multiply(s):
    configure_logging(snakemake)
    return s.str[0].astype(float) * s.str[1].astype(float)
    links_p_nom = pd.read_html('https://en.wikipedia.org/wiki/List_of_HVDC_projects', header=0, match="SwePol")[0]
def extract_coordinates(s):
    regex = (r"(\d{1,2})°(\d{1,2})′(\d{1,2})″(N|S) "
@@ -58,11 +56,20 @@ if __name__ == "__main__":
    lon = (e[4].astype(float) + (e[5].astype(float) + e[6].astype(float)/60.)/60.)*e[7].map({'E': +1., 'W': -1.})
    return lon, lat
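`extract_coordinates` converts degree-minute-second strings to decimal degrees; a simplified scalar sketch of the same arithmetic (the helper name is illustrative, not from the script):

```python
# DMS to decimal degrees: deg + (min + sec/60)/60, negated for S/W hemispheres.
def dms_to_decimal(deg, minutes, seconds, hemisphere):
    sign = 1.0 if hemisphere in ("N", "E") else -1.0
    return sign * (float(deg) + (float(minutes) + float(seconds) / 60.0) / 60.0)

lat = dms_to_decimal(51, 30, 0, "N")
lon = dms_to_decimal(0, 7, 30, "W")
print(lat, lon)  # 51.5 -0.125
```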
    m_b = links_p_nom["Power (MW)"].str.contains('x').fillna(False)
    def multiply(s): return s.str[0].astype(float) * s.str[1].astype(float)
    links_p_nom.loc[m_b, "Power (MW)"] = links_p_nom.loc[m_b, "Power (MW)"].str.split('x').pipe(multiply)
if __name__ == "__main__":
    links_p_nom["Power (MW)"] = links_p_nom["Power (MW)"].str.extract("[-/]?([\d.]+)", expand=False).astype(float)
    if 'snakemake' not in globals():
        from _helpers import mock_snakemake #rule must be enabled in config
        snakemake = mock_snakemake('prepare_links_p_nom', simpl='', network='elec')
    configure_logging(snakemake)
    links_p_nom = pd.read_html('https://en.wikipedia.org/wiki/List_of_HVDC_projects', header=0, match="SwePol")[0]
    mw = "Power (MW)"
    m_b = links_p_nom[mw].str.contains('x').fillna(False)
    links_p_nom.loc[m_b, mw] = links_p_nom.loc[m_b, mw].str.split('x').pipe(multiply)
    links_p_nom[mw] = links_p_nom[mw].str.extract("[-/]?([\d.]+)", expand=False).astype(float)
    links_p_nom['x1'], links_p_nom['y1'] = extract_coordinates(links_p_nom['Converterstation 1'])
    links_p_nom['x2'], links_p_nom['y2'] = extract_coordinates(links_p_nom['Converterstation 2'])
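The reordered block still resolves entries like "2x600" (circuits times rating) before the numeric extraction; a self-contained sketch on a toy column (hypothetical values, slightly restructured so mixed float/string dtypes never meet `.str`):

```python
import pandas as pd

power = pd.Series(["600", "2x600", "-300"])  # hypothetical "Power (MW)" column
m_b = power.str.contains('x').fillna(False)

def multiply(s):
    return s.str[0].astype(float) * s.str[1].astype(float)

# Extract the plain numbers first, then overwrite the "NxM" rows with N*M.
resolved = power.str.extract(r"[-/]?([\d.]+)", expand=False).astype(float)
resolved[m_b] = power[m_b].str.split('x').pipe(multiply)
print(resolved.tolist())  # [600.0, 1200.0, 300.0]
```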
@@ -9,9 +9,10 @@ Prepare PyPSA network for solving according to :ref:`opts` and :ref:`ll`, such as
- adding an annual **limit** of carbon-dioxide emissions,
- adding an exogenous **price** per tonne emissions of carbon-dioxide (or other kinds),
- setting an **N-1 security margin** factor for transmission line capacities,
- specifying a limit on the **cost** of transmission expansion,
- specifying an expansion limit on the **cost** of transmission expansion,
- specifying a limit on the **volume** of transmission expansion, and
- specifying an expansion limit on the **volume** of transmission expansion, and
- reducing the **temporal** resolution by averaging over multiple hours.
- reducing the **temporal** resolution by averaging over multiple hours
  or segmenting time series into chunks of varying lengths using ``tsam``.
Relevant Settings
-----------------
@@ -38,12 +39,12 @@ Inputs
------
- ``resources/costs.csv``: The database of cost assumptions for all included technologies for specific years from various sources; e.g. discount rate, lifetime, investment (CAPEX), fixed operation and maintenance (FOM), variable operation and maintenance (VOM), fuel costs, efficiency, carbon-dioxide intensity.
- ``networks/{network}_s{simpl}_{clusters}.nc``: confer :ref:`cluster`
- ``networks/elec_s{simpl}_{clusters}.nc``: confer :ref:`cluster`
Outputs
-------
- ``networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc``: Complete PyPSA network that will be handed to the ``solve_network`` rule.
- ``networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc``: Complete PyPSA network that will be handed to the ``solve_network`` rule.
Description
-----------
@@ -56,19 +57,21 @@ Description
"""
import logging
logger = logging.getLogger(__name__)
from _helpers import configure_logging
from add_electricity import load_costs, update_transmission_costs
from six import iteritems
import numpy as np
import re
import pypsa
import numpy as np
import pandas as pd
from six import iteritems
from add_electricity import load_costs, update_transmission_costs
idx = pd.IndexSlice
logger = logging.getLogger(__name__)
def add_co2limit(n, Nyears=1., factor=None):
    if factor is not None:
@@ -80,6 +83,7 @@ def add_co2limit(n, Nyears=1., factor=None):
           carrier_attribute="co2_emissions", sense="<=",
           constant=annual_emissions * Nyears)
def add_emission_prices(n, emission_prices=None, exclude_co2=False):
    if emission_prices is None:
        emission_prices = snakemake.config['costs']['emission_prices']
@@ -91,88 +95,49 @@ def add_emission_prices(n, emission_prices=None, exclude_co2=False):
    su_ep = n.storage_units.carrier.map(ep) / n.storage_units.efficiency_dispatch
    n.storage_units['marginal_cost'] += su_ep
def set_line_s_max_pu(n):
    # set n-1 security margin to 0.5 for 37 clusters and to 0.7 from 200 clusters
    s_max_pu = snakemake.config['lines']['s_max_pu']
    n_clusters = len(n.buses)
    s_max_pu = np.clip(0.5 + 0.2 * (n_clusters - 37) / (200 - 37), 0.5, 0.7)
    n.lines['s_max_pu'] = s_max_pu
    logger.info(f"N-1 security margin of lines set to {s_max_pu}")
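The removed heuristic interpolated the N-1 margin linearly between 0.5 at 37 clusters and 0.7 at 200 clusters; a sketch of that formula in isolation (now superseded by the config value):

```python
import numpy as np

# Linear interpolation of the N-1 security margin, clipped to [0.5, 0.7].
def n1_margin(n_clusters):
    return float(np.clip(0.5 + 0.2 * (n_clusters - 37) / (200 - 37), 0.5, 0.7))

print(n1_margin(37), n1_margin(200), n1_margin(10))
```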
def set_line_cost_limit(n, lc, Nyears=1.):
def set_transmission_limit(n, ll_type, factor, Nyears=1):
    links_dc_b = n.links.carrier == 'DC' if not n.links.empty else pd.Series()
    lines_s_nom = n.lines.s_nom.where(
    _lines_s_nom = (np.sqrt(3) * n.lines.type.map(n.line_types.i_nom) *
        n.lines.type == '',
                    n.lines.num_parallel * n.lines.bus0.map(n.buses.v_nom))
        np.sqrt(3) * n.lines.num_parallel *
    lines_s_nom = n.lines.s_nom.where(n.lines.type == '', _lines_s_nom)
        n.lines.type.map(n.line_types.i_nom) *
        n.lines.bus0.map(n.buses.v_nom)
    )
    n.lines['capital_cost_lc'] = n.lines['capital_cost']
    n.links['capital_cost_lc'] = n.links['capital_cost']
    total_line_cost = ((lines_s_nom * n.lines['capital_cost_lc']).sum() +
                       n.links.loc[links_dc_b].eval('p_nom * capital_cost_lc').sum())
    if lc == 'opt':
    col = 'capital_cost' if ll_type == 'c' else 'length'
    ref = (lines_s_nom @ n.lines[col] +
           n.links.loc[links_dc_b, "p_nom"] @ n.links.loc[links_dc_b, col])
        costs = load_costs(Nyears, snakemake.input.tech_costs,
                           snakemake.config['costs'], snakemake.config['electricity'])
                           snakemake.config['costs'],
                           snakemake.config['electricity'])
        update_transmission_costs(n, costs, simple_hvdc_costs=False)
    else:
        # Either line_volume cap or cost
        n.lines['capital_cost'] = 0.
        n.links.loc[links_dc_b, 'capital_cost'] = 0.
    if lc == 'opt' or float(lc) > 1.0:
    if factor == 'opt' or float(factor) > 1.0:
        n.lines['s_nom_min'] = lines_s_nom
        n.lines['s_nom_extendable'] = True
        n.links.loc[links_dc_b, 'p_nom_min'] = n.links.loc[links_dc_b, 'p_nom']
        n.links.loc[links_dc_b, 'p_nom_extendable'] = True
    if lc != 'opt':
    if factor != 'opt':
        line_cost = float(lc) * total_line_cost
        con_type = 'expansion_cost' if ll_type == 'c' else 'volume_expansion'
        n.add('GlobalConstraint', 'lc_limit',
        rhs = float(factor) * ref
              type='transmission_expansion_cost_limit',
        n.add('GlobalConstraint', f'l{ll_type}_limit',
              sense='<=', constant=line_cost, carrier_attribute='AC, DC')
              type=f'transmission_{con_type}_limit',
              sense='<=', constant=rhs, carrier_attribute='AC, DC')
    return n
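`set_transmission_limit` scales a reference quantity built from existing capacities, either capacity times length (volume, `ll_type` "v") or capacity times capital cost (`ll_type` "c"); a toy computation of the volume reference (hypothetical line data, no PyPSA needed):

```python
import pandas as pd

# Two existing lines; the reference is the dot product s_nom . length (MW*km).
lines = pd.DataFrame({"s_nom": [1000.0, 500.0],   # MW
                      "length": [100.0, 200.0]})  # km

ref_volume = lines.s_nom @ lines.length  # 1000*100 + 500*200
rhs = 1.25 * ref_volume                  # e.g. wildcard ll = "v1.25"
print(ref_volume, rhs)  # 200000.0 250000.0
```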
def set_line_volume_limit(n, lv, Nyears=1.):
links_dc_b = n.links.carrier == 'DC' if not n.links.empty else pd.Series()
lines_s_nom = n.lines.s_nom.where(
n.lines.type == '',
np.sqrt(3) * n.lines.num_parallel *
n.lines.type.map(n.line_types.i_nom) *
n.lines.bus0.map(n.buses.v_nom)
)
total_line_volume = ((lines_s_nom * n.lines['length']).sum() +
n.links.loc[links_dc_b].eval('p_nom * length').sum())
if lv == 'opt':
costs = load_costs(Nyears, snakemake.input.tech_costs,
snakemake.config['costs'], snakemake.config['electricity'])
update_transmission_costs(n, costs, simple_hvdc_costs=True)
else:
# Either line_volume cap or cost
n.lines['capital_cost'] = 0.
n.links.loc[links_dc_b, 'capital_cost'] = 0.
if lv == 'opt' or float(lv) > 1.0:
n.lines['s_nom_min'] = lines_s_nom
n.lines['s_nom_extendable'] = True
n.links.loc[links_dc_b, 'p_nom_min'] = n.links.loc[links_dc_b, 'p_nom']
n.links.loc[links_dc_b, 'p_nom_extendable'] = True
if lv != 'opt':
line_volume = float(lv) * total_line_volume
n.add('GlobalConstraint', 'lv_limit',
type='transmission_volume_expansion_limit',
sense='<=', constant=line_volume, carrier_attribute='AC, DC')
return n
def average_every_nhours(n, offset):
    logger.info('Resampling the network to {}'.format(offset))
    logger.info(f"Resampling the network to {offset}")
    m = n.copy(with_time=False)
    snapshot_weightings = n.snapshot_weightings.resample(offset).sum()
@@ -187,12 +152,74 @@ def average_every_nhours(n, offset):
    return m
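`average_every_nhours` relies on `resample(...).sum()` conserving the total snapshot weight (hours represented); a sketch with six hourly weightings on a toy index:

```python
import pandas as pd

# Six hourly snapshots, each weighted 1 hour, resampled to 3-hour blocks.
idx = pd.date_range("2013-01-01", periods=6, freq="60min")
weightings = pd.Series(1.0, index=idx)

resampled = weightings.resample("180min").sum()
print(resampled.tolist())  # [3.0, 3.0] -- total weight of 6 hours is conserved
```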
def apply_time_segmentation(n, segments):
logger.info(f"Aggregating time series to {segments} segments.")
    try:
        import tsam.timeseriesaggregation as tsam
    except ImportError:
        raise ModuleNotFoundError("Optional dependency 'tsam' not found. "
                                  "Install via 'pip install tsam'")
p_max_pu_norm = n.generators_t.p_max_pu.max()
p_max_pu = n.generators_t.p_max_pu / p_max_pu_norm
load_norm = n.loads_t.p_set.max()
load = n.loads_t.p_set / load_norm
inflow_norm = n.storage_units_t.inflow.max()
inflow = n.storage_units_t.inflow / inflow_norm
raw = pd.concat([p_max_pu, load, inflow], axis=1, sort=False)
solver_name = snakemake.config["solving"]["solver"]["name"]
agg = tsam.TimeSeriesAggregation(raw, hoursPerPeriod=len(raw),
noTypicalPeriods=1, noSegments=int(segments),
segmentation=True, solver=solver_name)
segmented = agg.createTypicalPeriods()
weightings = segmented.index.get_level_values("Segment Duration")
offsets = np.insert(np.cumsum(weightings[:-1]), 0, 0)
snapshots = [n.snapshots[0] + pd.Timedelta(f"{offset}h") for offset in offsets]
n.set_snapshots(pd.DatetimeIndex(snapshots, name='name'))
n.snapshot_weightings = pd.Series(weightings, index=snapshots, name="weightings", dtype="float64")
segmented.index = snapshots
n.generators_t.p_max_pu = segmented[n.generators_t.p_max_pu.columns] * p_max_pu_norm
n.loads_t.p_set = segmented[n.loads_t.p_set.columns] * load_norm
n.storage_units_t.inflow = segmented[n.storage_units_t.inflow.columns] * inflow_norm
return n
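The segment durations returned by tsam become new snapshots placed at cumulative offsets from the first original snapshot; the offset arithmetic in isolation (hypothetical durations):

```python
import numpy as np
import pandas as pd

# Three segments lasting 3, 5 and 4 hours start at offsets 0, 3 and 8 hours.
weightings = np.array([3.0, 5.0, 4.0])
offsets = np.insert(np.cumsum(weightings[:-1]), 0, 0)

start = pd.Timestamp("2013-01-01 00:00")
snapshots = [start + pd.Timedelta(f"{o}h") for o in offsets]
print(snapshots)
```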
def enforce_autarky(n, only_crossborder=False):
if only_crossborder:
lines_rm = n.lines.loc[
n.lines.bus0.map(n.buses.country) !=
n.lines.bus1.map(n.buses.country)
].index
links_rm = n.links.loc[
n.links.bus0.map(n.buses.country) !=
n.links.bus1.map(n.buses.country)
].index
else:
lines_rm = n.lines.index
links_rm = n.links.loc[n.links.carrier=="DC"].index
n.mremove("Line", lines_rm)
n.mremove("Link", links_rm)
def set_line_nom_max(n):
    s_nom_max_set = snakemake.config["lines"].get("s_nom_max", np.inf)
p_nom_max_set = snakemake.config["links"].get("p_nom_max", np.inf)
n.lines.s_nom_max.clip(upper=s_nom_max_set, inplace=True)
n.links.p_nom_max.clip(upper=p_nom_max_set, inplace=True)
if __name__ == "__main__":
    if 'snakemake' not in globals():
        from _helpers import mock_snakemake
        snakemake = mock_snakemake('prepare_network', network='elec', simpl='',
                                   clusters='5', ll='copt', opts='Co2L-24H')
                                   clusters='40', ll='v0.3', opts='Co2L-24H')
    configure_logging(snakemake)
    opts = snakemake.wildcards.opts.split('-')
@@ -207,8 +234,12 @@ if __name__ == "__main__":
        if m is not None:
            n = average_every_nhours(n, m.group(0))
            break
    else:
        logger.info("No resampling")
    for o in opts:
        m = re.match(r'^\d+seg$', o, re.IGNORECASE)
        if m is not None:
            n = apply_time_segmentation(n, m.group(0)[:-3])
            break
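The `{N}seg` wildcard is matched as a whole and the trailing "seg" stripped to get the segment count; in isolation:

```python
import re

# "4380seg" selects tsam segmentation into 4380 segments.
o = "4380seg"
m = re.match(r'^\d+seg$', o, re.IGNORECASE)
segments = int(m.group(0)[:-3])  # drop the 3-character "seg" suffix
print(segments)  # 4380
```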
    for o in opts:
        if "Co2L" in o:
@@ -217,27 +248,36 @@ if __name__ == "__main__":
                add_co2limit(n, Nyears, float(m[0]))
            else:
                add_co2limit(n, Nyears)
break
    for o in opts:
        oo = o.split("+")
        if oo[0].startswith(tuple(n.carriers.index)):
        suptechs = map(lambda c: c.split("-", 2)[0], n.carriers.index)
        if oo[0].startswith(tuple(suptechs)):
            carrier = oo[0]
            cost_factor = float(oo[1])
            # handles only p_nom_max as stores and lines have no potentials
            attr_lookup = {"p": "p_nom_max", "c": "capital_cost"}
            attr = attr_lookup[oo[1][0]]
            factor = float(oo[1][1:])
            if carrier == "AC":  # lines do not have carrier
                n.lines.capital_cost *= cost_factor
                n.lines[attr] *= factor
            else:
                comps = {"Generator", "Link", "StorageUnit"}
                comps = {"Generator", "Link", "StorageUnit", "Store"}
                for c in n.iterate_components(comps):
                    sel = c.df.carrier.str.contains(carrier)
                    c.df.loc[sel,"capital_cost"] *= cost_factor
                    c.df.loc[sel,attr] *= factor
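The new option parsing splits a wildcard like `solar+p3` or `onwind+c0.5` into carrier, attribute, and scaling factor; a standalone sketch of that lookup (helper name illustrative):

```python
# First letter after "+" picks the attribute, the remainder is the factor.
attr_lookup = {"p": "p_nom_max", "c": "capital_cost"}

def parse_carrier_opt(o):
    carrier, modifier = o.split("+")
    return carrier, attr_lookup[modifier[0]], float(modifier[1:])

print(parse_carrier_opt("solar+p3"))     # ('solar', 'p_nom_max', 3.0)
print(parse_carrier_opt("onwind+c0.5"))  # ('onwind', 'capital_cost', 0.5)
```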
    if 'Ep' in opts:
        add_emission_prices(n)
    ll_type, factor = snakemake.wildcards.ll[0], snakemake.wildcards.ll[1:]
    if ll_type == 'v':
    set_transmission_limit(n, ll_type, factor, Nyears)
        set_line_volume_limit(n, factor, Nyears)
    elif ll_type == 'c':
    set_line_nom_max(n)
        set_line_cost_limit(n, factor, Nyears)
    if "ATK" in opts:
        enforce_autarky(n)
    elif "ATKc" in opts:
        enforce_autarky(n, only_crossborder=True)
    n.export_to_netcdf(snakemake.output[0])
@@ -33,14 +33,15 @@ The :ref:`tutorial` uses a smaller `data bundle <https://zenodo.org/record/35179
"""
import logging
logger = logging.getLogger(__name__)
from _helpers import progress_retrieve, configure_logging
from pathlib import Path
import tarfile
from pathlib import Path
logger = logging.getLogger(__name__)
if __name__ == "__main__":
    # Detect running outside of snakemake and mock snakemake for testing
    if 'snakemake' not in globals():
        from _helpers import mock_snakemake
        snakemake = mock_snakemake('retrieve_databundle')

diff --git a/scripts/retrieve_natura_raster.py b/scripts/retrieve_natura_raster.py

@@ -30,10 +30,11 @@ This rule, as a substitute for :mod:`build_natura_raster`, downloads an already
 """

 import logging
-logger = logging.getLogger(__name__)
-
 from _helpers import progress_retrieve, configure_logging

+logger = logging.getLogger(__name__)
+
 if __name__ == "__main__":
     if 'snakemake' not in globals():
         from _helpers import mock_snakemake
diff --git a/scripts/simplify_network.py b/scripts/simplify_network.py

@@ -48,23 +48,23 @@ Inputs
 - ``resources/costs.csv``: The database of cost assumptions for all included technologies for specific years from various sources; e.g. discount rate, lifetime, investment (CAPEX), fixed operation and maintenance (FOM), variable operation and maintenance (VOM), fuel costs, efficiency, carbon-dioxide intensity.
 - ``resources/regions_onshore.geojson``: confer :ref:`busregions`
 - ``resources/regions_offshore.geojson``: confer :ref:`busregions`
-- ``networks/{network}.nc``: confer :ref:`electricity`
+- ``networks/elec.nc``: confer :ref:`electricity`

 Outputs
 -------

-- ``resources/regions_onshore_{network}_s{simpl}.geojson``:
+- ``resources/regions_onshore_elec_s{simpl}.geojson``:

     .. image:: ../img/regions_onshore_elec_s.png
         :scale: 33 %

-- ``resources/regions_offshore_{network}_s{simpl}.geojson``:
+- ``resources/regions_offshore_elec_s{simpl}.geojson``:

     .. image:: ../img/regions_offshore_elec_s .png
         :scale: 33 %

-- ``resources/clustermaps_{network}_s{simpl}.h5``: Mapping of buses from ``networks/elec.nc`` to ``networks/elec_s{simpl}.nc``; has keys ['/busmap_s']
-- ``networks/{network}_s{simpl}.nc``:
+- ``resources/busmap_elec_s{simpl}.csv``: Mapping of buses from ``networks/elec.nc`` to ``networks/elec_s{simpl}.nc``;
+- ``networks/elec_s{simpl}.nc``:

     .. image:: ../img/elec_s.png
         :scale: 33 %
@@ -84,7 +84,6 @@ The rule :mod:`simplify_network` does up to four things:
 """

 import logging
-logger = logging.getLogger(__name__)

 from _helpers import configure_logging

 from cluster_network import clustering_for_n_clusters, cluster_regions
@@ -102,7 +101,8 @@ import pypsa
 from pypsa.io import import_components_from_dataframe, import_series_from_dataframe
 from pypsa.networkclustering import busmap_by_stubs, aggregategenerators, aggregateoneport

-idx = pd.IndexSlice
+logger = logging.getLogger(__name__)
+

 def simplify_network_to_380(n):
     ## All goes to v_nom == 380
@@ -139,6 +139,7 @@ def simplify_network_to_380(n):

     return n, trafo_map

+
 def _prepare_connection_costs_per_link(n):
     if n.links.empty: return {}
@@ -157,6 +158,7 @@ def _prepare_connection_costs_per_link(n):

     return connection_costs_per_link

+
 def _compute_connection_costs_to_bus(n, busmap, connection_costs_per_link=None, buses=None):
     if connection_costs_per_link is None:
         connection_costs_per_link = _prepare_connection_costs_per_link(n)
@@ -176,6 +178,7 @@ def _compute_connection_costs_to_bus(n, busmap, connection_costs_per_link=None,

     return connection_costs_to_bus

+
 def _adjust_capital_costs_using_connection_costs(n, connection_costs_to_bus):
     for tech in connection_costs_to_bus:
         tech_b = n.generators.carrier == tech
@@ -185,6 +188,7 @@ def _adjust_capital_costs_using_connection_costs(n, connection_costs_to_bus):
         logger.info("Displacing {} generator(s) and adding connection costs to capital_costs: {} "
                     .format(tech, ", ".join("{:.0f} Eur/MW/a for `{}`".format(d, b) for b, d in costs.iteritems())))

+
 def _aggregate_and_move_components(n, busmap, connection_costs_to_bus, aggregate_one_ports={"Load", "StorageUnit"}):
     def replace_components(n, c, df, pnl):
         n.mremove(c, n.df(c).index)
@@ -209,6 +213,7 @@ def _aggregate_and_move_components(n, busmap, connection_costs_to_bus, aggregate
         df = n.df(c)
         n.mremove(c, df.index[df.bus0.isin(buses_to_del) | df.bus1.isin(buses_to_del)])

+
 def simplify_links(n):
     ## Complex multi-node links are folded into end-points
     logger.info("Simplifying connected link components")
@@ -304,6 +309,7 @@ def simplify_links(n):
     _aggregate_and_move_components(n, busmap, connection_costs_to_bus)
     return n, busmap

+
 def remove_stubs(n):
     logger.info("Removing stubs")
@@ -315,8 +321,9 @@ def remove_stubs(n):

     return n, busmap

+
 def cluster(n, n_clusters):
-    logger.info("Clustering to {} buses".format(n_clusters))
+    logger.info(f"Clustering to {n_clusters} buses")

     renewable_carriers = pd.Index([tech
                                    for tech in n.generators.carrier.unique()
@@ -330,11 +337,12 @@ def cluster(n, n_clusters):
     potential_mode = (consense(pd.Series([snakemake.config['renewable'][tech]['potential']
                                           for tech in renewable_carriers]))
                       if len(renewable_carriers) > 0 else 'conservative')
-    clustering = clustering_for_n_clusters(n, n_clusters, potential_mode=potential_mode,
+    clustering = clustering_for_n_clusters(n, n_clusters, custom_busmap=False, potential_mode=potential_mode,
                                            solver_name=snakemake.config['solving']['solver']['name'])

     return clustering.network, clustering.busmap

+
 if __name__ == "__main__":
     if 'snakemake' not in globals():
         from _helpers import mock_snakemake
@@ -357,8 +365,7 @@ if __name__ == "__main__":

     n.export_to_netcdf(snakemake.output.network)

-    busemap_s = reduce(lambda x, y: x.map(y), busmaps[1:], busmaps[0])
-    with pd.HDFStore(snakemake.output.clustermaps, mode='w') as store:
-        store.put('busmap_s', busemap_s, format="table", index=False)
+    busmap_s = reduce(lambda x, y: x.map(y), busmaps[1:], busmaps[0])
+    busmap_s.to_csv(snakemake.output.busmap)

     cluster_regions(busmaps, snakemake.input, snakemake.output)
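Note on the hunk above: besides fixing the `busemap_s` typo and switching the output from HDF5 to CSV, the `reduce` call chains the per-stage busmaps into one overall mapping from original to simplified buses. A toy sketch of that composition, with made-up bus labels:

```python
from functools import reduce

import pandas as pd

# Each stage of simplification produces a busmap: index = bus labels of the
# previous stage, values = bus labels after that stage.
busmap1 = pd.Series({"a": "x", "b": "x", "c": "y"})  # original -> simplified
busmap2 = pd.Series({"x": "1", "y": "1"})            # simplified -> clustered

busmaps = [busmap1, busmap2]

# Chain the stages: map each bus through every subsequent busmap in order.
busmap_s = reduce(lambda x, y: x.map(y), busmaps[1:], busmaps[0])
print(busmap_s.to_dict())  # {'a': '1', 'b': '1', 'c': '1'}
```

The same pattern scales to any number of stages, which is why the script collects all intermediate busmaps in a list before composing them.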

diff --git a/scripts/solve_network.py b/scripts/solve_network.py

@@ -10,10 +10,6 @@ Relevant Settings

 .. code:: yaml

-    (electricity:)
-        (BAU_mincapacities:)
-        (SAFE_reservemargin:)
-
     solving:
         tmpdir:
         options:
@@ -28,10 +24,6 @@ Relevant Settings
             track_iterations:
         solver:
             name:
-            (solveroptions):
-
-    (plotting:)
-        (conv_techs:)

 .. seealso::
     Documentation of the configuration file ``config.yaml`` at
@@ -40,12 +32,12 @@ Relevant Settings
 Inputs
 ------

-- ``networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc``: confer :ref:`prepare`
+- ``networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc``: confer :ref:`prepare`

 Outputs
 -------

-- ``results/networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc``: Solved PyPSA network including optimisation results
+- ``results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc``: Solved PyPSA network including optimisation results

     .. image:: ../img/results.png
         :scale: 40 %
@@ -85,18 +77,22 @@ Details (and errors made through this heuristic) are discussed in the paper
 """

 import logging
-logger = logging.getLogger(__name__)
 from _helpers import configure_logging

 import numpy as np
 import pandas as pd
+import re

 import pypsa
 from pypsa.linopf import (get_var, define_constraints, linexpr, join_exprs,
                           network_lopf, ilopf)

 from pathlib import Path
 from vresutils.benchmark import memory_logger

+logger = logging.getLogger(__name__)
+

 def prepare_network(n, solve_opts):

     if 'clip_p_max_pu' in solve_opts:
@@ -167,6 +163,34 @@ def add_CCL_constraints(n, config):
                        '<=', maximum, 'agg_p_nom', 'max')


+def add_EQ_constraints(n, o, scaling=1e-1):
+    float_regex = "[0-9]*\.?[0-9]+"
+    level = float(re.findall(float_regex, o)[0])
+    if o[-1] == 'c':
+        ggrouper = n.generators.bus.map(n.buses.country)
+        lgrouper = n.loads.bus.map(n.buses.country)
+        sgrouper = n.storage_units.bus.map(n.buses.country)
+    else:
+        ggrouper = n.generators.bus
+        lgrouper = n.loads.bus
+        sgrouper = n.storage_units.bus
+    load = n.snapshot_weightings @ \
+           n.loads_t.p_set.groupby(lgrouper, axis=1).sum()
+    inflow = n.snapshot_weightings @ \
+             n.storage_units_t.inflow.groupby(sgrouper, axis=1).sum()
+    inflow = inflow.reindex(load.index).fillna(0.)
+    rhs = scaling * ( level * load - inflow )
+    lhs_gen = linexpr((n.snapshot_weightings * scaling,
+                       get_var(n, "Generator", "p").T)
+              ).T.groupby(ggrouper, axis=1).apply(join_exprs)
+    lhs_spill = linexpr((-n.snapshot_weightings * scaling,
+                         get_var(n, "StorageUnit", "spill").T)
+                ).T.groupby(sgrouper, axis=1).apply(join_exprs)
+    lhs_spill = lhs_spill.reindex(lhs_gen.index).fillna("")
+    lhs = lhs_gen + lhs_spill
+    define_constraints(n, lhs, ">=", rhs, "equity", "min")
+
+
 def add_BAU_constraints(n, config):
     mincaps = pd.Series(config['electricity']['BAU_mincapacities'])
     lhs = (linexpr((1, get_var(n, 'Generator', 'p_nom')))
@@ -211,21 +235,25 @@ def extra_functionality(n, snapshots):
         add_SAFE_constraints(n, config)
     if 'CCL' in opts and n.generators.p_nom_extendable.any():
         add_CCL_constraints(n, config)
+    for o in opts:
+        if "EQ" in o:
+            add_EQ_constraints(n, o)
     add_battery_constraints(n)


 def solve_network(n, config, solver_log=None, opts='', **kwargs):
     solver_options = config['solving']['solver'].copy()
     solver_name = solver_options.pop('name')
-    track_iterations = config['solving']['options'].get('track_iterations', False)
-    min_iterations = config['solving']['options'].get('min_iterations', 4)
-    max_iterations = config['solving']['options'].get('max_iterations', 6)
+    cf_solving = config['solving']['options']
+    track_iterations = cf_solving.get('track_iterations', False)
+    min_iterations = cf_solving.get('min_iterations', 4)
+    max_iterations = cf_solving.get('max_iterations', 6)

     # add to network for extra_functionality
     n.config = config
     n.opts = opts

-    if config['solving']['options'].get('skip_iterations', False):
+    if cf_solving.get('skip_iterations', False):
         network_lopf(n, solver_name=solver_name, solver_options=solver_options,
                      extra_functionality=extra_functionality, **kwargs)
     else:
@@ -250,8 +278,8 @@ if __name__ == "__main__":
         opts = snakemake.wildcards.opts.split('-')
         solve_opts = snakemake.config['solving']['options']

-        with memory_logger(filename=getattr(snakemake.log, 'memory', None),
-                           interval=30.) as mem:
+        fn = getattr(snakemake.log, 'memory', None)
+        with memory_logger(filename=fn, interval=30.) as mem:
             n = pypsa.Network(snakemake.input[0])
             n = prepare_network(n, solve_opts)
             n = solve_network(n, config=snakemake.config, solver_dir=tmpdir,
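Note on the `add_EQ_constraints` hunk above: the equity level and scope are encoded in the `{opts}` wildcard itself, e.g. `EQ0.7` (per bus) or `EQ0.7c` (per country). A standalone sketch of how the option string is decoded; `parse_eq` is a name introduced here for illustration, the function in the diff inlines this logic:

```python
import re

# same regex as in add_EQ_constraints
float_regex = "[0-9]*\.?[0-9]+"

def parse_eq(o):
    """Extract the equity level and grouping scope from an EQ option string."""
    level = float(re.findall(float_regex, o)[0])
    # a trailing 'c' switches the grouping from per-bus to per-country
    by_country = o[-1] == 'c'
    return level, by_country

print(parse_eq("EQ0.7"))   # (0.7, False)
print(parse_eq("EQ0.7c"))  # (0.7, True)
```

The constraint then requires each bus (or country) to produce at least that fraction of its own consumption, net of hydro inflow.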

diff --git a/scripts/solve_operations_network.py b/scripts/solve_operations_network.py

@@ -32,13 +32,13 @@ Relevant Settings
 Inputs
 ------

-- ``networks/{network}_s{simpl}_{clusters}.nc``: confer :ref:`cluster`
-- ``results/networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc``: confer :ref:`solve`
+- ``networks/elec_s{simpl}_{clusters}.nc``: confer :ref:`cluster`
+- ``results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}.nc``: confer :ref:`solve`

 Outputs
 -------

-- ``results/networks/{network}_s{simpl}_{clusters}_ec_l{ll}_{opts}_op.nc``: Solved PyPSA network for optimal dispatch including optimisation results
+- ``results/networks/elec_s{simpl}_{clusters}_ec_l{ll}_{opts}_op.nc``: Solved PyPSA network for optimal dispatch including optimisation results

 Description
 -----------
@@ -46,7 +46,6 @@ Description
 """

 import logging
-logger = logging.getLogger(__name__)
 from _helpers import configure_logging

 import pypsa
@@ -56,6 +55,8 @@ from pathlib import Path
 from vresutils.benchmark import memory_logger
 from solve_network import solve_network, prepare_network

+logger = logging.getLogger(__name__)
+

 def set_parameters_from_optimized(n, n_optim):
     lines_typed_i = n.lines.index[n.lines.type != '']
     n.lines.loc[lines_typed_i, 'num_parallel'] = \
@@ -107,7 +108,8 @@ if __name__ == "__main__":
         opts = snakemake.wildcards.opts.split('-')
         config['solving']['options']['skip_iterations'] = False

-        with memory_logger(filename=getattr(snakemake.log, 'memory', None), interval=30.) as mem:
+        fn = getattr(snakemake.log, 'memory', None)
+        with memory_logger(filename=fn, interval=30.) as mem:
             n = prepare_network(n, solve_opts=snakemake.config['solving']['options'])
             n = solve_network(n, config, solver_dir=tmpdir,
                               solver_log=snakemake.log.solver, opts=opts)

diff --git a/config.tutorial.yaml b/config.tutorial.yaml

@@ -2,7 +2,7 @@
 #
 # SPDX-License-Identifier: CC0-1.0

-version: 0.2.0
+version: 0.3.0
 tutorial: true

 logging:
   level: INFO
@@ -11,7 +11,6 @@ logging:
 summary_dir: results

 scenario:
-  sectors: [E]
   simpl: ['']
   ll: ['copt']
   clusters: [5]
@@ -32,6 +31,7 @@ enable:
   retrieve_cutout: true
   build_natura_raster: false
   retrieve_natura_raster: true
+  custom_busmap: false

 electricity:
   voltages: [220., 300., 380.]
@@ -131,11 +131,13 @@ lines:
     300.: "Al/St 240/40 3-bundle 300.0"
     380.: "Al/St 240/40 4-bundle 380.0"
   s_max_pu: 0.7
+  s_nom_max: .inf
   length_factor: 1.25
   under_construction: 'zero' # 'zero': set capacity to zero, 'remove': remove, 'keep': with full capacity

 links:
   p_max_pu: 1.0
+  p_nom_max: .inf
   include_tyndp: true
   under_construction: 'zero' # 'zero': set capacity to zero, 'remove': remove, 'keep': with full capacity
@@ -145,6 +147,11 @@ transformers:
   type: ''

 load:
+  url: https://data.open-power-system-data.org/time_series/2019-06-05/time_series_60min_singleindex.csv
+  power_statistics: True # only for files from <2019; set false in order to get ENTSOE transparency data
+  interpolate_limit: 3 # data gaps up until this size are interpolated linearly
+  time_shift_for_large_gaps: 1w # data gaps up until this size are copied by copying from
+  manual_adjustments: true # false
   scaling_factor: 1.0

 costs:
@@ -244,67 +251,18 @@ plotting:
     'waste' : '#68896b'
     'geothermal' : '#ba91b1'
     "OCGT" : "#d35050"
-    "OCGT marginal" : "#d35050"
-    "OCGT-heat" : "#d35050"
-    "gas boiler" : "#d35050"
-    "gas boilers" : "#d35050"
-    "gas boiler marginal" : "#d35050"
-    "gas-to-power/heat" : "#d35050"
     "gas" : "#d35050"
     "natural gas" : "#d35050"
     "CCGT" : "#b20101"
-    "CCGT marginal" : "#b20101"
-    "Nuclear" : "#ff9000"
-    "Nuclear marginal" : "#ff9000"
     "nuclear" : "#ff9000"
     "coal" : "#707070"
-    "Coal" : "#707070"
-    "Coal marginal" : "#707070"
     "lignite" : "#9e5a01"
-    "Lignite" : "#9e5a01"
-    "Lignite marginal" : "#9e5a01"
-    "Oil" : "#262626"
     "oil" : "#262626"
     "H2" : "#ea048a"
     "hydrogen storage" : "#ea048a"
-    "Sabatier" : "#a31597"
-    "methanation" : "#a31597"
-    "helmeth" : "#a31597"
-    "DAC" : "#d284ff"
-    "co2 stored" : "#e5e5e5"
-    "CO2 sequestration" : "#e5e5e5"
     "battery" : "#b8ea04"
-    "battery storage" : "#b8ea04"
-    "Li ion" : "#b8ea04"
-    "BEV charger" : "#e2ff7c"
-    "V2G" : "#7a9618"
-    "transport fuel cell" : "#e884be"
-    "retrofitting" : "#e0d6a8"
-    "building retrofitting" : "#e0d6a8"
-    "heat pumps" : "#ff9768"
-    "heat pump" : "#ff9768"
-    "air heat pump" : "#ffbea0"
-    "ground heat pump" : "#ff7a3d"
-    "power-to-heat" : "#a59e7c"
-    "power-to-gas" : "#db8585"
-    "power-to-liquid" : "#a9acd1"
-    "Fischer-Tropsch" : "#a9acd1"
-    "resistive heater" : "#aa4925"
-    "water tanks" : "#401f75"
-    "hot water storage" : "#401f75"
-    "hot water charging" : "#351c5e"
-    "hot water discharging" : "#683ab2"
-    "CHP" : "#d80a56"
-    "CHP heat" : "#d80a56"
-    "CHP electric" : "#d80a56"
-    "district heating" : "#93864b"
-    "Ambient" : "#262626"
     "Electric load" : "#f9d002"
     "electricity" : "#f9d002"
-    "Heat load" : "#d35050"
-    "heat" : "#d35050"
-    "Transport load" : "#235ebc"
-    "transport" : "#235ebc"
     "lines" : "#70af1d"
     "transmission lines" : "#70af1d"
     "AC-AC" : "#70af1d"
@@ -324,17 +282,5 @@ plotting:
     hydro: "Reservoir & Dam"
     battery: "Battery Storage"
     H2: "Hydrogen Storage"
-    lines: "Transmission lines"
-    ror: "Run of river"
-  nice_names_n:
-    OCGT: "Open-Cycle\nGas"
-    CCGT: "Combined-Cycle\nGas"
-    offwind-ac: "Offshore\nWind (AC)"
-    offwind-dc: "Offshore\nWind (DC)"
-    onwind: "Onshore\nWind"
-    battery: "Battery\nStorage"
-    H2: "Hydrogen\nStorage"
-    lines: "Transmission\nlines"
-    ror: "Run of\nriver"
-    PHS: "Pumped Hydro\nStorage"
-    hydro: "Reservoir\n& Dam"
+    lines: "Transmission Lines"
+    ror: "Run of River"
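Note on the new `load:` section of the config above: `interpolate_limit: 3` means that only gaps of up to three consecutive missing values in the load time series are filled linearly, while longer gaps are handled separately (via the time-shift fallback). The sketch below illustrates that intended semantics with pandas; it is an illustration of the config comment, not the project's actual load-preparation code:

```python
import numpy as np
import pandas as pd

def interpolate_small_gaps(s, limit=3):
    """Linearly fill only NaN gaps of length <= limit; leave longer gaps untouched."""
    na = s.isna()
    # label each contiguous run of values/NaNs and measure the run lengths
    run_id = (na != na.shift()).cumsum()
    run_len = na.groupby(run_id).transform("size")
    fillable = na & (run_len <= limit)
    # interpolate everywhere, then keep only the fills inside short gaps
    filled = s.interpolate(method="linear")
    return s.where(~fillable, filled)

s = pd.Series([1.0, np.nan, 3.0, np.nan, np.nan, np.nan, np.nan, 8.0])
out = interpolate_small_gaps(s, limit=3)
# the 1-step gap is filled (2.0); the 4-step gap remains NaN
```

Note that a plain `s.interpolate(limit=3)` would instead partially fill the start of longer gaps, which is why the gap length is checked explicitly here.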