Revision complete (#139)

* ammonia_production: minor cleaning and move into __main__ (#106)

* biomass_potentials: code cleaning and automatic country index inference (#107)

* Revision: build energy totals (#111)

* blacken

* energy_totals: preliminaries

* energy_totals: update build_swiss

* energy_totals: update build_eurostat

* energy_totals: update build_idees

* energy_totals: update build_energy_totals

* energy_totals: update build_eea_co2

* energy_totals: update build_eurostat_co2

* energy_totals: update build_co2_totals

* energy_totals: update build_transport_data

* energy_totals: add tqdm progressbar to idees
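
  For context, the progress bar wraps the per-country JRC IDEES extraction loop. A minimal sketch, assuming a hypothetical `idees_per_country` parser and country list (the real script defines its own helpers):

  ```python
  from tqdm import tqdm

  countries = ["DE", "FR", "PL"]  # stand-in for the script's country list

  def idees_per_country(ct):
      return {}  # placeholder: parse the JRC IDEES spreadsheets for country ct

  totals = [
      idees_per_country(ct)
      for ct in tqdm(countries, desc="Build energy totals from JRC IDEES")
  ]
  ```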

* energy_totals: adjust __main__ section

* energy_totals: handle inputs via Snakefile and config

* energy_totals: handle data and emissions year via config

* energy_totals: fix reading in eurostat for different years

* energy_totals: fix erroneous drop duplicates
This caused problems for waste management in HU and SI

* energy_totals: make scope selection of CO2 or GHG a config option

* Revision: build industrial production per country (#114)

* industry-ppc: format

* industry-ppc: rewrite for performance

* industry-ppc: move reference year to config

* industry-ppct: tidy up and format (#115)

* remove stale industry demand rules (#116)

* industry-epc: rewrite for performance (#117)

* Revision: industrial distribution key (#118)

* industry-distribution: first tidying

* industry-distribution: first tidying

* industry-distribution: fix syntax

* Revision: industrial energy demand per node today (#119)

* industry-epn: minor code cleaning

* industry-epn: remove accidental artifact

* industry-epn: remove accidental artifact II

* industry-ppn: code cleaning (#120)

* minor code cleaning (#121)

* Revision: industry sector ratios (#122)

* sector-ratios: basic reformatting

* sector-ratios: add new read_excel function that filters year already
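
  A sketch of the idea with hypothetical names (`read_excel_year` is not necessarily the script's actual function): JRC-IDEES sheets carry one column per year, so selecting the year at read time avoids dragging the full time dimension through the rest of the script:

  ```python
  import pandas as pd

  def read_excel_year(fn, sheet_name, year=2015):
      # hypothetical wrapper: read one JRC-IDEES sheet, keep only one year's column
      df = pd.read_excel(fn, sheet_name=sheet_name, index_col=0)
      return df[year]
  ```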

* sector-ratios: rename jrc to idees

* sector-ratios: rename conv_factor to toe_to_MWh
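
  The new name documents the unit conversion the factor performs. For reference (the derived `ktoe_to_TWh` constant is shown only for illustration):

  ```python
  # 1 tonne of oil equivalent (toe) = 41.868 GJ = 11.63 MWh
  toe_to_MWh = 11.63
  ktoe_to_TWh = 0.01163  # convenience factor for inputs given in ktoe
  ```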

* sector-ratios: modularise into functions

* Move overriding of component attributes to function and into data (#123)

* move overriding of component attributes to central function and store in separate folder

* fix return of helper.override_component_attrs

* prepare: fix accidental syntax error

* override_component_attrs: bugfix that aligns with pypsa components
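
  For context, the central helper reads one CSV of attribute overrides per component type from `data/override_component_attrs` and merges it over PyPSA's defaults before the network is loaded. A minimal sketch of the pattern (file naming and details of the actual helper may differ):

  ```python
  import os

  import pandas as pd
  import pypsa

  def override_component_attrs(directory):
      # start from copies of PyPSA's default component attribute tables
      attrs = {k: v.copy() for k, v in pypsa.components.component_attrs.items()}
      for component, attr in attrs.items():
          fn = os.path.join(directory, f"{component}.csv")
          if os.path.isfile(fn):
              # "n/a" marks missing defaults, cf. the n/a defaults commit below
              overrides = pd.read_csv(fn, index_col=0, na_values="n/a")
              attrs[component] = overrides.combine_first(attr)
      return attrs

  overrides = override_component_attrs("data/override_component_attrs")
  n = pypsa.Network(override_component_attrs=overrides)
  ```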

* Revision: build population layout (#108)

* population_layouts: move inside __main__ and blacken

* population_layouts: misc code cleaning and multiprocessing

* population_layouts: fix fill_values assignment of urban fractions

* population_layouts: bugfix for UK-GB naming ambiguity

* population_layouts: sort countries alphabetically for better overview

* config: change path to atlite cutout

* Revision: build clustered population layouts (#112)

* population_layouts: move inside __main__ and blacken

* population_layouts: misc code cleaning and multiprocessing

* population_layouts: fix fill_values assignment of urban fractions

* population_layouts: bugfix for UK-GB naming ambiguity

* population_layouts: sort countries alphabetically for better overview

* cl_pop_layout: blacken

* cl_pop_layout: turn GeoDataFrame into GeoSeries + code cleaning

* cl_pop_layout: add fraction column which is repeatedly calculated downstream
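
  The precomputed column is each cluster's share of its country's population, roughly as below (a sketch; the column names `ct` and `total` are assumptions about the layout table):

  ```python
  import pandas as pd

  # toy stand-in for the clustered population layout: one row per cluster
  pop_layout = pd.DataFrame(
      {"ct": ["DE", "DE", "FR"], "total": [60.0, 20.0, 65.0]},
      index=["DE0 0", "DE0 1", "FR0 0"],
  )

  # share of each cluster's population within its country, cached once so
  # downstream scripts no longer recompute it
  pop_layout["fraction"] = (
      pop_layout["total"] / pop_layout.groupby("ct")["total"].transform("sum")
  )
  ```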

* Revision: build various heating-related time series (#113)

* population_layouts: move inside __main__ and blacken

* population_layouts: misc code cleaning and multiprocessing

* population_layouts: fix fill_values assignment of urban fractions

* population_layouts: bugfix for UK-GB naming ambiguity

* population_layouts: sort countries alphabetically for better overview

* cl_pop_layout: blacken

* cl_pop_layout: turn GeoDataFrame into GeoSeries + code cleaning

* gitignore: add .vscode

* heating_profiles: update to new atlite and move into __main__

* heating_profiles: remove extra cutout

* heating_profiles: load regions with .buffer(0) and remove clean_invalid_geometries

* heating_profiles: load regions with .buffer(0) before squeeze()

* heating_profiles: account for transpose of dataarray

* heating_profiles: account for transpose of dataarray in add_existing_baseyear

* Reduce verbosity of Snakefile (2) (#128)

* tidy Snakefile lightly

* Snakefile: fix indents

* Snakefile: add missing RDIR

* tidy config by removing quotes and expanding lists (#109)

* bugfix: reorder squeeze() and buffer()

* plot/summary: cosmetic changes including: (#131)

- matplotlibrc for default style and backend
- remove unused config options
- option to configure geomap colors
- option to configure geomap bounds

* solve: align with pypsa-eur using ilopf (#129)

* tidy myopic code scripts (#132)

* use mock_snakemake from pypsa-eur (#133)
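
  For reference, the pattern this enables: every script can be run and debugged standalone, because a `snakemake` object is mocked from the Snakefile whenever Snakemake has not injected one. Roughly as follows (rule name and wildcards are examples; the helper module name may differ):

  ```python
  if __name__ == "__main__":
      if "snakemake" not in globals():
          from helper import mock_snakemake
          snakemake = mock_snakemake(
              "build_clustered_population_layouts",
              simpl="",
              clusters=45,
          )
      # from here on, snakemake.input / snakemake.output / snakemake.config
      # behave as if Snakemake had invoked the script
  ```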

* Snakefile: add benchmark files to each rule

* Snakefile: only run build_retro_cost if endogenously optimised

* Snakefile: remove old {network} wildcard constraints

* WIP: Revision: prepare_sector_network (#124)

* population_layouts: move inside __main__ and blacken

* population_layouts: misc code cleaning and multiprocessing

* population_layouts: fix fill_values assignment of urban fractions

* population_layouts: bugfix for UK-GB naming ambiguity

* population_layouts: sort countries alphabetically for better overview

* cl_pop_layout: blacken

* cl_pop_layout: turn GeoDataFrame into GeoSeries + code cleaning

* move overriding of component attributes to central function and store in separate folder

* prepare: sort imports and remove six dependency

* prepare: remove add_emission_prices

* prepare: remove unused set_line_s_max_pu
This is a function from prepare_network

* prepare: remove unused set_line_volume_limit
This is a PyPSA-Eur function from prepare_network

* prepare: tidy add_co2limit

* remove six dependency

* prepare: tidy code first batch

* prepare: extend override_component_attrs to avoid hacky madd

* prepare: remove hacky madd() for individual components

* prepare: tidy shift function

* prepare: nodes and countries from n.buses not pop_layout

* prepare: tidy loading of pop_layout

* prepare: fix prepare_costs function

* prepare: optimise loading of traffic data

* prepare: move localizer into generate_periodic_profiles

* prepare: some formatting of transport data

* prepare: eliminate some code duplication

* prepare: fix remove_h2_network
- only try to remove EU H2 store if it exists
- remove readding nodal Stores because they are never removed

* prepare: move cost adjustment to own function

* prepare: fix a syntax error

* prepare: add investment_year to get() assuming global variable
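
  A sketch consistent with the commit message: `get()` resolves config entries that may be keyed by planning horizon, falling back to the scalar value otherwise, with `investment_year` set as a module-level variable in `__main__`:

  ```python
  investment_year = 2030  # set in __main__ from the planning_horizons wildcard

  def get(item, investment_year=None):
      """Check whether item depends on investment year and return the value."""
      if isinstance(item, dict):
          return item[investment_year]
      return item

  # e.g. for the year-keyed config entry land_transport_electric_share:
  share = get({2020: 0, 2030: 0.25, 2040: 0.6, 2050: 0.85}, investment_year)  # 0.25
  ```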

* prepare: move co2_totals out of prepare_data()

* Snakefile: remove unused prepare_sector_network inputs

* prepare: move limit p/s_nom of lines/links into function

* prepare: tidy add_co2limit file handling

* Snakefile: fix tabs

* override_component_attrs: add n/a defaults

* README: Add network picture to make scope clear

* README: Fix date of preprint (was too optimistic...)

* prepare: move some more config options to config.yaml

* prepare: runtime bugfixes

* fix benchmark path

* adjust plot ylims

* add unit attribute to bus, correct cement capture efficiency

* bugfix: land usage constraint missed inplace operation

Co-authored-by: Tom Brown <tom@nworbmot.org>

* add release notes

* remove old fix_branches() function

* deps: make geopy optional, remove unused imports

* increase default BarConvTol

* get ready for upcoming PyPSA release

* re-remove ** bug

* amend release notes

Co-authored-by: Tom Brown <tom@nworbmot.org>
Fabian Neumann, 2021-07-01 20:09:04 +02:00, committed by GitHub
parent 96711aab39
commit 1fc1d2a17d
39 changed files with 5313 additions and 5267 deletions

.gitignore (5 changes)

@@ -2,9 +2,10 @@
 .ipynb_checkpoints
 __pycache__
 gurobi.log
+.vscode
 /bak
-/resources
+/resources*
 /results
 /networks
 /benchmarks
@@ -46,4 +47,4 @@ config.yaml
 doc/_build
 *.xls

Snakefile (308 changes)

@@ -1,9 +1,9 @@
 configfile: "config.yaml"
 
 wildcard_constraints:
     lv="[a-z0-9\.]+",
-    network="[a-zA-Z0-9]*",
     simpl="[a-zA-Z0-9]*",
     clusters="[0-9]+m?",
     sectors="[+a-zA-Z0-9]+",
@@ -11,27 +11,31 @@ wildcard_constraints:
     sector_opts="[-+a-zA-Z0-9\.\s]*"
 
+SDIR = config['summary_dir'] + '/' + config['run']
+RDIR = config['results_dir'] + config['run']
+CDIR = config['costs_dir']
+
 subworkflow pypsaeur:
     workdir: "../pypsa-eur"
     snakefile: "../pypsa-eur/Snakefile"
     configfile: "../pypsa-eur/config.yaml"
 
 rule all:
-    input:
-        config['summary_dir'] + '/' + config['run'] + '/graphs/costs.pdf'
+    input: SDIR + '/graphs/costs.pdf'
 
 rule solve_all_networks:
     input:
-        expand(config['results_dir'] + config['run'] + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
+        expand(RDIR + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
                **config['scenario'])
 
 rule prepare_sector_networks:
     input:
-        expand(config['results_dir'] + config['run'] + "/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
+        expand(RDIR + "/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
               **config['scenario'])
 
 rule build_population_layouts:
@@ -43,6 +47,8 @@ rule build_population_layouts:
         pop_layout_urban="resources/pop_layout_urban.nc",
         pop_layout_rural="resources/pop_layout_rural.nc"
     resources: mem_mb=20000
+    benchmark: "benchmarks/build_population_layouts"
+    threads: 8
     script: "scripts/build_population_layouts.py"
@@ -55,6 +61,7 @@ rule build_clustered_population_layouts:
     output:
         clustered_pop_layout="resources/pop_layout_elec_s{simpl}_{clusters}.csv"
     resources: mem_mb=10000
+    benchmark: "benchmarks/build_clustered_population_layouts/s{simpl}_{clusters}"
     script: "scripts/build_clustered_population_layouts.py"
@@ -67,6 +74,7 @@ rule build_simplified_population_layouts:
     output:
         clustered_pop_layout="resources/pop_layout_elec_s{simpl}.csv"
     resources: mem_mb=10000
+    benchmark: "benchmarks/build_clustered_population_layouts/s{simpl}"
     script: "scripts/build_clustered_population_layouts.py"
@@ -81,8 +89,10 @@ rule build_heat_demands:
         heat_demand_rural="resources/heat_demand_rural_elec_s{simpl}_{clusters}.nc",
         heat_demand_total="resources/heat_demand_total_elec_s{simpl}_{clusters}.nc"
     resources: mem_mb=20000
+    benchmark: "benchmarks/build_heat_demands/s{simpl}_{clusters}"
     script: "scripts/build_heat_demand.py"
 
 rule build_temperature_profiles:
     input:
         pop_layout_total="resources/pop_layout_total.nc",
@@ -97,6 +107,7 @@ rule build_temperature_profiles:
         temp_air_rural="resources/temp_air_rural_elec_s{simpl}_{clusters}.nc",
         temp_air_urban="resources/temp_air_urban_elec_s{simpl}_{clusters}.nc"
     resources: mem_mb=20000
+    benchmark: "benchmarks/build_temperature_profiles/s{simpl}_{clusters}"
     script: "scripts/build_temperature_profiles.py"
@@ -116,6 +127,7 @@ rule build_cop_profiles:
         cop_air_rural="resources/cop_air_rural_elec_s{simpl}_{clusters}.nc",
         cop_air_urban="resources/cop_air_urban_elec_s{simpl}_{clusters}.nc"
     resources: mem_mb=20000
+    benchmark: "benchmarks/build_cop_profiles/s{simpl}_{clusters}"
     script: "scripts/build_cop_profiles.py"
@@ -130,21 +142,32 @@ rule build_solar_thermal_profiles:
         solar_thermal_urban="resources/solar_thermal_urban_elec_s{simpl}_{clusters}.nc",
         solar_thermal_rural="resources/solar_thermal_rural_elec_s{simpl}_{clusters}.nc"
     resources: mem_mb=20000
+    benchmark: "benchmarks/build_solar_thermal_profiles/s{simpl}_{clusters}"
     script: "scripts/build_solar_thermal_profiles.py"
 
+def input_eurostat(w):
+    # 2016 includes BA, 2017 does not
+    report_year = config["energy"]["eurostat_report_year"]
+    return f"data/eurostat-energy_balances-june_{report_year}_edition"
+
 rule build_energy_totals:
     input:
-        nuts3_shapes=pypsaeur('resources/nuts3_shapes.geojson')
+        nuts3_shapes=pypsaeur('resources/nuts3_shapes.geojson'),
+        co2="data/eea/UNFCCC_v23.csv",
+        swiss="data/switzerland-sfoe/switzerland-new_format.csv",
+        idees="data/jrc-idees-2015",
+        eurostat=input_eurostat
     output:
         energy_name='resources/energy_totals.csv',
         co2_name='resources/co2_totals.csv',
         transport_name='resources/transport_data.csv'
-    threads: 1
+    threads: 16
     resources: mem_mb=10000
+    benchmark: "benchmarks/build_energy_totals"
     script: 'scripts/build_energy_totals.py'
 
 rule build_biomass_potentials:
     input:
         jrc_potentials="data/biomass/JRC Biomass Potentials.xlsx"
@@ -153,8 +176,10 @@ rule build_biomass_potentials:
         biomass_potentials='resources/biomass_potentials.csv'
     threads: 1
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_biomass_potentials"
     script: 'scripts/build_biomass_potentials.py'
 
 rule build_ammonia_production:
     input:
         usgs="data/myb1-2017-nitro.xls"
@@ -162,26 +187,32 @@ rule build_ammonia_production:
         ammonia_production="resources/ammonia_production.csv"
     threads: 1
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_ammonia_production"
     script: 'scripts/build_ammonia_production.py'
 
 rule build_industry_sector_ratios:
     input:
-        ammonia_production="resources/ammonia_production.csv"
+        ammonia_production="resources/ammonia_production.csv",
+        idees="data/jrc-idees-2015"
     output:
         industry_sector_ratios="resources/industry_sector_ratios.csv"
     threads: 1
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_industry_sector_ratios"
     script: 'scripts/build_industry_sector_ratios.py'
 
 rule build_industrial_production_per_country:
     input:
-        ammonia_production="resources/ammonia_production.csv"
+        ammonia_production="resources/ammonia_production.csv",
+        jrc="data/jrc-idees-2015",
+        eurostat="data/eurostat-energy_balances-may_2018_edition",
     output:
         industrial_production_per_country="resources/industrial_production_per_country.csv"
-    threads: 1
+    threads: 8
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_industrial_production_per_country"
     script: 'scripts/build_industrial_production_per_country.py'
@@ -192,25 +223,23 @@ rule build_industrial_production_per_country_tomorrow:
         industrial_production_per_country_tomorrow="resources/industrial_production_per_country_tomorrow.csv"
     threads: 1
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_industrial_production_per_country_tomorrow"
     script: 'scripts/build_industrial_production_per_country_tomorrow.py'
 
 rule build_industrial_distribution_key:
     input:
+        regions_onshore=pypsaeur('resources/regions_onshore_elec_s{simpl}_{clusters}.geojson'),
         clustered_pop_layout="resources/pop_layout_elec_s{simpl}_{clusters}.csv",
-        europe_shape=pypsaeur('resources/europe_shape.geojson'),
         hotmaps_industrial_database="data/Industrial_Database.csv",
-        network=pypsaeur('networks/elec_s{simpl}_{clusters}.nc')
     output:
         industrial_distribution_key="resources/industrial_distribution_key_elec_s{simpl}_{clusters}.csv"
     threads: 1
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_industrial_distribution_key/s{simpl}_{clusters}"
     script: 'scripts/build_industrial_distribution_key.py'
 
 rule build_industrial_production_per_node:
     input:
         industrial_distribution_key="resources/industrial_distribution_key_elec_s{simpl}_{clusters}.csv",
@@ -219,6 +248,7 @@ rule build_industrial_production_per_node:
         industrial_production_per_node="resources/industrial_production_elec_s{simpl}_{clusters}.csv"
     threads: 1
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_industrial_production_per_node/s{simpl}_{clusters}"
     script: 'scripts/build_industrial_production_per_node.py'
@@ -231,17 +261,20 @@ rule build_industrial_energy_demand_per_node:
         industrial_energy_demand_per_node="resources/industrial_energy_demand_elec_s{simpl}_{clusters}.csv"
     threads: 1
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_industrial_energy_demand_per_node/s{simpl}_{clusters}"
     script: 'scripts/build_industrial_energy_demand_per_node.py'
 
 rule build_industrial_energy_demand_per_country_today:
     input:
+        jrc="data/jrc-idees-2015",
         ammonia_production="resources/ammonia_production.csv",
         industrial_production_per_country="resources/industrial_production_per_country.csv"
     output:
         industrial_energy_demand_per_country_today="resources/industrial_energy_demand_per_country_today.csv"
-    threads: 1
+    threads: 8
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_industrial_energy_demand_per_country_today"
     script: 'scripts/build_industrial_energy_demand_per_country_today.py'
@@ -253,64 +286,49 @@ rule build_industrial_energy_demand_per_node_today:
         industrial_energy_demand_per_node_today="resources/industrial_energy_demand_today_elec_s{simpl}_{clusters}.csv"
     threads: 1
     resources: mem_mb=1000
+    benchmark: "benchmarks/build_industrial_energy_demand_per_node_today/s{simpl}_{clusters}"
     script: 'scripts/build_industrial_energy_demand_per_node_today.py'
 
-rule build_industrial_energy_demand_per_country:
-    input:
-        industry_sector_ratios="resources/industry_sector_ratios.csv",
-        industrial_production_per_country="resources/industrial_production_per_country_tomorrow.csv"
-    output:
-        industrial_energy_demand_per_country="resources/industrial_energy_demand_per_country.csv"
-    threads: 1
-    resources: mem_mb=1000
-    script: 'scripts/build_industrial_energy_demand_per_country.py'
-
-rule build_industrial_demand:
-    input:
-        clustered_pop_layout="resources/pop_layout_elec_s{simpl}_{clusters}.csv",
-        industrial_demand_per_country="resources/industrial_energy_demand_per_country.csv"
-    output:
-        industrial_demand="resources/industrial_demand_elec_s{simpl}_{clusters}.csv"
-    threads: 1
-    resources: mem_mb=1000
-    script: 'scripts/build_industrial_demand.py'
-
-rule build_retro_cost:
-    input:
-        building_stock="data/retro/data_building_stock.csv",
-        data_tabula="data/retro/tabula-calculator-calcsetbuilding.csv",
-        air_temperature = "resources/temp_air_total_elec_s{simpl}_{clusters}.nc",
-        u_values_PL="data/retro/u_values_poland.csv",
-        tax_w="data/retro/electricity_taxes_eu.csv",
-        construction_index="data/retro/comparative_level_investment.csv",
-        floor_area_missing="data/retro/floor_area_missing.csv",
-        clustered_pop_layout="resources/pop_layout_elec_s{simpl}_{clusters}.csv",
-        cost_germany="data/retro/retro_cost_germany.csv",
-        window_assumptions="data/retro/window_assumptions.csv",
-    output:
-        retro_cost="resources/retro_cost_elec_s{simpl}_{clusters}.csv",
-        floor_area="resources/floor_area_elec_s{simpl}_{clusters}.csv"
-    resources: mem_mb=1000
-    script: "scripts/build_retro_cost.py"
+if config["sector"]["retrofitting"]["retro_endogen"]:
+
+    rule build_retro_cost:
+        input:
+            building_stock="data/retro/data_building_stock.csv",
+            data_tabula="data/retro/tabula-calculator-calcsetbuilding.csv",
+            air_temperature = "resources/temp_air_total_elec_s{simpl}_{clusters}.nc",
+            u_values_PL="data/retro/u_values_poland.csv",
+            tax_w="data/retro/electricity_taxes_eu.csv",
+            construction_index="data/retro/comparative_level_investment.csv",
+            floor_area_missing="data/retro/floor_area_missing.csv",
+            clustered_pop_layout="resources/pop_layout_elec_s{simpl}_{clusters}.csv",
+            cost_germany="data/retro/retro_cost_germany.csv",
+            window_assumptions="data/retro/window_assumptions.csv",
+        output:
+            retro_cost="resources/retro_cost_elec_s{simpl}_{clusters}.csv",
+            floor_area="resources/floor_area_elec_s{simpl}_{clusters}.csv"
+        resources: mem_mb=1000
+        benchmark: "benchmarks/build_retro_cost/s{simpl}_{clusters}"
+        script: "scripts/build_retro_cost.py"
+
+    build_retro_cost_output = rules.build_retro_cost.output
+
+else:
+    build_retro_cost_output = {}
 
 rule prepare_sector_network:
     input:
+        overrides="data/override_component_attrs",
         network=pypsaeur('networks/elec_s{simpl}_{clusters}_ec_lv{lv}_{opts}.nc'),
         energy_totals_name='resources/energy_totals.csv',
         co2_totals_name='resources/co2_totals.csv',
         transport_name='resources/transport_data.csv',
-        traffic_data = "data/emobility/",
+        traffic_data_KFZ = "data/emobility/KFZ__count",
+        traffic_data_Pkw = "data/emobility/Pkw__count",
        	biomass_potentials='resources/biomass_potentials.csv',
-        timezone_mappings='data/timezone_mappings.csv',
        	heat_profile="data/heat_load_profile_BDEW.csv",
-        costs=config['costs_dir'] + "costs_{planning_horizons}.csv",
-        h2_cavern = "data/hydrogen_salt_cavern_potentials.csv",
+        costs=CDIR + "costs_{planning_horizons}.csv",
         profile_offwind_ac=pypsaeur("resources/profile_offwind-ac.nc"),
         profile_offwind_dc=pypsaeur("resources/profile_offwind-dc.nc"),
+        h2_cavern="data/hydrogen_salt_cavern_potentials.csv",
        	busmap_s=pypsaeur("resources/busmap_elec_s{simpl}.csv"),
        	busmap=pypsaeur("resources/busmap_elec_s{simpl}_{clusters}.csv"),
        	clustered_pop_layout="resources/pop_layout_elec_s{simpl}_{clusters}.csv",
@@ -334,97 +352,101 @@ rule prepare_sector_network:
        	solar_thermal_total="resources/solar_thermal_total_elec_s{simpl}_{clusters}.nc",
        	solar_thermal_urban="resources/solar_thermal_urban_elec_s{simpl}_{clusters}.nc",
        	solar_thermal_rural="resources/solar_thermal_rural_elec_s{simpl}_{clusters}.nc",
-        retro_cost_energy = "resources/retro_cost_elec_s{simpl}_{clusters}.csv",
-        floor_area = "resources/floor_area_elec_s{simpl}_{clusters}.csv"
-    output: config['results_dir'] + config['run'] + '/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc'
+        **build_retro_cost_output
+    output: RDIR + '/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc'
     threads: 1
     resources: mem_mb=2000
-    benchmark: config['results_dir'] + config['run'] + "/benchmarks/prepare_network/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}"
+    benchmark: RDIR + "/benchmarks/prepare_network/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}"
     script: "scripts/prepare_sector_network.py"
 
 rule plot_network:
     input:
-        network=config['results_dir'] + config['run'] + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc"
+        overrides="data/override_component_attrs",
+        network=RDIR + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc"
     output:
-        map=config['results_dir'] + config['run'] + "/maps/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}-costs-all_{planning_horizons}.pdf",
-        today=config['results_dir'] + config['run'] + "/maps/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}-today.pdf"
+        map=RDIR + "/maps/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}-costs-all_{planning_horizons}.pdf",
+        today=RDIR + "/maps/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}-today.pdf"
     threads: 2
     resources: mem_mb=10000
+    benchmark: RDIR + "/benchmarks/plot_network/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}"
     script: "scripts/plot_network.py"
 
 rule copy_config:
-    output:
-        config=config['summary_dir'] + '/' + config['run'] + '/configs/config.yaml'
+    output: SDIR + '/configs/config.yaml'
     threads: 1
     resources: mem_mb=1000
-    script:
-        'scripts/copy_config.py'
+    benchmark: SDIR + "/benchmarks/copy_config"
+    script: "scripts/copy_config.py"
 
 rule make_summary:
     input:
-        networks=expand(config['results_dir'] + config['run'] + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
-                        **config['scenario']),
-        costs=config['costs_dir'] + "costs_{}.csv".format(config['scenario']['planning_horizons'][0]),
-        plots=expand(config['results_dir'] + config['run'] + "/maps/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}-costs-all_{planning_horizons}.pdf",
-                     **config['scenario'])
-        #heat_demand_name='data/heating/daily_heat_demand.h5'
+        overrides="data/override_component_attrs",
+        networks=expand(
+            RDIR + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
+            **config['scenario']
+        ),
+        costs=CDIR + "costs_{}.csv".format(config['scenario']['planning_horizons'][0]),
+        plots=expand(
+            RDIR + "/maps/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}-costs-all_{planning_horizons}.pdf",
+            **config['scenario']
+        )
     output:
-        nodal_costs=config['summary_dir'] + '/' + config['run'] + '/csvs/nodal_costs.csv',
-        nodal_capacities=config['summary_dir'] + '/' + config['run'] + '/csvs/nodal_capacities.csv',
-        nodal_cfs=config['summary_dir'] + '/' + config['run'] + '/csvs/nodal_cfs.csv',
-        cfs=config['summary_dir'] + '/' + config['run'] + '/csvs/cfs.csv',
-        costs=config['summary_dir'] + '/' + config['run'] + '/csvs/costs.csv',
-        capacities=config['summary_dir'] + '/' + config['run'] + '/csvs/capacities.csv',
-        curtailment=config['summary_dir'] + '/' + config['run'] + '/csvs/curtailment.csv',
-        energy=config['summary_dir'] + '/' + config['run'] + '/csvs/energy.csv',
-        supply=config['summary_dir'] + '/' + config['run'] + '/csvs/supply.csv',
-        supply_energy=config['summary_dir'] + '/' + config['run'] + '/csvs/supply_energy.csv',
-        prices=config['summary_dir'] + '/' + config['run'] + '/csvs/prices.csv',
-        weighted_prices=config['summary_dir'] + '/' + config['run'] + '/csvs/weighted_prices.csv',
-        market_values=config['summary_dir'] + '/' + config['run'] + '/csvs/market_values.csv',
-        price_statistics=config['summary_dir'] + '/' + config['run'] + '/csvs/price_statistics.csv',
-        metrics=config['summary_dir'] + '/' + config['run'] + '/csvs/metrics.csv'
+        nodal_costs=SDIR + '/csvs/nodal_costs.csv',
+        nodal_capacities=SDIR + '/csvs/nodal_capacities.csv',
+        nodal_cfs=SDIR + '/csvs/nodal_cfs.csv',
+        cfs=SDIR + '/csvs/cfs.csv',
+        costs=SDIR + '/csvs/costs.csv',
+        capacities=SDIR + '/csvs/capacities.csv',
+        curtailment=SDIR + '/csvs/curtailment.csv',
+        energy=SDIR + '/csvs/energy.csv',
+        supply=SDIR + '/csvs/supply.csv',
+        supply_energy=SDIR + '/csvs/supply_energy.csv',
+        prices=SDIR + '/csvs/prices.csv',
+        weighted_prices=SDIR + '/csvs/weighted_prices.csv',
+        market_values=SDIR + '/csvs/market_values.csv',
+        price_statistics=SDIR + '/csvs/price_statistics.csv',
+        metrics=SDIR + '/csvs/metrics.csv'
     threads: 2
     resources: mem_mb=10000
-    script:
-        'scripts/make_summary.py'
+    benchmark: SDIR + "/benchmarks/make_summary"
+    script: "scripts/make_summary.py"
 
 rule plot_summary:
     input:
-        costs=config['summary_dir'] + '/' + config['run'] + '/csvs/costs.csv',
-        energy=config['summary_dir'] + '/' + config['run'] + '/csvs/energy.csv',
-        balances=config['summary_dir'] + '/' + config['run'] + '/csvs/supply_energy.csv'
+        costs=SDIR + '/csvs/costs.csv',
+        energy=SDIR + '/csvs/energy.csv',
+        balances=SDIR + '/csvs/supply_energy.csv'
    	output:
-        costs=config['summary_dir'] + '/' + config['run'] + '/graphs/costs.pdf',
-        energy=config['summary_dir'] + '/' + config['run'] + '/graphs/energy.pdf',
-        balances=config['summary_dir'] + '/' + config['run'] + '/graphs/balances-energy.pdf'
+        costs=SDIR + '/graphs/costs.pdf',
+        energy=SDIR + '/graphs/energy.pdf',
+        balances=SDIR + '/graphs/balances-energy.pdf'
     threads: 2
     resources: mem_mb=10000
-    script:
-        'scripts/plot_summary.py'
+    benchmark: SDIR + "/benchmarks/plot_summary"
+    script: "scripts/plot_summary.py"
 
 if config["foresight"] == "overnight":
 
     rule solve_network:
         input:
-            network=config['results_dir'] + config['run'] + "/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
-            costs=config['costs_dir'] + "costs_{planning_horizons}.csv",
-            config=config['summary_dir'] + '/' + config['run'] + '/configs/config.yaml'
-        output: config['results_dir'] + config['run'] + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc"
+            overrides="data/override_component_attrs",
+            network=RDIR + "/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
+            costs=CDIR + "costs_{planning_horizons}.csv",
+            config=SDIR + '/configs/config.yaml'
+        output: RDIR + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc"
        	shadow: "shallow"
       	log:
-            solver=config['results_dir'] + config['run'] + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_solver.log",
-            python=config['results_dir'] + config['run'] + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_python.log",
-            memory=config['results_dir'] + config['run'] + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_memory.log"
-        benchmark: config['results_dir'] + config['run'] + "/benchmarks/solve_network/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}"
+            solver=RDIR + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_solver.log",
+            python=RDIR + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_python.log",
+            memory=RDIR + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_memory.log"
        	threads: 4
        	resources: mem_mb=config['solving']['mem']
-        # group: "solve" # with group, threads is ignored https://bitbucket.org/snakemake/snakemake/issues/971/group-job-description-does-not-contain
+        benchmark: RDIR + "/benchmarks/solve_network/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}"
        	script: "scripts/solve_network.py"
@@ -432,53 +454,67 @@ if config["foresight"] == "myopic":
 
     rule add_existing_baseyear:
         input:
-            network=config['results_dir'] + config['run'] + '/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc',
+            overrides="data/override_component_attrs",
+            network=RDIR + '/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc',
             powerplants=pypsaeur('resources/powerplants.csv'),
             busmap_s=pypsaeur("resources/busmap_elec_s{simpl}.csv"),
            	busmap=pypsaeur("resources/busmap_elec_s{simpl}_{clusters}.csv"),
            	clustered_pop_layout="resources/pop_layout_elec_s{simpl}_{clusters}.csv",
-            costs=config['costs_dir'] + "costs_{}.csv".format(config['scenario']['planning_horizons'][0]),
+            costs=CDIR + "costs_{}.csv".format(config['scenario']['planning_horizons'][0]),
            	cop_soil_total="resources/cop_soil_total_elec_s{simpl}_{clusters}.nc",
-            cop_air_total="resources/cop_air_total_elec_s{simpl}_{clusters}.nc"
-        output: config['results_dir'] + config['run'] + '/prenetworks-brownfield/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc'
+            cop_air_total="resources/cop_air_total_elec_s{simpl}_{clusters}.nc",
+            existing_heating='data/existing_infrastructure/existing_heating_raw.csv',
+            country_codes='data/Country_codes.csv',
+            existing_solar='data/existing_infrastructure/solar_capacity_IRENA.csv',
+            existing_onwind='data/existing_infrastructure/onwind_capacity_IRENA.csv',
+            existing_offwind='data/existing_infrastructure/offwind_capacity_IRENA.csv',
+        output: RDIR + '/prenetworks-brownfield/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc'
        	wildcard_constraints:
            	planning_horizons=config['scenario']['planning_horizons'][0] #only applies to baseyear
        	threads: 1
        	resources: mem_mb=2000
+        benchmark: RDIR + '/benchmarks/add_existing_baseyear/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}'
        	script: "scripts/add_existing_baseyear.py"
 
-    def process_input(wildcards):
-        i = config["scenario"]["planning_horizons"].index(int(wildcards.planning_horizons))
-        return config['results_dir'] + config['run'] + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_" + str(config["scenario"]["planning_horizons"][i-1]) + ".nc"
+    def solved_previous_horizon(wildcards):
+        planning_horizons = config["scenario"]["planning_horizons"]
+        i = planning_horizons.index(int(wildcards.planning_horizons))
+        planning_horizon_p = str(planning_horizons[i-1])
+        return RDIR + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_" + planning_horizon_p + ".nc"
 
     rule add_brownfield:
         input:
-            network=config['results_dir'] + config['run'] + '/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc',
-            network_p=process_input, #solved network at previous time step
-            costs=config['costs_dir'] + "costs_{planning_horizons}.csv",
+            overrides="data/override_component_attrs",
+            network=RDIR + '/prenetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc',
+            network_p=solved_previous_horizon, #solved network at previous time step
+            costs=CDIR + "costs_{planning_horizons}.csv",
            	cop_soil_total="resources/cop_soil_total_elec_s{simpl}_{clusters}.nc",
           	cop_air_total="resources/cop_air_total_elec_s{simpl}_{clusters}.nc"
-        output: config['results_dir'] + config['run'] + "/prenetworks-brownfield/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc"
+        output: RDIR + "/prenetworks-brownfield/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc"
        	threads: 4
        	resources: mem_mb=10000
+        benchmark: RDIR + '/benchmarks/add_brownfield/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}'
        	script: "scripts/add_brownfield.py"
 
    	ruleorder: add_existing_baseyear > add_brownfield
 
    	rule solve_network_myopic:
        	input:
-            network=config['results_dir'] + config['run'] + "/prenetworks-brownfield/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
-            costs=config['costs_dir'] + "costs_{planning_horizons}.csv",
-            config=config['summary_dir'] + '/' + config['run'] + '/configs/config.yaml'
-        output: config['results_dir'] + config['run'] + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc"
+            overrides="data/override_component_attrs",
+            network=RDIR + "/prenetworks-brownfield/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc",
+            costs=CDIR + "costs_{planning_horizons}.csv",
+            config=SDIR + '/configs/config.yaml'
+        output: RDIR + "/postnetworks/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}.nc"
        	shadow: "shallow"
       	log:
-            solver=config['results_dir'] + config['run'] + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_solver.log",
-            python=config['results_dir'] + config['run'] + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_python.log",
-            memory=config['results_dir'] + config['run'] + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_memory.log"
-        benchmark: config['results_dir'] + config['run'] + "/benchmarks/solve_network/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}"
+            solver=RDIR + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_solver.log",
+            python=RDIR + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_python.log",
+            memory=RDIR + "/logs/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}_memory.log"
        	threads: 4
        	resources: mem_mb=config['solving']['mem']
+        benchmark: RDIR + "/benchmarks/solve_network/elec_s{simpl}_{clusters}_lv{lv}_{opts}_{sector_opts}_{planning_horizons}"
        	script: "scripts/solve_network.py"

config.yaml

@ -2,20 +2,26 @@ version: 0.5.0
logging_level: INFO logging_level: INFO
results_dir: 'results/' results_dir: results/
summary_dir: results summary_dir: results
costs_dir: '../technology-data/outputs/' costs_dir: ../technology-data/outputs/
run: 'your-run-name' # use this to keep track of runs with different settings run: your-run-name # use this to keep track of runs with different settings
foresight: 'overnight' # options are overnight, myopic, perfect (perfect is not yet implemented) foresight: overnight # options are overnight, myopic, perfect (perfect is not yet implemented)
# if you use myopic or perfect foresight, set the investment years in "planning_horizons" below # if you use myopic or perfect foresight, set the investment years in "planning_horizons" below
scenario: scenario:
sectors: [E] # ignore this legacy setting simpl: # only relevant for PyPSA-Eur
simpl: [''] # only relevant for PyPSA-Eur - ''
lv: [1.0,1.5] # allowed transmission line volume expansion, can be any float >= 1.0 (today) or "opt" lv: # allowed transmission line volume expansion, can be any float >= 1.0 (today) or "opt"
clusters: [45,50] # number of nodes in Europe, any integer between 37 (1 node per country-zone) and several hundred - 1.0
opts: [''] # only relevant for PyPSA-Eur - 1.5
sector_opts: [Co2L0-3H-T-H-B-I-solar+p3-dist1] # this is where the main scenario settings are clusters: # number of nodes in Europe, any integer between 37 (1 node per country-zone) and several hundred
- 45
- 50
opts: # only relevant for PyPSA-Eur
- ''
sector_opts: # this is where the main scenario settings are
- Co2L0-3H-T-H-B-I-solar+p3-dist1
# to really understand the options here, look in scripts/prepare_sector_network.py # to really understand the options here, look in scripts/prepare_sector_network.py
# Co2Lx specifies the CO2 target in x% of the 1990 values; default will give default (5%); # Co2Lx specifies the CO2 target in x% of the 1990 values; default will give default (5%);
# Co2L0p25 will give 25% CO2 emissions; Co2Lm0p05 will give 5% negative emissions # Co2L0p25 will give 25% CO2 emissions; Co2Lm0p05 will give 5% negative emissions
@ -30,7 +36,8 @@ scenario:
# planning_horizons), be:beta decay; ex:exponential decay # planning_horizons), be:beta decay; ex:exponential decay
# cb40ex0 distributes a carbon budget of 40 GtCO2 following an exponential # cb40ex0 distributes a carbon budget of 40 GtCO2 following an exponential
# decay with initial growth rate 0 # decay with initial growth rate 0
planning_horizons : [2030] # investment years for myopic and perfect; or costs year for overnight planning_horizons: # investment years for myopic and perfect; or costs year for overnight
- 2030
# for example, set to [2020, 2030, 2040, 2050] for myopic foresight # for example, set to [2020, 2030, 2040, 2050] for myopic foresight
# CO2 budget as a fraction of 1990 emissions # CO2 budget as a fraction of 1990 emissions
@ -50,11 +57,10 @@ snapshots:
# arguments to pd.date_range # arguments to pd.date_range
start: "2013-01-01" start: "2013-01-01"
end: "2014-01-01" end: "2014-01-01"
closed: 'left' # end is not inclusive closed: left # end is not inclusive
atlite: atlite:
cutout_dir: '../pypsa-eur/cutouts' cutout: ../pypsa-eur/cutouts/europe-2013-era5.nc
cutout_name: "europe-2013-era5"
# this information is NOT used but needed as an argument for # this information is NOT used but needed as an argument for
# pypsa-eur/scripts/add_electricity.py/load_costs in make_summary.py # pypsa-eur/scripts/add_electricity.py/load_costs in make_summary.py
@ -67,102 +73,174 @@ electricity:
# some technologies are removed because they are implemented differently # some technologies are removed because they are implemented differently
# or have different year-dependent costs in PyPSA-Eur-Sec # or have different year-dependent costs in PyPSA-Eur-Sec
pypsa_eur: pypsa_eur:
"Bus": ["AC"] Bus:
"Link": ["DC"] - AC
"Generator": ["onwind", "offwind-ac", "offwind-dc", "solar", "ror"] Link:
"StorageUnit": ["PHS","hydro"] - DC
"Store": [] Generator:
- onwind
- offwind-ac
- offwind-dc
- solar
- ror
StorageUnit:
- PHS
- hydro
Store: []
energy:
energy_totals_year: 2011
base_emissions_year: 1990
eurostat_report_year: 2016
emissions: CO2 # "CO2" or "All greenhouse gases - (CO2 equivalent)"
biomass: biomass:
year: 2030 year: 2030
scenario: "Med" scenario: Med
classes: classes:
solid biomass: ['Primary agricultural residues', 'Forestry energy residue', 'Secondary forestry residues', 'Secondary Forestry residues sawdust', 'Forestry residues from landscape care biomass', 'Municipal waste'] solid biomass:
not included: ['Bioethanol sugar beet biomass', 'Rapeseeds for biodiesel', 'sunflower and soya for Biodiesel', 'Starchy crops biomass', 'Grassy crops biomass', 'Willow biomass', 'Poplar biomass potential', 'Roundwood fuelwood', 'Roundwood Chips & Pellets'] - Primary agricultural residues
biogas: ['Manure biomass potential', 'Sludge biomass'] - Forestry energy residue
- Secondary forestry residues
- Secondary Forestry residues sawdust
- Forestry residues from landscape care biomass
- Municipal waste
not included:
- Bioethanol sugar beet biomass
- Rapeseeds for biodiesel
- sunflower and soya for Biodiesel
- Starchy crops biomass
- Grassy crops biomass
- Willow biomass
- Poplar biomass potential
- Roundwood fuelwood
- Roundwood Chips & Pellets
biogas:
- Manure biomass potential
- Sludge biomass
solar_thermal:
clearsky_model: simple # should be "simple" or "enhanced"?
orientation:
slope: 45.
azimuth: 180.
# only relevant for foresight = myopic or perfect # only relevant for foresight = myopic or perfect
existing_capacities: existing_capacities:
grouping_years: [1980, 1985, 1990, 1995, 2000, 2005, 2010, 2015, 2019] grouping_years: [1980, 1985, 1990, 1995, 2000, 2005, 2010, 2015, 2019]
threshold_capacity: 10 threshold_capacity: 10
conventional_carriers: ['lignite', 'coal', 'oil', 'uranium'] conventional_carriers:
- lignite
- coal
- oil
- uranium
sector: sector:
'central' : True central: true
'central_fraction' : 0.6 central_fraction: 0.6
'bev_dsm_restriction_value' : 0.75 #Set to 0 for no restriction on BEV DSM bev_dsm_restriction_value: 0.75 #Set to 0 for no restriction on BEV DSM
'bev_dsm_restriction_time' : 7 #Time at which SOC of BEV has to be dsm_restriction_value bev_dsm_restriction_time: 7 #Time at which SOC of BEV has to be dsm_restriction_value
'transport_heating_deadband_upper' : 20. transport_heating_deadband_upper: 20.
'transport_heating_deadband_lower' : 15. transport_heating_deadband_lower: 15.
'ICE_lower_degree_factor' : 0.375 #in per cent increase in fuel consumption per degree above deadband ICE_lower_degree_factor: 0.375 #in per cent increase in fuel consumption per degree above deadband
'ICE_upper_degree_factor' : 1.6 ICE_upper_degree_factor: 1.6
'EV_lower_degree_factor' : 0.98 EV_lower_degree_factor: 0.98
'EV_upper_degree_factor' : 0.63 EV_upper_degree_factor: 0.63
'district_heating_loss' : 0.15 district_heating_loss: 0.15
'bev_dsm' : True #turns on EV battery bev_dsm: true #turns on EV battery
'bev_availability' : 0.5 #How many cars do smart charging bev_availability: 0.5 #How many cars do smart charging
'v2g' : True #allows feed-in to grid from EV battery bev_energy: 0.05 #average battery size in MWh
bev_charge_efficiency: 0.9 #BEV (dis-)charging efficiency
bev_plug_to_wheel_efficiency: 0.2 #kWh/km from EPA https://www.fueleconomy.gov/feg/ for Tesla Model S
bev_charge_rate: 0.011 #3-phase charger with 11 kW
bev_avail_max: 0.95
bev_avail_mean: 0.8
v2g: true #allows feed-in to grid from EV battery
#what is not EV or FCEV is oil-fuelled ICE #what is not EV or FCEV is oil-fuelled ICE
'land_transport_fuel_cell_share': # 1 means all FCEVs land_transport_fuel_cell_share: # 1 means all FCEVs
2020: 0 2020: 0
2030: 0.05 2030: 0.05
2040: 0.1 2040: 0.1
2050: 0.15 2050: 0.15
'land_transport_electric_share': # 1 means all EVs land_transport_electric_share: # 1 means all EVs
2020: 0 2020: 0
2030: 0.25 2030: 0.25
2040: 0.6 2040: 0.6
2050: 0.85 2050: 0.85
'transport_fuel_cell_efficiency': 0.5 transport_fuel_cell_efficiency: 0.5
'transport_internal_combustion_efficiency': 0.3 transport_internal_combustion_efficiency: 0.3
'shipping_average_efficiency' : 0.4 #For conversion of fuel oil to propulsion in 2011 shipping_average_efficiency: 0.4 #For conversion of fuel oil to propulsion in 2011
'time_dep_hp_cop' : True #time dependent heat pump coefficient of performance time_dep_hp_cop: true #time dependent heat pump coefficient of performance
'heat_pump_sink_T' : 55. # Celsius, based on DTU / large area radiators; used in build_cop_profiles.py heat_pump_sink_T: 55. # Celsius, based on DTU / large area radiators; used in build_cop_profiles.py
# conservatively high to cover hot water and space heating in poorly-insulated buildings # conservatively high to cover hot water and space heating in poorly-insulated buildings
'reduce_space_heat_exogenously': True # reduces space heat demand by a given factor (applied before losses in DH) reduce_space_heat_exogenously: true # reduces space heat demand by a given factor (applied before losses in DH)
# this can represent e.g. building renovation, building demolition, or if # this can represent e.g. building renovation, building demolition, or if
# the factor is negative: increasing floor area, increased thermal comfort, population growth # the factor is negative: increasing floor area, increased thermal comfort, population growth
'reduce_space_heat_exogenously_factor': # per unit reduction in space heat demand reduce_space_heat_exogenously_factor: # per unit reduction in space heat demand
# the default factors are determined by the LTS scenario from http://tool.european-calculator.eu/app/buildings/building-types-area/?levers=1ddd4444421213bdbbbddd44444ffffff11f411111221111211l212221 # the default factors are determined by the LTS scenario from http://tool.european-calculator.eu/app/buildings/building-types-area/?levers=1ddd4444421213bdbbbddd44444ffffff11f411111221111211l212221
2020: 0.10 # this results in a space heat demand reduction of 10% 2020: 0.10 # this results in a space heat demand reduction of 10%
2025: 0.09 # first heat demand increases compared to 2020 because of larger floor area per capita 2025: 0.09 # first heat demand increases compared to 2020 because of larger floor area per capita
2030: 0.09 2030: 0.09
2035: 0.11 2035: 0.11
2040: 0.16 2040: 0.16
2045: 0.21 2045: 0.21
2050: 0.29 2050: 0.29
'retrofitting' : # co-optimises building renovation to reduce space heat demand retrofitting : # co-optimises building renovation to reduce space heat demand
'retro_endogen': False # co-optimise space heat savings retro_endogen: false # co-optimise space heat savings
'cost_factor' : 1.0 # weight costs for building renovation cost_factor: 1.0 # weight costs for building renovation
'interest_rate': 0.04 # for investment in building components interest_rate: 0.04 # for investment in building components
'annualise_cost': True # annualise the investment costs annualise_cost: true # annualise the investment costs
'tax_weighting': False # weight costs depending on taxes in countries tax_weighting: false # weight costs depending on taxes in countries
'construction_index': True # weight costs depending on labour/material costs per country construction_index: true # weight costs depending on labour/material costs per country
'tes' : True tes: true
'tes_tau' : 3. tes_tau: # 180 day time constant for centralised, 3 day for decentralised
'boilers' : True decentral: 3
'oil_boilers': False central: 180
'chp' : True boilers: true
'micro_chp' : False oil_boilers: false
'solar_thermal' : True chp: true
'solar_cf_correction': 0.788457 # = >>> 1/1.2683 micro_chp: false
'marginal_cost_storage' : 0. #1e-4 solar_thermal: true
'methanation' : True solar_cf_correction: 0.788457 # = >>> 1/1.2683
'helmeth' : True marginal_cost_storage: 0. #1e-4
'dac' : True methanation: true
'co2_vent' : True helmeth: true
'SMR' : True dac: true
'co2_sequestration_potential' : 200 #MtCO2/a sequestration potential for Europe co2_vent: true
'co2_sequestration_cost' : 20 #EUR/tCO2 for transport and sequestration of CO2 SMR: true
'cc_fraction' : 0.9 # default fraction of CO2 captured with post-combustion capture co2_sequestration_potential: 200 #MtCO2/a sequestration potential for Europe
'hydrogen_underground_storage' : True co2_sequestration_cost: 20 #EUR/tCO2 for transport and sequestration of CO2
'use_fischer_tropsch_waste_heat' : True cc_fraction: 0.9 # default fraction of CO2 captured with post-combustion capture
'use_fuel_cell_waste_heat' : True hydrogen_underground_storage: true
'electricity_distribution_grid' : False use_fischer_tropsch_waste_heat: true
'electricity_distribution_grid_cost_factor' : 1.0 #multiplies cost in data/costs.csv use_fuel_cell_waste_heat: true
'electricity_grid_connection' : True # only applies to onshore wind and utility PV electricity_distribution_grid: false
'gas_distribution_grid' : True electricity_distribution_grid_cost_factor: 1.0 #multiplies cost in data/costs.csv
'gas_distribution_grid_cost_factor' : 1.0 #multiplies cost in data/costs.csv electricity_grid_connection: true # only applies to onshore wind and utility PV
gas_distribution_grid: true
gas_distribution_grid_cost_factor: 1.0 #multiplies cost in data/costs.csv
conventional_generation: # generator : carrier
OCGT: gas
industry:
St_primary_fraction: 0.3 # fraction of steel produced via primary route (DRI + EAF) versus secondary route (EAF); today fraction is 0.6
H2_DRI: 1.7 #H2 consumption in Direct Reduced Iron (DRI), MWh_H2,LHV/ton_Steel from 51kgH2/tSt in Vogl et al (2018) doi:10.1016/j.jclepro.2018.08.279
elec_DRI: 0.322 #electricity consumption in Direct Reduced Iron (DRI) shaft, MWh/tSt HYBRIT brochure https://ssabwebsitecdn.azureedge.net/-/media/hybrit/files/hybrit_brochure.pdf
Al_primary_fraction: 0.2 # fraction of aluminium produced via the primary route versus scrap; today fraction is 0.4
MWh_CH4_per_tNH3_SMR: 10.8 # 2012's demand from https://ec.europa.eu/docsroom/documents/4165/attachments/1/translations/en/renditions/pdf
MWh_elec_per_tNH3_SMR: 0.7 # same source, assuming 94-6% split methane-elec of total energy demand 11.5 MWh/tNH3
MWh_H2_per_tNH3_electrolysis: 6.5 # from https://doi.org/10.1016/j.joule.2018.04.017, around 0.197 tH2/tHN3 (>3/17 since some H2 lost and used for energy)
MWh_elec_per_tNH3_electrolysis: 1.17 # from https://doi.org/10.1016/j.joule.2018.04.017 Table 13 (air separation and HB)
NH3_process_emissions: 24.5 # in MtCO2/a from SMR for H2 production for NH3 from UNFCCC for 2015 for EU28
petrochemical_process_emissions: 25.5 # in MtCO2/a for petrochemical and other from UNFCCC for 2015 for EU28
HVC_primary_fraction: 1.0 #fraction of current non-ammonia basic chemicals produced via primary route
hotmaps_locate_missing: false
reference_year: 2015
costs: costs:
lifetime: 25 #default lifetime lifetime: 25 #default lifetime
@ -173,8 +251,8 @@ costs:
# Marginal and capital costs can be overwritten # Marginal and capital costs can be overwritten
# capital_cost: # capital_cost:
# Wind: Bla # onwind: 500
marginal_cost: # marginal_cost:
solar: 0.01 solar: 0.01
onwind: 0.015 onwind: 0.015
offwind: 0.015 offwind: 0.015
@@ -196,17 +274,17 @@ solving:

    clip_p_max_pu: 1.e-2
    load_shedding: false
    noisy_costs: true

    skip_iterations: true
    track_iterations: false
    min_iterations: 4
    max_iterations: 6

  solver:
    name: gurobi
    threads: 4
    method: 2 # barrier
    crossover: 0
    BarConvTol: 1.e-6
    Seed: 123
    AggFill: 0
    PreDual: 0
@@ -221,182 +299,175 @@ solving:

    #feasopt_tolerance: 1.e-6

  mem: 30000 #memory in MB; 20 GB enough for 50+B+I+H2; 100 GB for 181+B+I+H2
plotting:
  map:
    boundaries: [-11, 30, 34, 71]
    color_geomap:
      ocean: white
      land: whitesmoke

  costs_max: 1000
  costs_threshold: 1

  energy_max: 20000
  energy_min: -20000
  energy_threshold: 50

  vre_techs:
  - onwind
  - offwind-ac
  - offwind-dc
  - solar
  - ror
  renewable_storage_techs:
  - PHS
  - hydro
  conv_techs:
  - OCGT
  - CCGT
  - Nuclear
  - Coal
  storage_techs:
  - hydro+PHS
  - battery
  - H2
  load_carriers:
  - AC load
  AC_carriers:
  - AC line
  - AC transformer
  link_carriers:
  - DC line
  - Converter AC-DC
  heat_links:
  - heat pump
  - resistive heater
  - CHP heat
  - CHP electric
  - gas boiler
  - central heat pump
  - central resistive heater
  - central CHP heat
  - central CHP electric
  - central gas boiler
  heat_generators:
  - gas boiler
  - central gas boiler
  - solar thermal collector
  - central solar thermal collector
  tech_colors:
    onwind: "#235ebc"
    onshore wind: "#235ebc"
    offwind: "#6895dd"
    offshore wind: "#6895dd"
    offwind-ac: "#6895dd"
    offshore wind (AC): "#6895dd"
    offwind-dc: "#74c6f2"
    offshore wind (DC): "#74c6f2"
    wave: '#004444'
    hydro: '#3B5323'
    hydro reservoir: '#3B5323'
    ror: '#78AB46'
    run of river: '#78AB46'
    hydroelectricity: '#006400'
    solar: "#f9d002"
    solar PV: "#f9d002"
    solar thermal: coral
    solar rooftop: '#ffef60'
    OCGT: wheat
    OCGT marginal: sandybrown
    OCGT-heat: '#ee8340'
    gas boiler: '#ee8340'
    gas boilers: '#ee8340'
    gas boiler marginal: '#ee8340'
    gas-to-power/heat: '#ee8340'
    gas: brown
    natural gas: brown
    SMR: '#4F4F2F'
    oil: '#B5A642'
    oil boiler: '#B5A677'
    lines: k
    transmission lines: k
    H2: m
    hydrogen storage: m
    battery: slategray
    battery storage: slategray
    home battery: '#614700'
    home battery storage: '#614700'
    Nuclear: r
    Nuclear marginal: r
    nuclear: r
    uranium: r
    Coal: k
    coal: k
    Coal marginal: k
    Lignite: grey
    lignite: grey
    Lignite marginal: grey
    CCGT: '#ee8340'
    CCGT marginal: '#ee8340'
    heat pumps: '#76EE00'
    heat pump: '#76EE00'
    air heat pump: '#76EE00'
    ground heat pump: '#40AA00'
    power-to-heat: '#40AA00'
    resistive heater: pink
    Sabatier: '#FF1493'
    methanation: '#FF1493'
    power-to-gas: '#FF1493'
    power-to-liquid: '#FFAAE9'
    helmeth: '#7D0552'
    DAC: '#E74C3C'
    co2 stored: '#123456'
    CO2 sequestration: '#123456'
    CC: k
    co2: '#123456'
    co2 vent: '#654321'
    solid biomass for industry co2 from atmosphere: '#654321'
    solid biomass for industry co2 to stored: '#654321'
    gas for industry co2 to atmosphere: '#654321'
    gas for industry co2 to stored: '#654321'
    Fischer-Tropsch: '#44DD33'
    kerosene for aviation: '#44BB11'
    naphtha for industry: '#44FF55'
    land transport oil: '#44DD33'
    water tanks: '#BBBBBB'
    hot water storage: '#BBBBBB'
    hot water charging: '#BBBBBB'
    hot water discharging: '#999999'
    CHP: r
    CHP heat: r
    CHP electric: r
    PHS: g
    Ambient: k
    Electric load: b
    Heat load: r
    heat: darkred
    rural heat: '#880000'
    central heat: '#b22222'
    decentral heat: '#800000'
    low-temperature heat for industry: '#991111'
    process heat: '#FF3333'
    heat demand: darkred
    electric demand: k
    Li ion: grey
    district heating: '#CC4E5C'
    retrofitting: purple
    building retrofitting: purple
    BEV charger: grey
    V2G: grey
    land transport EV: grey
    electricity: k
    gas for industry: '#333333'
    solid biomass for industry: '#555555'
    industry electricity: '#222222'
    industry new electricity: '#222222'
    process emissions to stored: '#444444'
    process emissions to atmosphere: '#888888'
    process emissions: '#222222'
    oil emissions: '#666666'
    land transport oil emissions: '#666666'
    land transport fuel cell: '#AAAAAA'
    biogas: '#800000'
    solid biomass: '#DAA520'
    today: '#D2691E'
    shipping: '#6495ED'
    electricity distribution grid: '#333333'
  nice_names:
    # OCGT: "Gas"
    # OCGT marginal: "Gas (marginal)"
    offwind: "offshore wind"
    onwind: "onshore wind"
    battery: "Battery storage"
    lines: "Transmission lines"
    AC line: "AC lines"
    AC-AC: "DC lines"
    ror: "Run of river"
  nice_names_n:
    offwind: "offshore\nwind"
    onwind: "onshore\nwind"
    # OCGT: "Gas"
    H2: "Hydrogen\nstorage"
    # OCGT marginal: "Gas (marginal)"
    lines: "transmission\nlines"
    ror: "run of river"

View File

@@ -0,0 +1,3 @@
attribute,type,unit,default,description,status
location,string,n/a,n/a,Reference to original electricity bus,Input (optional)
unit,string,n/a,MWh,Unit of the bus (descriptive only),Input (optional)

View File

@@ -0,0 +1,3 @@
attribute,type,unit,default,description,status
build_year,integer,year,n/a,build year,Input (optional)
lifetime,float,years,n/a,lifetime,Input (optional)

View File

@@ -0,0 +1,13 @@
attribute,type,unit,default,description,status
bus2,string,n/a,n/a,2nd bus,Input (optional)
bus3,string,n/a,n/a,3rd bus,Input (optional)
bus4,string,n/a,n/a,4th bus,Input (optional)
efficiency2,static or series,per unit,1.,2nd bus efficiency,Input (optional)
efficiency3,static or series,per unit,1.,3rd bus efficiency,Input (optional)
efficiency4,static or series,per unit,1.,4th bus efficiency,Input (optional)
p2,series,MW,0.,2nd bus output,Output
p3,series,MW,0.,3rd bus output,Output
p4,series,MW,0.,4th bus output,Output
build_year,integer,year,n/a,build year,Input (optional)
lifetime,float,years,n/a,lifetime,Input (optional)
carrier,string,n/a,n/a,carrier,Input (optional)

View File

@@ -0,0 +1,2 @@
attribute,type,unit,default,description,status
carrier,string,n/a,n/a,carrier,Input (optional)

View File

@@ -0,0 +1,4 @@
attribute,type,unit,default,description,status
build_year,integer,year,n/a,build year,Input (optional)
lifetime,float,years,n/a,lifetime,Input (optional)
carrier,string,n/a,n/a,carrier,Input (optional)
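
These override tables are what ``helper.override_component_attrs`` merges into PyPSA's defaults; a minimal sketch of such a loader (assumed implementation, the actual helper may discover the CSV files differently):

    import pandas as pd
    import pypsa

    def override_component_attrs(directory):
        # start from a copy of PyPSA's default component attributes
        attrs = pypsa.descriptors.Dict(
            {k: v.copy() for k, v in pypsa.components.component_attrs.items()}
        )
        # merge in one CSV per component, e.g. buses.csv, links.csv, ...
        for component, fn in [("Bus", "buses.csv"), ("Link", "links.csv")]:
            overrides = pd.read_csv(f"{directory}/{fn}", index_col=0)
            for attr, row in overrides.iterrows():
                attrs[component].loc[attr] = row
        return attrs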

View File

@@ -5,7 +5,59 @@ Release Notes
Future release
==============

.. note::

   This unreleased version currently requires the master branches of PyPSA, PyPSA-Eur, and the technology-data repository.
* Extended use of ``multiprocessing`` for much better performance
  (from up to 20 minutes to less than one minute).

* Compatibility with ``atlite>=0.2``. Older versions of ``atlite`` will no longer work.

* Handle most input files (or base directories) via ``snakemake.input``.

* Use of ``mock_snakemake`` from PyPSA-Eur.

* Update the ``solve_network`` rule to match the implementation in PyPSA-Eur by using ``n.ilopf()`` and removing outdated code using ``pyomo``.
  This allows skipping the iterative impedance updates with the new setting ``solving: options: skip_iterations: true``.
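
  A sketch of the resulting solver dispatch (illustrative; follows the PyPSA-Eur convention, variable names are not from this PR)::

      solve_opts = snakemake.config['solving']['options']
      solver_options = snakemake.config['solving']['solver'].copy()
      solver_name = solver_options.pop('name')

      if solve_opts.get('skip_iterations'):
          # single solve with fixed line impedances
          n.lopf(solver_name=solver_name, solver_options=solver_options)
      else:
          # iterative solves that update impedances of expanded lines
          n.ilopf(solver_name=solver_name, solver_options=solver_options)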
* The component attributes that are to be overridden are now stored in the folder
  ``data/override_component_attrs`` analogous to ``pypsa/component_attrs``.
  This reduces verbosity and also allows circumventing the ``n.madd()`` hack
  for individual components with non-default attributes.
  This data is also tracked in the Snakefile.
  A function ``helper.override_component_attrs`` was added that loads this data
  and can pass the overridden component attributes into ``pypsa.Network()``::

      >>> from helper import override_component_attrs
      >>> overrides = override_component_attrs(snakemake.input.overrides)
      >>> n = pypsa.Network("mynetwork.nc", override_component_attrs=overrides)
* Add various parameters to ``config.default.yaml`` which were previously hardcoded inside the scripts
  (e.g. energy reference years, BEV settings, solar thermal collector models, geomap colours).

* Removed stale industry demand rules ``build_industrial_energy_demand_per_country``
  and ``build_industrial_demand``. These are superseded by more regionally resolved rules.

* Use the simpler and shorter ``gdf.sjoin()`` function to allocate industrial sites
  from the Hotmaps database to onshore regions. This change also fixes a bug:
  the previous version allocated sites to the closest bus,
  but at country borders (where Voronoi cells are distorted by the borders),
  this had resulted in e.g. a Spanish site close to the French border
  being wrongly allocated to the French bus if the bus center was closer.
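
  A toy illustration of the containment join (hypothetical regions and site;
  newer geopandas versions name the ``op`` argument ``predicate``)::

      import geopandas as gpd
      from shapely.geometry import Point, box

      regions = gpd.GeoDataFrame(
          {"country": ["ES", "FR"]},
          geometry=[box(0, 0, 1, 1), box(1, 0, 2, 1)]
      )
      sites = gpd.GeoDataFrame({"site": ["plant"]}, geometry=[Point(0.9, 0.5)])

      # the site is matched to the polygon that contains it, regardless of
      # which region's bus centre happens to be closer
      joined = gpd.sjoin(sites, regions, how="inner", op="within")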
* Bugfix: Corrected the calculation of the "gas for industry" carbon capture efficiency.

* The retrofitting rule is now only triggered if retrofitting is endogenously optimised.

* Show progress in build rules with ``tqdm`` progress bars.

* Reduced verbosity of the ``Snakefile`` through directory prefixes.

* Improve legibility of ``config.default.yaml`` and remove unused options.

* Add an optional function that uses ``geopy`` to locate entries of the Hotmaps database of industrial sites
  with missing location based on city and country, which reduces missing entries by half. It can be
  activated by setting ``industry: hotmaps_locate_missing: true``, takes a few minutes longer,
  and should only be used if spatial resolution is coarser than city level.
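
  For instance (sketch; requires network access and the optional ``geopy`` dependency)::

      >>> from geopy.geocoders import Nominatim
      >>> geolocator = Nominatim(user_agent="pypsa-eur-sec")
      >>> location = geolocator.geocode("Leipzig, Germany")
      >>> (location.longitude, location.latitude)  # approximately (12.37, 51.34)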
* Use the country-specific time zone mappings from ``pytz`` rather than a manual mapping.
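
  For example::

      >>> import pytz
      >>> pytz.country_timezones['GB']
      ['Europe/London']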
* A function ``add_carrier_buses()`` was added to the ``prepare_network`` rule to reduce code duplication.

* In the ``prepare_network`` rule, the cost and potential adjustment was moved into a
  separate function ``maybe_adjust_costs_and_potentials()``.

* Use ``matplotlibrc`` to set the default plotting style and backend.

* Added benchmark files for each rule.

* Implements changes to ``n.snapshot_weightings`` in the upcoming PyPSA version (cf. `PyPSA/#227 <https://github.com/PyPSA/PyPSA/pull/227>`_).

* New dependencies: ``tqdm``, ``atlite>=0.2.4``, ``pytz`` and ``geopy`` (optional).
  These are included in the environment specifications of PyPSA-Eur.

* Consistent use of the ``__main__`` block and further unspecific code cleaning.
PyPSA-Eur-Sec 0.5.0 (21st May 2021)

matplotlibrc (new file)
View File

@@ -0,0 +1,4 @@
backend: Agg
font.family: sans-serif
font.sans-serif: Ubuntu, DejaVu Sans
image.cmap: viridis
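
Matplotlib picks up a ``matplotlibrc`` file in the current working directory before falling back to user and installation defaults; a quick check of which file is active (the path shown is hypothetical):

    >>> import matplotlib
    >>> matplotlib.matplotlib_fname()
    '/path/to/pypsa-eur-sec/matplotlibrc'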

View File

@@ -2,43 +2,16 @@
import logging
logger = logging.getLogger(__name__)

import pandas as pd
idx = pd.IndexSlice

import pypsa
import yaml

from add_existing_baseyear import add_build_year_to_new_assets
from helper import override_component_attrs
def add_brownfield(n, n_p, year):

@@ -48,72 +21,85 @@ def add_brownfield(n, n_p, year):

        attr = "e" if c.name == "Store" else "p"
        # first, remove generators, links and stores that track
        # CO2 or global EU values since these are already in n
        n_p.mremove(
            c.name,
            c.df.index[c.df.lifetime.isna()]
        )

        # remove assets whose build_year + lifetime < year
        n_p.mremove(
            c.name,
            c.df.index[c.df.build_year + c.df.lifetime < year]
        )

        # remove assets if their optimized nominal capacity is lower than a threshold
        # since CHP heat Link is proportional to CHP electric Link, make sure threshold is compatible
        chp_heat = c.df.index[(
            c.df[attr + "_nom_extendable"]
            & c.df.index.str.contains("urban central")
            & c.df.index.str.contains("CHP")
            & c.df.index.str.contains("heat")
        )]

        threshold = snakemake.config['existing_capacities']['threshold_capacity']

        if not chp_heat.empty:
            threshold_chp_heat = (threshold
                * c.df.efficiency[chp_heat.str.replace("heat", "electric")].values
                * c.df.p_nom_ratio[chp_heat.str.replace("heat", "electric")].values
                / c.df.efficiency[chp_heat].values
            )
            n_p.mremove(
                c.name,
                chp_heat[c.df.loc[chp_heat, attr + "_nom_opt"] < threshold_chp_heat]
            )

        n_p.mremove(
            c.name,
            c.df.index[c.df[attr + "_nom_extendable"] & ~c.df.index.isin(chp_heat) & (c.df[attr + "_nom_opt"] < threshold)]
        )

        # copy over assets but fix their capacity
        c.df[attr + "_nom"] = c.df[attr + "_nom_opt"]
        c.df[attr + "_nom_extendable"] = False

        n.import_components_from_dataframe(c.df, c.name)

        # copy time-dependent
        selection = (
            n.component_attrs[c.name].type.str.contains("series")
            & n.component_attrs[c.name].status.str.contains("Input")
        )
        for tattr in n.component_attrs[c.name].index[selection]:
            n.import_series_from_dataframe(c.pnl[tattr], c.name, tattr)
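
# intuition for the CHP heat threshold above (hypothetical numbers): with
# threshold = 10 MW_el, electric efficiency 0.468, p_nom_ratio 1.0 and heat
# efficiency 0.8, heat links are kept only above 10 * 0.468 * 1.0 / 0.8 = 5.85 MW_th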
if __name__ == "__main__": if __name__ == "__main__":
# Detect running outside of snakemake and mock snakemake for testing
if 'snakemake' not in globals(): if 'snakemake' not in globals():
from vresutils.snakemake import MockSnakemake from helper import mock_snakemake
snakemake = MockSnakemake( snakemake = mock_snakemake(
wildcards=dict(network='elec', simpl='', clusters='37', lv='1.0', 'add_brownfield',
sector_opts='Co2L0-168H-T-H-B-I-solar3-dist1', simpl='',
co2_budget_name='go', clusters=48,
planning_horizons='2030'), lv=1.0,
input=dict(network='pypsa-eur-sec/results/test/prenetworks/elec_s{simpl}_{clusters}_lv{lv}__{sector_opts}_{co2_budget_name}_{planning_horizons}.nc', sector_opts='Co2L0-168H-T-H-B-I-solar3-dist1',
network_p='pypsa-eur-sec/results/test/postnetworks/elec_s{simpl}_{clusters}_lv{lv}__{sector_opts}_{co2_budget_name}_2020.nc', planning_horizons=2030,
costs='pypsa-eur-sec/data/costs/costs_{planning_horizons}.csv',
cop_air_total="pypsa-eur-sec/resources/cop_air_total_elec_s{simpl}_{clusters}.nc",
cop_soil_total="pypsa-eur-sec/resources/cop_soil_total_elec_s{simpl}_{clusters}.nc"),
output=['pypsa-eur-sec/results/test/prenetworks_brownfield/elec_s{simpl}_{clusters}_lv{lv}__{sector_opts}_{planning_horizons}.nc']
) )
import yaml
with open('config.yaml', encoding='utf8') as f:
snakemake.config = yaml.safe_load(f)
print(snakemake.input.network_p) print(snakemake.input.network_p)
logging.basicConfig(level=snakemake.config['logging_level']) logging.basicConfig(level=snakemake.config['logging_level'])
year=int(snakemake.wildcards.planning_horizons) year = int(snakemake.wildcards.planning_horizons)
n = pypsa.Network(snakemake.input.network, overrides = override_component_attrs(snakemake.input.overrides)
override_component_attrs=override_component_attrs) n = pypsa.Network(snakemake.input.network, override_component_attrs=overrides)
add_build_year_to_new_assets(n, year) add_build_year_to_new_assets(n, year)
n_p = pypsa.Network(snakemake.input.network_p, n_p = pypsa.Network(snakemake.input.network_p, override_component_attrs=overrides)
override_component_attrs=override_component_attrs)
#%%
add_brownfield(n, n_p, year) add_brownfield(n, n_p, year)
n.export_to_netcdf(snakemake.output[0]) n.export_to_netcdf(snakemake.output[0])

View File

@@ -2,259 +2,244 @@
import logging
logger = logging.getLogger(__name__)

import pandas as pd
idx = pd.IndexSlice

import numpy as np
import xarray as xr

import pypsa
import yaml

from prepare_sector_network import prepare_costs
from helper import override_component_attrs
def add_build_year_to_new_assets(n, baseyear):
    """
    Parameters
    ----------
    n : pypsa.Network
    baseyear : int
        year in which optimized assets are built
    """

    # Give assets with lifetimes and no build year the build year baseyear
    for c in n.iterate_components(["Link", "Generator", "Store"]):

        assets = c.df.index[~c.df.lifetime.isna() & c.df.build_year.isna()]
        c.df.loc[assets, "build_year"] = baseyear

        # add -baseyear to name
        rename = pd.Series(c.df.index, c.df.index)
        rename[assets] += "-" + str(baseyear)
        c.df.rename(index=rename, inplace=True)

        # rename time-dependent
        selection = (
            n.component_attrs[c.name].type.str.contains("series")
            & n.component_attrs[c.name].status.str.contains("Input")
        )
        for attr in n.component_attrs[c.name].index[selection]:
            c.pnl[attr].rename(columns=rename, inplace=True)
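
# e.g. with baseyear=2030, a generator "DE0 0 onwind" (hypothetical name) that has
# a lifetime but no build year becomes "DE0 0 onwind-2030", and the columns of its
# time-dependent input series are renamed to match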
def add_existing_renewables(df_agg):
    """
    Append existing renewables to the df_agg pd.DataFrame
    with the conventional power plants.
    """

    cc = pd.read_csv(snakemake.input.country_codes, index_col=0)

    carriers = {
        "solar": "solar",
        "onwind": "onwind",
        "offwind": "offwind-ac"
    }

    for tech in ['solar', 'onwind', 'offwind']:

        carrier = carriers[tech]

        df = pd.read_csv(snakemake.input[f"existing_{tech}"], index_col=0).fillna(0.)
        df.columns = df.columns.astype(int)

        rename_countries = {
            'Czechia': 'Czech Republic',
            'UK': 'United Kingdom',
            'Bosnia Herzg': 'Bosnia Herzegovina',
            'North Macedonia': 'Macedonia'
        }
        df.rename(index=rename_countries, inplace=True)
        df.rename(index=cc["2 letter code (ISO-3166-2)"], inplace=True)

        # calculate yearly differences
        df.insert(loc=0, value=.0, column='1999')
        df = df.diff(axis=1).drop('1999', axis=1).clip(lower=0)

        # distribute capacities among nodes according to capacity factor
        # weighting with nodal_fraction
        elec_buses = n.buses.index[n.buses.carrier == "AC"].union(n.buses.index[n.buses.carrier == "DC"])
        nodal_fraction = pd.Series(0., elec_buses)

        for country in n.buses.loc[elec_buses, "country"].unique():
            gens = n.generators.index[(n.generators.index.str[:2] == country) & (n.generators.carrier == carrier)]
            cfs = n.generators_t.p_max_pu[gens].mean()
            cfs_key = cfs / cfs.sum()
            nodal_fraction.loc[n.generators.loc[gens, "bus"]] = cfs_key.values

        nodal_df = df.loc[n.buses.loc[elec_buses, "country"]]
        nodal_df.index = elec_buses
        nodal_df = nodal_df.multiply(nodal_fraction, axis=0)

        for year in nodal_df.columns:
            for node in nodal_df.index:
                name = f"{node}-{tech}-{year}"
                capacity = nodal_df.loc[node, year]
                if capacity > 0.:
                    df_agg.at[name, "Fueltype"] = tech
                    df_agg.at[name, "Capacity"] = capacity
                    df_agg.at[name, "DateIn"] = year
                    df_agg.at[name, "cluster_bus"] = node
def add_power_capacities_installed_before_baseyear(n, grouping_years, costs, baseyear):
    """
    Parameters
    ----------
    n : pypsa.Network
    grouping_years :
        intervals to group existing capacities
    costs :
        to read lifetime to estimate YearDecommissioning
    baseyear : int
    """
    print("adding power capacities installed before baseyear from powerplants.csv")

    df_agg = pd.read_csv(snakemake.input.powerplants, index_col=0)

    rename_fuel = {
        'Hard Coal': 'coal',
        'Lignite': 'lignite',
        'Nuclear': 'nuclear',
        'Oil': 'oil',
        'OCGT': 'OCGT',
        'CCGT': 'CCGT',
        'Natural Gas': 'gas'
    }

    fueltype_to_drop = [
        'Hydro',
        'Wind',
        'Solar',
        'Geothermal',
        'Bioenergy',
        'Waste',
        'Other',
        'CCGT, Thermal'
    ]

    technology_to_drop = [
        'Pv',
        'Storage Technologies'
    ]

    df_agg.drop(df_agg.index[df_agg.Fueltype.isin(fueltype_to_drop)], inplace=True)
    df_agg.drop(df_agg.index[df_agg.Technology.isin(technology_to_drop)], inplace=True)
    df_agg.Fueltype = df_agg.Fueltype.map(rename_fuel)

    # assign clustered bus
    busmap_s = pd.read_csv(snakemake.input.busmap_s, index_col=0, squeeze=True)
    busmap = pd.read_csv(snakemake.input.busmap, index_col=0, squeeze=True)
    clustermaps = busmap_s.map(busmap)
    clustermaps.index = clustermaps.index.astype(int)

    df_agg["cluster_bus"] = df_agg.bus.map(clustermaps)

    # include renewables in df_agg
    add_existing_renewables(df_agg)

    df_agg["grouping_year"] = np.take(
        grouping_years,
        np.digitize(df_agg.DateIn, grouping_years, right=True)
    )
    df = df_agg.pivot_table(
        index=["grouping_year", 'Fueltype'],
        columns='cluster_bus',
        values='Capacity',
        aggfunc='sum'
    )

    carrier = {
        "OCGT": "gas",
        "CCGT": "gas",
        "coal": "coal",
        "oil": "oil",
        "lignite": "lignite",
        "nuclear": "uranium"
    }

    for grouping_year, generator in df.index:

        # capacity is the capacity in MW at each node for this generator and grouping year
        capacity = df.loc[grouping_year, generator]
        capacity = capacity[~capacity.isna()]
        capacity = capacity[capacity > snakemake.config['existing_capacities']['threshold_capacity']]

        if generator in ['solar', 'onwind', 'offwind']:

            rename = {"offwind": "offwind-ac"}
            p_max_pu = n.generators_t.p_max_pu[capacity.index + ' ' + rename.get(generator, generator) + '-' + str(baseyear)]

            n.madd("Generator",
                capacity.index,
                suffix=' ' + generator + "-" + str(grouping_year),
                bus=capacity.index,
                carrier=generator,
                p_nom=capacity,
                marginal_cost=costs.at[generator, 'VOM'],
                capital_cost=costs.at[generator, 'fixed'],
                efficiency=costs.at[generator, 'efficiency'],
                p_max_pu=p_max_pu.rename(columns=n.generators.bus),
                build_year=grouping_year,
                lifetime=costs.at[generator, 'lifetime']
            )

        else:

            n.madd("Link",
                capacity.index,
                suffix=" " + generator + "-" + str(grouping_year),
                bus0="EU " + carrier[generator],
                bus1=capacity.index,
                bus2="co2 atmosphere",
                carrier=generator,
                marginal_cost=costs.at[generator, 'efficiency'] * costs.at[generator, 'VOM'],  # NB: VOM is per MWel
                capital_cost=costs.at[generator, 'efficiency'] * costs.at[generator, 'fixed'],  # NB: fixed cost is per MWel
                p_nom=capacity / costs.at[generator, 'efficiency'],
                efficiency=costs.at[generator, 'efficiency'],
                efficiency2=costs.at[carrier[generator], 'CO2 intensity'],
                build_year=grouping_year,
                lifetime=costs.at[generator, 'lifetime']
            )
def add_heating_capacities_installed_before_baseyear(n, baseyear, grouping_years, ashp_cop, gshp_cop, time_dep_hp_cop, costs, default_lifetime):
    """
    Parameters
    ----------
    n : pypsa.Network
    baseyear : last year covered in the existing capacities database
    grouping_years : intervals to group existing capacities

    Linear decommissioning of heating capacities from 2020 to 2045 is
    currently assumed. Capacities are split between residential and
    services proportional to the heating load in both, with 50% of the
    capacities in rural buses and 50% in urban buses.
    """
print("adding heating capacities installed before baseyear") print("adding heating capacities installed before baseyear")
@ -263,43 +248,42 @@ def add_heating_capacities_installed_before_baseyear(n, baseyear, grouping_years
# heating/cooling fuel deployment (fossil/renewables) " # heating/cooling fuel deployment (fossil/renewables) "
# https://ec.europa.eu/energy/studies/mapping-and-analyses-current-and-future-2020-2030-heatingcooling-fuel-deployment_en?redir=1 # https://ec.europa.eu/energy/studies/mapping-and-analyses-current-and-future-2020-2030-heatingcooling-fuel-deployment_en?redir=1
# file: "WP2_DataAnnex_1_BuildingTechs_ForPublication_201603.xls" -> "existing_heating_raw.csv". # file: "WP2_DataAnnex_1_BuildingTechs_ForPublication_201603.xls" -> "existing_heating_raw.csv".
# TODO start from original file
# retrieve existing heating capacities # retrieve existing heating capacities
techs = ['gas boiler', techs = [
'oil boiler', 'gas boiler',
'resistive heater', 'oil boiler',
'air heat pump', 'resistive heater',
'ground heat pump'] 'air heat pump',
df = pd.read_csv('data/existing_infrastructure/existing_heating_raw.csv', 'ground heat pump'
index_col=0, ]
header=0) df = pd.read_csv(snakemake.input.existing_heating, index_col=0, header=0)
# data for Albania, Montenegro and Macedonia not included in database
df.loc['Albania']=np.nan
df.loc['Montenegro']=np.nan
df.loc['Macedonia']=np.nan
df.fillna(0, inplace=True)
df *= 1e3 # GW to MW
cc = pd.read_csv('data/Country_codes.csv', # data for Albania, Montenegro and Macedonia not included in database
index_col=0) df.loc['Albania'] = np.nan
df.loc['Montenegro'] = np.nan
df.loc['Macedonia'] = np.nan
df.fillna(0., inplace=True)
# convert GW to MW
df *= 1e3
cc = pd.read_csv(snakemake.input.country_codes, index_col=0)
df.rename(index=cc["2 letter code (ISO-3166-2)"], inplace=True) df.rename(index=cc["2 letter code (ISO-3166-2)"], inplace=True)
# coal and oil boilers are assimilated to oil boilers # coal and oil boilers are assimilated to oil boilers
df['oil boiler'] =df['oil boiler'] + df['coal boiler'] df['oil boiler'] = df['oil boiler'] + df['coal boiler']
df.drop(['coal boiler'], axis=1, inplace=True) df.drop(['coal boiler'], axis=1, inplace=True)
# distribute technologies to nodes by population # distribute technologies to nodes by population
pop_layout = pd.read_csv(snakemake.input.clustered_pop_layout, pop_layout = pd.read_csv(snakemake.input.clustered_pop_layout, index_col=0)
index_col=0)
pop_layout["ct"] = pop_layout.index.str[:2]
ct_total = pop_layout.total.groupby(pop_layout["ct"]).sum()
pop_layout["ct_total"] = pop_layout["ct"].map(ct_total.get)
pop_layout["fraction"] = pop_layout["total"]/pop_layout["ct_total"]
nodal_df = df.loc[pop_layout.ct] nodal_df = df.loc[pop_layout.ct]
nodal_df.index = pop_layout.index nodal_df.index = pop_layout.index
nodal_df = nodal_df.multiply(pop_layout.fraction,axis=0) nodal_df = nodal_df.multiply(pop_layout.fraction, axis=0)
# split existing capacities between residential and services # split existing capacities between residential and services
# proportional to energy demand # proportional to energy demand
@ -309,122 +293,126 @@ def add_heating_capacities_installed_before_baseyear(n, baseyear, grouping_years
for node in nodal_df.index], index=nodal_df.index) for node in nodal_df.index], index=nodal_df.index)
    for tech in techs:
        nodal_df['residential ' + tech] = nodal_df[tech] * ratio_residential
        nodal_df['services ' + tech] = nodal_df[tech] * (1 - ratio_residential)

    names = [
        "residential rural",
        "services rural",
        "residential urban decentral",
        "services urban decentral",
        "urban central"
    ]

    nodes = {}
    p_nom = {}
    for name in names:

        name_type = "central" if name == "urban central" else "decentral"
        nodes[name] = pd.Index([n.buses.at[index, "location"] for index in n.buses.index[n.buses.index.str.contains(name) & n.buses.index.str.contains('heat')]])
        heat_pump_type = "air" if "urban" in name else "ground"
        heat_type = "residential" if "residential" in name else "services"

        if name == "urban central":
            p_nom[name] = nodal_df['air heat pump'][nodes[name]]
        else:
            p_nom[name] = nodal_df[f'{heat_type} {heat_pump_type} heat pump'][nodes[name]]

        # Add heat pumps
        costs_name = f"decentral {heat_pump_type}-sourced heat pump"

        cop = {"air": ashp_cop, "ground": gshp_cop}

        if time_dep_hp_cop:
            efficiency = cop[heat_pump_type][nodes[name]]
        else:
            efficiency = costs.at[costs_name, 'efficiency']

        for i, grouping_year in enumerate(grouping_years):

            if int(grouping_year) + default_lifetime <= int(baseyear):
                ratio = 0
            else:
                # installation is assumed to be linear for the past 25 years (default lifetime)
                ratio = (int(grouping_year) - int(grouping_years[i-1])) / default_lifetime

            n.madd("Link",
                nodes[name],
                suffix=f" {name} {heat_pump_type} heat pump-{grouping_year}",
                bus0=nodes[name],
                bus1=nodes[name] + " " + name + " heat",
                carrier=f"{name} {heat_pump_type} heat pump",
                efficiency=efficiency,
                capital_cost=costs.at[costs_name, 'efficiency'] * costs.at[costs_name, 'fixed'],
                p_nom=p_nom[name] * ratio / costs.at[costs_name, 'efficiency'],
                build_year=int(grouping_year),
                lifetime=costs.at[costs_name, 'lifetime']
            )
            # add resistive heater, gas boilers and oil boilers
            # (50% capacities to rural buses, 50% to urban buses)
            n.madd("Link",
                nodes[name],
                suffix=f" {name} resistive heater-{grouping_year}",
                bus0=nodes[name],
                bus1=nodes[name] + " " + name + " heat",
                carrier=name + " resistive heater",
                efficiency=costs.at[name_type + ' resistive heater', 'efficiency'],
                capital_cost=costs.at[name_type + ' resistive heater', 'efficiency'] * costs.at[name_type + ' resistive heater', 'fixed'],
                p_nom=0.5 * nodal_df[f'{heat_type} resistive heater'][nodes[name]] * ratio / costs.at[name_type + ' resistive heater', 'efficiency'],
                build_year=int(grouping_year),
                lifetime=costs.at[costs_name, 'lifetime']
            )

            n.madd("Link",
                nodes[name],
                suffix=f" {name} gas boiler-{grouping_year}",
                bus0="EU gas",
                bus1=nodes[name] + " " + name + " heat",
                bus2="co2 atmosphere",
                carrier=name + " gas boiler",
                efficiency=costs.at[name_type + ' gas boiler', 'efficiency'],
                efficiency2=costs.at['gas', 'CO2 intensity'],
                capital_cost=costs.at[name_type + ' gas boiler', 'efficiency'] * costs.at[name_type + ' gas boiler', 'fixed'],
                p_nom=0.5 * nodal_df[f'{heat_type} gas boiler'][nodes[name]] * ratio / costs.at[name_type + ' gas boiler', 'efficiency'],
                build_year=int(grouping_year),
                lifetime=costs.at[name_type + ' gas boiler', 'lifetime']
            )

            n.madd("Link",
                nodes[name],
                suffix=f" {name} oil boiler-{grouping_year}",
                bus0="EU oil",
                bus1=nodes[name] + " " + name + " heat",
                bus2="co2 atmosphere",
                carrier=name + " oil boiler",
                efficiency=costs.at['decentral oil boiler', 'efficiency'],
                efficiency2=costs.at['oil', 'CO2 intensity'],
                capital_cost=costs.at['decentral oil boiler', 'efficiency'] * costs.at['decentral oil boiler', 'fixed'],
                p_nom=0.5 * nodal_df[f'{heat_type} oil boiler'][nodes[name]] * ratio / costs.at['decentral oil boiler', 'efficiency'],
                build_year=int(grouping_year),
                lifetime=costs.at[name_type + ' gas boiler', 'lifetime']
            )

            # delete links with p_nom=nan corresponding to extra nodes in country
            n.mremove("Link", [index for index in n.links.index.to_list() if str(grouping_year) in index and np.isnan(n.links.p_nom[index])])

            # delete links if their lifetime is over and p_nom=0
            threshold = snakemake.config['existing_capacities']['threshold_capacity']
            n.mremove("Link", [index for index in n.links.index.to_list() if str(grouping_year) in index and n.links.p_nom[index] < threshold])
if __name__ == "__main__": if __name__ == "__main__":
# Detect running outside of snakemake and mock snakemake for testing
if 'snakemake' not in globals(): if 'snakemake' not in globals():
from vresutils.snakemake import MockSnakemake from helper import mock_snakemake
snakemake = MockSnakemake( snakemake = mock_snakemake(
wildcards=dict(network='elec', simpl='', clusters='45', lv='1.0', 'add_existing_baseyear',
sector_opts='Co2L0-3H-T-H-B-I-solar3-dist1', simpl='',
planning_horizons='2020'), clusters=45,
input=dict(network='pypsa-eur-sec/results/version-2/prenetworks/elec_s{simpl}_{clusters}_lv{lv}__{sector_opts}_{planning_horizons}.nc', lv=1.0,
powerplants='pypsa-eur/resources/powerplants.csv', sector_opts='Co2L0-168H-T-H-B-I-solar3-dist1',
busmap_s='pypsa-eur/resources/busmap_elec_s{simpl}.csv', planning_horizons=2020,
busmap='pypsa-eur/resources/busmap_elec_s{simpl}_{clusters}.csv',
costs='technology_data/outputs/costs_{planning_horizons}.csv',
cop_air_total="pypsa-eur-sec/resources/cop_air_total_elec_s{simpl}_{clusters}.nc",
cop_soil_total="pypsa-eur-sec/resources/cop_soil_total_elec_s{simpl}_{clusters}.nc",
clustered_pop_layout="pypsa-eur-sec/resources/pop_layout_elec_s{simpl}_{clusters}.csv",),
output=['pypsa-eur-sec/results/version-2/prenetworks_brownfield/elec_s{simpl}_{clusters}_lv{lv}__{sector_opts}_{planning_horizons}.nc'],
) )
import yaml
with open('config.yaml', encoding='utf8') as f:
snakemake.config = yaml.safe_load(f)
logging.basicConfig(level=snakemake.config['logging_level']) logging.basicConfig(level=snakemake.config['logging_level'])
@ -433,25 +421,27 @@ if __name__ == "__main__":
baseyear= snakemake.config['scenario']["planning_horizons"][0] baseyear= snakemake.config['scenario']["planning_horizons"][0]
n = pypsa.Network(snakemake.input.network, overrides = override_component_attrs(snakemake.input.overrides)
override_component_attrs=override_component_attrs) n = pypsa.Network(snakemake.input.network, override_component_attrs=overrides)
add_build_year_to_new_assets(n, baseyear) add_build_year_to_new_assets(n, baseyear)
Nyears = n.snapshot_weightings.sum()/8760. Nyears = n.snapshot_weightings.generators.sum() / 8760.
costs = prepare_costs(snakemake.input.costs, costs = prepare_costs(
snakemake.config['costs']['USD2013_to_EUR2013'], snakemake.input.costs,
snakemake.config['costs']['discountrate'], snakemake.config['costs']['USD2013_to_EUR2013'],
Nyears, snakemake.config['costs']['discountrate'],
snakemake.config['costs']['lifetime']) Nyears,
snakemake.config['costs']['lifetime']
)
grouping_years=snakemake.config['existing_capacities']['grouping_years'] grouping_years=snakemake.config['existing_capacities']['grouping_years']
add_power_capacities_installed_before_baseyear(n, grouping_years, costs, baseyear) add_power_capacities_installed_before_baseyear(n, grouping_years, costs, baseyear)
if "H" in opts: if "H" in opts:
time_dep_hp_cop = options["time_dep_hp_cop"] time_dep_hp_cop = options["time_dep_hp_cop"]
ashp_cop = xr.open_dataarray(snakemake.input.cop_air_total).T.to_pandas().reindex(index=n.snapshots) ashp_cop = xr.open_dataarray(snakemake.input.cop_air_total).to_pandas().reindex(index=n.snapshots)
gshp_cop = xr.open_dataarray(snakemake.input.cop_soil_total).T.to_pandas().reindex(index=n.snapshots) gshp_cop = xr.open_dataarray(snakemake.input.cop_soil_total).to_pandas().reindex(index=n.snapshots)
default_lifetime = snakemake.config['costs']['lifetime'] default_lifetime = snakemake.config['costs']['lifetime']
add_heating_capacities_installed_before_baseyear(n, baseyear, grouping_years, ashp_cop, gshp_cop, time_dep_hp_cop, costs, default_lifetime) add_heating_capacities_installed_before_baseyear(n, baseyear, grouping_years, ashp_cop, gshp_cop, time_dep_hp_cop, costs, default_lifetime)

View File

@@ -1,45 +1,53 @@
"""Build ammonia production."""
import pandas as pd

country_to_alpha2 = {
    "Austriae": "AT",
    "Bulgaria": "BG",
    "Belgiume": "BE",
    "Croatia": "HR",
    "Czechia": "CZ",
    "Estonia": "EE",
    "Finland": "FI",
    "France": "FR",
    "Germany": "DE",
    "Greece": "GR",
    "Hungarye": "HU",
    "Italye": "IT",
    "Lithuania": "LT",
    "Netherlands": "NL",
    "Norwaye": "NO",
    "Poland": "PL",
    "Romania": "RO",
    "Serbia": "RS",
    "Slovakia": "SK",
    "Spain": "ES",
    "Switzerland": "CH",
    "United Kingdom": "GB",
}

if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake('build_ammonia_production')

    ammonia = pd.read_excel(snakemake.input.usgs,
                            sheet_name="T12",
                            skiprows=5,
                            header=0,
                            index_col=0,
                            skipfooter=19)
    ammonia.rename(country_to_alpha2, inplace=True)

    years = [str(i) for i in range(2013, 2018)]
    countries = country_to_alpha2.values()
    ammonia = ammonia.loc[countries, years].astype(float)

    # convert from ktonN to ktonNH3
    ammonia *= 17 / 14
    ammonia.index.name = "ktonNH3/a"

    ammonia.to_csv(snakemake.output.ammonia_production)

View File

@@ -1,72 +1,68 @@
import pandas as pd

rename = {"UK": "GB", "BH": "BA"}


def build_biomass_potentials():

    config = snakemake.config['biomass']
    year = config["year"]
    scenario = config["scenario"]

    df = pd.read_excel(snakemake.input.jrc_potentials,
                       "Potentials (PJ)",
                       index_col=[0, 1])

    df.rename(columns={"Unnamed: 18": "Municipal waste"}, inplace=True)
    df.drop(columns="Total", inplace=True)
    df.replace("-", 0., inplace=True)

    column = df.iloc[:, 0]
    countries = column.where(column.str.isalpha()).pad()
    countries = [rename.get(ct, ct) for ct in countries]
    countries_i = pd.Index(countries, name='country')
    df.set_index(countries_i, append=True, inplace=True)
for i in range(36): df.drop(index='MS', level=0, inplace=True)
df_dict[df.iloc[i*16,1]] = df.iloc[1+i*16:(i+1)*16].astype(float)
#convert from PJ to MWh # convert from PJ to MWh
df_new = pd.concat(df_dict).rename({"UK" : "GB", "BH" : "BA"})/3.6*1e6 df = df / 3.6 * 1e6
df_new.index.name = "MWh/a"
df_new.to_csv(snakemake.output.biomass_potentials_all)
# solid biomass includes: Primary agricultural residues (MINBIOAGRW1), df.to_csv(snakemake.output.biomass_potentials_all)
# Forestry energy residue (MINBIOFRSF1),
# Secondary forestry residues (MINBIOWOOW1),
# Secondary Forestry residues sawdust (MINBIOWOO1a)',
# Forestry residues from landscape care biomass (MINBIOFRSF1a),
# Municipal waste (MINBIOMUN1)',
# biogas includes : Manure biomass potential (MINBIOGAS1), # solid biomass includes:
# Sludge biomass (MINBIOSLU1) # Primary agricultural residues (MINBIOAGRW1),
# Forestry energy residue (MINBIOFRSF1),
# Secondary forestry residues (MINBIOWOOW1),
# Secondary Forestry residues sawdust (MINBIOWOO1a)',
# Forestry residues from landscape care biomass (MINBIOFRSF1a),
# Municipal waste (MINBIOMUN1)',
us_type = pd.Series("", df_new.columns) # biogas includes:
# Manure biomass potential (MINBIOGAS1),
# Sludge biomass (MINBIOSLU1),
for k,v in snakemake.config['biomass']['classes'].items(): df = df.loc[year, scenario, :]
us_type.loc[v] = k
biomass_potentials = df_new.swaplevel(0,2).loc[snakemake.config['biomass']['scenario'],snakemake.config['biomass']['year']].groupby(us_type,axis=1).sum() grouper = {v: k for k, vv in config["classes"].items() for v in vv}
biomass_potentials.index.name = "MWh/a" df = df.groupby(grouper, axis=1).sum()
biomass_potentials.to_csv(snakemake.output.biomass_potentials)
df.index.name = "MWh/a"
df.to_csv(snakemake.output.biomass_potentials)
if __name__ == "__main__": if __name__ == "__main__":
# Detect running outside of snakemake and mock snakemake for testing
if 'snakemake' not in globals(): if 'snakemake' not in globals():
from vresutils import Dict from helper import mock_snakemake
import yaml snakemake = mock_snakemake('build_biomass_potentials')
snakemake = Dict()
snakemake.input = Dict()
snakemake.input['jrc_potentials'] = "data/biomass/JRC Biomass Potentials.xlsx"
snakemake.output = Dict()
snakemake.output['biomass_potentials'] = 'data/biomass_potentials.csv'
snakemake.output['biomass_potentials_all']='resources/biomass_potentials_all.csv'
with open('config.yaml', encoding='utf8') as f:
snakemake.config = yaml.safe_load(f)
# This is a hack, to be replaced once snakemake is unicode-conform # This is a hack, to be replaced once snakemake is unicode-conform
if 'Secondary Forestry residues sawdust' in snakemake.config['biomass']['classes']['solid biomass']: solid_biomass = snakemake.config['biomass']['classes']['solid biomass']
snakemake.config['biomass']['classes']['solid biomass'].remove('Secondary Forestry residues sawdust') if 'Secondary Forestry residues sawdust' in solid_biomass:
snakemake.config['biomass']['classes']['solid biomass'].append('Secondary Forestry residues sawdust') solid_biomass.remove('Secondary Forestry residues sawdust')
solid_biomass.append('Secondary Forestry residues sawdust')
build_biomass_potentials() build_biomass_potentials()
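
The `grouper` comprehension inverts the class mapping from the config so that `groupby(grouper, axis=1)` can collapse individual JRC feedstock columns into the model's carriers. A sketch with a made-up, abbreviated config excerpt (the real lists live in config.yaml):

# hypothetical excerpt of config['biomass']['classes']
classes = {
    "solid biomass": ["Municipal waste", "Primary agricultural residues"],
    "biogas": ["Manure biomass potential", "Sludge biomass"],
}
grouper = {v: k for k, vv in classes.items() for v in vv}
assert grouper["Sludge biomass"] == "biogas"
assert grouper["Municipal waste"] == "solid biomass"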


@@ -1,32 +1,36 @@
"""Build clustered population layouts."""
import geopandas as gpd
import xarray as xr
import pandas as pd
import atlite

if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake(
            'build_clustered_population_layouts',
            simpl='',
            clusters=48,
        )

    cutout = atlite.Cutout(snakemake.config['atlite']['cutout'])

    clustered_regions = gpd.read_file(
        snakemake.input.regions_onshore).set_index('name').buffer(0).squeeze()

    I = cutout.indicatormatrix(clustered_regions)

    pop = {}
    for item in ["total", "urban", "rural"]:
        pop_layout = xr.open_dataarray(snakemake.input[f'pop_layout_{item}'])
        pop[item] = I.dot(pop_layout.stack(spatial=('y', 'x')))

    pop = pd.DataFrame(pop, index=clustered_regions.index)

    pop["ct"] = pop.index.str[:2]
    country_population = pop.total.groupby(pop.ct).sum()
    pop["fraction"] = pop.total / pop.ct.map(country_population)

    pop.to_csv(snakemake.output.clustered_pop_layout)
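
The new `fraction` column stores each region's share of its country's population, which downstream scripts previously recomputed. A toy illustration (bus names are invented):

import pandas as pd

pop = pd.DataFrame({'total': [6., 2., 8.]}, index=['DE0 0', 'DE0 1', 'FR0 0'])
pop["ct"] = pop.index.str[:2]
country_population = pop.total.groupby(pop.ct).sum()
pop["fraction"] = pop.total / pop.ct.map(country_population)
assert pop.loc['DE0 0', 'fraction'] == 0.75  # 6 of 8 people in DE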


@@ -1,22 +1,40 @@
"""Build COP time series for air- or ground-sourced heat pumps."""
import xarray as xr


def coefficient_of_performance(delta_T, source='air'):
    """
    COP is function of temp difference source to sink.

    The quadratic regression is based on Staffell et al. (2012)
    https://doi.org/10.1039/C2EE22653G.
    """
    if source == 'air':
        return 6.81 - 0.121 * delta_T + 0.000630 * delta_T**2
    elif source == 'soil':
        return 8.77 - 0.150 * delta_T + 0.000734 * delta_T**2
    else:
        raise NotImplementedError("'source' must be one of ['air', 'soil']")


if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake(
            'build_cop_profiles',
            simpl='',
            clusters=48,
        )

    for area in ["total", "urban", "rural"]:
        for source in ["air", "soil"]:

            source_T = xr.open_dataarray(
                snakemake.input[f"temp_{source}_{area}"])

            delta_T = snakemake.config['sector']['heat_pump_sink_T'] - source_T

            cop = coefficient_of_performance(delta_T, source)

            cop.to_netcdf(snakemake.output[f"cop_{source}_{area}"])

File diff suppressed because it is too large.


@@ -1,42 +1,46 @@
"""Build heat demand time series."""
import geopandas as gpd
import atlite
import pandas as pd
import xarray as xr
import numpy as np

if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake(
            'build_heat_demands',
            simpl='',
            clusters=48,
        )

    time = pd.date_range(freq='h', **snakemake.config['snapshots'])
    cutout_config = snakemake.config['atlite']['cutout']
    cutout = atlite.Cutout(cutout_config).sel(time=time)

    clustered_regions = gpd.read_file(
        snakemake.input.regions_onshore).set_index('name').buffer(0).squeeze()

    I = cutout.indicatormatrix(clustered_regions)

    for area in ["rural", "urban", "total"]:

        pop_layout = xr.open_dataarray(snakemake.input[f'pop_layout_{area}'])

        stacked_pop = pop_layout.stack(spatial=('y', 'x'))
        M = I.T.dot(np.diag(I.dot(stacked_pop)))

        heat_demand = cutout.heat_demand(
            matrix=M.T, index=clustered_regions.index)

        heat_demand.to_netcdf(snakemake.output[f"heat_demand_{area}"])


@@ -1,39 +0,0 @@
import pandas as pd
idx = pd.IndexSlice
def build_industrial_demand():
pop_layout = pd.read_csv(snakemake.input.clustered_pop_layout,index_col=0)
pop_layout["ct"] = pop_layout.index.str[:2]
ct_total = pop_layout.total.groupby(pop_layout["ct"]).sum()
pop_layout["ct_total"] = pop_layout["ct"].map(ct_total)
pop_layout["fraction"] = pop_layout["total"]/pop_layout["ct_total"]
industrial_demand_per_country = pd.read_csv(snakemake.input.industrial_demand_per_country,index_col=0)
industrial_demand = industrial_demand_per_country.loc[pop_layout.ct].fillna(0.)
industrial_demand.index = pop_layout.index
industrial_demand = industrial_demand.multiply(pop_layout.fraction,axis=0)
industrial_demand.to_csv(snakemake.output.industrial_demand)
if __name__ == "__main__":
# Detect running outside of snakemake and mock snakemake for testing
if 'snakemake' not in globals():
from vresutils import Dict
import yaml
snakemake = Dict()
snakemake.input = Dict()
snakemake.input['clustered_pop_layout'] = "resources/pop_layout_elec_s_128.csv"
snakemake.input['industrial_demand_per_country']="resources/industrial_demand_per_country.csv"
snakemake.output = Dict()
snakemake.output['industrial_demand'] = "resources/industrial_demand_elec_s_128.csv"
with open('config.yaml', encoding='utf8') as f:
snakemake.config = yaml.safe_load(f)
build_industrial_demand()


@@ -1,153 +1,131 @@
"""Build industrial distribution keys from hotmaps database."""
import uuid
import pandas as pd
import geopandas as gpd

from itertools import product


def locate_missing_industrial_sites(df):
    """
    Locate industrial sites without valid locations based on
    city and countries. Should only be used if the model's
    spatial resolution is coarser than individual cities.
    """

    try:
        from geopy.geocoders import Nominatim
        from geopy.extra.rate_limiter import RateLimiter
    except:
        raise ModuleNotFoundError("Optional dependency 'geopy' not found. "
                                  "Install via 'conda install -c conda-forge geopy' "
                                  "or set 'industry: hotmaps_locate_missing: false'.")

    locator = Nominatim(user_agent=str(uuid.uuid4()))
    geocode = RateLimiter(locator.geocode, min_delay_seconds=2)

    def locate_missing(s):

        if pd.isna(s.City) or s.City == "CONFIDENTIAL":
            return None

        loc = geocode([s.City, s.Country], geometry='wkt')
        if loc is not None:
            print(f"Found:\t{loc}\nFor:\t{s['City']}, {s['Country']}\n")
            return f"POINT({loc.longitude} {loc.latitude})"
        else:
            return None

    missing = df.index[df.geom.isna()]
    df.loc[missing, 'coordinates'] = df.loc[missing].apply(locate_missing, axis=1)

    # report stats
    num_still_missing = df.coordinates.isna().sum()
    num_found = len(missing) - num_still_missing
    share_missing = len(missing) / len(df) * 100
    share_still_missing = num_still_missing / len(df) * 100
    print(f"Found {num_found} missing locations.",
          f"Share of missing locations reduced from {share_missing:.2f}% to {share_still_missing:.2f}%.")

    return df


def prepare_hotmaps_database(regions):
    """
    Load hotmaps database of industrial sites and map onto bus regions.
    """

    df = pd.read_csv(snakemake.input.hotmaps_industrial_database, sep=";", index_col=0)

    df[["srid", "coordinates"]] = df.geom.str.split(';', expand=True)

    if snakemake.config['industry'].get('hotmaps_locate_missing', False):
        df = locate_missing_industrial_sites(df)

    # remove those sites without valid locations
    df.drop(df.index[df.coordinates.isna()], inplace=True)

    df['coordinates'] = gpd.GeoSeries.from_wkt(df['coordinates'])

    gdf = gpd.GeoDataFrame(df, geometry='coordinates', crs="EPSG:4326")

    gdf = gpd.sjoin(gdf, regions, how="inner", op='within')

    gdf.rename(columns={"index_right": "bus"}, inplace=True)
    gdf["country"] = gdf.bus.str[:2]

    return gdf


def build_nodal_distribution_key(hotmaps, regions):
    """Build nodal distribution keys for each sector."""

    sectors = hotmaps.Subsector.unique()
    countries = regions.index.str[:2].unique()

    keys = pd.DataFrame(index=regions.index, columns=sectors, dtype=float)

    pop = pd.read_csv(snakemake.input.clustered_pop_layout, index_col=0)
    pop['country'] = pop.index.str[:2]
    ct_total = pop.total.groupby(pop['country']).sum()
    keys['population'] = pop.total / pop.country.map(ct_total)

    for sector, country in product(sectors, countries):

        regions_ct = regions.index[regions.index.str.contains(country)]

        facilities = hotmaps.query("country == @country and Subsector == @sector")

        if not facilities.empty:
            emissions = facilities["Emissions_ETS_2014"]
            if emissions.sum() == 0:
                key = pd.Series(1 / len(facilities), facilities.index)
            else:
                # BEWARE: this is a strong assumption
                emissions = emissions.fillna(emissions.mean())
                key = emissions / emissions.sum()
            key = key.groupby(facilities.bus).sum().reindex(regions_ct, fill_value=0.)
        else:
            key = keys.loc[regions_ct, 'population']

        keys.loc[regions_ct, sector] = key

    return keys


if __name__ == "__main__":
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake(
            'build_industrial_distribution_key',
            simpl='',
            clusters=48,
        )

    regions = gpd.read_file(snakemake.input.regions_onshore).set_index('name')

    hotmaps = prepare_hotmaps_database(regions)

    keys = build_nodal_distribution_key(hotmaps, regions)

    keys.to_csv(snakemake.output.industrial_distribution_key)
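
Within each country, every sector column of the resulting keys (emissions-weighted where ETS data exist, population-based otherwise) should sum to one; the previous version printed a warning when |sum(key) - 1| > 1e-4. A hypothetical post-hoc check along those lines (the file path is invented):

import pandas as pd

# hypothetical output path for a 48-cluster run
keys = pd.read_csv("resources/industrial_distribution_key_elec_s_48.csv",
                   index_col=0)
sums = keys.groupby(keys.index.str[:2]).sum()
assert ((sums - 1.).abs() < 1e-4).all().all()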


@@ -1,83 +0,0 @@
import pandas as pd
import numpy as np
tj_to_ktoe = 0.0238845
ktoe_to_twh = 0.01163
eb_base_dir = "data/eurostat-energy_balances-may_2018_edition"
jrc_base_dir = "data/jrc-idees-2015"
# import EU ratios df as csv
industry_sector_ratios=pd.read_csv(snakemake.input.industry_sector_ratios,
index_col=0)
#material demand per country and industry (kton/a)
countries_production = pd.read_csv(snakemake.input.industrial_production_per_country, index_col=0)
#Annual energy consumption in Switzerland by sector in 2015 (in TJ)
#From: Energieverbrauch in der Industrie und im Dienstleistungssektor, Der Bundesrat
#http://www.bfe.admin.ch/themen/00526/00541/00543/index.html?lang=de&dossier_id=00775
dic_Switzerland ={'Iron and steel': 7889.,
'Chemicals Industry': 26871.,
'Non-metallic mineral products': 15513.+3820.,
'Pulp, paper and printing': 12004.,
'Food, beverages and tobacco': 17728.,
'Non Ferrous Metals': 3037.,
'Transport Equipment': 14993.,
'Machinery Equipment': 4724.,
'Textiles and leather': 1742.,
'Wood and wood products': 0.,
'Other Industrial Sectors': 10825.,
'current electricity': 53760.}
eb_names={'NO':'Norway', 'AL':'Albania', 'BA':'Bosnia and Herzegovina',
'MK':'FYR of Macedonia', 'GE':'Georgia', 'IS':'Iceland',
'KO':'Kosovo', 'MD':'Moldova', 'ME':'Montenegro', 'RS':'Serbia',
'UA':'Ukraine', 'TR':'Turkey', }
jrc_names = {"GR" : "EL",
"GB" : "UK"}
#final energy consumption per country and industry (TWh/a)
countries_df = countries_production.dot(industry_sector_ratios.T)
countries_df*= 0.001 #GWh -> TWh (ktCO2 -> MtCO2)
non_EU = ['NO', 'CH', 'ME', 'MK', 'RS', 'BA', 'AL']
# save current electricity consumption
for country in countries_df.index:
if country in non_EU:
if country == 'CH':
countries_df.loc[country, 'current electricity']=dic_Switzerland['current electricity']*tj_to_ktoe*ktoe_to_twh
else:
excel_balances = pd.read_excel('{}/{}.XLSX'.format(eb_base_dir,eb_names[country]),
sheet_name='2016', index_col=1,header=0, skiprows=1 ,squeeze=True)
countries_df.loc[country, 'current electricity'] = excel_balances.loc['Industry', 'Electricity']*ktoe_to_twh
else:
excel_out = pd.read_excel('{}/JRC-IDEES-2015_Industry_{}.xlsx'.format(jrc_base_dir,jrc_names.get(country,country)),
sheet_name='Ind_Summary',index_col=0,header=0,squeeze=True) # the summary sheet
s_out = excel_out.iloc[27:48,-1]
countries_df.loc[country, 'current electricity'] = s_out['Electricity']*ktoe_to_twh
rename_sectors = {'elec':'electricity',
'biomass':'solid biomass',
'heat':'low-temperature heat'}
countries_df.rename(columns=rename_sectors,inplace=True)
countries_df.index.name = "TWh/a (MtCO2/a)"
countries_df.to_csv(snakemake.output.industrial_energy_demand_per_country,
float_format='%.2f')


@@ -1,140 +1,165 @@
"""Build industrial energy demand per country."""
import pandas as pd
import multiprocessing as mp

from tqdm import tqdm

ktoe_to_twh = 0.011630

# name in JRC-IDEES Energy Balances
sector_sheets = {'Integrated steelworks': 'cisb',
                 'Electric arc': 'cise',
                 'Alumina production': 'cnfa',
                 'Aluminium - primary production': 'cnfp',
                 'Aluminium - secondary production': 'cnfs',
                 'Other non-ferrous metals': 'cnfo',
                 'Basic chemicals': 'cbch',
                 'Other chemicals': 'coch',
                 'Pharmaceutical products etc.': 'cpha',
                 'Basic chemicals feedstock': 'cpch',
                 'Cement': 'ccem',
                 'Ceramics & other NMM': 'ccer',
                 'Glass production': 'cgla',
                 'Pulp production': 'cpul',
                 'Paper production': 'cpap',
                 'Printing and media reproduction': 'cprp',
                 'Food, beverages and tobacco': 'cfbt',
                 'Transport Equipment': 'ctre',
                 'Machinery Equipment': 'cmae',
                 'Textiles and leather': 'ctel',
                 'Wood and wood products': 'cwwp',
                 'Mining and quarrying': 'cmiq',
                 'Construction': 'ccon',
                 'Non-specified': 'cnsi',
                 }

fuels = {'All Products': 'all',
         'Solid Fuels': 'solid',
         'Total petroleum products (without biofuels)': 'liquid',
         'Gases': 'gas',
         'Nuclear heat': 'heat',
         'Derived heat': 'heat',
         'Biomass and Renewable wastes': 'biomass',
         'Wastes (non-renewable)': 'waste',
         'Electricity': 'electricity'
         }

eu28 = ['FR', 'DE', 'GB', 'IT', 'ES', 'PL', 'SE', 'NL', 'BE', 'FI',
        'DK', 'PT', 'RO', 'AT', 'BG', 'EE', 'GR', 'LV', 'CZ',
        'HU', 'IE', 'SK', 'LT', 'HR', 'LU', 'SI', 'CY', 'MT']

jrc_names = {"GR": "EL", "GB": "UK"}


def industrial_energy_demand_per_country(country):

    jrc_dir = snakemake.input.jrc
    jrc_country = jrc_names.get(country, country)
    fn = f'{jrc_dir}/JRC-IDEES-2015_EnergyBalance_{jrc_country}.xlsx'

    sheets = list(sector_sheets.values())
    df_dict = pd.read_excel(fn, sheet_name=sheets, index_col=0)

    def get_subsector_data(sheet):

        df = df_dict[sheet][year].groupby(fuels).sum()

        df['other'] = df['all'] - df.loc[df.index != 'all'].sum()

        return df

    df = pd.concat({sub: get_subsector_data(sheet)
                    for sub, sheet in sector_sheets.items()}, axis=1)

    sel = ['Mining and quarrying', 'Construction', 'Non-specified']
    df['Other Industrial Sectors'] = df[sel].sum(axis=1)
    df['Basic chemicals'] += df['Basic chemicals feedstock']

    df.drop(columns=sel+['Basic chemicals feedstock'], index='all', inplace=True)

    df *= ktoe_to_twh

    return df


def add_ammonia_energy_demand(demand):

    # MtNH3/a
    fn = snakemake.input.ammonia_production
    ammonia = pd.read_csv(fn, index_col=0)[str(year)] / 1e3

    def ammonia_by_fuel(x):

        fuels = {'gas': config['MWh_CH4_per_tNH3_SMR'],
                 'electricity': config['MWh_elec_per_tNH3_SMR']}

        return pd.Series({k: x*v for k,v in fuels.items()})

    ammonia = ammonia.apply(ammonia_by_fuel).T

    demand['Ammonia'] = ammonia.unstack().reindex(index=demand.index, fill_value=0.)

    demand['Basic chemicals (without ammonia)'] = demand["Basic chemicals"] - demand["Ammonia"]

    demand['Basic chemicals (without ammonia)'].clip(lower=0, inplace=True)

    demand.drop(columns='Basic chemicals', inplace=True)

    return demand


def add_non_eu28_industrial_energy_demand(demand):

    # output in MtMaterial/a
    fn = snakemake.input.industrial_production_per_country
    production = pd.read_csv(fn, index_col=0) / 1e3

    eu28_production = production.loc[eu28].sum()
    eu28_energy = demand.groupby(level=1).sum()
    eu28_averages = eu28_energy / eu28_production

    non_eu28 = production.index.symmetric_difference(eu28)

    demand_non_eu28 = pd.concat({k: v * eu28_averages
                                 for k, v in production.loc[non_eu28].iterrows()})

    return pd.concat([demand, demand_non_eu28])


def industrial_energy_demand(countries):

    nprocesses = snakemake.threads
    func = industrial_energy_demand_per_country
    tqdm_kwargs = dict(ascii=False, unit=' country', total=len(countries),
                       desc="Build industrial energy demand")
    with mp.Pool(processes=nprocesses) as pool:
        demand_l = list(tqdm(pool.imap(func, countries), **tqdm_kwargs))

    demand = pd.concat(demand_l, keys=countries)

    return demand


if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake('build_industrial_energy_demand_per_country_today')

    config = snakemake.config['industry']
    year = config.get('reference_year', 2015)

    demand = industrial_energy_demand(eu28)

    demand = add_ammonia_energy_demand(demand)

    demand = add_non_eu28_industrial_energy_demand(demand)

    # for format compatibility
    demand = demand.stack(dropna=False).unstack(level=[0, 2])

    # style and annotation
    demand.index.name = 'TWh/a'
    demand.sort_index(axis=1, inplace=True)

    fn = snakemake.output.industrial_energy_demand_per_country_today
    demand.to_csv(fn)
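
The `fuels` dict doubles as a `groupby` mapper: JRC fuel rows collapse onto model carriers, and whatever 'All Products' covers beyond the listed carriers ends up in an explicit 'other' column. A toy illustration with invented numbers:

import pandas as pd

s = pd.Series({'All Products': 10.0, 'Gases': 4.0,
               'Nuclear heat': 1.0, 'Derived heat': 2.0})
fuels = {'All Products': 'all', 'Gases': 'gas',
         'Nuclear heat': 'heat', 'Derived heat': 'heat'}

df = s.groupby(fuels).sum()
df['other'] = df['all'] - df.loc[df.index != 'all'].sum()
assert df['heat'] == 3.0 and df['other'] == 3.0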


@@ -1,33 +1,44 @@
"""Build industrial energy demand per node."""
import pandas as pd

if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake(
            'build_industrial_energy_demand_per_node',
            simpl='',
            clusters=48,
        )

    # import EU ratios df as csv
    fn = snakemake.input.industry_sector_ratios
    industry_sector_ratios = pd.read_csv(fn, index_col=0)

    # material demand per node and industry (kton/a)
    fn = snakemake.input.industrial_production_per_node
    nodal_production = pd.read_csv(fn, index_col=0)

    # energy demand today to get current electricity
    fn = snakemake.input.industrial_energy_demand_per_node_today
    nodal_today = pd.read_csv(fn, index_col=0)

    # final energy consumption per node and industry (TWh/a)
    nodal_df = nodal_production.dot(industry_sector_ratios.T)

    # convert GWh to TWh and ktCO2 to MtCO2
    nodal_df *= 0.001

    rename_sectors = {
        'elec': 'electricity',
        'biomass': 'solid biomass',
        'heat': 'low-temperature heat'
    }
    nodal_df.rename(columns=rename_sectors, inplace=True)

    nodal_df["current electricity"] = nodal_today["electricity"]

    nodal_df.index.name = "TWh/a (MtCO2/a)"

    fn = snakemake.output.industrial_energy_demand_per_node
    nodal_df.to_csv(fn, float_format='%.2f')
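
The core step is a single matrix product: nodal material production (kt/a) times transposed sector ratios (MWh per tonne of material) gives final energy per node and carrier. A toy illustration with one invented node and sector:

import pandas as pd

# hypothetical sector ratios (MWh/t) and nodal production (kt/a)
ratios = pd.DataFrame({'Cement': [0.3, 0.1]}, index=['elec', 'heat'])
production = pd.DataFrame({'Cement': [2000.]}, index=['DE0 0'])

energy = production.dot(ratios.T) * 0.001  # GWh -> TWh
assert abs(energy.loc['DE0 0', 'elec'] - 0.6) < 1e-9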


@@ -1,54 +1,73 @@
"""Build industrial energy demand per node."""
import pandas as pd
import numpy as np

from itertools import product

# map JRC/our sectors to hotmaps sector, where mapping exist
sector_mapping = {
    'Electric arc': 'Iron and steel',
    'Integrated steelworks': 'Iron and steel',
    'DRI + Electric arc': 'Iron and steel',
    'Ammonia': 'Chemical industry',
    'Basic chemicals (without ammonia)': 'Chemical industry',
    'Other chemicals': 'Chemical industry',
    'Pharmaceutical products etc.': 'Chemical industry',
    'Cement': 'Cement',
    'Ceramics & other NMM': 'Non-metallic mineral products',
    'Glass production': 'Glass',
    'Pulp production': 'Paper and printing',
    'Paper production': 'Paper and printing',
    'Printing and media reproduction': 'Paper and printing',
    'Alumina production': 'Non-ferrous metals',
    'Aluminium - primary production': 'Non-ferrous metals',
    'Aluminium - secondary production': 'Non-ferrous metals',
    'Other non-ferrous metals': 'Non-ferrous metals',
}


def build_nodal_industrial_energy_demand():

    fn = snakemake.input.industrial_energy_demand_per_country_today
    industrial_demand = pd.read_csv(fn, header=[0, 1], index_col=0)

    fn = snakemake.input.industrial_distribution_key
    keys = pd.read_csv(fn, index_col=0)
    keys["country"] = keys.index.str[:2]

    nodal_demand = pd.DataFrame(0., dtype=float,
                                index=keys.index,
                                columns=industrial_demand.index)

    countries = keys.country.unique()
    sectors = industrial_demand.columns.levels[1]

    for country, sector in product(countries, sectors):

        buses = keys.index[keys.country == country]
        mapping = sector_mapping.get(sector, 'population')

        key = keys.loc[buses, mapping]
        demand = industrial_demand[country, sector]

        outer = pd.DataFrame(np.outer(key, demand),
                             index=key.index,
                             columns=demand.index)

        nodal_demand.loc[buses] += outer

    nodal_demand.index.name = "TWh/a"

    nodal_demand.to_csv(snakemake.output.industrial_energy_demand_per_node_today)


if __name__ == "__main__":
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake(
            'build_industrial_energy_demand_per_node_today',
            simpl='',
            clusters=48,
        )

    build_nodal_industrial_energy_demand()
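
The outer product spreads one country's sector demand vector (per carrier) over that country's buses according to the distribution key. A toy sketch with invented bus names and values:

import numpy as np
import pandas as pd

key = pd.Series([0.25, 0.75], index=['DE0 0', 'DE0 1'])         # bus shares
demand = pd.Series([8., 4.], index=['electricity', 'methane'])  # TWh/a

outer = pd.DataFrame(np.outer(key, demand),
                     index=key.index, columns=demand.index)
assert outer.loc['DE0 1', 'electricity'] == 6.  # 75% of 8 TWh/a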


@@ -1,218 +1,222 @@
"""Build industrial production per country."""
import pandas as pd
import numpy as np
import multiprocessing as mp

from tqdm import tqdm

tj_to_ktoe = 0.0238845
ktoe_to_twh = 0.01163

sub_sheet_name_dict = {'Iron and steel': 'ISI',
                       'Chemicals Industry': 'CHI',
                       'Non-metallic mineral products': 'NMM',
                       'Pulp, paper and printing': 'PPA',
                       'Food, beverages and tobacco': 'FBT',
                       'Non Ferrous Metals': 'NFM',
                       'Transport Equipment': 'TRE',
                       'Machinery Equipment': 'MAE',
                       'Textiles and leather': 'TEL',
                       'Wood and wood products': 'WWP',
                       'Other Industrial Sectors': 'OIS'}

non_EU = ['NO', 'CH', 'ME', 'MK', 'RS', 'BA', 'AL']

jrc_names = {"GR": "EL", "GB": "UK"}

eu28 = ['FR', 'DE', 'GB', 'IT', 'ES', 'PL', 'SE', 'NL', 'BE', 'FI',
        'DK', 'PT', 'RO', 'AT', 'BG', 'EE', 'GR', 'LV', 'CZ',
        'HU', 'IE', 'SK', 'LT', 'HR', 'LU', 'SI', 'CY', 'MT']

sect2sub = {'Iron and steel': ['Electric arc', 'Integrated steelworks'],
            'Chemicals Industry': ['Basic chemicals', 'Other chemicals', 'Pharmaceutical products etc.'],
            'Non-metallic mineral products': ['Cement', 'Ceramics & other NMM', 'Glass production'],
            'Pulp, paper and printing': ['Pulp production', 'Paper production', 'Printing and media reproduction'],
            'Food, beverages and tobacco': ['Food, beverages and tobacco'],
            'Non Ferrous Metals': ['Alumina production', 'Aluminium - primary production', 'Aluminium - secondary production', 'Other non-ferrous metals'],
            'Transport Equipment': ['Transport Equipment'],
            'Machinery Equipment': ['Machinery Equipment'],
            'Textiles and leather': ['Textiles and leather'],
            'Wood and wood products': ['Wood and wood products'],
            'Other Industrial Sectors': ['Other Industrial Sectors']}

sub2sect = {v: k for k, vv in sect2sub.items() for v in vv}

fields = {'Electric arc': 'Electric arc',
          'Integrated steelworks': 'Integrated steelworks',
          'Basic chemicals': 'Basic chemicals (kt ethylene eq.)',
          'Other chemicals': 'Other chemicals (kt ethylene eq.)',
          'Pharmaceutical products etc.': 'Pharmaceutical products etc. (kt ethylene eq.)',
          'Cement': 'Cement (kt)',
          'Ceramics & other NMM': 'Ceramics & other NMM (kt bricks eq.)',
          'Glass production': 'Glass production (kt)',
          'Pulp production': 'Pulp production (kt)',
          'Paper production': 'Paper production (kt)',
          'Printing and media reproduction': 'Printing and media reproduction (kt paper eq.)',
          'Food, beverages and tobacco': 'Physical output (index)',
          'Alumina production': 'Alumina production (kt)',
          'Aluminium - primary production': 'Aluminium - primary production',
          'Aluminium - secondary production': 'Aluminium - secondary production',
          'Other non-ferrous metals': 'Other non-ferrous metals (kt lead eq.)',
          'Transport Equipment': 'Physical output (index)',
          'Machinery Equipment': 'Physical output (index)',
          'Textiles and leather': 'Physical output (index)',
          'Wood and wood products': 'Physical output (index)',
          'Other Industrial Sectors': 'Physical output (index)'}

eb_names = {'NO': 'Norway', 'AL': 'Albania', 'BA': 'Bosnia and Herzegovina',
            'MK': 'FYR of Macedonia', 'GE': 'Georgia', 'IS': 'Iceland',
            'KO': 'Kosovo', 'MD': 'Moldova', 'ME': 'Montenegro', 'RS': 'Serbia',
            'UA': 'Ukraine', 'TR': 'Turkey', }

eb_sectors = {'Iron & steel industry': 'Iron and steel',
              'Chemical and Petrochemical industry': 'Chemicals Industry',
              'Non-ferrous metal industry': 'Non-metallic mineral products',
              'Paper, Pulp and Print': 'Pulp, paper and printing',
              'Food and Tabacco': 'Food, beverages and tobacco',
              'Non-metallic Minerals (Glass, pottery & building mat. Industry)': 'Non Ferrous Metals',
              'Transport Equipment': 'Transport Equipment',
              'Machinery': 'Machinery Equipment',
              'Textile and Leather': 'Textiles and leather',
              'Wood and Wood Products': 'Wood and wood products',
              'Non-specified (Industry)': 'Other Industrial Sectors'}

# TODO: this should go in a csv in `data`
# Annual energy consumption in Switzerland by sector in 2015 (in TJ)
# From: Energieverbrauch in der Industrie und im Dienstleistungssektor, Der Bundesrat
# http://www.bfe.admin.ch/themen/00526/00541/00543/index.html?lang=de&dossier_id=00775
e_switzerland = pd.Series({'Iron and steel': 7889.,
                           'Chemicals Industry': 26871.,
                           'Non-metallic mineral products': 15513.+3820.,
                           'Pulp, paper and printing': 12004.,
                           'Food, beverages and tobacco': 17728.,
                           'Non Ferrous Metals': 3037.,
                           'Transport Equipment': 14993.,
                           'Machinery Equipment': 4724.,
                           'Textiles and leather': 1742.,
                           'Wood and wood products': 0.,
                           'Other Industrial Sectors': 10825.,
                           'current electricity': 53760.})


def find_physical_output(df):
    start = np.where(df.index.str.contains('Physical output', na=''))[0][0]
    empty_row = np.where(df.index.isnull())[0]
    end = empty_row[np.argmax(empty_row > start)]
    return slice(start, end)


def get_energy_ratio(country):

    if country == 'CH':
        e_country = e_switzerland * tj_to_ktoe
    else:
        # estimate physical output, energy consumption in the sector and country
        fn = f"{eurostat_dir}/{eb_names[country]}.XLSX"
        df = pd.read_excel(fn, sheet_name='2016', index_col=2,
                           header=0, skiprows=1, squeeze=True)
        e_country = df.loc[eb_sectors.keys(), 'Total all products'].rename(eb_sectors)

    fn = f'{jrc_dir}/JRC-IDEES-2015_Industry_EU28.xlsx'
    df = pd.read_excel(fn, sheet_name='Ind_Summary',
                       index_col=0, header=0, squeeze=True)

    assert df.index[48] == "by sector"
    year_i = df.columns.get_loc(year)
    e_eu28 = df.iloc[49:76, year_i]
    e_eu28.index = e_eu28.index.str.lstrip()

    e_ratio = e_country / e_eu28

    return pd.Series({k: e_ratio[v] for k, v in sub2sect.items()})


def industry_production_per_country(country):

    def get_sector_data(sector, country):

        jrc_country = jrc_names.get(country, country)
        fn = f'{jrc_dir}/JRC-IDEES-2015_Industry_{jrc_country}.xlsx'
        sheet = sub_sheet_name_dict[sector]
        df = pd.read_excel(fn, sheet_name=sheet,
                           index_col=0, header=0, squeeze=True)

        year_i = df.columns.get_loc(year)
        df = df.iloc[find_physical_output(df), year_i]

        df = df.loc[map(fields.get, sect2sub[sector])]
        df.index = sect2sub[sector]

        return df

    ct = "EU28" if country in non_EU else country
    demand = pd.concat([get_sector_data(s, ct) for s in sect2sub.keys()])

    if country in non_EU:
        demand *= get_energy_ratio(country)

    demand.name = country

    return demand


def industry_production(countries):

    nprocesses = snakemake.threads
    func = industry_production_per_country
    tqdm_kwargs = dict(ascii=False, unit=' country', total=len(countries),
                       desc="Build industry production")
    with mp.Pool(processes=nprocesses) as pool:
        demand_l = list(tqdm(pool.imap(func, countries), **tqdm_kwargs))

    demand = pd.concat(demand_l, axis=1).T

    demand.index.name = "kton/a"

    return demand


def add_ammonia_demand_separately(demand):
    """Include ammonia demand separately and remove ammonia from basic chemicals."""

    ammonia = pd.read_csv(snakemake.input.ammonia_production, index_col=0)

    there = ammonia.index.intersection(demand.index)
    missing = demand.index.symmetric_difference(there)

    print("Following countries have no ammonia demand:", missing)

    demand.insert(2, "Ammonia", 0.)

    demand.loc[there, "Ammonia"] = ammonia.loc[there, str(year)]

    demand["Basic chemicals"] -= demand["Ammonia"]

    # EE, HR and LT got negative demand through subtraction - poor data
    demand['Basic chemicals'].clip(lower=0., inplace=True)

    to_rename = {"Basic chemicals": "Basic chemicals (without ammonia)"}
    demand.rename(columns=to_rename, inplace=True)


if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake('build_industrial_production_per_country')

    countries = non_EU + eu28

    year = snakemake.config['industry']['reference_year']

    jrc_dir = snakemake.input.jrc
    eurostat_dir = snakemake.input.eurostat

    demand = industry_production(countries)

    add_ammonia_demand_separately(demand)

    fn = snakemake.output.industrial_production_per_country
    demand.to_csv(fn, float_format='%.2f')
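
For non-EU countries without JRC-IDEES coverage, physical output is approximated by scaling EU28 output with the ratio of national to EU28 sectoral energy use from the Eurostat balances. A toy sketch of that scaling (all numbers invented):

# invented energy ratio: country uses 2% of EU28 steel-sector energy
e_ratio = 0.02
eu28_output = {'Electric arc': 500., 'Integrated steelworks': 1500.}  # kt/a
estimate = {k: e_ratio * v for k, v in eu28_output.items()}
assert abs(estimate['Integrated steelworks'] - 30.) < 1e-9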


@@ -1,29 +1,39 @@
"""Build future industrial production per country."""
import pandas as pd

if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake('build_industrial_production_per_country_tomorrow')

    config = snakemake.config["industry"]

    fn = snakemake.input.industrial_production_per_country
    production = pd.read_csv(fn, index_col=0)

    keys = ["Integrated steelworks", "Electric arc"]
    total_steel = production[keys].sum(axis=1)

    int_steel = production["Integrated steelworks"].sum()
    fraction_persistent_primary = config["St_primary_fraction"] * total_steel.sum() / int_steel

    dri = fraction_persistent_primary * production["Integrated steelworks"]
    production.insert(2, "DRI + Electric arc", dri)

    production["Electric arc"] = total_steel - production["DRI + Electric arc"]
    production["Integrated steelworks"] = 0.

    keys = ["Aluminium - primary production", "Aluminium - secondary production"]
    total_aluminium = production[keys].sum(axis=1)

    key_pri = "Aluminium - primary production"
    key_sec = "Aluminium - secondary production"

    fraction_persistent_primary = config["Al_primary_fraction"] * total_aluminium.sum() / production[key_pri].sum()

    production[key_pri] = fraction_persistent_primary * production[key_pri]
    production[key_sec] = total_aluminium - production[key_pri]

    production["Basic chemicals (without ammonia)"] *= config['HVC_primary_fraction']

    fn = snakemake.output.industrial_production_per_country_tomorrow
    production.to_csv(fn, float_format='%.2f')
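
The steel redistribution can be read off a worked toy example: with an assumed `St_primary_fraction` of 0.3 and 60 of 100 kt/a currently produced via integrated steelworks, the persistent-primary fraction becomes 0.3 * 100 / 60 = 0.5, so 30 kt/a move to 'DRI + Electric arc' and the remaining 70 kt/a to scrap-based 'Electric arc':

# toy numbers; St_primary_fraction = 0.3 is an assumed config value
total_steel = 100.         # kt/a, integrated + electric arc today
int_steel = 60.            # kt/a via integrated steelworks today
st_primary_fraction = 0.3  # tomorrow's primary share of total steel

fraction_persistent_primary = st_primary_fraction * total_steel / int_steel
dri = fraction_persistent_primary * int_steel
electric_arc = total_steel - dri
assert abs(dri - 30.) < 1e-9 and abs(electric_arc - 70.) < 1e-9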


@@ -1,47 +1,63 @@
"""Build industrial production per node."""
import pandas as pd
from itertools import product

# map JRC/our sectors to hotmaps sector, where a mapping exists
sector_mapping = {
    'Electric arc': 'Iron and steel',
    'Integrated steelworks': 'Iron and steel',
    'DRI + Electric arc': 'Iron and steel',
    'Ammonia': 'Chemical industry',
    'Basic chemicals (without ammonia)': 'Chemical industry',
    'Other chemicals': 'Chemical industry',
    'Pharmaceutical products etc.': 'Chemical industry',
    'Cement': 'Cement',
    'Ceramics & other NMM': 'Non-metallic mineral products',
    'Glass production': 'Glass',
    'Pulp production': 'Paper and printing',
    'Paper production': 'Paper and printing',
    'Printing and media reproduction': 'Paper and printing',
    'Alumina production': 'Non-ferrous metals',
    'Aluminium - primary production': 'Non-ferrous metals',
    'Aluminium - secondary production': 'Non-ferrous metals',
    'Other non-ferrous metals': 'Non-ferrous metals',
}


def build_nodal_industrial_production():

    fn = snakemake.input.industrial_production_per_country_tomorrow
    industrial_production = pd.read_csv(fn, index_col=0)

    fn = snakemake.input.industrial_distribution_key
    keys = pd.read_csv(fn, index_col=0)
    keys["country"] = keys.index.str[:2]

    nodal_production = pd.DataFrame(index=keys.index,
                                    columns=industrial_production.columns,
                                    dtype=float)

    countries = keys.country.unique()
    sectors = industrial_production.columns

    for country, sector in product(countries, sectors):

        buses = keys.index[keys.country == country]
        mapping = sector_mapping.get(sector, "population")

        key = keys.loc[buses, mapping]
        nodal_production.loc[buses, sector] = industrial_production.at[country, sector] * key

    nodal_production.to_csv(snakemake.output.industrial_production_per_node)


if __name__ == "__main__":
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake('build_industrial_production_per_node',
                                   simpl='',
                                   clusters=48,
                                   )

    build_nodal_industrial_production()

File diff suppressed because it is too large.


@@ -1,103 +1,98 @@
"""Build mapping between grid cells and population (total, urban, rural)"""
import multiprocessing as mp

import atlite
import numpy as np
import pandas as pd
import xarray as xr
import geopandas as gpd

from vresutils import shapes as vshapes


if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake('build_population_layouts')

    cutout = atlite.Cutout(snakemake.config['atlite']['cutout'])

    grid_cells = cutout.grid_cells()

    # nuts3 has columns country, gdp, pop, geometry
    # population is given in dimensions of 1e3=k
    nuts3 = gpd.read_file(snakemake.input.nuts3_shapes).set_index('index')

    # Indicator matrix NUTS3 -> grid cells
    I = atlite.cutout.compute_indicatormatrix(nuts3.geometry, grid_cells)

    # Indicator matrix grid_cells -> NUTS3; in principle Iinv*I is identity
    # but imprecisions mean not perfect
    Iinv = cutout.indicatormatrix(nuts3.geometry)

    countries = np.sort(nuts3.country.unique())

    urban_fraction = pd.read_csv(snakemake.input.urban_percent,
                                 header=None, index_col=0,
                                 names=['fraction'], squeeze=True) / 100.

    # fill missing Balkans values
    missing = ["AL", "ME", "MK"]
    reference = ["RS", "BA"]
    average = urban_fraction[reference].mean()
    fill_values = pd.Series({ct: average for ct in missing})
    urban_fraction = urban_fraction.append(fill_values)

    # population in each grid cell
    pop_cells = pd.Series(I.dot(nuts3['pop']))

    # in km^2
    with mp.Pool(processes=snakemake.threads) as pool:
        cell_areas = pd.Series(pool.map(vshapes.area, grid_cells)) / 1e6

    # pop per km^2
    density_cells = pop_cells / cell_areas

    # rural or urban population in grid cell
    pop_rural = pd.Series(0., density_cells.index)
    pop_urban = pd.Series(0., density_cells.index)

    for ct in countries:
        print(ct, urban_fraction[ct])

        indicator_nuts3_ct = nuts3.country.apply(lambda x: 1. if x == ct else 0.)

        indicator_cells_ct = pd.Series(Iinv.T.dot(indicator_nuts3_ct))

        density_cells_ct = indicator_cells_ct * density_cells

        pop_cells_ct = indicator_cells_ct * pop_cells

        # correct for imprecision of Iinv*I
        pop_ct = nuts3.loc[nuts3.country==ct, 'pop'].sum()
        pop_cells_ct *= pop_ct / pop_cells_ct.sum()

        # The first low density grid cells to reach rural fraction are rural
        asc_density_i = density_cells_ct.sort_values().index
        asc_density_cumsum = pop_cells_ct[asc_density_i].cumsum() / pop_cells_ct.sum()
        rural_fraction_ct = 1 - urban_fraction[ct]
        pop_ct_rural_b = asc_density_cumsum < rural_fraction_ct
        pop_ct_urban_b = ~pop_ct_rural_b

        pop_ct_rural_b[indicator_cells_ct == 0.] = False
        pop_ct_urban_b[indicator_cells_ct == 0.] = False

        pop_rural += pop_cells_ct.where(pop_ct_rural_b, 0.)
        pop_urban += pop_cells_ct.where(pop_ct_urban_b, 0.)

    pop_cells = {"total": pop_cells}
    pop_cells["rural"] = pop_rural
    pop_cells["urban"] = pop_urban

    for key, pop in pop_cells.items():

        ycoords = ('y', cutout.coords['y'])
        xcoords = ('x', cutout.coords['x'])
        values = pop.values.reshape(cutout.shape)
        layout = xr.DataArray(values, [ycoords, xcoords])

        layout.to_netcdf(snakemake.output[f"pop_layout_{key}"])
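As a sanity check on the rural/urban split logic above, a minimal sketch with invented numbers: cells are sorted by density, and the cells whose cumulative population share stays below the rural fraction are marked rural.

import pandas as pd

# invented cell populations and densities for one country
pop_cells_ct = pd.Series([10.0, 40.0, 30.0, 20.0])
density_cells_ct = pd.Series([5.0, 500.0, 50.0, 1.0])

urban_fraction_ct = 0.6  # assumed urban share, so 40% of population is rural

asc_density_i = density_cells_ct.sort_values().index
asc_density_cumsum = pop_cells_ct[asc_density_i].cumsum() / pop_cells_ct.sum()
rural_b = asc_density_cumsum < (1 - urban_fraction_ct)

# the two lowest-density cells (densities 1 and 5) fill the 40% rural share first
print(rural_b.sort_index())  # 0: True, 1: False, 2: False, 3: True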


@@ -441,7 +441,7 @@ def prepare_temperature_data():
        temperature_factor = (t_threshold - temperature_average_d_heat) * d_heat * 1/365
    """
    temperature = xr.open_dataarray(snakemake.input.air_temperature).to_pandas()
    d_heat = (temperature.groupby(temperature.columns.str[:2], axis=1).mean()
              .resample("1D").mean() < t_threshold).sum()
    temperature_average_d_heat = (temperature.groupby(temperature.columns.str[:2], axis=1)
@@ -825,36 +825,15 @@ def sample_dE_costs_area(area, area_tot, costs, dE_space, countries,
#%% --- MAIN --------------------------------------------------------------

if __name__ == "__main__":

    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake(
            'build_retro_cost',
            simpl='',
            clusters=48,
            lv=1.0,
            sector_opts='Co2L0-168H-T-H-B-I-solar3-dist1'
        )

# ********  config  *********************************************************
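A minimal sketch of the degree-day quantities used in prepare_temperature_data above, with invented temperatures, assuming temperature_average_d_heat is the mean daily temperature over heating days (days whose daily mean falls below t_threshold), as the docstring formula suggests.

import numpy as np
import pandas as pd

t_threshold = 15.0  # assumed heating threshold in Celsius

# two invented days of hourly temperatures for one region
index = pd.date_range("2013-01-01", periods=48, freq="h")
temperature = pd.Series(np.r_[np.full(24, 5.0), np.full(24, 20.0)], index)

daily_mean = temperature.resample("1D").mean()
d_heat = (daily_mean < t_threshold).sum()                                 # 1 heating day
temperature_average_d_heat = daily_mean[daily_mean < t_threshold].mean()  # 5.0

temperature_factor = (t_threshold - temperature_average_d_heat) * d_heat * 1/365
print(d_heat, temperature_factor)  # 1, (15 - 5) * 1/365 ≈ 0.0274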


@@ -1,52 +1,52 @@
"""Build solar thermal collector time series."""
import geopandas as gpd
import atlite
import pandas as pd
import xarray as xr
import numpy as np

if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake(
            'build_solar_thermal_profiles',
            simpl='',
            clusters=48,
        )

    config = snakemake.config['solar_thermal']

    time = pd.date_range(freq='h', **snakemake.config['snapshots'])
    cutout_config = snakemake.config['atlite']['cutout']
    cutout = atlite.Cutout(cutout_config).sel(time=time)

    clustered_regions = gpd.read_file(
        snakemake.input.regions_onshore).set_index('name').buffer(0).squeeze()

    I = cutout.indicatormatrix(clustered_regions)

    for area in ["total", "rural", "urban"]:

        pop_layout = xr.open_dataarray(snakemake.input[f'pop_layout_{area}'])

        stacked_pop = pop_layout.stack(spatial=('y', 'x'))
        M = I.T.dot(np.diag(I.dot(stacked_pop)))

        nonzero_sum = M.sum(axis=0, keepdims=True)
        nonzero_sum[nonzero_sum == 0.] = 1.
        M_tilde = M / nonzero_sum

        solar_thermal = cutout.solar_thermal(**config, matrix=M_tilde.T,
                                             index=clustered_regions.index)

        solar_thermal.to_netcdf(snakemake.output[f"solar_thermal_{area}"])
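This script and the temperature-profile script below normalise the aggregation matrix the same way; a small numeric sketch with invented weights: each column of M is rescaled to sum to one, and all-zero columns are guarded against division by zero by substituting a divisor of 1.

import numpy as np

# invented weights: 3 grid cells x 2 regions, second region empty
M = np.array([[2., 0.],
              [6., 0.],
              [0., 0.]])

nonzero_sum = M.sum(axis=0, keepdims=True)  # [[8., 0.]]
nonzero_sum[nonzero_sum == 0.] = 1.         # avoid division by zero
M_tilde = M / nonzero_sum

print(M_tilde)  # [[0.25, 0.], [0.75, 0.], [0., 0.]]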


@@ -1,50 +1,46 @@
"""Build temperature profiles."""
import geopandas as gpd
import atlite
import pandas as pd
import xarray as xr
import numpy as np

if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake(
            'build_temperature_profiles',
            simpl='',
            clusters=48,
        )

    time = pd.date_range(freq='h', **snakemake.config['snapshots'])
    cutout_config = snakemake.config['atlite']['cutout']
    cutout = atlite.Cutout(cutout_config).sel(time=time)

    clustered_regions = gpd.read_file(
        snakemake.input.regions_onshore).set_index('name').buffer(0).squeeze()

    I = cutout.indicatormatrix(clustered_regions)

    for area in ["total", "rural", "urban"]:

        pop_layout = xr.open_dataarray(snakemake.input[f'pop_layout_{area}'])

        stacked_pop = pop_layout.stack(spatial=('y', 'x'))
        M = I.T.dot(np.diag(I.dot(stacked_pop)))

        nonzero_sum = M.sum(axis=0, keepdims=True)
        nonzero_sum[nonzero_sum == 0.] = 1.
        M_tilde = M / nonzero_sum

        temp_air = cutout.temperature(
            matrix=M_tilde.T, index=clustered_regions.index)

        temp_air.to_netcdf(snakemake.output[f"temp_air_{area}"])

        temp_soil = cutout.soil_temperature(
            matrix=M_tilde.T, index=clustered_regions.index)

        temp_soil.to_netcdf(snakemake.output[f"temp_soil_{area}"])


@@ -1,10 +1,17 @@
from shutil import copy

files = [
    "config.yaml",
    "Snakefile",
    "scripts/solve_network.py",
    "scripts/prepare_sector_network.py"
]

if __name__ == '__main__':
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake('copy_config')

    for f in files:
        copy(f, snakemake.config['summary_dir'] + '/' + snakemake.config['run'] + '/configs/')


@@ -1,15 +1,91 @@
import os
import pandas as pd
from pathlib import Path
from pypsa.descriptors import Dict
from pypsa.components import components, component_attrs
import logging
logger = logging.getLogger(__name__)


def override_component_attrs(directory):
    """Tell PyPSA that links can have multiple outputs by
    overriding the component_attrs. This can be done for
    as many buses as you need with format busi for i = 2,3,4,5,....
    See https://pypsa.org/doc/components.html#link-with-multiple-outputs-or-inputs

    Parameters
    ----------
    directory : string
        Folder where component attributes to override are stored
        analogous to ``pypsa/component_attrs``, e.g. `links.csv`.

    Returns
    -------
    Dictionary of overridden component attributes.
    """

    attrs = Dict({k: v.copy() for k, v in component_attrs.items()})

    for component, list_name in components.list_name.items():
        fn = f"{directory}/{list_name}.csv"
        if os.path.isfile(fn):
            overrides = pd.read_csv(fn, index_col=0, na_values="n/a")
            attrs[component] = overrides.combine_first(attrs[component])

    return attrs


# from pypsa-eur/_helpers.py
def mock_snakemake(rulename, **wildcards):
    """
    This function is expected to be executed from the 'scripts'-directory of
    the snakemake project. It returns a snakemake.script.Snakemake object,
    based on the Snakefile.

    If a rule has wildcards, you have to specify them in **wildcards.

    Parameters
    ----------
    rulename: str
        name of the rule for which the snakemake object should be generated
    **wildcards:
        keyword arguments fixing the wildcards. Only necessary if wildcards are
        needed.
    """
    import snakemake as sm
    import os
    from pypsa.descriptors import Dict
    from snakemake.script import Snakemake

    script_dir = Path(__file__).parent.resolve()
    assert Path.cwd().resolve() == script_dir, \
        f'mock_snakemake has to be run from the repository scripts directory {script_dir}'
    os.chdir(script_dir.parent)
    for p in sm.SNAKEFILE_CHOICES:
        if os.path.exists(p):
            snakefile = p
            break
    workflow = sm.Workflow(snakefile)
    workflow.include(snakefile)
    workflow.global_resources = {}
    rule = workflow.get_rule(rulename)
    dag = sm.dag.DAG(workflow, rules=[rule])
    wc = Dict(wildcards)
    job = sm.jobs.Job(rule, dag, wc)

    def make_accessable(*ios):
        for io in ios:
            for i in range(len(io)):
                io[i] = os.path.abspath(io[i])

    make_accessable(job.input, job.output, job.log)
    snakemake = Snakemake(job.input, job.output, job.params, job.wildcards,
                          job.threads, job.resources, job.log,
                          job.dag.workflow.config, job.rule.name, None,)

    # create log and output dir if not existent
    for path in list(snakemake.log) + list(snakemake.output):
        Path(path).parent.mkdir(parents=True, exist_ok=True)

    os.chdir(script_dir)
    return snakemake
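A hedged sketch of how these two helpers are consumed elsewhere in this revision; the overrides folder path and the network filename below are assumptions for illustration, not paths confirmed by the diffs.

import pypsa
from helper import override_component_attrs

# assumed location of the override CSVs introduced in this revision
overrides = override_component_attrs("data/override_component_attrs")

# load a solved network with multi-output links recognised
n = pypsa.Network("results/postnetwork.nc",  # hypothetical file
                  override_component_attrs=overrides)

# outside snakemake, run from the scripts/ directory, e.g.:
# snakemake = mock_snakemake('build_industrial_production_per_node', simpl='', clusters=48)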


@@ -1,44 +1,21 @@
import sys

import pypsa
import numpy as np
import pandas as pd

from prepare_sector_network import prepare_costs
from helper import override_component_attrs

import yaml

idx = pd.IndexSlice

opt_name = {
    "Store": "e",
    "Line": "s",
    "Transformer": "s"
}


def assign_carriers(n):

@@ -48,18 +25,16 @@ def assign_carriers(n):

def assign_locations(n):
    for c in n.iterate_components(n.one_port_components|n.branch_components):
        ifind = pd.Series(c.df.index.str.find(" ", start=4), c.df.index)
        for i in ifind.unique():
            names = ifind.index[ifind == i]
            if i == -1:
                c.df.loc[names, 'location'] = ""
            else:
                c.df.loc[names, 'location'] = names.str[:i]


def calculate_nodal_cfs(n, label, nodal_cfs):
    # Beware this also has extraneous locations for country (e.g. biomass) or continent-wide (e.g. fossil gas/oil) stuff
    for c in n.iterate_components((n.branch_components^{"Line","Transformer"})|n.controllable_one_port_components^{"Load","StorageUnit"}):
        capacities_c = c.df.groupby(["location","carrier"])[opt_name.get(c.name,"p") + "_nom_opt"].sum()

@@ -74,7 +49,7 @@ def calculate_nodal_cfs(n, label, nodal_cfs):
            sys.exit()

        c.df["p"] = p
        p_c = c.df.groupby(["location", "carrier"])["p"].sum()

        cf_c = p_c/capacities_c

@@ -85,10 +60,7 @@ def calculate_nodal_cfs(n, label, nodal_cfs):
    return nodal_cfs


def calculate_cfs(n, label, cfs):
    for c in n.iterate_components(n.branch_components|n.controllable_one_port_components^{"Load","StorageUnit"}):
        capacities_c = c.df[opt_name.get(c.name,"p") + "_nom_opt"].groupby(c.df.carrier).sum()

@@ -113,43 +85,41 @@ def calculate_cfs(n, label, cfs):
    return cfs


def calculate_nodal_costs(n, label, nodal_costs):
    # Beware this also has extraneous locations for country (e.g. biomass) or continent-wide (e.g. fossil gas/oil) stuff
    for c in n.iterate_components(n.branch_components|n.controllable_one_port_components^{"Load"}):
        c.df["capital_costs"] = c.df.capital_cost * c.df[opt_name.get(c.name, "p") + "_nom_opt"]
        capital_costs = c.df.groupby(["location", "carrier"])["capital_costs"].sum()
        index = pd.MultiIndex.from_tuples([(c.list_name, "capital") + t for t in capital_costs.index.to_list()])
        nodal_costs = nodal_costs.reindex(index.union(nodal_costs.index))
        nodal_costs.loc[index, label] = capital_costs.values

        if c.name == "Link":
            p = c.pnl.p0.multiply(n.snapshot_weightings.generators, axis=0).sum()
        elif c.name == "Line":
            continue
        elif c.name == "StorageUnit":
            p_all = c.pnl.p.multiply(n.snapshot_weightings.generators, axis=0)
            p_all[p_all < 0.] = 0.
            p = p_all.sum()
        else:
            p = c.pnl.p.multiply(n.snapshot_weightings.generators, axis=0).sum()

        # correct sequestration cost
        if c.name == "Store":
            items = c.df.index[(c.df.carrier == "co2 stored") & (c.df.marginal_cost <= -100.)]
            c.df.loc[items, "marginal_cost"] = -20.

        c.df["marginal_costs"] = p*c.df.marginal_cost
        marginal_costs = c.df.groupby(["location", "carrier"])["marginal_costs"].sum()
        index = pd.MultiIndex.from_tuples([(c.list_name, "marginal") + t for t in marginal_costs.index.to_list()])
        nodal_costs = nodal_costs.reindex(index.union(nodal_costs.index))
        nodal_costs.loc[index, label] = marginal_costs.values

    return nodal_costs


def calculate_costs(n, label, costs):

    for c in n.iterate_components(n.branch_components|n.controllable_one_port_components^{"Load"}):
        capital_costs = c.df.capital_cost*c.df[opt_name.get(c.name,"p") + "_nom_opt"]

@@ -160,23 +130,23 @@ def calculate_costs(n, label, costs):
        costs = costs.reindex(capital_costs_grouped.index.union(costs.index))

        costs.loc[capital_costs_grouped.index, label] = capital_costs_grouped

        if c.name == "Link":
            p = c.pnl.p0.multiply(n.snapshot_weightings.generators, axis=0).sum()
        elif c.name == "Line":
            continue
        elif c.name == "StorageUnit":
            p_all = c.pnl.p.multiply(n.snapshot_weightings.generators, axis=0)
            p_all[p_all < 0.] = 0.
            p = p_all.sum()
        else:
            p = c.pnl.p.multiply(n.snapshot_weightings.generators, axis=0).sum()

        # correct sequestration cost
        if c.name == "Store":
            items = c.df.index[(c.df.carrier == "co2 stored") & (c.df.marginal_cost <= -100.)]
            c.df.loc[items, "marginal_cost"] = -20.

        marginal_costs = p*c.df.marginal_cost

@@ -189,13 +159,14 @@ def calculate_costs(n, label, costs):
        costs.loc[marginal_costs_grouped.index, label] = marginal_costs_grouped

    # add back in all hydro
    #costs.loc[("storage_units", "capital", "hydro"),label] = (0.01)*2e6*n.storage_units.loc[n.storage_units.group=="hydro", "p_nom"].sum()
    #costs.loc[("storage_units", "capital", "PHS"),label] = (0.01)*2e6*n.storage_units.loc[n.storage_units.group=="PHS", "p_nom"].sum()
    #costs.loc[("generators", "capital", "ror"),label] = (0.02)*3e6*n.generators.loc[n.generators.group=="ror", "p_nom"].sum()

    return costs


def calculate_cumulative_cost():

    planning_horizons = snakemake.config['scenario']['planning_horizons']

@@ -211,11 +182,12 @@ def calculate_cumulative_cost():
    for cluster in cumulative_cost.index.get_level_values(level=0).unique():
        for lv in cumulative_cost.index.get_level_values(level=1).unique():
            for sector_opts in cumulative_cost.index.get_level_values(level=2).unique():
                cumulative_cost.loc[(cluster, lv, sector_opts, 'cumulative cost'), r] = np.trapz(cumulative_cost.loc[idx[cluster, lv, sector_opts, planning_horizons], r].values, x=planning_horizons)

    return cumulative_cost


def calculate_nodal_capacities(n, label, nodal_capacities):
    # Beware this also has extraneous locations for country (e.g. biomass) or continent-wide (e.g. fossil gas/oil) stuff
    for c in n.iterate_components(n.branch_components|n.controllable_one_port_components^{"Load"}):
        nodal_capacities_c = c.df.groupby(["location","carrier"])[opt_name.get(c.name,"p") + "_nom_opt"].sum()

@@ -226,9 +198,7 @@ def calculate_nodal_capacities(n, label, nodal_capacities):
    return nodal_capacities


def calculate_capacities(n, label, capacities):

    for c in n.iterate_components(n.branch_components|n.controllable_one_port_components^{"Load"}):
        capacities_grouped = c.df[opt_name.get(c.name,"p") + "_nom_opt"].groupby(c.df.carrier).sum()

@@ -236,12 +206,12 @@ def calculate_capacities(n, label, capacities):
        capacities = capacities.reindex(capacities_grouped.index.union(capacities.index))

        capacities.loc[capacities_grouped.index, label] = capacities_grouped

    return capacities


def calculate_curtailment(n, label, curtailment):

    avail = n.generators_t.p_max_pu.multiply(n.generators.p_nom_opt).sum().groupby(n.generators.carrier).sum()
    used = n.generators_t.p.sum().groupby(n.generators.carrier).sum()

@@ -250,31 +220,32 @@ def calculate_curtailment(n, label, curtailment):
    return curtailment


def calculate_energy(n, label, energy):

    for c in n.iterate_components(n.one_port_components|n.branch_components):

        if c.name in n.one_port_components:
            c_energies = c.pnl.p.multiply(n.snapshot_weightings.generators, axis=0).sum().multiply(c.df.sign).groupby(c.df.carrier).sum()
        else:
            c_energies = pd.Series(0., c.df.carrier.unique())
            for port in [col[3:] for col in c.df.columns if col[:3] == "bus"]:
                totals = c.pnl["p" + port].multiply(n.snapshot_weightings.generators, axis=0).sum()
                # remove values where bus is missing (bug in nomopyomo)
                no_bus = c.df.index[c.df["bus" + port] == ""]
                totals.loc[no_bus] = n.component_attrs[c.name].loc["p" + port, "default"]
                c_energies -= totals.groupby(c.df.carrier).sum()

        c_energies = pd.concat([c_energies], keys=[c.list_name])

        energy = energy.reindex(c_energies.index.union(energy.index))

        energy.loc[c_energies.index, label] = c_energies

    return energy


def calculate_supply(n, label, supply):
    """calculate the max dispatch of each component at the buses aggregated by carrier"""

    bus_carriers = n.buses.carrier.unique()

@@ -290,7 +261,7 @@ def calculate_supply(n, label, supply):
            if len(items) == 0:
                continue

            s = c.pnl.p[items].max().multiply(c.df.loc[items, 'sign']).groupby(c.df.loc[items, 'carrier']).sum()
            s = pd.concat([s], keys=[c.list_name])
            s = pd.concat([s], keys=[i])

@@ -302,23 +273,23 @@ def calculate_supply(n, label, supply):
            for end in [col[3:] for col in c.df.columns if col[:3] == "bus"]:

                items = c.df.index[c.df["bus" + end].map(bus_map, na_action=False)]

                if len(items) == 0:
                    continue

                # lots of sign compensation for direction and to do maximums
                s = (-1)**(1-int(end))*((-1)**int(end)*c.pnl["p"+end][items]).max().groupby(c.df.loc[items, 'carrier']).sum()
                s.index = s.index + end
                s = pd.concat([s], keys=[c.list_name])
                s = pd.concat([s], keys=[i])

                supply = supply.reindex(s.index.union(supply.index))
                supply.loc[s.index, label] = s

    return supply


def calculate_supply_energy(n, label, supply_energy):
    """calculate the total energy supply/consumption of each component at the buses aggregated by carrier"""

@@ -335,54 +306,63 @@ def calculate_supply_energy(n, label, supply_energy):
            if len(items) == 0:
                continue

            s = c.pnl.p[items].multiply(n.snapshot_weightings.generators, axis=0).sum().multiply(c.df.loc[items, 'sign']).groupby(c.df.loc[items, 'carrier']).sum()
            s = pd.concat([s], keys=[c.list_name])
            s = pd.concat([s], keys=[i])

            supply_energy = supply_energy.reindex(s.index.union(supply_energy.index))
            supply_energy.loc[s.index, label] = s

        for c in n.iterate_components(n.branch_components):

            for end in [col[3:] for col in c.df.columns if col[:3] == "bus"]:

                items = c.df.index[c.df["bus" + str(end)].map(bus_map, na_action=False)]

                if len(items) == 0:
                    continue

                s = (-1)*c.pnl["p"+end][items].multiply(n.snapshot_weightings.generators, axis=0).sum().groupby(c.df.loc[items, 'carrier']).sum()
                s.index = s.index + end
                s = pd.concat([s], keys=[c.list_name])
                s = pd.concat([s], keys=[i])

                supply_energy = supply_energy.reindex(s.index.union(supply_energy.index))
                supply_energy.loc[s.index, label] = s

    return supply_energy


def calculate_metrics(n, label, metrics):

    metrics_list = [
        "line_volume",
        "line_volume_limit",
        "line_volume_AC",
        "line_volume_DC",
        "line_volume_shadow",
        "co2_shadow"
    ]

    metrics = metrics.reindex(pd.Index(metrics_list).union(metrics.index))

    metrics.at["line_volume_DC", label] = (n.links.length * n.links.p_nom_opt)[n.links.carrier == "DC"].sum()
    metrics.at["line_volume_AC", label] = (n.lines.length * n.lines.s_nom_opt).sum()
    metrics.at["line_volume", label] = metrics.loc[["line_volume_AC", "line_volume_DC"], label].sum()

    if hasattr(n, "line_volume_limit"):
        metrics.at["line_volume_limit", label] = n.line_volume_limit
        metrics.at["line_volume_shadow", label] = n.line_volume_limit_dual

    if "CO2Limit" in n.global_constraints.index:
        metrics.at["co2_shadow", label] = n.global_constraints.at["CO2Limit", "mu"]

    return metrics


def calculate_prices(n, label, prices):

    prices = prices.reindex(prices.index.union(n.buses.carrier.unique()))

@@ -392,20 +372,26 @@ def calculate_prices(n, label, prices):
    return prices


def calculate_weighted_prices(n, label, weighted_prices):
    # Warning: doesn't include storage units as loads

    weighted_prices = weighted_prices.reindex(pd.Index([
        "electricity",
        "heat",
        "space heat",
        "urban heat",
        "space urban heat",
        "gas",
        "H2"
    ]))

    link_loads = {"electricity": ["heat pump", "resistive heater", "battery charger", "H2 Electrolysis"],
                  "heat": ["water tanks charger"],
                  "urban heat": ["water tanks charger"],
                  "space heat": [],
                  "space urban heat": [],
                  "gas": ["OCGT", "gas boiler", "CHP electric", "CHP heat"],
                  "H2": ["Sabatier", "H2 Fuel Cell"]}

    for carrier in link_loads:

@@ -421,14 +407,13 @@ def calculate_weighted_prices(n, label, weighted_prices):
        if buses.empty:
            continue

        if carrier in ["H2", "gas"]:
            load = pd.DataFrame(index=n.snapshots, columns=buses, data=0.)
        elif carrier[:5] == "space":
            load = heat_demand_df[buses.str[:2]].rename(columns=lambda i: str(i)+suffix)
        else:
            load = n.loads_t.p_set[buses]

        for tech in link_loads[carrier]:

            names = n.links.index[n.links.index.to_series().str[-len(tech):] == tech]

@@ -436,24 +421,22 @@ def calculate_weighted_prices(n, label, weighted_prices):
            if names.empty:
                continue

            load += n.links_t.p0[names].groupby(n.links.loc[names, "bus0"], axis=1).sum()

        # Add H2 Store when charging
        #if carrier == "H2":
        #    stores = n.stores_t.p[buses + " Store"].groupby(n.stores.loc[buses + " Store", "bus"], axis=1).sum(axis=1)
        #    stores[stores > 0.] = 0.
        #    load += -stores

        weighted_prices.loc[carrier, label] = (load * n.buses_t.marginal_price[buses]).sum().sum() / load.sum().sum()

        if carrier[:5] == "space":
            print(load * n.buses_t.marginal_price[buses])

    return weighted_prices


def calculate_market_values(n, label, market_values):
    # Warning: doesn't include storage units

@@ -463,41 +446,40 @@ def calculate_market_values(n, label, market_values):
    ## First do market value of generators ##

    generators = n.generators.index[n.buses.loc[n.generators.bus, "carrier"] == carrier]

    techs = n.generators.loc[generators, "carrier"].value_counts().index

    market_values = market_values.reindex(market_values.index.union(techs))

    for tech in techs:
        gens = generators[n.generators.loc[generators, "carrier"] == tech]

        dispatch = n.generators_t.p[gens].groupby(n.generators.loc[gens, "bus"], axis=1).sum().reindex(columns=buses, fill_value=0.)

        revenue = dispatch * n.buses_t.marginal_price[buses]

        market_values.at[tech, label] = revenue.sum().sum() / dispatch.sum().sum()

    ## Now do market value of links ##

    for i in ["0", "1"]:
        all_links = n.links.index[n.buses.loc[n.links["bus"+i], "carrier"] == carrier]

        techs = n.links.loc[all_links, "carrier"].value_counts().index

        market_values = market_values.reindex(market_values.index.union(techs))

        for tech in techs:
            links = all_links[n.links.loc[all_links, "carrier"] == tech]

            dispatch = n.links_t["p"+i][links].groupby(n.links.loc[links, "bus"+i], axis=1).sum().reindex(columns=buses, fill_value=0.)

            revenue = dispatch * n.buses_t.marginal_price[buses]

            market_values.at[tech, label] = revenue.sum().sum() / dispatch.sum().sum()

    return market_values

@@ -505,17 +487,17 @@ def calculate_market_values(n, label, market_values):

def calculate_price_statistics(n, label, price_statistics):

    price_statistics = price_statistics.reindex(price_statistics.index.union(pd.Index(["zero_hours", "mean", "standard_deviation"])))

    buses = n.buses.index[n.buses.carrier == "AC"]

    threshold = 0.1  # higher than phoney marginal_cost of wind/solar

    df = pd.DataFrame(data=0., columns=buses, index=n.snapshots)

    df[n.buses_t.marginal_price[buses] < threshold] = 1.

    price_statistics.at["zero_hours", label] = df.sum().sum() / (df.shape[0] * df.shape[1])

    price_statistics.at["mean", label] = n.buses_t.marginal_price[buses].unstack().mean()

@@ -524,38 +506,41 @@ def calculate_price_statistics(n, label, price_statistics):
    return price_statistics


def make_summaries(networks_dict):

    outputs = [
        "nodal_costs",
        "nodal_capacities",
        "nodal_cfs",
        "cfs",
        "costs",
        "capacities",
        "curtailment",
        "energy",
        "supply",
        "supply_energy",
        "prices",
        "weighted_prices",
        "price_statistics",
        "market_values",
        "metrics",
    ]

    columns = pd.MultiIndex.from_tuples(
        networks_dict.keys(),
        names=["cluster", "lv", "opt", "planning_horizon"]
    )

    df = {}

    for output in outputs:
        df[output] = pd.DataFrame(columns=columns, dtype=float)

    for label, filename in networks_dict.items():
        print(label, filename)

        overrides = override_component_attrs(snakemake.input.overrides)
        n = pypsa.Network(filename, override_component_attrs=overrides)

        assign_carriers(n)
        assign_locations(n)

@@ -567,56 +552,37 @@ def make_summaries(networks_dict):

def to_csv(df):
    for key in df:
        df[key].to_csv(snakemake.output[key])


if __name__ == "__main__":
    if 'snakemake' not in globals():
        from helper import mock_snakemake
        snakemake = mock_snakemake('make_summary')

    networks_dict = {
        (cluster, lv, opt+sector_opt, planning_horizon) :
        snakemake.config['results_dir'] + snakemake.config['run'] + f'/postnetworks/elec_s{simpl}_{cluster}_lv{lv}_{opt}_{sector_opt}_{planning_horizon}.nc' \
        for simpl in snakemake.config['scenario']['simpl'] \
        for cluster in snakemake.config['scenario']['clusters'] \
        for opt in snakemake.config['scenario']['opts'] \
        for sector_opt in snakemake.config['scenario']['sector_opts'] \
        for lv in snakemake.config['scenario']['lv'] \
        for planning_horizon in snakemake.config['scenario']['planning_horizons']
    }

    print(networks_dict)

    Nyears = 1

    costs_db = prepare_costs(
        snakemake.input.costs,
        snakemake.config['costs']['USD2013_to_EUR2013'],
        snakemake.config['costs']['discountrate'],
        Nyears,
        snakemake.config['costs']['lifetime']
    )

    df = make_summaries(networks_dict)
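The load-weighted price in calculate_weighted_prices reduces to sum(load × price) / sum(load); a toy example with invented loads and prices:

import pandas as pd

# invented hourly loads and marginal prices at two buses
load = pd.DataFrame({"bus0": [1.0, 2.0], "bus1": [0.0, 4.0]})
price = pd.DataFrame({"bus0": [30.0, 50.0], "bus1": [10.0, 20.0]})

weighted_price = (load * price).sum().sum() / load.sum().sum()
print(weighted_price)  # (1*30 + 2*50 + 0*10 + 4*20) / 7 = 30.0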


@@ -1,44 +1,20 @@
import pypsa
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cartopy.crs as ccrs import cartopy.crs as ccrs
from matplotlib.legend_handler import HandlerPatch from matplotlib.legend_handler import HandlerPatch
from matplotlib.patches import Circle, Ellipse from matplotlib.patches import Circle, Ellipse
from make_summary import assign_carriers from make_summary import assign_carriers
from plot_summary import rename_techs, preferred_order from plot_summary import rename_techs, preferred_order
import numpy as np from helper import override_component_attrs
import pypsa
import matplotlib.pyplot as plt
import pandas as pd
# allow plotting without Xwindows plt.style.use('ggplot')
import matplotlib
matplotlib.use('Agg')
# from sector/scripts/paper_graphics-co2_sweep.py
override_component_attrs = pypsa.descriptors.Dict(
{k: v.copy() for k, v in pypsa.components.component_attrs.items()})
override_component_attrs["Link"].loc["bus2"] = [
"string", np.nan, np.nan, "2nd bus", "Input (optional)"]
override_component_attrs["Link"].loc["bus3"] = [
"string", np.nan, np.nan, "3rd bus", "Input (optional)"]
override_component_attrs["Link"].loc["efficiency2"] = [
"static or series", "per unit", 1., "2nd bus efficiency", "Input (optional)"]
override_component_attrs["Link"].loc["efficiency3"] = [
"static or series", "per unit", 1., "3rd bus efficiency", "Input (optional)"]
override_component_attrs["Link"].loc["p2"] = [
"series", "MW", 0., "2nd bus output", "Output"]
override_component_attrs["Link"].loc["p3"] = [
"series", "MW", 0., "3rd bus output", "Output"]
override_component_attrs["StorageUnit"].loc["p_dispatch"] = [
"series", "MW", 0., "Storage discharging.", "Output"]
override_component_attrs["StorageUnit"].loc["p_store"] = [
"series", "MW", 0., "Storage charging.", "Output"]
# ----------------- PLOT HELPERS ---------------------------------------------
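The inline attribute overrides removed above now live behind helper.override_component_attrs, which the plotting and solving scripts call with a path taken from snakemake.input.overrides. The helper itself is not part of this diff; a minimal sketch of what such a function could look like, assuming the overrides are stored as one CSV file per component type:

# Minimal sketch only; the real helper.py is not shown in this diff.
# Assumes a directory of CSVs named after PyPSA components
# (e.g. Link.csv, StorageUnit.csv) holding the extra attribute rows.
from pathlib import Path

import pandas as pd
import pypsa

def override_component_attrs(directory):
    # start from a copy of PyPSA's default component attributes
    attrs = pypsa.descriptors.Dict(
        {k: v.copy() for k, v in pypsa.components.component_attrs.items()}
    )
    # append the extra rows (bus2, efficiency2, p2, ...) per component
    for fn in Path(directory).glob("*.csv"):
        overrides = pd.read_csv(fn, index_col=0, na_values="n/a")
        attrs[fn.stem] = pd.concat([attrs[fn.stem], overrides])
    return attrs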
 def rename_techs_tyndp(tech):
     tech = rename_techs(tech)
     if "heat pump" in tech or "resistive heater" in tech:

@@ -61,8 +37,7 @@ def make_handler_map_to_scale_circles_as_in(ax, dont_resize_actively=False):
     fig = ax.get_figure()

     def axes2pt():
-        return np.diff(ax.transData.transform([(0, 0), (1, 1)]), axis=0)[
-            0] * (72. / fig.dpi)
+        return np.diff(ax.transData.transform([(0, 0), (1, 1)]), axis=0)[0] * (72. / fig.dpi)

     ellipses = []
     if not dont_resize_actively:

@@ -90,20 +65,14 @@ def make_legend_circles_for(sizes, scale=1.0, **kw):
 def assign_location(n):
     for c in n.iterate_components(n.one_port_components | n.branch_components):
         ifind = pd.Series(c.df.index.str.find(" ", start=4), c.df.index)
         for i in ifind.value_counts().index:
             # these have already been assigned defaults
-            if i == -1:
-                continue
+            if i == -1: continue
             names = ifind.index[ifind == i]
             c.df.loc[names, 'location'] = names.str[:i]

-# ----------------- PLOT FUNCTIONS --------------------------------------------
-
 def plot_map(network, components=["links", "stores", "storage_units", "generators"],
              bus_size_factor=1.7e10, transmission=False):

@@ -126,6 +95,7 @@ def plot_map(network, components=["links", "stores", "storage_units", "generators"],
         costs = pd.concat([costs, costs_c], axis=1)
         print(comp, costs)

     costs = costs.groupby(costs.columns, axis=1).sum()

     costs.drop(list(costs.columns[(costs == 0.).all()]), axis=1, inplace=True)
@@ -193,24 +163,34 @@ def plot_map(network, components=["links", "stores", "storage_units", "generators"],
     fig, ax = plt.subplots(subplot_kw={"projection": ccrs.PlateCarree()})
     fig.set_size_inches(7, 6)

-    n.plot(bus_sizes=costs / bus_size_factor,
-           bus_colors=snakemake.config['plotting']['tech_colors'],
-           line_colors=ac_color,
-           link_colors=dc_color,
-           line_widths=line_widths / linewidth_factor,
-           link_widths=link_widths / linewidth_factor,
-           ax=ax, boundaries=(-10, 30, 34, 70),
-           color_geomap={'ocean': 'lightblue', 'land': "palegoldenrod"})
+    n.plot(
+        bus_sizes=costs / bus_size_factor,
+        bus_colors=snakemake.config['plotting']['tech_colors'],
+        line_colors=ac_color,
+        link_colors=dc_color,
+        line_widths=line_widths / linewidth_factor,
+        link_widths=link_widths / linewidth_factor,
+        ax=ax, **map_opts
+    )

     handles = make_legend_circles_for(
-        [5e9, 1e9], scale=bus_size_factor, facecolor="gray")
+        [5e9, 1e9],
+        scale=bus_size_factor,
+        facecolor="gray"
+    )
+
     labels = ["{} bEUR/a".format(s) for s in (5, 1)]
-    l2 = ax.legend(handles, labels,
-                   loc="upper left", bbox_to_anchor=(0.01, 1.01),
-                   labelspacing=1.0,
-                   framealpha=1.,
-                   title='System cost',
-                   handler_map=make_handler_map_to_scale_circles_as_in(ax))
+
+    l2 = ax.legend(
+        handles, labels,
+        loc="upper left",
+        bbox_to_anchor=(0.01, 1.01),
+        labelspacing=1.0,
+        frameon=False,
+        title='System cost',
+        handler_map=make_handler_map_to_scale_circles_as_in(ax)
+    )
+
     ax.add_artist(l2)

     handles = []

@@ -221,16 +201,23 @@ def plot_map(network, components=["links", "stores", "storage_units", "generators"],
                                  linewidth=s * 1e3 / linewidth_factor))
         labels.append("{} GW".format(s))

-    l1_1 = ax.legend(handles, labels,
-                     loc="upper left", bbox_to_anchor=(0.30, 1.01),
-                     framealpha=1,
-                     labelspacing=0.8, handletextpad=1.5,
-                     title=title)
+    l1_1 = ax.legend(
+        handles, labels,
+        loc="upper left",
+        bbox_to_anchor=(0.22, 1.01),
+        frameon=False,
+        labelspacing=0.8,
+        handletextpad=1.5,
+        title=title
+    )
+
     ax.add_artist(l1_1)

-    fig.savefig(snakemake.output.map, transparent=True,
-                bbox_inches="tight")
+    fig.savefig(
+        snakemake.output.map,
+        transparent=True,
+        bbox_inches="tight"
+    )
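The hard-coded boundaries and color_geomap arguments are replaced by **map_opts, which __main__ reads from snakemake.config['plotting']['map']. The exact config layout is not part of this diff; a plausible shape, reusing the values the old n.plot() calls hard-coded:

# Assumed shape of the new config entry; keys mirror n.plot() keyword
# arguments, values are the constants previously hard-coded above.
map_opts = {
    "boundaries": [-10, 30, 34, 70],
    "color_geomap": {"ocean": "lightblue", "land": "palegoldenrod"},
}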
 def plot_h2_map(network):

@@ -253,7 +240,7 @@ def plot_h2_map(network):
     elec = n.links.index[n.links.carrier == "H2 Electrolysis"]

-    bus_sizes = n.links.loc[elec,"p_nom_opt"].groupby(n.links.loc[elec,"bus0"]).sum() / bus_size_factor
+    bus_sizes = n.links.loc[elec,"p_nom_opt"].groupby(n.links.loc[elec, "bus0"]).sum() / bus_size_factor

     # make a fake MultiIndex so that area is correct for legend
     bus_sizes.index = pd.MultiIndex.from_product(

@@ -271,26 +258,38 @@ def plot_h2_map(network):
     print(n.links[["bus0", "bus1"]])

-    fig, ax = plt.subplots(subplot_kw={"projection": ccrs.PlateCarree()})
-
-    fig.set_size_inches(7, 6)
-
-    n.plot(bus_sizes=bus_sizes,
-           bus_colors={"electrolysis": bus_color},
-           link_colors=link_color,
-           link_widths=link_widths,
-           branch_components=["Link"],
-           ax=ax, boundaries=(-10, 30, 34, 70))
+    fig, ax = plt.subplots(
+        figsize=(7, 6),
+        subplot_kw={"projection": ccrs.PlateCarree()}
+    )
+
+    n.plot(
+        bus_sizes=bus_sizes,
+        bus_colors={"electrolysis": bus_color},
+        link_colors=link_color,
+        link_widths=link_widths,
+        branch_components=["Link"],
+        ax=ax, **map_opts
+    )

     handles = make_legend_circles_for(
-        [50000, 10000], scale=bus_size_factor, facecolor=bus_color)
+        [50000, 10000],
+        scale=bus_size_factor,
+        facecolor=bus_color
+    )
+
     labels = ["{} GW".format(s) for s in (50, 10)]
-    l2 = ax.legend(handles, labels,
-                   loc="upper left", bbox_to_anchor=(0.01, 1.01),
-                   labelspacing=1.0,
-                   framealpha=1.,
-                   title='Electrolyzer capacity',
-                   handler_map=make_handler_map_to_scale_circles_as_in(ax))
+
+    l2 = ax.legend(
+        handles, labels,
+        loc="upper left",
+        bbox_to_anchor=(0.01, 1.01),
+        labelspacing=1.0,
+        frameon=False,
+        title='Electrolyzer capacity',
+        handler_map=make_handler_map_to_scale_circles_as_in(ax)
+    )
+
     ax.add_artist(l2)

     handles = []
@@ -300,15 +299,24 @@ def plot_h2_map(network):
         handles.append(plt.Line2D([0], [0], color=link_color,
                                   linewidth=s * 1e3 / linewidth_factor))
         labels.append("{} GW".format(s))
-    l1_1 = ax.legend(handles, labels,
-                     loc="upper left", bbox_to_anchor=(0.30, 1.01),
-                     framealpha=1,
-                     labelspacing=0.8, handletextpad=1.5,
-                     title='H2 pipeline capacity')
+
+    l1_1 = ax.legend(
+        handles, labels,
+        loc="upper left",
+        bbox_to_anchor=(0.28, 1.01),
+        frameon=False,
+        labelspacing=0.8,
+        handletextpad=1.5,
+        title='H2 pipeline capacity'
+    )
+
     ax.add_artist(l1_1)

-    fig.savefig(snakemake.output.map.replace("-costs-all","-h2_network"), transparent=True,
-                bbox_inches="tight")
+    fig.savefig(
+        snakemake.output.map.replace("-costs-all","-h2_network"),
+        transparent=True,
+        bbox_inches="tight"
+    )
 def plot_map_without(network):

@@ -319,9 +327,10 @@ def plot_map_without(network):
     # Drop non-electric buses so they don't clutter the plot
     n.buses.drop(n.buses.index[n.buses.carrier != "AC"], inplace=True)

-    fig, ax = plt.subplots(subplot_kw={"projection": ccrs.PlateCarree()})
-
-    fig.set_size_inches(7, 6)
+    fig, ax = plt.subplots(
+        figsize=(7, 6),
+        subplot_kw={"projection": ccrs.PlateCarree()}
+    )

     # PDF has minimum width, so set these to zero
     line_lower_threshold = 200.

@@ -333,8 +342,8 @@ def plot_map_without(network):
     # hack because impossible to drop buses...
     n.buses.loc["EU gas", ["x", "y"]] = n.buses.loc["DE0 0", ["x", "y"]]

-    n.links.drop(n.links.index[(n.links.carrier != "DC") & (
-        n.links.carrier != "B2B")], inplace=True)
+    to_drop = n.links.index[(n.links.carrier != "DC") & (n.links.carrier != "B2B")]
+    n.links.drop(to_drop, inplace=True)

     if snakemake.wildcards["lv"] == "1.0":
         line_widths = n.lines.s_nom

@@ -349,13 +358,14 @@ def plot_map_without(network):
     line_widths[line_widths > line_upper_threshold] = line_upper_threshold
     link_widths[link_widths > line_upper_threshold] = line_upper_threshold

-    n.plot(bus_colors="k",
-           line_colors=ac_color,
-           link_colors=dc_color,
-           line_widths=line_widths / linewidth_factor,
-           link_widths=link_widths / linewidth_factor,
-           ax=ax, boundaries=(-10, 30, 34, 70),
-           color_geomap={'ocean': 'lightblue', 'land': "palegoldenrod"})
+    n.plot(
+        bus_colors="k",
+        line_colors=ac_color,
+        link_colors=dc_color,
+        line_widths=line_widths / linewidth_factor,
+        link_widths=link_widths / linewidth_factor,
+        ax=ax, **map_opts
+    )

     handles = []
     labels = []

@@ -366,12 +376,16 @@ def plot_map_without(network):
         labels.append("{} GW".format(s))

     l1_1 = ax.legend(handles, labels,
                      loc="upper left", bbox_to_anchor=(0.05, 1.01),
-                     framealpha=1,
+                     frameon=False,
                      labelspacing=0.8, handletextpad=1.5,
                      title='Today\'s transmission')
     ax.add_artist(l1_1)

-    fig.savefig(snakemake.output.today, transparent=True, bbox_inches="tight")
+    fig.savefig(
+        snakemake.output.today,
+        transparent=True,
+        bbox_inches="tight"
+    )
 def plot_series(network, carrier="AC", name="test"):

@@ -488,7 +502,7 @@ def plot_series(network, carrier="AC", name="test"):
             new_handles.append(handles[i])
             new_labels.append(labels[i])

-    ax.legend(new_handles, new_labels, ncol=3, loc="upper left")
+    ax.legend(new_handles, new_labels, ncol=3, loc="upper left", frameon=False)
     ax.set_xlim([start, stop])
     ax.set_ylim([-1300, 1900])
     ax.grid(True)
@@ -502,41 +516,28 @@ def plot_series(network, carrier="AC", name="test"):
                 transparent=True)

-# %%
 if __name__ == "__main__":
-    # Detect running outside of snakemake and mock snakemake for testing
     if 'snakemake' not in globals():
-        from vresutils import Dict
-        import yaml
-        snakemake = Dict()
-        with open('config.yaml') as f:
-            snakemake.config = yaml.safe_load(f)
-        snakemake.config['run'] = "retro_vs_noretro"
-        snakemake.wildcards = {"lv": "1.0"}  # lv1.0, lv1.25, lvopt
-        name = "elec_s_48_lv{}__Co2L0-3H-T-H-B".format(snakemake.wildcards["lv"])
-        suffix = "_retro_tes"
-        name = name + suffix
-        snakemake.input = Dict()
-        snakemake.output = Dict(
-            map=(snakemake.config['results_dir'] + snakemake.config['run']
-                 + "/maps/{}".format(name)),
-            today=(snakemake.config['results_dir'] + snakemake.config['run']
-                   + "/maps/{}.pdf".format(name)))
-        snakemake.input.scenario = "lv" + snakemake.wildcards["lv"]
-        # snakemake.config["run"] = "bio_costs"
-        path = snakemake.config['results_dir'] + snakemake.config['run']
-        snakemake.input.network = (path +
-                                   "/postnetworks/{}.nc"
-                                   .format(name))
-        snakemake.output.network = (path +
-                                    "/maps/{}"
-                                    .format(name))
-
-    n = pypsa.Network(snakemake.input.network,
-                      override_component_attrs=override_component_attrs)
-
-    plot_map(n, components=["generators", "links", "stores", "storage_units"],
-             bus_size_factor=1.5e10, transmission=False)
+        from helper import mock_snakemake
+        snakemake = mock_snakemake(
+            'plot_network',
+            simpl='',
+            clusters=48,
+            lv=1.0,
+            sector_opts='Co2L0-168H-T-H-B-I-solar3-dist1',
+            planning_horizons=2050,
+        )
+
+    overrides = override_component_attrs(snakemake.input.overrides)
+    n = pypsa.Network(snakemake.input.network, override_component_attrs=overrides)
+
+    map_opts = snakemake.config['plotting']['map']
+
+    plot_map(n,
+        components=["generators", "links", "stores", "storage_units"],
+        bus_size_factor=1.5e10,
+        transmission=False
+    )

     plot_h2_map(n)
     plot_map_without(n)
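mock_snakemake (from the repository's helper module, which is not shown in this diff) replaces the hand-rolled vresutils.Dict mocks: given a rule name and wildcard values, it is expected to resolve the rule's input, output, log and config from the Snakefile so scripts can be debugged standalone. A hedged usage sketch:

# Usage sketch; the resolved values in the comments are illustrative,
# not taken from this diff.
from helper import mock_snakemake

snakemake = mock_snakemake(
    'plot_network',
    simpl='',
    clusters=48,
    lv=1.0,
    sector_opts='Co2L0-168H-T-H-B-I-solar3-dist1',
    planning_horizons=2050,
)
snakemake.wildcards.lv      # e.g. '1.0'
snakemake.input.network     # e.g. a postnetworks/elec_s_48_lv1.0_..._2050.nc path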

View File: plot_summary.py

@@ -3,41 +3,58 @@
 import numpy as np
 import pandas as pd
-
-#allow plotting without Xwindows
-import matplotlib
-matplotlib.use('Agg')
 import matplotlib.pyplot as plt

+plt.style.use('ggplot')
+
 from prepare_sector_network import co2_emissions_year

 #consolidate and rename
 def rename_techs(label):

-    prefix_to_remove = ["residential ","services ","urban ","rural ","central ","decentral "]
+    prefix_to_remove = [
+        "residential ",
+        "services ",
+        "urban ",
+        "rural ",
+        "central ",
+        "decentral "
+    ]

-    rename_if_contains = ["CHP","gas boiler","biogas","solar thermal","air heat pump","ground heat pump","resistive heater","Fischer-Tropsch"]
+    rename_if_contains = [
+        "CHP",
+        "gas boiler",
+        "biogas",
+        "solar thermal",
+        "air heat pump",
+        "ground heat pump",
+        "resistive heater",
+        "Fischer-Tropsch"
+    ]

-    rename_if_contains_dict = {"water tanks" : "hot water storage",
-                               "retrofitting" : "building retrofitting",
-                               "H2" : "hydrogen storage",
-                               "battery" : "battery storage",
-                               "CC" : "CC"}
+    rename_if_contains_dict = {
+        "water tanks": "hot water storage",
+        "retrofitting": "building retrofitting",
+        "H2": "hydrogen storage",
+        "battery": "battery storage",
+        "CC": "CC"
+    }

-    rename = {"solar" : "solar PV",
-              "Sabatier" : "methanation",
-              "offwind" : "offshore wind",
-              "offwind-ac" : "offshore wind (AC)",
-              "offwind-dc" : "offshore wind (DC)",
-              "onwind" : "onshore wind",
-              "ror" : "hydroelectricity",
-              "hydro" : "hydroelectricity",
-              "PHS" : "hydroelectricity",
-              "co2 Store" : "DAC",
-              "co2 stored" : "CO2 sequestration",
-              "AC" : "transmission lines",
-              "DC" : "transmission lines",
-              "B2B" : "transmission lines"}
+    rename = {
+        "solar": "solar PV",
+        "Sabatier": "methanation",
+        "offwind": "offshore wind",
+        "offwind-ac": "offshore wind (AC)",
+        "offwind-dc": "offshore wind (DC)",
+        "onwind": "onshore wind",
+        "ror": "hydroelectricity",
+        "hydro": "hydroelectricity",
+        "PHS": "hydroelectricity",
+        "co2 Store": "DAC",
+        "co2 stored": "CO2 sequestration",
+        "AC": "transmission lines",
+        "DC": "transmission lines",
+        "B2B": "transmission lines"
+    }

     for ptr in prefix_to_remove:
         if label[:len(ptr)] == ptr:
@@ -57,18 +74,56 @@ def rename_techs(label):
     return label

-preferred_order = pd.Index(["transmission lines","hydroelectricity","hydro reservoir","run of river","pumped hydro storage","solid biomass","biogas","onshore wind","offshore wind","offshore wind (AC)","offshore wind (DC)","solar PV","solar thermal","solar","building retrofitting","ground heat pump","air heat pump","heat pump","resistive heater","power-to-heat","gas-to-power/heat","CHP","OCGT","gas boiler","gas","natural gas","helmeth","methanation","hydrogen storage","power-to-gas","power-to-liquid","battery storage","hot water storage","CO2 sequestration"])
+preferred_order = pd.Index([
+    "transmission lines",
+    "hydroelectricity",
+    "hydro reservoir",
+    "run of river",
+    "pumped hydro storage",
+    "solid biomass",
+    "biogas",
+    "onshore wind",
+    "offshore wind",
+    "offshore wind (AC)",
+    "offshore wind (DC)",
+    "solar PV",
+    "solar thermal",
+    "solar",
+    "building retrofitting",
+    "ground heat pump",
+    "air heat pump",
+    "heat pump",
+    "resistive heater",
+    "power-to-heat",
+    "gas-to-power/heat",
+    "CHP",
+    "OCGT",
+    "gas boiler",
+    "gas",
+    "natural gas",
+    "helmeth",
+    "methanation",
+    "hydrogen storage",
+    "power-to-gas",
+    "power-to-liquid",
+    "battery storage",
+    "hot water storage",
+    "CO2 sequestration"
+])

 def plot_costs():

-    cost_df = pd.read_csv(snakemake.input.costs,index_col=list(range(3)),header=list(range(n_header)))
+    cost_df = pd.read_csv(
+        snakemake.input.costs,
+        index_col=list(range(3)),
+        header=list(range(n_header))
+    )

     df = cost_df.groupby(cost_df.index.get_level_values(2)).sum()

     #convert to billions
-    df = df/1e9
+    df = df / 1e9

     df = df.groupby(df.index.map(rename_techs)).sum()
@@ -86,11 +141,14 @@ def plot_costs():
     new_columns = df.sum().sort_values().index

-    fig, ax = plt.subplots()
-    fig.set_size_inches((12,8))
-
-    df.loc[new_index,new_columns].T.plot(kind="bar",ax=ax,stacked=True,color=[snakemake.config['plotting']['tech_colors'][i] for i in new_index])
+    fig, ax = plt.subplots(figsize=(12,8))
+
+    df.loc[new_index,new_columns].T.plot(
+        kind="bar",
+        ax=ax,
+        stacked=True,
+        color=[snakemake.config['plotting']['tech_colors'][i] for i in new_index]
+    )

     handles,labels = ax.get_legend_handles_labels()

@@ -103,24 +161,25 @@ def plot_costs():
     ax.set_xlabel("")

-    ax.grid(axis="y")
+    ax.grid(axis='x')

-    ax.legend(handles,labels,ncol=4,loc="upper left")
+    ax.legend(handles, labels, ncol=1, loc="upper left", bbox_to_anchor=[1,1], frameon=False)

-    fig.tight_layout()
-    fig.savefig(snakemake.output.costs,transparent=True)
+    fig.savefig(snakemake.output.costs, bbox_inches='tight')
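The index_col=list(range(3)) / n_header=4 combination mirrors how make_summary.py writes these CSVs: three row index levels ending in the technology label, and four header rows matching the 4-tuple scenario keys of networks_dict. A toy illustration (fabricated numbers, level naming assumed) of the level-2 groupby that follows the read:

# Toy example only; index level names and numbers are illustrative.
import pandas as pd

index = pd.MultiIndex.from_tuples([
    ("generators", "capital", "onwind"),
    ("generators", "marginal", "onwind"),
    ("links", "capital", "H2 Electrolysis"),
])
cost_df = pd.DataFrame({("48", "1.0", "", "2030"): [3e9, 1e9, 2e9]}, index=index)

df = cost_df.groupby(cost_df.index.get_level_values(2)).sum() / 1e9
# technology-level totals in bEUR/a: 'H2 Electrolysis' 2.0, 'onwind' 4.0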
 def plot_energy():

-    energy_df = pd.read_csv(snakemake.input.energy,index_col=list(range(2)),header=list(range(n_header)))
+    energy_df = pd.read_csv(
+        snakemake.input.energy,
+        index_col=list(range(2)),
+        header=list(range(n_header))
+    )

     df = energy_df.groupby(energy_df.index.get_level_values(1)).sum()

     #convert MWh to TWh
-    df = df/1e6
+    df = df / 1e6

     df = df.groupby(df.index.map(rename_techs)).sum()

@@ -139,53 +198,57 @@ def plot_energy():
     new_index = preferred_order.intersection(df.index).append(df.index.difference(preferred_order))

     new_columns = df.columns.sort_values()
-    #new_columns = df.sum().sort_values().index

-    fig, ax = plt.subplots()
-    fig.set_size_inches((12,8))
+    fig, ax = plt.subplots(figsize=(12,8))

-    print(df.loc[new_index,new_columns])
+    print(df.loc[new_index, new_columns])

-    df.loc[new_index,new_columns].T.plot(kind="bar",ax=ax,stacked=True,color=[snakemake.config['plotting']['tech_colors'][i] for i in new_index])
+    df.loc[new_index, new_columns].T.plot(
+        kind="bar",
+        ax=ax,
+        stacked=True,
+        color=[snakemake.config['plotting']['tech_colors'][i] for i in new_index]
+    )

     handles,labels = ax.get_legend_handles_labels()

     handles.reverse()
     labels.reverse()

-    ax.set_ylim([snakemake.config['plotting']['energy_min'],snakemake.config['plotting']['energy_max']])
+    ax.set_ylim([snakemake.config['plotting']['energy_min'], snakemake.config['plotting']['energy_max']])

     ax.set_ylabel("Energy [TWh/a]")

     ax.set_xlabel("")

-    ax.grid(axis="y")
+    ax.grid(axis="x")

-    ax.legend(handles,labels,ncol=4,loc="upper left")
+    ax.legend(handles, labels, ncol=1, loc="upper left", bbox_to_anchor=[1, 1], frameon=False)

-    fig.tight_layout()
-    fig.savefig(snakemake.output.energy,transparent=True)
+    fig.savefig(snakemake.output.energy, bbox_inches='tight')

 def plot_balances():

-    co2_carriers = ["co2","co2 stored","process emissions"]
+    co2_carriers = ["co2", "co2 stored", "process emissions"]

-    balances_df = pd.read_csv(snakemake.input.balances,index_col=list(range(3)),header=list(range(n_header)))
+    balances_df = pd.read_csv(
+        snakemake.input.balances,
+        index_col=list(range(3)),
+        header=list(range(n_header))
+    )

-    balances = {i.replace(" ","_") : [i] for i in balances_df.index.levels[0]}
+    balances = {i.replace(" ","_"): [i] for i in balances_df.index.levels[0]}
     balances["energy"] = [i for i in balances_df.index.levels[0] if i not in co2_carriers]

-    for k,v in balances.items():
+    for k, v in balances.items():

         df = balances_df.loc[v]
         df = df.groupby(df.index.get_level_values(2)).sum()

         #convert MWh to TWh
-        df = df/1e6
+        df = df / 1e6

         #remove trailing link ports
         df.index = [i[:-1] if ((i != "co2") and (i[-1:] in ["0","1","2","3"])) else i for i in df.index]

@@ -209,9 +272,7 @@ def plot_balances():
         new_columns = df.columns.sort_values()

-        fig, ax = plt.subplots()
-        fig.set_size_inches((12,8))
+        fig, ax = plt.subplots(figsize=(12,8))

         df.loc[new_index,new_columns].T.plot(kind="bar",ax=ax,stacked=True,color=[snakemake.config['plotting']['tech_colors'][i] for i in new_index])

@@ -228,14 +289,13 @@ def plot_balances():
         ax.set_xlabel("")

-        ax.grid(axis="y")
+        ax.grid(axis="x")

-        ax.legend(handles,labels,ncol=4,loc="upper left")
+        ax.legend(handles, labels, ncol=1, loc="upper left", bbox_to_anchor=[1, 1], frameon=False)

-        fig.tight_layout()
-        fig.savefig(snakemake.output.balances[:-10] + k + ".pdf",transparent=True)
+        fig.savefig(snakemake.output.balances[:-10] + k + ".pdf", bbox_inches='tight')
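The "remove trailing link ports" line strips the numeric port suffix that multi-link rows carry in the supply_energy table, so all ports of one technology aggregate together. A toy example of what it does (names illustrative):

idx = ["H2 Electrolysis0", "H2 Electrolysis1", "co2", "gas boiler2"]
cleaned = [i[:-1] if ((i != "co2") and (i[-1:] in ["0", "1", "2", "3"])) else i
           for i in idx]
# -> ['H2 Electrolysis', 'H2 Electrolysis', 'co2', 'gas boiler']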
 def historical_emissions(cts):
     """
@@ -369,25 +429,11 @@ def plot_carbon_budget_distribution():
     path_cb_plot = snakemake.config['results_dir'] + snakemake.config['run'] + '/graphs/'
     plt.savefig(path_cb_plot+'carbon_budget_plot.pdf', dpi=300)

 if __name__ == "__main__":
-    # Detect running outside of snakemake and mock snakemake for testing
     if 'snakemake' not in globals():
-        from vresutils import Dict
-        import yaml
-        snakemake = Dict()
-        with open('config.yaml', encoding='utf8') as f:
-            snakemake.config = yaml.safe_load(f)
-        snakemake.input = Dict()
-        snakemake.output = Dict()
-        snakemake.wildcards = Dict()
-        #snakemake.wildcards['sector_opts']='3H-T-H-B-I-solar3-dist1-cb48be3'
-        for item in ["costs", "energy"]:
-            snakemake.input[item] = snakemake.config['summary_dir'] + '/{name}/csvs/{item}.csv'.format(name=snakemake.config['run'],item=item)
-            snakemake.output[item] = snakemake.config['summary_dir'] + '/{name}/graphs/{item}.pdf'.format(name=snakemake.config['run'],item=item)
-        snakemake.input["balances"] = snakemake.config['summary_dir'] + '/{name}/csvs/supply_energy.csv'.format(name=snakemake.config['run'],item=item)
-        snakemake.output["balances"] = snakemake.config['summary_dir'] + '/{name}/graphs/balances-energy.csv'.format(name=snakemake.config['run'],item=item)
+        from helper import mock_snakemake
+        snakemake = mock_snakemake('plot_summary')

     n_header = 4

(File diff suppressed because it is too large.)

View File: solve_network.py

@@ -1,55 +1,35 @@
+"""Solve network."""
+
-import numpy as np
-import pandas as pd
-import logging
-logger = logging.getLogger(__name__)
-import gc
-import os
-
 import pypsa
+import numpy as np

 from pypsa.linopt import get_var, linexpr, define_constraints
-from pypsa.descriptors import free_output_series_dataframes
+from pypsa.linopf import network_lopf, ilopf

-# Suppress logging of the slack bus choices
-pypsa.pf.logger.setLevel(logging.WARNING)
-
 from vresutils.benchmark import memory_logger
+from helper import override_component_attrs

+import logging
+logger = logging.getLogger(__name__)
+pypsa.pf.logger.setLevel(logging.WARNING)

-#First tell PyPSA that links can have multiple outputs by
-#overriding the component_attrs. This can be done for
-#as many buses as you need with format busi for i = 2,3,4,5,....
-#See https://pypsa.org/doc/components.html#link-with-multiple-outputs-or-inputs
-
-override_component_attrs = pypsa.descriptors.Dict({k : v.copy() for k,v in pypsa.components.component_attrs.items()})
-override_component_attrs["Link"].loc["bus2"] = ["string",np.nan,np.nan,"2nd bus","Input (optional)"]
-override_component_attrs["Link"].loc["bus3"] = ["string",np.nan,np.nan,"3rd bus","Input (optional)"]
-override_component_attrs["Link"].loc["bus4"] = ["string",np.nan,np.nan,"4th bus","Input (optional)"]
-override_component_attrs["Link"].loc["efficiency2"] = ["static or series","per unit",1.,"2nd bus efficiency","Input (optional)"]
-override_component_attrs["Link"].loc["efficiency3"] = ["static or series","per unit",1.,"3rd bus efficiency","Input (optional)"]
-override_component_attrs["Link"].loc["efficiency4"] = ["static or series","per unit",1.,"4th bus efficiency","Input (optional)"]
-override_component_attrs["Link"].loc["p2"] = ["series","MW",0.,"2nd bus output","Output"]
-override_component_attrs["Link"].loc["p3"] = ["series","MW",0.,"3rd bus output","Output"]
-override_component_attrs["Link"].loc["p4"] = ["series","MW",0.,"4th bus output","Output"]
-
-def patch_pyomo_tmpdir(tmpdir):
-    # PYOMO should write its lp files into tmp here
-    import os
-    if not os.path.isdir(tmpdir):
-        os.mkdir(tmpdir)
-    from pyutilib.services import TempfileManager
-    TempfileManager.tempdir = tmpdir
+
+def add_land_use_constraint(n):
+    #warning: this will miss existing offwind which is not classed AC-DC and has carrier 'offwind'
+    for carrier in ['solar', 'onwind', 'offwind-ac', 'offwind-dc']:
+        existing = n.generators.loc[n.generators.carrier == carrier, "p_nom"].groupby(n.generators.bus.map(n.buses.location)).sum()
+        existing.index += " " + carrier + "-" + snakemake.wildcards.planning_horizons
+        n.generators.loc[existing.index, "p_nom_max"] -= existing
+
+    n.generators.p_nom_max.clip(lower=0, inplace=True)
+

 def prepare_network(n, solve_opts=None):
-    if solve_opts is None:
-        solve_opts = snakemake.config['solving']['options']

     if 'clip_p_max_pu' in solve_opts:
         for df in (n.generators_t.p_max_pu, n.generators_t.p_min_pu, n.storage_units_t.inflow):
             df.where(df>solve_opts['clip_p_max_pu'], other=0., inplace=True)
@@ -73,50 +53,31 @@ def prepare_network(n, solve_opts=None):
             # t.df['capital_cost'] += 1e1 + 2.*(np.random.random(len(t.df)) - 0.5)
             if 'marginal_cost' in t.df:
                 np.random.seed(174)
-                t.df['marginal_cost'] += 1e-2 + 2e-3*(np.random.random(len(t.df)) - 0.5)
+                t.df['marginal_cost'] += 1e-2 + 2e-3 * (np.random.random(len(t.df)) - 0.5)

         for t in n.iterate_components(['Line', 'Link']):
             np.random.seed(123)
-            t.df['capital_cost'] += (1e-1 + 2e-2*(np.random.random(len(t.df)) - 0.5)) * t.df['length']
+            t.df['capital_cost'] += (1e-1 + 2e-2 * (np.random.random(len(t.df)) - 0.5)) * t.df['length']

     if solve_opts.get('nhours'):
         nhours = solve_opts['nhours']
         n.set_snapshots(n.snapshots[:nhours])
         n.snapshot_weightings[:] = 8760./nhours

-    if snakemake.config['foresight']=='myopic':
+    if snakemake.config['foresight'] == 'myopic':
         add_land_use_constraint(n)

     return n
-def add_opts_constraints(n, opts=None):
-    if opts is None:
-        opts = snakemake.wildcards.opts.split('-')
-
-    if 'BAU' in opts:
-        mincaps = snakemake.config['electricity']['BAU_mincapacities']
-        def bau_mincapacities_rule(model, carrier):
-            gens = n.generators.index[n.generators.p_nom_extendable & (n.generators.carrier == carrier)]
-            return sum(model.generator_p_nom[gen] for gen in gens) >= mincaps[carrier]
-        n.model.bau_mincapacities = pypsa.opt.Constraint(list(mincaps), rule=bau_mincapacities_rule)
-
-    if 'SAFE' in opts:
-        peakdemand = (1. + snakemake.config['electricity']['SAFE_reservemargin']) * n.loads_t.p_set.sum(axis=1).max()
-        conv_techs = snakemake.config['plotting']['conv_techs']
-        exist_conv_caps = n.generators.loc[n.generators.carrier.isin(conv_techs) & ~n.generators.p_nom_extendable, 'p_nom'].sum()
-        ext_gens_i = n.generators.index[n.generators.carrier.isin(conv_techs) & n.generators.p_nom_extendable]
-        n.model.safe_peakdemand = pypsa.opt.Constraint(expr=sum(n.model.generator_p_nom[gen] for gen in ext_gens_i) >= peakdemand - exist_conv_caps)
-
-def add_eps_storage_constraint(n):
-    if not hasattr(n, 'epsilon'):
-        n.epsilon = 1e-5
-    fix_sus_i = n.storage_units.index[~ n.storage_units.p_nom_extendable]
-    n.model.objective.expr += sum(n.epsilon * n.model.state_of_charge[su, n.snapshots[0]] for su in fix_sus_i)

 def add_battery_constraints(n):

-    chargers = n.links.index[n.links.carrier.str.contains("battery charger") & n.links.p_nom_extendable]
-    dischargers = chargers.str.replace("charger","discharger")
+    chargers_b = n.links.carrier.str.contains("battery charger")
+    chargers = n.links.index[chargers_b & n.links.p_nom_extendable]
+    dischargers = chargers.str.replace("charger", "discharger")
+
+    if chargers.empty or ('Link', 'p_nom') not in n.variables.index:
+        return

     link_p_nom = get_var(n, "Link", "p_nom")
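The body of add_battery_constraints continues beyond this hunk. For context, a hedged sketch of the constraint it typically builds with the pypsa.linopt helpers imported above: the charger and discharger sides of each battery must receive a consistent power rating, with one tied to the other via the discharger efficiency.

# Sketch under assumptions: the actual lhs/rhs lie outside the shown hunk.
# Ties charger p_nom to discharger p_nom scaled by discharger efficiency:
#   p_nom[charger] - eff[discharger] * p_nom[discharger] == 0
lhs = linexpr(
    (1, link_p_nom[chargers]),
    (-n.links.loc[dischargers, "efficiency"].values, link_p_nom[dischargers].values)
)
define_constraints(n, lhs, "=", 0, 'Link', 'charger_ratio')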
@@ -138,44 +99,28 @@ def add_chp_constraints(n):
     electric = n.links.index[electric_bool]
     heat = n.links.index[heat_bool]

     electric_ext = n.links.index[electric_bool & n.links.p_nom_extendable]
     heat_ext = n.links.index[heat_bool & n.links.p_nom_extendable]

     electric_fix = n.links.index[electric_bool & ~n.links.p_nom_extendable]
     heat_fix = n.links.index[heat_bool & ~n.links.p_nom_extendable]

+    link_p = get_var(n, "Link", "p")
+
     if not electric_ext.empty:

         link_p_nom = get_var(n, "Link", "p_nom")

         #ratio of output heat to electricity set by p_nom_ratio
-        lhs = linexpr((n.links.loc[electric_ext,"efficiency"]
-                       *n.links.loc[electric_ext,'p_nom_ratio'],
+        lhs = linexpr((n.links.loc[electric_ext, "efficiency"]
+                       *n.links.loc[electric_ext, "p_nom_ratio"],
                        link_p_nom[electric_ext]),
-                      (-n.links.loc[heat_ext,"efficiency"].values,
+                      (-n.links.loc[heat_ext, "efficiency"].values,
                        link_p_nom[heat_ext].values))
+
         define_constraints(n, lhs, "=", 0, 'chplink', 'fix_p_nom_ratio')

-    if not electric.empty:
-        link_p = get_var(n, "Link", "p")
-        #backpressure
-        lhs = linexpr((n.links.loc[electric,'c_b'].values
-                       *n.links.loc[heat,"efficiency"],
-                       link_p[heat]),
-                      (-n.links.loc[electric,"efficiency"].values,
-                       link_p[electric].values))
-        define_constraints(n, lhs, "<=", 0, 'chplink', 'backpressure')
-
-    if not electric_ext.empty:
-        link_p_nom = get_var(n, "Link", "p_nom")
-        link_p = get_var(n, "Link", "p")
         #top_iso_fuel_line for extendable
         lhs = linexpr((1,link_p[heat_ext]),
                       (1,link_p[electric_ext].values),

@@ -183,222 +128,93 @@ def add_chp_constraints(n):
         define_constraints(n, lhs, "<=", 0, 'chplink', 'top_iso_fuel_line_ext')

     if not electric_fix.empty:
-        link_p = get_var(n, "Link", "p")
+
         #top_iso_fuel_line for fixed
         lhs = linexpr((1,link_p[heat_fix]),
                       (1,link_p[electric_fix].values))
-        define_constraints(n, lhs, "<=", n.links.loc[electric_fix,"p_nom"].values, 'chplink', 'top_iso_fuel_line_fix')
+
+        rhs = n.links.loc[electric_fix, "p_nom"].values
+
+        define_constraints(n, lhs, "<=", rhs, 'chplink', 'top_iso_fuel_line_fix')
+
+    if not electric.empty:
+
+        #backpressure
+        lhs = linexpr((n.links.loc[electric, "c_b"].values
+                       *n.links.loc[heat, "efficiency"],
+                       link_p[heat]),
+                      (-n.links.loc[electric, "efficiency"].values,
+                       link_p[electric].values))
+
+        define_constraints(n, lhs, "<=", 0, 'chplink', 'backpressure')

-def add_land_use_constraint(n):
-    #warning: this will miss existing offwind which is not classed AC-DC and has carrier 'offwind'
-    for carrier in ['solar', 'onwind', 'offwind-ac', 'offwind-dc']:
-        existing_capacities = n.generators.loc[n.generators.carrier==carrier,"p_nom"].groupby(n.generators.bus.map(n.buses.location)).sum()
-        existing_capacities.index += " " + carrier + "-" + snakemake.wildcards.planning_horizons
-        n.generators.loc[existing_capacities.index,"p_nom_max"] -= existing_capacities
-
-    n.generators.p_nom_max[n.generators.p_nom_max<0]=0.

 def extra_functionality(n, snapshots):
-    #add_opts_constraints(n, opts)
-    #add_eps_storage_constraint(n)
     add_chp_constraints(n)
     add_battery_constraints(n)
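Read together, the three CHP constraints pin down feasible operation. A sketch in symbols (p are link input powers, p-bar nominal capacities, eta the link efficiencies, c_b the backpressure coefficient, r the p_nom_ratio):

\begin{aligned}
\eta_{el}\, r\, \bar{p}_{el} - \eta_{th}\, \bar{p}_{th} &= 0 && \text{(fix\_p\_nom\_ratio)}\\
p_{th} + p_{el} &\le \bar{p}_{el} && \text{(top\_iso\_fuel\_line)}\\
c_b\, \eta_{th}\, p_{th} - \eta_{el}\, p_{el} &\le 0 && \text{(backpressure)}
\end{aligned}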
-def fix_branches(n, lines_s_nom=None, links_p_nom=None):
-    if lines_s_nom is not None and len(lines_s_nom) > 0:
-        n.lines.loc[lines_s_nom.index,"s_nom"] = lines_s_nom.values
-        n.lines.loc[lines_s_nom.index,"s_nom_extendable"] = False
-    if links_p_nom is not None and len(links_p_nom) > 0:
-        n.links.loc[links_p_nom.index,"p_nom"] = links_p_nom.values
-        n.links.loc[links_p_nom.index,"p_nom_extendable"] = False
-
-def solve_network(n, config=None, solver_log=None, opts=None):
-    if config is None:
-        config = snakemake.config['solving']
-    solve_opts = config['options']
-
-    solver_options = config['solver'].copy()
-    if solver_log is None:
-        solver_log = snakemake.log.solver
+def solve_network(n, config, opts='', **kwargs):
+    solver_options = config['solving']['solver'].copy()
     solver_name = solver_options.pop('name')
+    cf_solving = config['solving']['options']
+    track_iterations = cf_solving.get('track_iterations', False)
+    min_iterations = cf_solving.get('min_iterations', 4)
+    max_iterations = cf_solving.get('max_iterations', 6)

-    def run_lopf(n, allow_warning_status=False, fix_zero_lines=False, fix_ext_lines=False):
-        free_output_series_dataframes(n)
-
-        if fix_zero_lines:
-            fix_lines_b = (n.lines.s_nom_opt == 0.) & n.lines.s_nom_extendable
-            fix_links_b = (n.links.carrier=='DC') & (n.links.p_nom_opt == 0.) & n.links.p_nom_extendable
-            fix_branches(n,
-                         lines_s_nom=pd.Series(0., n.lines.index[fix_lines_b]),
-                         links_p_nom=pd.Series(0., n.links.index[fix_links_b]))
-
-        if fix_ext_lines:
-            fix_branches(n,
-                         lines_s_nom=n.lines.loc[n.lines.s_nom_extendable, 's_nom_opt'],
-                         links_p_nom=n.links.loc[(n.links.carrier=='DC') & n.links.p_nom_extendable, 'p_nom_opt'])
-            if "line_volume_constraint" in n.global_constraints.index:
-                n.global_constraints.drop("line_volume_constraint",inplace=True)
-        else:
-            if "line_volume_constraint" not in n.global_constraints.index:
-                line_volume = getattr(n, 'line_volume_limit', None)
-                if line_volume is not None and not np.isinf(line_volume):
-                    n.add("GlobalConstraint",
-                          "line_volume_constraint",
-                          type="transmission_volume_expansion_limit",
-                          carrier_attribute="AC,DC",
-                          sense="<=",
-                          constant=line_volume)
-
-        # Firing up solve will increase memory consumption tremendously, so
-        # make sure we freed everything we can
-        gc.collect()
-
-        #from pyomo.opt import ProblemFormat
-        #print("Saving model to MPS")
-        #n.model.write('/home/ka/ka_iai/ka_kc5996/projects/pypsa-eur/128-B-I.mps', format=ProblemFormat.mps)
-        #print("Model is saved to MPS")
-        #sys.exit()
-
-        status, termination_condition = n.lopf(pyomo=False,
-                                               solver_name=solver_name,
-                                               solver_logfile=solver_log,
-                                               solver_options=solver_options,
-                                               solver_dir=tmpdir,
-                                               extra_functionality=extra_functionality,
-                                               formulation=solve_opts['formulation'])
-        #extra_postprocessing=extra_postprocessing
-        #keep_files=True
-        #free_memory={'pypsa'}
-
-        assert status == "ok" or allow_warning_status and status == 'warning', \
-            ("network_lopf did abort with status={} "
-             "and termination_condition={}"
-             .format(status, termination_condition))
-
-        if not fix_ext_lines and "line_volume_constraint" in n.global_constraints.index:
-            n.line_volume_limit_dual = n.global_constraints.at["line_volume_constraint","mu"]
-            print("line volume limit dual:",n.line_volume_limit_dual)
-
-        return status, termination_condition
-
-    lines_ext_b = n.lines.s_nom_extendable
-    if lines_ext_b.any():
-        # puh: ok, we need to iterate, since there is a relation
-        # between s/p_nom and r, x for branches.
-        msq_threshold = 0.01
-        lines = pd.DataFrame(n.lines[['r', 'x', 'type', 'num_parallel']])
-
-        lines['s_nom'] = (
-            np.sqrt(3) * n.lines['type'].map(n.line_types.i_nom) *
-            n.lines.bus0.map(n.buses.v_nom)
-        ).where(n.lines.type != '', n.lines['s_nom'])
-
-        lines_ext_typed_b = (n.lines.type != '') & lines_ext_b
-        lines_ext_untyped_b = (n.lines.type == '') & lines_ext_b
-
-        def update_line_parameters(n, zero_lines_below=10, fix_zero_lines=False):
-            if zero_lines_below > 0:
-                n.lines.loc[n.lines.s_nom_opt < zero_lines_below, 's_nom_opt'] = 0.
-                n.links.loc[(n.links.carrier=='DC') & (n.links.p_nom_opt < zero_lines_below), 'p_nom_opt'] = 0.
-            if lines_ext_untyped_b.any():
-                for attr in ('r', 'x'):
-                    n.lines.loc[lines_ext_untyped_b, attr] = (
-                        lines[attr].multiply(lines['s_nom']/n.lines['s_nom_opt'])
-                    )
-            if lines_ext_typed_b.any():
-                n.lines.loc[lines_ext_typed_b, 'num_parallel'] = (
-                    n.lines['s_nom_opt']/lines['s_nom']
-                )
-                logger.debug("lines.num_parallel={}".format(n.lines.loc[lines_ext_typed_b, 'num_parallel']))
-
-        iteration = 1
-        lines['s_nom_opt'] = lines['s_nom'] * n.lines['num_parallel'].where(n.lines.type != '', 1.)
-        status, termination_condition = run_lopf(n, allow_warning_status=True)
-
-        def msq_diff(n):
-            lines_err = np.sqrt(((n.lines['s_nom_opt'] - lines['s_nom_opt'])**2).mean())/lines['s_nom_opt'].mean()
-            logger.info("Mean square difference after iteration {} is {}".format(iteration, lines_err))
-            return lines_err
-
-        min_iterations = solve_opts.get('min_iterations', 2)
-        max_iterations = solve_opts.get('max_iterations', 999)
-        while msq_diff(n) > msq_threshold or iteration < min_iterations:
-            if iteration >= max_iterations:
-                logger.info("Iteration {} beyond max_iterations {}. Stopping ...".format(iteration, max_iterations))
-                break
-            update_line_parameters(n)
-            lines['s_nom_opt'] = n.lines['s_nom_opt']
-            iteration += 1
-            status, termination_condition = run_lopf(n, allow_warning_status=True)
-
-        update_line_parameters(n, zero_lines_below=100)
-
-        logger.info("Starting last run with fixed extendable lines")
-
-    # Not really needed, could also be taken out
-    # if 'snakemake' in globals():
-    #     fn = os.path.basename(snakemake.output[0])
-    #     n.export_to_netcdf('/home/vres/data/jonas/playground/pypsa-eur/' + fn)
-
-    status, termination_condition = run_lopf(n, allow_warning_status=True, fix_ext_lines=True)
-
-    # Drop zero lines from network
-    # zero_lines_i = n.lines.index[(n.lines.s_nom_opt == 0.) & n.lines.s_nom_extendable]
-    # if len(zero_lines_i):
-    #     n.mremove("Line", zero_lines_i)
-    # zero_links_i = n.links.index[(n.links.p_nom_opt == 0.) & n.links.p_nom_extendable]
-    # if len(zero_links_i):
-    #     n.mremove("Link", zero_links_i)
+    # add to network for extra_functionality
+    n.config = config
+    n.opts = opts
+
+    if cf_solving.get('skip_iterations', False):
+        network_lopf(n, solver_name=solver_name, solver_options=solver_options,
+                     extra_functionality=extra_functionality, **kwargs)
+    else:
+        ilopf(n, solver_name=solver_name, solver_options=solver_options,
+              track_iterations=track_iterations,
+              min_iterations=min_iterations,
+              max_iterations=max_iterations,
+              extra_functionality=extra_functionality, **kwargs)

     return n
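The bespoke iteration loop is replaced by pypsa.linopf.ilopf, which re-solves while updating line impedances, with skip_iterations as an opt-out. Since the new keys are read with .get(), a plausible shape of the config block consumed here (the fallbacks mirror the defaults above; other keys and values are illustrative):

# Plausible solving options consumed by solve_network(); not taken verbatim
# from the repository's config.yaml.
config = {
    "solving": {
        "options": {
            "skip_iterations": False,   # True -> single network_lopf call
            "track_iterations": False,
            "min_iterations": 4,
            "max_iterations": 6,
        },
        "solver": {"name": "gurobi"},   # remaining keys go to the solver
    }
}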
 if __name__ == "__main__":
-    # Detect running outside of snakemake and mock snakemake for testing
     if 'snakemake' not in globals():
-        from vresutils.snakemake import MockSnakemake, Dict
-        snakemake = MockSnakemake(
-            wildcards=dict(network='elec', simpl='', clusters='39', lv='1.0',
-                           sector_opts='Co2L0-168H-T-H-B-I-solar3-dist1',
-                           co2_budget_name='b30b3', planning_horizons='2050'),
-            input=dict(network="pypsa-eur-sec/results/test/prenetworks_brownfield/elec_s{simpl}_{clusters}_lv{lv}__{sector_opts}_{co2_budget_name}_{planning_horizons}.nc"),
-            output=["results/networks/s{simpl}_{clusters}_lv{lv}_{sector_opts}_{co2_budget_name}_{planning_horizons}-test.nc"],
-            log=dict(gurobi="logs/elec_s{simpl}_{clusters}_lv{lv}_{sector_opts}_{co2_budget_name}_{planning_horizons}_gurobi-test.log",
-                     python="logs/elec_s{simpl}_{clusters}_lv{lv}_{sector_opts}_{co2_budget_name}_{planning_horizons}_python-test.log")
+        from helper import mock_snakemake
+        snakemake = mock_snakemake(
+            'solve_network',
+            simpl='',
+            clusters=48,
+            lv=1.0,
+            sector_opts='Co2L0-168H-T-H-B-I-solar3-dist1',
+            planning_horizons=2050,
         )
-        import yaml
-        with open('config.yaml', encoding='utf8') as f:
-            snakemake.config = yaml.safe_load(f)
-
-    tmpdir = snakemake.config['solving'].get('tmpdir')
-    if tmpdir is not None:
-        patch_pyomo_tmpdir(tmpdir)

     logging.basicConfig(filename=snakemake.log.python,
                         level=snakemake.config['logging_level'])

+    tmpdir = snakemake.config['solving'].get('tmpdir')
+    if tmpdir is not None:
+        Path(tmpdir).mkdir(parents=True, exist_ok=True)
+
+    opts = snakemake.wildcards.opts.split('-')
+    solve_opts = snakemake.config['solving']['options']
+
-    with memory_logger(filename=getattr(snakemake.log, 'memory', None), interval=30.) as mem:
+    fn = getattr(snakemake.log, 'memory', None)
+    with memory_logger(filename=fn, interval=30.) as mem:

-        n = pypsa.Network(snakemake.input.network,
-                          override_component_attrs=override_component_attrs)
-
-        n = prepare_network(n)
-
-        n = solve_network(n)
+        overrides = override_component_attrs(snakemake.input.overrides)
+        n = pypsa.Network(snakemake.input.network, override_component_attrs=overrides)
+
+        n = prepare_network(n, solve_opts)
+
+        n = solve_network(n, config=snakemake.config, opts=opts,
+                          solver_dir=tmpdir,
+                          solver_logfile=snakemake.log.solver)
+
+        if "lv_limit" in n.global_constraints.index:
+            n.line_volume_limit = n.global_constraints.at["lv_limit", "constant"]
+            n.line_volume_limit_dual = n.global_constraints.at["lv_limit", "mu"]

         n.export_to_netcdf(snakemake.output[0])