---
title: "Updating Functional Flow Predictions"
description: |
  Steps to use revised watershed data to generate functional flow predictions
author:
  - name: Ryan Peek
    affiliation: Center for Watershed Sciences, UCD
    affiliation_url: https://watershed.ucdavis.edu
date: "`r Sys.Date()`"
output:
  distill::distill_article:
    toc: true
editor_options:
  chunk_output_type: console
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, tidy = FALSE, message = FALSE, warning = FALSE)
library(knitr)
library(here)
library(glue)
library(sf)
suppressPackageStartupMessages(library(tidyverse))
library(tmap)
library(tmaptools)
library(mapview)
mapviewOptions(fgb = FALSE)
# data
catch <- read_rds(here("data_input/catchments_final_lshasta.rds"))
catch_clean <- read_rds(here("data_output/cleaned_full_lshasta_nhd_catchments.rds"))
h10 <- read_rds(here("data_input/huc10_little_shasta.rds"))
gages <- read_rds(here("data_input/gages_little_shasta.rds"))
springs <- read_rds(here("data_input/springs_little_shasta.rds"))
lsh_flowlines <- read_rds(here("data_output/cleaned_full_lshasta_nhd_flownet.rds"))
lois <- lsh_flowlines %>%
filter(ID %in% c(3917946, 3917950, 3917198))
# change to include full delineation RMd or not
FULL_MODE <- TRUE
# https://bookdown.org/yihui/rmarkdown-cookbook/child-document.html
```
# Overview
This document outlines the process used to generate revised functional flow metric predictions for a given watershed. The steps below are what is required to generate new predictions for the Little Shasta River case study, but the same steps can be followed in other watersheds. Generally, this requires revising the catchment and streamline delineations, re-generating the catchment- and watershed-scale data used in the functional flow models, calculating accumulated values for each stream segment (`COMID`), scaling or transforming these data, and finally running the functional flow predictive model to generate revised predictions for the functional flow metrics.
Each step is detailed below, along with its requirements and the potential issues to consider.
```{r makestaticcatchmap, echo=FALSE, eval=FALSE, fig.cap="Revised catchments associated with cleaned streamlines in the Little Shasta.", layout="l-page"}
tmap_mode("plot")
#gm_osm <- read_osm(catch_clean, type = "esri-topo", raster=TRUE)
#tm_shape(gm_osm) + tm_rgb() +
tm_shape(catch_clean) +
tm_polygons(col="gray", border.lwd = 0.3, border.alpha = 0.4, alpha=0.2, border.col = "gray30") +
tm_shape(lois) +
tm_sf(col="orange", lwd=8) +
tm_shape(lsh_flowlines) + tm_lines(col="royalblue2", lwd=2) +
tm_compass(type = "4star", position = c("left","bottom")) +
tm_scale_bar(position = c("left","bottom")) +
tm_layout(frame = FALSE,
#legend.position = c("left", "top"),
title = 'Little Shasta Watershed',
#title.position = c('right', 'top'),
legend.outside = FALSE,
attr.outside = FALSE,
fontfamily = "Roboto Condensed",
#inner.margins = 0.01, outer.margins = (0.01),
#legend.position = c(0.6,0.85),
title.position = c(0.7, 0.95))
# works with roboto
# tmap::tmap_save(filename = "figures/tmap_lshasta.png", width = 10, height = 8, dpi=300)
# doesn't work with roboto
#tmap::tmap_save(filename = "figs/tmap_lshasta.pdf", width = 10, height = 8, dpi=300)
# crop
# tinytex::tlmgr_install('pdfcrop')
# knitr::plot_crop("figures/tmap_lshasta.png")
```
```{r staticcatchmap, echo=FALSE, eval=TRUE, fig.cap="Revised catchments associated with cleaned streamlines in the Little Shasta.", layout="l-page"}
knitr::include_graphics(here("figures/tmap_lshasta.png"))
```
```{r, child=if(FULL_MODE) 'appendix01_catchment_delineation.Rmd'}
```
# Download Catchment Attributes
With the cleaned flowline network and catchment map, we can proceed with downloading the catchment attributes used in the flow models. These data were downloaded from ScienceBase, "Select Attributes for NHDPlus Version 2.1 Reach Catchments and Modified Network Routed Upstream Watersheds for the Conterminous United States" (Wieczorek, Jackson and Schwarz, 2018, <https://doi.org/10.5066/P9PA63SM>); a download sketch follows the list below.
- [NHD Flowline Routing](https://www.sciencebase.gov/catalog/item/5b92790be4b0702d0e809fe5)
- [Select Attributes](https://www.sciencebase.gov/catalog/item/5669a79ee4b08895842a1d47)
- [Climate and Water Balance Attributes](https://www.sciencebase.gov/catalog/item/566f6c76e4b09cfe53ca77fe)
- [Krug Average Annual Runoff](https://water.usgs.gov/GIS/metadata/usgswrd/XML/runoff.xml) (and [zipped data file](https://water.usgs.gov/GIS/dsdl/runoff.e00.zip))
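The exact download mechanics were not captured here, but a minimal sketch using the `{sbtools}` package is shown below. The item IDs come from the ScienceBase URLs above; the destination directory is an assumption.
```{r sbDownload, eval=FALSE, echo=TRUE}
library(sbtools)
library(here)

# ScienceBase item IDs, taken from the URLs listed above
sb_items <- c(
  flowline_routing      = "5b92790be4b0702d0e809fe5",
  select_attributes     = "5669a79ee4b08895842a1d47",
  climate_water_balance = "566f6c76e4b09cfe53ca77fe"
)

# inspect the files attached to an item before downloading
item_list_files(sb_items[["select_attributes"]])

# download all attached files into data_input/ (assumed destination)
for (id in sb_items) {
  item_file_download(id, dest_dir = here("data_input"), overwrite_file = FALSE)
}
```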
## Scaling Data
After the subcatchment and basin-level attributes have been downloaded, a common step is centering or scaling the data so that variables share comparable units and magnitudes. Scaling is typical across many model types and yields better performance when predictors fall within a similar range and variance rather than on widely different scales. In this specific case, the exact scaling used in the original flow models is unknown, so we currently cannot recreate the exact accumulation dataset used in those models.
However, once the scaling factor is documented, the accumulation and the associated surface-water functional flow metrics should be reproducible.
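Purely as an illustration (again, the original transformation is unknown), a standard z-score scaling of the numeric accumulation variables might look like the sketch below; `acc_dat`, `comid`, and `wa_yr` are assumed names.
```{r scaleSketch, eval=FALSE, echo=TRUE}
library(dplyr)

# Illustrative z-score scaling; NOT the (unknown) scaling used in the
# original flow models. `acc_dat` is an assumed table of accumulated
# values with identifier columns comid and wa_yr.
acc_scaled <- acc_dat %>%
  mutate(across(
    where(is.numeric) & !any_of(c("comid", "wa_yr")), # leave IDs/years alone
    ~ as.numeric(scale(.x))                           # (x - mean) / sd
  ))
```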
## Accumulation by `COMID` and covariate
Accumulation of the catchment variables was done using variable-appropriate calculations (i.e., area-weighted average, sum, max, min, etc.). The area-weighted average was the primary calculation for the majority of the metrics; the weight for each `COMID` is its local catchment area divided by the total accumulated drainage area for that catchment (`area_weight = local_catch_area / drain_area`). In headwater catchments, the area weight equals **`1`**, because the local catchment area and the cumulative drainage area are the same. A short sketch of this calculation follows below.
The original data and cross-walked information are in this [file on GitHub](https://github.com/ryanpeek/ffm_accumulation/blob/main/data_clean/08_accumulated_final_xwalk.csv). There are more than 250 variables in the dataset used for the accumulation, and all of them need to be joined to the updated network and catchment areas.
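A minimal sketch of the area-weight calculation, where the column names `local_catch_area` and `drain_area` are placeholders for whatever the catchment table uses:
```{r areaWeightSketch, eval=FALSE, echo=TRUE}
library(dplyr)

# area_weight = local catchment area / total accumulated drainage area
catch_dat <- catch_dat %>%
  mutate(area_weight = local_catch_area / drain_area)

# sanity check: weights never exceed 1; headwater catchments equal exactly 1
stopifnot(all(catch_dat$area_weight <= 1, na.rm = TRUE))
```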
## Calculate Accumulation by Variable Type
Using the crosswalk to identify which type of accumulation calculation each variable requires, the accumulation values were calculated for each year, `COMID`, and variable of interest. These calculations used the raw dataset downloaded and trimmed from the national NHD dataset described above.
For example, these are the variables that used the area-weighted average (*`AWA`*):
```{r accumVarSel, eval=FALSE, echo=TRUE}
# crosswalk of variables
xwalk <- read_csv(here("data_input/08_accumulated_final_xwalk.csv"))
vars_awa <- xwalk %>%
filter(accum_op_class == "AWA") %>%
select(mod_input_final) %>%
mutate(dat_output = case_when(
mod_input_final == "ann_min_precip_basin" ~ "cat_minp6190",
mod_input_final == "ann_max_precip_basin" ~ "cat_maxp6190",
mod_input_final == "cat_ppt7100_ann" ~ "pptavg_basin",
mod_input_final == "et_basin" ~ "cat_et",
mod_input_final == "wtdepave" ~ "cat_wtdep",
mod_input_final == "ann_min_precip" ~ "cat_minp6190",
mod_input_final == "ann_max_precip" ~ "cat_maxp6190",
mod_input_final == "pet_basin" ~ "cat_pet",
mod_input_final == "rh_basin" ~ "cat_rh",
mod_input_final == "pptavg_basin" ~"cat_ppt7100_ann",
TRUE ~ mod_input_final),
.after = mod_input_final)
# filter to vars
cat_df_awa <- catch_dat %>%
select(comid:wa_yr, vars_awa$dat_output)
```
Using the list of `COMIDs` generated previously (for each `COMID`, a list of all upstream `COMIDs`), the accumulation calculation can be looped over each variable of interest, as sketched below. These data are then collated by `COMID` and year and used in the functional flow models.
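A minimal sketch of that loop for the `AWA` variables. It assumes `upstream_list` is a named list mapping each `COMID` to all of its upstream `COMIDs` (including itself), and that `cat_df_awa` carries a local catchment area column (here called `areasqkm`); both names are assumptions.
```{r accumSketch, eval=FALSE, echo=TRUE}
library(dplyr)
library(purrr)

# area-weighted average for one COMID: weighted.mean() with local catchment
# areas as weights is equivalent to sum(x * area) / sum(area)
accumulate_awa <- function(id, dat, upstream_list, vars) {
  ups <- upstream_list[[as.character(id)]]
  dat %>%
    filter(comid %in% ups) %>%
    group_by(wa_yr) %>%
    summarise(
      across(all_of(vars), ~ weighted.mean(.x, w = areasqkm, na.rm = TRUE)),
      .groups = "drop"
    ) %>%
    mutate(comid = as.numeric(id), .before = wa_yr)
}

awa_vars <- setdiff(names(cat_df_awa), c("comid", "wa_yr", "areasqkm"))
acc_awa <- map_dfr(
  names(upstream_list),
  \(id) accumulate_awa(id, cat_df_awa, upstream_list, awa_vars)
)
```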
# Functional Flow Modeling & Future Steps
The functional flow modeling process and framework are described in Grantham et al. (2022) and Yarnell et al. (2021). Using code and parameters derived from these studies, a functional flow model was fit independently for each metric, yielding 24 individual models for the 24 flow metrics. The reference dataset used to derive these models is described in Grantham et al. (2022). The reference models were then used to predict flow metrics from the Little Shasta River accumulated dataset described above. Predictions were made for each year, and the median of the annual predictions was used to estimate each percentile (e.g., 10th, 25th, 50th, 75th, 90th) value.
Ultimately, the process described above is functional and can be applied broadly in surface-water-driven systems. In the Little Shasta River, the upstream location of interest (LOI-3) was used because it is least impacted by surface water/groundwater interactions, and its functional flow predictions were unaffected by the errors identified in the NHD flow network.
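The fitted model objects themselves come from the cited studies; purely to illustrate the predict-then-summarize step, a sketch assuming a named list `ffm_models` (one fitted model per metric) and a predictor table `new_dat` (both assumed names):
```{r predictSketch, eval=FALSE, echo=TRUE}
library(dplyr)
library(purrr)

# predict each metric for every COMID and year;
# ffm_models and new_dat are assumed objects, not the published code
preds <- imap_dfr(ffm_models, function(mod, metric) {
  tibble(
    comid  = new_dat$comid,
    wa_yr  = new_dat$wa_yr,
    metric = metric,
    pred   = predict(mod, newdata = new_dat)
  )
})

# median of the annual predictions for each COMID and metric
pred_summary <- preds %>%
  group_by(comid, metric) %>%
  summarise(pred = median(pred, na.rm = TRUE), .groups = "drop")
```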
## Groundwater Dependent Ecosystems
The Little Shasta watershed contains a significant fraction of area classified as [Groundwater Dependent Ecosystems (GDEs)](https://gis.water.ca.gov/arcgis/rest/services/Biota/i02_NaturalCommunitiesCommonlyAssociatedWithGroundwater/MapServer). GDEs (live [interactive map here](https://www.arcgis.com/home/webmap/viewer.html?url=https%3A%2F%2Fgis.water.ca.gov%2Farcgis%2Frest%2Fservices%2FBiota%2Fi02_NaturalCommunitiesCommonlyAssociatedWithGroundwater%2FMapServer&source=sd)) represent a compilation of phreatophytic vegetation, regularly flooded natural wetlands and riverine areas, and springs and seeps extracted from 48 publicly available state and federal agency datasets. Two habitat classes are included in the dataset: wetland features commonly associated with the surface expression of groundwater under natural, unmodified conditions, and vegetation types commonly associated with the sub-surface presence of groundwater (phreatophytes).
Looking at the landscape classification for the Little Shasta, the lower section of the watershed is predominantly cultivated crops, hay/pasture, and shrub/scrub, and contains the GDEs, while the upper watershed is predominantly evergreen forest.
```{r getGagesGDEs, eval=FALSE, echo=FALSE}
library(tmap)
library(tmaptools)
# USGS GAGES
gages_usgs <- read_rds(file = "data_input/gages_little_shasta.rds") %>%
mutate(site_number=as.character(site_number))
# LSR GAGE
gage_lsr <- st_as_sf(data.frame("site_longitude"=-122.350357, "site_latitude"=41.733093, "site_name"="LSR", "site_number"="LSR", "site_agency"="UCD"), coords=c("site_longitude", "site_latitude"), crs=4326, remove=FALSE)
# filter to just stations of interest
gages <- gages_usgs %>% filter(site_number %in% c("11517000","11516900")) %>%
bind_rows(gage_lsr)
# boundary?
gage_bounds <- st_as_sfc(st_bbox(gages)) %>% st_transform(3310) %>% st_buffer(800, endCapStyle = 'ROUND') %>% st_transform(4326)
plot(gage_bounds)
plot(gages$geometry, add = TRUE)
# set icon for gages (good library here: https://www.nps.gov/maps/tools/symbol-library/index.html)
icon_gages <- tmap_icons(file = "figures/gage_icon.png")
# png file of the icon to be drawn; can be local or remote (url)
gde_veg_lsh <- read_rds("data_output/gde_veg_lsh.rds")
gde_wet_lsh <- read_rds("data_output/gde_wetland_lsh.rds")
tm_shape(h10) +
tm_polygons(border.col="gray30", alpha = 0, lwd=3) +
tm_shape(gde_wet_lsh, bbox = gage_bounds) +
tm_polygons(col="forestgreen", alpha=0.5, border.col = NULL, title="GDEs", legend.show=TRUE) +
tm_shape(lsh_flowlines) +
tm_lines(lwd=0.9, col = "steelblue2",legend.lwd.show = FALSE) +
tm_shape(gages) + tm_symbols(shape = icon_gages, size = 0.3, border.lwd = NA, legend.col.show = TRUE) +
tm_layout(bg.color="gray98", frame = F, fontfamily = "Roboto", title.size = 2) +
tm_scale_bar(color.dark = "grey30")
```
```{r, getStreamTypes, eval=FALSE}
# Add StreamTypes and Klamath ---------------------------------------------
shasta_main <- read_rds("data_output/shasta_main_river.rds")
lsh_strmtype <- read_rds("data_output/lsh_streamtypes.rds")
# fix names
lsh_strmtype <- lsh_strmtype %>%
mutate(strmtype=case_when(
Name == "Little Shasta Foothills" ~ "Foothills",
Name == "Little Shasta Bottomlands" ~ "Bottomlands",
Name == "Little Shasta Headwaters" ~ "Headwaters",
TRUE ~ "Tributaries"))
# drop COMIDs that are canals:
todrop <- c(3917952, 3917266, 3917222, 948010091, 948010092,
948010096, 948010095, 948010046, 948010047, 948010048,
3917924, 3917100, 3917926)
lsh_foothills <- c(10040642, 10035874,10033986, 10032346, 10038060)
lsh_bottomlands <- c(10021351, 10020862, 10020426)
lsh_hw <- c(10043744, 10047601, 10052559, 10059365, 10069513, 0135206)
flownet <- read_rds("data_output/cleaned_full_lshasta_nhd_flownet.rds")
# update flowlines_map layer:
flownet_map <- flownet %>%
filter(!ID %in% todrop) %>%
mutate(strmtype=case_when(
hydroseq %in% lsh_foothills ~ "Foothills",
hydroseq %in% lsh_bottomlands ~ "Bottomlands",
hydroseq %in% lsh_hw ~ "Headwaters",
TRUE ~ "Headwaters"))
# Make LOI ----------------------------------------------------------------
loi_pts <- st_point_on_surface(flownet_map) %>%
mutate(loi_id = case_when(
ID=="3917946" ~ "LOI-1",
ID=="3917950" ~ "LOI-2",
ID=="3917198" ~ "LOI-3"
)) %>% filter(!is.na(loi_id))
# Raster Data -------------------------------------------------------------
# get base layer for mapping
#gm_osm <- read_osm(bb(flownet_map), type = "stamen-watercolor", raster=TRUE, zoom = 12)
#save(gm_osm, file = "data_input/tmaptools_h10_stamen_wc.rda")
#gm_osm <- read_osm(bb(flownet_map), type = "stamen-terrain", raster=TRUE, zoom = 13)
#save(gm_osm, file = "data_input/tmaptools_h10_stamen_terrain.rda")
#gm_osm <- read_osm(bb(flownet_map), type = "osm", raster=TRUE, zoom = 13)
#save(gm_osm, file = "data_output/tmaptools_h10_osm.rda")
#tm_shape(gm_osm) + tm_rgb()
# get terrain
load("data_input/tmaptools_h10_osm_natgeo.rda") # topo base layer
tmap_options(max.raster = c(plot=1e6, view=1e6))
tm_shape(gm_osm) + tm_rgb() +
tm_shape(gde_wet_lsh) + tm_polygons(col="forestgreen", alpha=0.9, border.col = NULL, title="GDEs", legend.show=TRUE) +
tm_shape(lsh_strmtype) + tm_lines(col = "Name", lwd=3, palette = "viridis", n = 3, contrast = c(0.4, 1)) +
tm_shape(gages) + tm_symbols(shape = icon_gages, size = 0.1, border.lwd = NA, legend.col.show = TRUE) +
tm_shape(loi_pts) + tm_symbols(col="loi_id", alpha=0.5, title.col = "LOI", palette = "PuOr")+
tm_layout(frame = F, fontfamily = "Roboto", title.size = 2) +
tm_scale_bar(color.dark = "grey10")
```
```{r getNLCD, eval=FALSE}
# Get NLCD ----------------------------------------------------------------
# devtools::install_github("ropensci/FedData")
#library(FedData)
#nlcd <- get_nlcd(template = h10, label = "lshasta",
# extraction.dir = "data_input/nlcd")
#unique(getValues(nlcd))
# crop
library(stars)
nlcd_st <- read_stars("data_input/nlcd/lshasta_NLCD_Land_Cover_2019.tif")
#st_crs(nlcd_st)
h10_ls_crop <- st_transform(h10, st_crs(nlcd_st))
st_crs(nlcd_st) == st_crs(h10_ls_crop)
nlcd_crop <- st_crop(nlcd_st, h10_ls_crop)
# plot
tm_shape(nlcd_crop) + tm_raster(title = "2019 NLCD") +
# huc 10
tm_shape(h10) +
tm_polygons(border.col="gray30", alpha = 0, lwd=3) +
tm_shape(flownet_map) +
tm_lines(lwd="arbolatesum", col = "steelblue", scale = 4, legend.lwd.show = FALSE) +
tm_shape(flownet_map) +
tm_lines(lwd=0.9, col = "steelblue2",legend.lwd.show = FALSE) +
tm_legend(frame=FALSE,
legend.text.size=0.65, legend.position=c(0.01,0.03),
legend.show=TRUE, legend.outside = FALSE) +
tm_compass(type = "4star", position = c(0.82, 0.12), text.size = 0.5) +
tm_scale_bar(position = c(0.77,0.05), breaks = c(0,2.5,5),
text.size = 0.6) +
tm_layout(frame=FALSE, fontfamily = "Roboto")
tmap_save(filename = "figures/map_nlcd_2019_lshasta.png", dpi=300,
width = 11, height = 8)
# knitr::plot_crop() # if necessary
```
```{r nlcdmap, echo=FALSE, eval=TRUE, fig.cap="NLCD Map of Little Shasta.", layout="l-page"}
knitr::include_graphics(here("figures/map_nlcd_2019_lshasta.png"))
# knitr::plot_crop()
```
```{r nlcdtable, echo=FALSE, eval=TRUE, fig.cap="NLCD Table: proportions of land cover in Little Shasta HUC10."}
library(stars)
nlcd_st <- read_stars("data_input/nlcd/lshasta_NLCD_Land_Cover_2019.tif")
#st_crs(nlcd_st)
h10_ls_crop <- st_transform(h10, st_crs(nlcd_st))
#st_crs(nlcd_st) == st_crs(h10_ls_crop)
nlcd_crop <- st_crop(nlcd_st, h10_ls_crop)
table(droplevels(nlcd_crop)) %>%
  as.data.frame() %>%
  mutate(total = sum(Freq),
         prcnt = round(Freq / total * 100, 2)) %>%
  select(NLCD = 1, prcnt) %>%
  arrange(desc(prcnt)) %>%
  kable()
```
```{r fullMap, eval=FALSE, echo=FALSE}
# expand boundaries
gage_bounds <- st_as_sfc(st_bbox(gages)) %>% st_transform(3310) %>% st_buffer(1500, endCapStyle = 'ROUND') %>% st_transform(4326)
# see
#tmaptools::palette_explorer()
library(randomcoloR)
col4 <- randomcoloR::distinctColorPalette(k=4, runTsne = TRUE)
colblind <- c("#000000", "#E69F00", "#56B4E9", "#009E73",
"#F0E442", "#0072B2", "#D55E00", "#CC79A7")
#pie(rep(1, 8), col = colblind)
colblind4 <- c("#0072B2", "#009E73", "#F0E442", "#CC79A7")
#pie(rep(1, 4), col = colblind4)
# LShasta map with DEM
loi_map <-
# baselayer
tm_shape(gm_osm) + tm_rgb() +
# HUC10 boundary
tm_shape(h10_ls_crop) +
tm_polygons(border.col="gray30", alpha = 0, lwd=3) +
# gde
tm_shape(gde_veg_lsh, bbox = gage_bounds) +
tm_fill(col = "aquamarine3", alpha = 0.9, title = "GDE", legend.show = TRUE) +
tm_shape(gde_wet_lsh) +
tm_fill(col = "aquamarine3", alpha = 0.9, title = "GDE", legend.show = TRUE) +
tm_add_legend('fill', col=c('aquamarine3', 'seashell', 'darkseagreen3'), border.col='black', size=1,
labels=c(' GDE', " Private Land", " US Forest Service")) +
# flowlines
tm_shape(flownet_map) +
tm_lines(col="steelblue", lwd=0.4, scale = 2, legend.lwd.show = FALSE) +
tm_shape(flownet_map) +
tm_lines(col="strmtype", lwd="arbolatesum", scale = 5,
#palette = "Dark2", n = 4, contrast = c(0.3, 0.9),
palette = colblind4[c(1,2,4)],
legend.lwd.show = FALSE,
legend.col.show = TRUE, title.col = "") +
# flowlines shasta
#tm_shape(shasta_main) + tm_lines(col="darkblue", lwd=3, legend.lwd.show = FALSE) +
#tm_text("Name", shadow = TRUE, along.lines = TRUE, auto.placement = TRUE) +
# springs
tm_shape(springs) +
tm_dots(col="skyblue", size = 0.4, title = "Springs", legend.show = TRUE, shape = 21) +
tm_shape(springs[c(1,2),]) +
tm_text("Name", auto.placement = FALSE, col="gray10", size=0.8,
just = "bottom", ymod = -1, xmod=2, shadow = FALSE)+
tm_add_legend('symbol', col='skyblue', border.col='black', size=1,
labels=c(' Springs')) +
# gages
tm_shape(gages, bbox=gage_bounds) +
tm_symbols(shape = icon_gages, size = 0.1, border.lwd = NA, title.shape = "Gages") +
tm_text("site_number", col = "black", auto.placement = FALSE, size = 0.5,
just = "left", ymod = 1.2, xmod=0.5, shadow = TRUE) +
# LOI points
tm_shape(loi_pts) + tm_dots(col="darkgoldenrod1", shape=22, size=0.5, alpha=0.7) +
tm_text("loi_id", fontface = "bold", auto.placement = FALSE,
bg.color = "white", bg.alpha = 0.1, col = "darkgoldenrod2",
just = "right", xmod=-0.4, ymod=-1, shadow = TRUE) +
tm_add_legend('symbol',shape = 22, col='darkgoldenrod1', border.col='black', size=1,
labels=c(' Location of Interest')) +
# layout
tm_legend(bg.color="white", frame=TRUE, legend.text.size=0.8, legend.position=c(0.82,0.05),
legend.show=TRUE, legend.outside = FALSE) +
tm_layout(frame = FALSE,
attr.outside = FALSE)+
tm_compass(type = "4star", position = c(0.12, 0.12), text.size = 0.5) +
tm_scale_bar(position = c(0.05,0.05), breaks = c(0,2.5,5),
text.size = 0.6)
loi_map
# save
tmap_save(loi_map, filename = "figures/map_of_loi_w_gdes.jpg", height = 8.5, width = 11, units = "in", dpi = 300)
```
```{r fullLSHmap, echo=FALSE, eval=TRUE, fig.cap="Map of Little Shasta.", layout="l-page"}
knitr::include_graphics(here("figures/map_of_loi_w_gdes.jpg"))
```
## Reproducibility
The entire analysis was built on the `{targets}` package in R (version 4.1.3), which tracks dependencies across the workflow: a change at any point invalidates only the downstream components, so only those need to be rerun.
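As a sketch of what that looks like (target and helper-function names here are illustrative, not the actual pipeline), a minimal `_targets.R` might be:
```{r targetsSketch, eval=FALSE, echo=TRUE}
# _targets.R -- illustrative only; see the repository below for the real pipeline
library(targets)
tar_option_set(packages = c("dplyr", "sf"))

list(
  # file target: changes to the flow network invalidate everything downstream
  tar_target(flownet_file,
             "data_output/cleaned_full_lshasta_nhd_flownet.rds",
             format = "file"),
  tar_target(flownet, readRDS(flownet_file)),
  tar_target(acc_dat, accumulate_variables(flownet)),  # hypothetical helper
  tar_target(ffm_preds, predict_ffms(acc_dat))         # hypothetical helper
)
```
Running `tar_make()` then rebuilds only the targets whose upstream dependencies have changed.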
The code and writeups can be found here: https://github.com/ryanpeek/ffm_targets