Title: Geographic Tools for Studying Gerrymandering
Description: A compilation of tools to complete common tasks for studying gerrymandering. This focuses on the geographic tool side of common problems, such as linking different levels of spatial units or estimating how to break up units. Functions exist for creating redistricting-focused data for the US.
Authors: Christopher T. Kenny [aut, cre], Cory McCartan [ctb]
Maintainer: Christopher T. Kenny <[email protected]>
License: MIT + file LICENSE
Version: 2.4.0
Built: 2024-10-28 05:22:08 UTC
Source: https://github.com/christopherkenny/geomander
A compilation of tools to complete common tasks for studying gerrymandering. This focuses on the geographic tool side of common problems, such as linking different levels of spatial units or estimating how to break up units. Functions exist for creating redistricting-focused data for the US.
Index of help topics:
add_edge: Add Edges to an Adjacency List
adjacency: Build Adjacency List
alarm_states: List Available States from ALARM Data
baf_to_vtd: Estimate Plans from a Block Assignment File to Voting Districts
block2prec: Aggregate Block Table by Matches
block2prec_by_county: Aggregate Block Table by Matches and County
check_contiguity: Check Contiguity by Group
check_polygon_contiguity: Check Polygon Contiguity
checkerboard: Checkerboard
checkerboard_adj: Checkerboard Adjacency
clean_vest: Clean VEST Names
compare_adjacencies: Compare Adjacency Lists
count_connections: Count Times Precincts are Connected
create_block_table: Create Block Level Data
create_tract_table: Create Tract Level Data
dra2r: DRA to R
estimate_down: Estimate Down Levels
estimate_up: Estimate Up Levels
geo_estimate_down: Estimate Down Geography Levels
geo_estimate_up: Estimate Up Geography Levels
geo_filter: Filter to Intersecting Pieces
geo_match: Match Across Geographic Layers
geo_plot: Plots a Shape with Row Numbers as Text
geo_plot_group: Create Plots of Shapes by Group with Connected Components Colored
geo_sort: Sort Precincts
geo_trim: Trim Away Small Pieces
geomander-package: Geographic Tools for Studying Gerrymandering
geos_centerish: Get the kind of center of each shape
geos_circle_center: Get the centroid of the maximum inscribed circle
get_alarm: Get ALARM Dataset
get_dra: Get Dave's Redistricting App Dataset
get_heda: Get Harvard Election Data Archive ("HEDA") Dataset
get_lewis: Get historical United States Congressional District Shapefiles
get_rpvnearme: Get Racially Polarized Voting Dataset from RPV Near Me
get_vest: Get Voting and Election Science Team ("VEST") Dataset
global_gearys: Compute Global Geary's C
global_morans: Compute Global Moran's I
gstar_i: Compute Standardized Getis Ord G*i
heda_states: List Available States from HEDA Dataverse
local_gearys: Compute Local Geary's C
local_morans: Compute Local Moran's I
nrcsd: nrcsd
orange: orange
precincts: precincts
r2dra: R to DRA
regionalize: Estimate Regions by Geographic Features
rockland: rockland
seam_adj: Filter Adjacency to Edges Along Border
seam_geom: Filter Shape to Geographies Along Border
seam_rip: Remove Edges along a Boundary
seam_sew: Suggest Edges to Connect Two Sides of a Border
split_precinct: Split a Precinct
st_centerish: Get the kind of center of each shape
st_circle_center: Get the centroid of the maximum inscribed circle
subtract_edge: Subtract Edges from an Adjacency List
suggest_component_connection: Suggest Connections for Disconnected Groups
suggest_neighbors: Suggest Neighbors for Lonely Precincts
towns: towns
va18sub: va18sub
va_blocks: va_blocks
va_vtd: va_vtd
vest_states: List Available States from VEST Dataverse
Maintainer: Christopher T. Kenny <[email protected]>
Author(s): Christopher T. Kenny [aut, cre] (<https://orcid.org/0000-0002-9386-6860>), Cory McCartan [ctb] (<https://orcid.org/0000-0002-6251-669X>)
Add Edges to an Adjacency List
add_edge(adj, v1, v2, ids = NULL, zero = TRUE)
adj | list of adjacent precincts
v1 | vector of vertex identifiers for the first vertex. Can be an integer index or a value to look up in `ids`.
v2 | vector of vertex identifiers for the second vertex. Can be an integer index or a value to look up in `ids`.
ids | A vector of identifiers which is used to look up the row indices for the vertices. If provided, the entries in `v1` and `v2` are looked up in `ids`.
zero | boolean, TRUE if the list is zero indexed. FALSE if one indexed.
adjacency list.
data(towns)
adj <- adjacency(towns)
add_edge(adj, 2, 3)
add_edge(adj, "West Haverstraw", "Stony Point", towns$MUNI)
This mimics redist's redist.adjacency, using GEOS rather than sf to compute the intersection patterns. It is faster than that version, but it forces a planar projection.
adjacency(shp, epsg = 3857)
shp | sf dataframe
epsg | numeric EPSG code to planarize to. Default is 3857.
list with nrow(shp) entries
data(precincts)
adj <- adjacency(precincts)
List Available States from ALARM Data
alarm_states()
character abbreviations for states
## Not run:
# relies on internet availability and interactivity on some systems
alarm_states()
## End(Not run)
District lines are often provided at the census block level, but analyses often occur at the voting district level. This provides a simple way to estimate the block level to the voting district level.
baf_to_vtd(baf, plan_name, GEOID = "GEOID", year = 2020)
baf | a tibble representing a block assignment file.
plan_name | character. Name of the column in `baf` that contains the district assignment for each block.
GEOID | character. Name of the column which corresponds to each block's GEOID, sometimes called "BLOCKID". Default is 'GEOID'.
year | the decade to request, either 2010 or 2020. Default is 2020.
If the blocks that make up a voting district are assigned to more than one district, this currently assigns the whole voting district to the most common district among its blocks.
a tibble with a vtd-level assignment file
# Not guaranteed to reach download from redistrict2020.org
## Not run:
# download and read baf ----
url <- paste0('https://github.com/PlanScore/Redistrict2020/',
              'raw/main/files/DE-2021-01/DE_SLDU_bef.zip')
tf <- tempfile('.zip')
utils::download.file(url, tf)
utils::unzip(tf, exdir = dirname(tf))
baf <- readr::read_csv(
  file = paste0(dirname(tf), '/DE_SLDU_bef.csv'),
  col_types = 'ci'
)
names(baf) <- c('GEOID', 'ssd_20')
# convert to vtd level ----
baf_to_vtd(baf = baf, plan_name = 'ssd_20', 'GEOID')
## End(Not run)
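The "most common district" rule from the Details can be illustrated with a tiny made-up assignment table. This is a hedged sketch of the rule itself, not of the function's internals; the toy columns `vtd` and `dist` are invented for the illustration.
library(dplyr)
# toy table: which VTD each block sits in, and which district the block was assigned
toy <- tibble::tibble(
  vtd  = c('A', 'A', 'A', 'B', 'B'),
  dist = c(1, 1, 2, 2, 2)
)
toy |>
  count(vtd, dist) |>
  group_by(vtd) |>
  slice_max(n, n = 1, with_ties = FALSE) |> # keep the most common district per VTD
  select(vtd, dist)
# VTD A gets district 1 (2 of its 3 blocks), VTD B gets district 2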
Aggregates block table values up to a higher level, normally precincts, hence the name block2prec.
block2prec(block_table, matches, geometry = FALSE)
block_table | Required. Block table output from create_block_table
matches | Required. Grouping variable to aggregate up by, typically made with geo_match
geometry | Boolean. Whether to keep geometry or not.
dataframe with length(unique(matches)) rows
set.seed(1)
data(rockland)
rockland$id <- sample(1:2, nrow(rockland), TRUE)
block2prec(rockland, rockland$id)
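Because `matches` is typically produced by geo_match, a small end-to-end sketch with the bundled Rockland data may help. This pairing is an assumed workflow rather than the only one; geo_filter() is used first so that every block overlaps a town.
data(rockland)
data(towns)
sub <- geo_filter(rockland, towns)           # keep only blocks that overlap the towns
matches <- geo_match(from = sub, to = towns) # one town index per block
block2prec(sub, matches)                     # block counts aggregated up to towns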
Performs the same type of operation as block2prec, but subsets the precinct geometry by a county FIPS column. This helps get around the problem that county geometries often have borders following rivers, which produces oddly shaped blocks. It guarantees that every block is matched to a precinct in the same county.
block2prec_by_county(block_table, precinct, precinct_county_fips, epsg = 3857)
block_table | Required. Block table output from create_block_table
precinct | sf dataframe of shapefiles to match to.
precinct_county_fips | Name of the column in `precinct` that contains each precinct's county FIPS code.
epsg | numeric EPSG code to planarize to. Default is 3857.
dataframe with nrow(precinct) rows
## Not run:
# Need Census API
data(towns)
towns$fips <- '087'
block <- create_block_table('NY', 'Rockland')
block2prec_by_county(block, towns, 'fips')
## End(Not run)
Identifies contiguous sets of units and numbers each set. Can be extended to repeat the procedure within a subgeography.
check_contiguity(adj, group)
cct(adj, group)
ccm(adj, group)
adj | adjacency list
group | array of group identifiers. Typically district numbers or county names. Defaults to 1 if no input is provided, checking that the adjacency list itself is one connected component.
Given a zero-indexed adjacency list and an array of group identifiers, this returns a tibble which identifies the connected components. The three columns are `group` for the inputted group, `group_number` which uniquely identifies each group as a positive integer, and `component` which identifies the connected component number for each corresponding entry of adjacency and group. If everything is connected within the group, then each element of `component` will be 1. Otherwise, the largest component is given the value 1, the next largest 2, and so on.
If nothing is provided to group, it will default to a vector of ones, checking if the adjacency graph is connected.
`cct()` is shorthand for creating a table of the component values. If everything is connected within each group, it returns a value of 1. In general, it returns a frequency table of components.
`ccm()` is shorthand for getting the maximum component value. It returns the maximum number of components that a group is broken into. This returns 1 if each group is connected.
tibble with contiguity indicators. Each row corresponds to a unit of `adj`. Columns include:
group | Values of the inputted `group` argument. If `group` is not specified, then all values will be 1.
component | A number for each contiguous set of units within a `group`. If all units within a `group` are contiguous, all values are 1. If there are two sets, each discontiguous with the other, the larger one will be numbered 1 and the smaller one will be numbered 2.
data(checkerboard)
adj <- adjacency(checkerboard)
# These each indicate the graph is connected.
check_contiguity(adj) # all contiguous
# If there are two discontiguous groups, there will be 2 values of `component`
cct(adj)
ccm(adj)
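To see `component` take values other than 1, a sketch with a deliberately discontiguous group may help; the group assignment below is made up for the illustration.
data(checkerboard)
adj <- adjacency(checkerboard)
grp <- rep(1, nrow(checkerboard))
grp[c(1, 64)] <- 2   # two far-apart squares form a discontiguous group 2
check_contiguity(adj, grp)
cct(adj, grp) # frequency table: group 2 splits into two components
ccm(adj, grp) # maximum number of components for any group, here 2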
Casts `shp` to its component polygons, builds the adjacency, and checks the contiguity. This avoids issues where a precinct is actually a multipolygon.
check_polygon_contiguity(shp, group, epsg = 3857)
shp | An sf data frame
group | unquoted name of the group identifier in shp. Typically, this is a district assignment. If you are looking for discontiguous precincts, this should be a row number.
epsg | numeric EPSG code to planarize to. Default is 3857.
tibble with a column for each of: the inputted group, the created group number, and the identified connected component number
data(checkerboard)
check_polygon_contiguity(checkerboard, i)
This data set contains 64 squares in an 8x8 grid, like a checkerboard.
data("checkerboard")
data("checkerboard")
An sf dataframe with 64 observations
data('checkerboard')
data('checkerboard')
This data contains a zero indexed adjacency list for the checkerboard dataset.
data("checkerboard_adj")
data("checkerboard_adj")
A list with 64 entries
data('checkerboard_adj')
data('checkerboard_adj')
Clean VEST Names
clean_vest(data)
data | sf tibble from VEST
data with cleaned names
data(va18sub)
va <- clean_vest(va18sub)
Compare Adjacency Lists
compare_adjacencies(adj1, adj2, shp, zero = TRUE)
adj1 | Required. A first adjacency list.
adj2 | Required. A second adjacency list.
shp | shapefile to compare intersection types.
zero | Boolean. Defaults to TRUE. Are adj1 and adj2 zero indexed?
tibble with row indices to compare, and optionally columns which describe the DE-9IM relationship between differences.
data(towns)
rook <- adjacency(towns)
sf_rook <- lapply(sf::st_relate(towns, pattern = 'F***1****'), function(x) {
  x - 1L
})
compare_adjacencies(rook, sf_rook, zero = FALSE)
Count Times Precincts are Connected
count_connections(dm, normalize = FALSE)
dm | district membership matrix
normalize | Whether to normalize all values by the number of columns.
matrix with the number of connections between precincts
set.seed(1)
dm <- matrix(sample(1:2, size = 100, TRUE), 10)
count_connections(dm)
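As described for the `normalize` argument, setting it to TRUE divides each count by the number of columns (plans), which can be read as the share of plans in which two precincts are placed together; a short sketch:
set.seed(1)
dm <- matrix(sample(1:2, size = 100, TRUE), 10) # 10 precincts across 10 plans
count_connections(dm, normalize = TRUE)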
Creates a block level dataset, using the decennial census information, with the standard redistricting variables.
create_block_table(state, county = NULL, geometry = TRUE, year = 2020, mem = FALSE, epsg = 3857)
state | Required. Two letter state postal code.
county | Optional. Name of county. If not provided, returns blocks for the entire state.
geometry | Defaults to TRUE. Whether to return the geometry or not.
year | year, must be 2000, 2010, or 2020
mem | Default is FALSE. Set TRUE to use memoized backend.
epsg | numeric EPSG code to planarize to. Default is 3857.
dataframe with data for each block in the selected region. Data includes 2 sets of columns for each race or ethnicity category: population (pop) and voting age population (vap)
## Not run:
# uses the Census API
create_block_table(state = 'NY', county = 'Rockland', geometry = FALSE)
## End(Not run)
Create Tract Level Data
create_tract_table(state, county, geometry = TRUE, year = 2019, mem = FALSE, epsg = 3857)
state | Required. Two letter state postal code.
county | Optional. Name of county. If not provided, returns tracts for the entire state.
geometry | Defaults to TRUE. Whether to return the geography or not.
year | year, must be >= 2009 and <= 2019.
mem | Default is FALSE. Set TRUE to use memoized backend.
epsg | numeric EPSG code to planarize to. Default is 3857.
dataframe with data for each tract in the selected region. Data includes 3 sets of columns for each race or ethnicity category: population (pop), voting age population (vap), and citizen voting age population (cvap)
## Not run:
# Relies on Census Bureau API
tract <- create_tract_table('NY', 'Rockland', year = 2018)
## End(Not run)
Creates a block or precinct level dataset from DRA csv output.
dra2r(dra, state, precincts, epsg = 3857)
dra | The path to an exported csv or a dataframe with columns GEOID20 and District, loaded from a DRA export.
state | the state postal code of the state
precincts | an sf dataframe of precinct shapes to link the output to
epsg | numeric EPSG code to planarize to. Default is 3857.
sf dataframe either at the block level or precinct level
## Not run:
# Needs Census Bureau API
# dra_utah_test is available at https://bit.ly/3c6UDKk
blocklevel <- dra2r('dra_utah_test.csv', state = 'UT')
## End(Not run)
Non-geographic partner function to geo_estimate_down. Allows users to estimate down without the costly matching operation if they've already matched.
estimate_down(wts, value, group)
wts | numeric vector. Defaults to 1. Typically population or VAP, as a weight to give each precinct.
value | numeric vector. Defaults to 1. Typically electoral outcomes, as a value to estimate down into blocks.
group | matches of length(wts) that correspond to row indices of value. Often, this input is the output of geo_match.
numeric vector with each value split by weight
library(dplyr)
set.seed(1)
data(checkerboard)
counties <- checkerboard |>
  group_by(id <= 32) |>
  summarize(geometry = sf::st_union(geometry)) |>
  mutate(pop = c(100, 200))
matches <- geo_match(checkerboard, counties)
estimate_down(wts = rep(1, nrow(checkerboard)), value = counties$pop, group = matches)
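As a rough check on the arithmetic, the sketch below redoes the split in base R under the assumption that each value is divided across its matched units in proportion to `wts`; this is an assumed reading of the weighting, not the package's exact internals.
library(dplyr)
data(checkerboard)
counties <- checkerboard |>
  group_by(id <= 32) |>
  summarize(geometry = sf::st_union(geometry)) |>
  mutate(pop = c(100, 200))
matches <- geo_match(checkerboard, counties)
wts <- rep(1, nrow(checkerboard))
group_totals <- ave(wts, matches, FUN = sum)   # total weight within each matched group
manual <- counties$pop[matches] * wts / group_totals
# should roughly agree with:
# estimate_down(wts = wts, value = counties$pop, group = matches)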
Non-geographic partner function to geo_estimate_up. Allows users to aggregate up without the costly matching operation if they've already matched.
estimate_up(value, group)
value | numeric vector. Defaults to 1. Typically population values.
group | matches of length(value) that correspond to row indices of value. Often, this input is the output of geo_match.
numeric vector with each value aggregated by group
library(dplyr)
set.seed(1)
data(checkerboard)
counties <- checkerboard |>
  group_by(id <= 32) |>
  summarize(geometry = sf::st_union(geometry)) |>
  mutate(pop = c(100, 200))
matches <- geo_match(checkerboard, counties)
estimate_up(value = checkerboard$i, group = matches)
Simple method for estimating data down to a lower level. This is most often useful for getting election data down from a precinct level to a block level in the case that a state or other jurisdiction split precincts when creating districts. Geographic partner to estimate_down.
geo_estimate_down(from, to, wts, value, method = "center", epsg = 3857)
from | Larger geography level
to | smaller geography level
wts | numeric vector of length nrow(to). Defaults to 1. Typically population or VAP, as a weight to give each precinct.
value | numeric vector of length nrow(from). Defaults to 1. Typically electoral outcomes, as a value to estimate down into blocks.
method | string from center, centroid, point, or area for matching levels
epsg | numeric EPSG code to planarize to. Default is 3857.
numeric vector with each value split by weight
library(dplyr)
set.seed(1)
data(checkerboard)
counties <- checkerboard |>
  group_by(id <= 32) |>
  summarize(geometry = sf::st_union(geometry)) |>
  mutate(pop = c(100, 200))
geo_estimate_down(from = counties, to = checkerboard, value = counties$pop)
Simple method for aggregating data up to a higher level. This is most often useful for getting population data from a block level up to a precinct level. Geographic partner to estimate_up.
geo_estimate_up(from, to, value, method = "center", epsg = 3857)
from | smaller geography level
to | larger geography level
value | numeric vector of length nrow(from). Defaults to 1.
method | string from center, centroid, point, or area for matching levels
epsg | numeric EPSG code to planarize to. Default is 3857.
numeric vector with each value aggregated by group
library(dplyr)
set.seed(1)
data(checkerboard)
counties <- checkerboard |>
  group_by(id <= 32) |>
  summarize(geometry = sf::st_union(geometry)) |>
  mutate(pop = c(100, 200))
geo_estimate_up(from = checkerboard, to = counties, value = checkerboard$i)
Filter to Intersecting Pieces
geo_filter(from, to, bool = FALSE, epsg = 3857)
from | Required. sf dataframe. The geography to subset.
to | Required. sf dataframe. The geography that from must intersect.
bool | Optional, defaults to FALSE. Should this just return a logical vector?
epsg | numeric EPSG code to planarize to. Default is 3857.
sf data frame or logical vector if bool == TRUE
## Not run:
# Needs Census Bureau API
data(towns)
block <- create_block_table('NY', 'Rockland')
geo_filter(block, towns)
## End(Not run)
data(towns)
data(rockland)
sub <- geo_filter(rockland, towns)
Match Across Geographic Layers
geo_match(from, to, method = "center", by = NULL, tiebreaker = TRUE, epsg = 3857)
from | smaller geographic level to match up from
to | larger geographic level to be matched to
method | string from 'center', 'centroid', 'point', 'circle', or 'area' for the matching method
by | A character vector to match by. Use one element if both `from` and `to` share the column name.
tiebreaker | Should ties be broken? boolean. If FALSE, precincts with no matches get value -1 and precincts with multiple matches get value -2.
epsg | numeric EPSG code to planarize to. Default is 3857.
Methods are as follows:
- centroid: matches each element of `from` to the `to` entry that its geographic centroid intersects.
- center: very similar to centroid, but it matches an arbitrary center point within `from` if the centroid of `from` falls outside the bounds of `from`. (This happens for non-convex shapes only.)
- point: matches each element of `from` to the `to` entry that its "point on surface" intersects.
- circle: matches each element of `from` to the `to` entry that the centroid of its maximum inscribed circle intersects.
- area: matches each element of `from` to the `to` element which has the largest area overlap.
Integer vector of matches, of length nrow(from), with values in 1:nrow(to)
library(dplyr)
data(checkerboard)
counties <- sf::st_as_sf(as.data.frame(rbind(
  sf::st_union(checkerboard |> filter(i < 4)),
  sf::st_union(checkerboard |> filter(i >= 4))
)))
geo_match(from = checkerboard, to = counties)
geo_match(from = checkerboard, to = counties, method = 'area')
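The tiebreaker behavior described above can be seen by matching against a `to` layer that covers only part of `from`: with tiebreaker = FALSE, squares whose centers fall outside every `to` shape are flagged with -1. A hedged sketch, using a half-board constructed for the illustration:
library(dplyr)
data(checkerboard)
half <- checkerboard |>
  filter(i < 4) |>
  summarize(geometry = sf::st_union(geometry))
# squares with centers outside `half` get -1 rather than a nearest-shape match
geo_match(from = checkerboard, to = half, method = 'center', tiebreaker = FALSE)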
One liner to plot a shape with row numbers
geo_plot(shp)
shp | An sf shapefile
ggplot
data(checkerboard)
geo_plot(checkerboard)
Create Plots of Shapes by Group with Connected Components Colored
geo_plot_group(shp, adj, group, save = FALSE, path = "")
shp | An sf shapefile
adj | adjacency list
group | array of group identifiers. Typically district numbers or county names.
save | Boolean, whether to save or not.
path | Path to save to, only used if save is TRUE. Defaults to working directory.
list of ggplots
library(dplyr)
data('checkerboard')
data('checkerboard_adj')
checkerboard <- checkerboard |> mutate(discont = as.integer(j == 5 | j == 6))
p <- geo_plot_group(checkerboard, checkerboard_adj, checkerboard$discont)
p[[1]]
p[[2]]
Reorders precincts by distance from the NW corner of the bounding box.
geo_sort(shp, epsg = 3857)
shp | sf dataframe, required.
epsg | numeric EPSG code to planarize to. Default is 3857.
sf dataframe
data(checkerboard)
geo_sort(checkerboard)
Trim Away Small Pieces
geo_trim(from, to, thresh = 0.01, bool = FALSE, epsg = 3857)
from | Required. sf dataframe. The geography to subset.
to | Required. sf dataframe. The geography that from must intersect.
thresh | Percent as decimal of an area to trim away. Default is .01, which is 1%.
bool | Optional, defaults to FALSE. Should this just return a logical vector?
epsg | numeric EPSG code to planarize to. Default is 3857.
sf data frame or logical vector if bool=TRUE
## Not run:
# Needs Census Bureau API
data(towns)
block <- create_block_table('NY', 'Rockland')
geo_trim(block, towns, thresh = 0.05)
## End(Not run)
data(towns)
data(rockland)
sub <- geo_filter(rockland, towns)
rem <- geo_trim(sub, towns, thresh = 0.05)
Returns points within the shape, near the center. Uses the centroid if that's in the shape, or point on surface if not.
geos_centerish(shp, epsg = 3857)
shp | An sf dataframe
epsg | numeric EPSG code to planarize to. Default is 3857.
A geos geometry list
data(towns)
geos_centerish(towns)
Returns the centroid of the largest inscribed circle for each shape
geos_circle_center(shp, tolerance = 0.01, epsg = 3857)
shp | An sf dataframe
tolerance | positive numeric tolerance to simplify by. Default is 0.01.
epsg | numeric EPSG code to planarize to. Default is 3857.
A geos geometry list
data(towns)
geos_circle_center(towns)
Gets a dataset from the Algorithm-Assisted Redistricting Methodology Project. The currently supported data is the 2020 retabulations of the VEST data, which can be downloaded with get_vest().
get_alarm(state, year = 2020, geometry = TRUE, epsg = 3857)
state | two letter state abbreviation
year | year to get data for. Either 2020 or 2010. Default is 2020.
geometry | Default is TRUE. Add geometry to the data?
epsg | numeric EPSG code to planarize to. Default is 3857.
See the full available data at https://github.com/alarm-redist/census-2020.
tibble with election data and optional geometry
ak <- get_alarm('AK', geometry = FALSE)
Gets a dataset from Dave's Redistricting App.
get_dra(state, year = 2020, geometry = TRUE, clean_names = TRUE, epsg = 3857)
state | two letter state abbreviation
year | year to get data for. Either 2020 or 2010. Default is 2020.
geometry | Default is TRUE. Add geometry to the data?
clean_names | Clean names. Default is TRUE.
epsg | numeric EPSG code to planarize to. Default is 3857.
See the full available data at https://github.com/dra2020/vtd_data.
tibble with election data and optional geometry
ak <- get_dra('AK', geometry = FALSE)
Get Harvard Election Data Archive ("HEDA") Dataset
get_heda(state, path = tempdir(), epsg = 3857, ...)
state | two letter state abbreviation
path | folder to put shape in. Default is tempdir().
epsg | numeric EPSG code to planarize to. Default is 3857.
... | additional arguments passed on.
sf tibble
shp <- get_heda('ND')
Data sourced from the United States Congressional District Shapefiles, primarily hosted at https://cdmaps.polisci.ucla.edu/. Files are fetched through the GitHub repository at https://github.com/JeffreyBLewis/congressional-district-boundaries.
get_lewis(state, congress)
state | two letter state abbreviation
congress | congress number, from 1 to 114.
a sf tibble of the congressional district boundaries
Jeffrey B. Lewis, Brandon DeVine, Lincoln Pitcher, and Kenneth C. Martis. (2013) Digital Boundary Definitions of United States Congressional Districts, 1789-2012. [Data file and code book]. Retrieved from https://cdmaps.polisci.ucla.edu on [date of download].
get_lewis(state = 'NM', congress = 111)
Get Racially Polarized Voting Dataset from RPV Near Me
get_rpvnearme(state, version = c(1, 2))
state | the state postal code of the state
version | the version of the data to use, either 1 or 2.
a tibble of precinct-level estimates of votes (party) by race
get_rpvnearme('DE')
Get Voting and Election Science Team ("VEST") Dataset
get_vest(state, year, path = tempdir(), clean_names = TRUE, epsg = 3857, ...)
state | two letter state abbreviation
year | year, any in 2016-2021
path | folder to put shape in. Default is tempdir().
clean_names | Clean names. Default is TRUE.
epsg | numeric EPSG code to planarize to. Default is 3857.
... | additional arguments passed on.
sf tibble
## Not run:
# Requires Dataverse API
shp <- get_vest('CO', 2020)
## End(Not run)
Computes the Global Geary's Contiguity statistic. Can produce spatial weights from an adjacency or sf data frame, in which case the spatial_mat is a contiguity matrix. Users can also provide a spatial_mat argument directly.
global_gearys(shp, adj, wts, spatial_mat, epsg = 3857)
shp | sf data frame. Optional if adj or spatial_mat provided.
adj | zero indexed adjacency list. Optional if shp or spatial_mat provided.
wts | Required. Numeric vector with weights to use for the statistic.
spatial_mat | matrix of spatial weights. Optional if shp or adj provided.
epsg | numeric EPSG code to planarize to. Default is 3857.
double
library(dplyr)
data('checkerboard')
checkerboard <- checkerboard |> mutate(m = as.numeric((id + i) %% 2 == 0))
global_gearys(shp = checkerboard, wts = checkerboard$m)
Computes the Global Moran's I statistic and expectation. Can produce spatial weights from an adjacency or sf data frame, in which case the spatial_mat is a contiguity matrix. Users can also provide a spatial_mat argument directly.
global_morans(shp, adj, wts, spatial_mat, epsg = 3857)
shp | sf data frame. Optional if adj or spatial_mat provided.
adj | zero indexed adjacency list. Optional if shp or spatial_mat provided.
wts | Required. Numeric vector with weights to use for Moran's I.
spatial_mat | matrix of spatial weights. Optional if shp or adj provided.
epsg | numeric EPSG code to planarize to. Default is 3857.
list
library(dplyr)
data('checkerboard')
checkerboard <- checkerboard |> mutate(m = as.numeric((id + i) %% 2 == 0))
global_morans(shp = checkerboard, wts = checkerboard$m)
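Since the arguments allow spatial_mat to be supplied in place of shp or adj, here is a minimal sketch that builds a binary contiguity matrix from the zero-indexed adjacency list and passes it in directly; the equivalence with passing `shp` is assumed rather than guaranteed.
library(dplyr)
data('checkerboard')
checkerboard <- checkerboard |> mutate(m = as.numeric((id + i) %% 2 == 0))
adj <- adjacency(checkerboard)
n <- length(adj)
W <- matrix(0, n, n)
for (k in seq_len(n)) W[k, adj[[k]] + 1L] <- 1  # shift zero-indexed neighbors to 1-indexed columns
global_morans(wts = checkerboard$m, spatial_mat = W)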
Returns the Getis Ord G*i in standardized form.
gstar_i(shp, adj, wts, spatial_mat, epsg = 3857)
shp | sf data frame. Optional if adj or spatial_mat provided.
adj | zero indexed adjacency list. Optional if shp or spatial_mat provided.
wts | Required. Numeric vector with weights to use for the statistic.
spatial_mat | matrix of spatial weights. Optional if shp or adj provided.
epsg | numeric EPSG code to planarize to. Default is 3857.
vector of G*i scores
library(dplyr)
data('checkerboard')
checkerboard <- checkerboard |> mutate(m = as.numeric((id + i) %% 2 == 0))
gstar_i(shp = checkerboard, wts = checkerboard$m)
List Available States from HEDA Dataverse
heda_states()
character abbreviations for states
heda_states()
Compute Local Geary's C
local_gearys(shp, adj, wts, spatial_mat, epsg = 3857)
shp | sf data frame. Optional if adj or spatial_mat provided.
adj | zero indexed adjacency list. Optional if shp or spatial_mat provided.
wts | Required. Numeric vector with weights to use for the statistic.
spatial_mat | matrix of spatial weights. Not required if shp or adj provided.
epsg | numeric EPSG code to planarize to. Default is 3857.
numeric vector
library(dplyr)
data('checkerboard')
checkerboard <- checkerboard |> mutate(m = as.numeric((id + i) %% 2 == 0))
local_gearys(shp = checkerboard, wts = checkerboard$m)
Compute Local Moran's I
local_morans(shp, adj, wts, spatial_mat, epsg = 3857)
shp | sf data frame. Optional if adj or spatial_mat provided.
adj | zero indexed adjacency list. Optional if shp or spatial_mat provided.
wts | Required. Numeric vector with weights to use for Moran's I.
spatial_mat | matrix of spatial weights. Optional if shp or adj provided.
epsg | numeric EPSG code to planarize to. Default is 3857.
tibble
library(dplyr)
data('checkerboard')
checkerboard <- checkerboard |> mutate(m = as.numeric((id + i) %% 2 == 0))
local_morans(shp = checkerboard, wts = checkerboard$m)
The data contains the North Rockland Central School District.
data('nrcsd')
An sf dataframe with 1 observation
data('nrcsd')
This data contains the blocks for Orange County NY, with geographies simplified to allow for better examples.
data("orange")
data("orange")
An sf dataframe with 10034 observations
It can be recreated with: orange <- create_block_table('NY', 'Orange') orange <- rmapshaper::ms_simplify(orange, keep_shapes = TRUE)
data('orange')
data('orange')
This data contains the election districts (or precincts) for Rockland County NY, with geographies simplified to allow for better examples.
data("precincts")
data("precincts")
An sf dataframe with 278 observations
https://www.rocklandgis.com/portal/apps/sites/#/data/datasets/2d91f9db816c48318848ad66eb1a18e9
data('precincts')
data('precincts')
Project a plan at the precinct level down to blocks into a format that can be used with DRA. Projecting down to blocks can take a lot of time for larger states.
r2dra(precincts, plan, state, path, epsg = 3857)
precincts | Required. An sf dataframe of precinct shapes.
plan | Required. Either a vector of district assignments or the name of a column in precincts with district assignments.
state | Required. The state postal code of the state.
path | Optional. A path to try to save to. Warns if saving failed.
epsg | numeric EPSG code to planarize to. Default is 3857.
tibble with columns Id (as used by DRA, identical to the census GEOID) and District.
## Not run:
# Needs Census Bureau API
cd <- tinytiger::tt_congressional_districts() |> filter(STATEFP == '49')
cnty <- tinytiger::tt_counties(state = 49)
matchedcty <- geo_match(from = cnty, to = cd)
# use counties as precincts and let the plan be their center match:
r2dra(cnty, matchedcty, 'UT', 'r2dra_ex.csv')
## End(Not run)
This offers a basic method for dividing a shape into separate pieces based on geographic features, such as the lines provided.
regionalize(shp, lines, adj = adjacency(shp), epsg = 3857)
shp | sf dataframe of shapes to divide into regions
lines | sf dataframe of linear features (such as roads or rivers) to divide `shp` by
adj | adjacency graph
epsg | numeric EPSG code to planarize to. Default is 3857.
integer vector of regions with nrow(shp) entries
data(towns)
# make some weird roadlike feature passing through the towns
lines <- sf::st_sfc(
  sf::st_linestring(sf::st_coordinates(sf::st_centroid(towns))),
  crs = sf::st_crs(towns)
)
regionalize(towns, lines)
This data contains the blocks for Rockland County NY, with geographies simplified to allow for better examples.
data("rockland")
data("rockland")
An sf dataframe with 4764 observations
It can be recreated with: rockland <- create_block_table('NY', 'Rockland') rockland <- rmapshaper::ms_simplify(rockland, keep_shapes = TRUE)
data('rockland')
data('rockland')
Filter Adjacency to Edges Along Border
seam_adj(adj, shp, admin, seam, epsg = 3857)
adj | zero indexed adjacency graph
shp | tibble to subset and where the admin column is found
admin | quoted name of administrative unit column
seam | administrative units to filter by
epsg | numeric EPSG code to planarize to. Default is 3857.
subset of adj
data('rockland')
data('orange')
data('nrcsd')
o_and_r <- rbind(orange, rockland)
o_and_r <- o_and_r |> geo_filter(nrcsd) |> geo_trim(nrcsd)
adj <- adjacency(o_and_r)
seam_adj(adj, shp = o_and_r, admin = 'county', seam = c('071', '087'))
Filter Shape to Geographies Along Border
seam_geom(adj, shp, admin, seam, epsg = 3857)
adj | zero indexed adjacency graph
shp | tibble to subset and where the admin column is found
admin | quoted name of administrative unit column
seam | administrative units to filter by
epsg | numeric EPSG code to planarize to. Default is 3857.
subset of shp
data('rockland')
data('orange')
data('nrcsd')
o_and_r <- rbind(orange, rockland)
o_and_r <- o_and_r |> geo_filter(nrcsd) |> geo_trim(nrcsd)
adj <- adjacency(o_and_r)
seam_geom(adj, shp = o_and_r, admin = 'county', seam = c('071', '087'))
Remove Edges along a Boundary
seam_rip(adj, shp, admin, seam, epsg = 3857)
adj | zero indexed adjacency graph
shp | tibble where the admin column is found
admin | quoted name of administrative unit column
seam | units to rip the seam between by removing adjacency connections
epsg | numeric EPSG code to planarize to. Default is 3857.
adjacency list
data('rockland')
data('orange')
data('nrcsd')
o_and_r <- rbind(orange, rockland)
o_and_r <- o_and_r |> geo_filter(nrcsd) |> geo_trim(nrcsd)
adj <- adjacency(o_and_r)
seam_rip(adj, o_and_r, 'county', c('071', '087'))
Suggest Edges to Connect Two Sides of a Border
seam_sew(shp, admin, seam, epsg = 3857)
shp | sf tibble where the admin column is found
admin | quoted name of administrative unit column
seam | administrative units to filter by
epsg | numeric EPSG code to planarize to. Default is 3857.
tibble of edges connecting sides of a border
data('rockland')
data('orange')
data('nrcsd')
o_and_r <- rbind(orange, rockland)
o_and_r <- o_and_r |> geo_filter(nrcsd) |> geo_trim(nrcsd)
adj <- adjacency(o_and_r)
adds <- seam_sew(o_and_r, 'county', c('071', '087'))
adj <- adj |> add_edge(adds$v1, adds$v2)
States often split a precinct when they create districts but rarely provide the geography for the split precinct. This allows you to split a precinct using a lower geography, typically blocks.
split_precinct(lower, precinct, split_by, lower_wt, split_by_id, epsg = 3857)
lower | The lower geography that makes up the precinct, often a block level geography.
precinct | The single precinct that you would like to split.
split_by | The upper geography that you want to split precinct by.
lower_wt | Optional. Numeric weights to give to each precinct, typically VAP or population.
split_by_id | Optional. A string that names a column in split_by that identifies each observation in split_by.
epsg | numeric EPSG code to planarize to. Default is 3857.
sf data frame with precinct split
library(sf)
data(checkerboard)
low <- checkerboard |> dplyr::slice(1:3, 9:11)
prec <- checkerboard |>
  dplyr::slice(1:3) |>
  dplyr::summarize(geometry = sf::st_union(geometry))
dists <- checkerboard |>
  dplyr::slice(1:3, 9:11) |>
  dplyr::mutate(dist = c(1, 2, 2, 1, 3, 3)) |>
  dplyr::group_by(dist) |>
  dplyr::summarize(geometry = sf::st_union(geometry))
split_precinct(low, prec, dists, split_by_id = 'dist')
Returns points within the shape, near the center. Uses the centroid if that's in the shape, or point on surface if not.
st_centerish(shp, epsg = 3857)
shp | An sf dataframe
epsg | numeric EPSG code to planarize to. Default is 3857.
An sf dataframe where geometry is the center(ish) of each shape in shp
data(towns)
st_centerish(towns)
Returns the centroid of the largest inscribed circle for each shape
st_circle_center(shp, tolerance = 0.01, epsg = 3857)
shp | An sf dataframe
tolerance | positive numeric tolerance to simplify by. Default is 0.01.
epsg | numeric EPSG code to planarize to. Default is 3857.
An sf dataframe where geometry is the circle center of each shape in shp
data(towns)
st_circle_center(towns)
Subtract Edges from an Adjacency List
subtract_edge(adj, v1, v2, ids = NULL, zero = TRUE)
adj | list of adjacent precincts
v1 | vector of vertex identifiers for the first vertex. Can be an integer index or a value to look up in `ids`.
v2 | vector of vertex identifiers for the second vertex. Can be an integer index or a value to look up in `ids`.
ids | A vector of identifiers which is used to look up the row indices for the vertices. If provided, the entries in `v1` and `v2` are looked up in `ids`.
zero | boolean, TRUE if `adj` is zero indexed. FALSE if one indexed.
adjacency list.
data(towns)
adj <- adjacency(towns)
subtract_edge(adj, 2, 3)
subtract_edge(adj, "West Haverstraw", "Stony Point", towns$MUNI)
Suggests nearest neighbors for connecting a disconnected group.
suggest_component_connection(shp, adj, group, epsg = 3857)
shp | An sf data frame
adj | adjacency list
group | array of group identifiers. Typically district numbers or county names. Defaults to rep(1, length(adj)) if missing.
epsg | numeric EPSG code to planarize to. Default is 3857.
tibble with two columns of suggested rows of shp to connect in adj
library(dplyr)
data(checkerboard)
checkerboard <- checkerboard |> filter(i != 1, j != 1)
adj <- adjacency(checkerboard)
suggest_component_connection(checkerboard, adj)
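A follow-up sketch that feeds the suggested pairs back into add_edge() to reconnect the graph. The split below is constructed for the illustration, and the returned columns are used positionally since their names are assumed rather than documented here.
library(dplyr)
data(checkerboard)
# drop a full column of squares so the board splits into two components
split_board <- checkerboard |> filter(i != 4)
adj <- adjacency(split_board)
cct(adj) # two components
suggests <- suggest_component_connection(split_board, adj)
adj <- adj |> add_edge(v1 = suggests[[1]], v2 = suggests[[2]])
cct(adj) # one component after adding the suggested edges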
For precincts which have no adjacent precincts, this suggests the nearest precinct as a friend to add. This is useful for when a small number of precincts are disconnected from the remainder of the geography, such as an island.
suggest_neighbors(shp, adj, idx, neighbors = 1)
shp | an sf shapefile
adj | an adjacency list
idx | Optional. Which indices to suggest neighbors for. If blank, suggests for those with no neighbors.
neighbors | number of neighbors to suggest
tibble with two columns of suggested rows of shp to connect in adj
library(dplyr)
data(va18sub)
va18sub <- va18sub |> filter(!VTDST %in% c('000516', '000510', '000505', '000518'))
adj <- adjacency(va18sub)
suggests <- suggest_neighbors(va18sub, adj)
adj <- adj |> add_edge(v1 = suggests$x, v2 = suggests$y)
This data contains 7 town boundaries for the towns which overlap North Rockland School District in NY.
data("towns")
data("towns")
An sf dataframe with 7 observations
https://www.rocklandgis.com/portal/apps/sites/#/data/items/746ec7870a0b4f46b168e07369e79a27
data('towns')
data('towns')
This data contains the blocks for Henrico County, VA, with geographies simplified to allow for better examples.
data("va_blocks")
data("va_blocks")
An sf dataframe with 6354 observations
blocks87 <- create_block_table(state = 'VA', county = '087') va_blocks <- rmapshaper::ms_simplify(va_blocks, keep_shapes = TRUE)
data('va_blocks')
data('va_blocks')
This data contains the voting districts for Henrico County, VA, with geographies simplified to allow for better examples.
data("va_vtd")
An sf dataframe with 93 observations
It can be recreated with:
va_vtd <- tinytiger::tt_voting_districts(state = 'VA', county = '087', year = 2010)
va_vtd <- rmapshaper::ms_simplify(va_vtd, keep_shapes = TRUE)
data('va_vtd')
This data contains a 90 precinct subset of Virginia from the 2018 Senate race, with results for Henrico County.
data("va18sub")
data("va18sub")
An sf dataframe with 90 observations
Voting and Election Science Team, 2019, "va_2018.zip", 2018 Precinct-Level Election Results, https://doi.org/10.7910/DVN/UBKYRU/FQDLOO, Harvard Dataverse, V4
data('va18sub')
List Available States from VEST Dataverse
vest_states(year)
year | year in 2016, 2018, or 2020
character abbreviations for states
## Not run:
# Requires Dataverse API
vest_states(2020)
## End(Not run)