I thought it would be neat (since Matt did the data scraping part already) to look at AG tenure distribution by party, while also pointing out where Sessions falls.
It’s still easier to pull the data from the iframe on the page that contains his vis than to re-scrape Wikipedia (like Matt did), thanks to the V8 package by @opencpu.
The following code:
- grabs the vis iframe
- performs some factor re-coding (for better grouping and to make it easier to identify Sessions)
- plots the distributions using the beeswarm quasirandom algorithm
```r
library(V8)
library(rvest)
library(ggbeeswarm)
library(hrbrthemes)
library(tidyverse)

# `vis_url` is the URL of the iframe holding Matt's vis (not shown here)
pg <- read_html(vis_url)

# evaluate the embedded <script> that defines the DATA variable in a V8 context
ctx <- v8()
ctx$eval(html_nodes(pg, xpath = ".//script[contains(., 'DATA')]") %>% html_text())

ctx$get("DATA") %>%
  as_tibble() %>%
  readr::type_convert() %>%
  mutate(party = ifelse(is.na(party), "Other", party)) %>%
  mutate(party = fct_lump(party)) %>%
  mutate(color1 = case_when(
    party == "Democratic" ~ "#313695",
    party == "Republican" ~ "#a50026",
    party == "Other" ~ "#4d4d4d")
  ) %>%
  mutate(color2 = ifelse(grepl("Sessions", label), "#2b2b2b", "#00000000")) -> ags

ggplot() +
  geom_quasirandom(data = ags, aes(party, amt, color = color1)) +
  geom_quasirandom(data = ags, aes(party, amt, color = color2),
                   fill = "#ffffff00", size = 4, stroke = 0.25, shape = 21) +
  geom_text(data = data_frame(),
            aes(x = "Republican", y = 100, label = "Jeff Sessions"),
            nudge_x = -0.15, family = font_rc, size = 3, hjust = 1) +
  scale_color_identity() +
  scale_y_comma(limits = c(0, 4200)) +
  labs(x = "Party", y = "Tenure (days)",
       title = "U.S. Attorneys General",
       subtitle = "Distribution of tenure in office, by days & party: 1789-2017",
       caption = "Source data/idea: Matt Stiles") +
  theme_ipsum_rc(grid = "XY")
```
I turned the data into a CSV and stuck it in this gist if folks want to play w/o doing the js scraping.
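If you go the CSV route, loading it is a one-liner; a minimal sketch, assuming you've saved the gist file locally (the filename below is a placeholder, and the plot code above expects `label`, `party`, and `amt` columns):

```r
library(tidyverse)

# Placeholder filename -- save the CSV from the gist locally first
ags <- read_csv("attorneys-general.csv")

# quick sanity check on the columns the plotting code uses:
#   label (AG name), party, amt (tenure in days)
glimpse(ags)
```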