Visualizing Winter Olympics success

This semester has been busy! I haven’t had a chance to dive into the side projects I’ve been brewing, but I’ve really been enjoying the data visualization class I’m taking with Thomas Brambor. Our first assignment was to build some visualizations of historical Winter Olympics data (through 2014) using ggplot and Plotly. I had a lot of fun with this assignment and discovered some very cool tips and tricks in the process of perfecting my charts. If you’re viewing this on a phone – sorry, the Plotly charts do not take kindly to that. The full assignment with code is below:

Pivoting Data in R

Bakeoff Data Collection Part 2

I left off the last GBBO data collection post with code in Github to generate a dataset that combines every episode table from seasons 2-11. This included information on what bakers made for the signature and showstopper bakes, as well as their performance in the technical challenges.

Some crucial information that the episode table doesn’t have is the results of the episode – who is crowned “Star Baker” and who is eliminated. This information is contained in a separate chart for each season that looks like this:

The elimination chart is set up differently from the episode table in a few ways. First – while there is one row per baker, each column represents an episode rather than having separate tables for each episode. The chart also uses color to store information. While some of the colors and text are redundant, favorite and least favorite baker information is only stored with color.

The goal:

  • extract all the data from this table, including colors
  • convert it to a shape that can be joined easily to the episodes table I made in the previous GBBO post

This will involve the dreaded… pivot.

Pivot Philosophy Lesson

I am constantly traumatized by pivoting with SQL at work. I don’t think pivoting in SQL is intuitive at all, and after some soul searching I think I know why. It’s a cliché, but at my core I am a visual learner who understands concepts best if I can picture them in my head spatially. When I first had to pivot data while working at the NIH, I used SPSS, which gives you the option to pivot between “wide” and “long” data – this I understood very well because I could clearly imagine how I wanted the data to rearrange itself. Wide data stores information in more columns and long data stores information in more rows. To convert from wide to long, you collapse multiple columns into fewer columns and, if you’re not aggregating, spread what was previously a single observation across additional rows. To convert from long to wide, you combine multiple rows into one and (again, if not aggregating) add additional columns. A toy example is sketched below.
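To make that concrete, here is a minimal toy sketch – made-up bakers and result codes, not the real GBBO data – showing the same information stored wide and then pivoted long with tidyr:

library(tidyr)

# a made-up wide table: one row per baker, one column per episode
wide <- data.frame(baker = c("Peter", "Dave"),
                   episode.1 = c("SB", "IN"),
                   episode.2 = c("IN", "SB"))

# pivot to long: one row per baker per episode
long <- pivot_longer(wide,
                     cols = starts_with("episode."),
                     names_to = "episode",
                     names_prefix = "episode.",
                     values_to = "result")
long

The result has three columns – baker, episode and result – with one row for every baker/episode combination, which is exactly the shape we’ll want later.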

The way SQL is written is usually quite helpful and intuitive because it reads like a full sentence that tells a story about data. It’s great for picking and choosing from the data you have, but it gets clunky when you want to imagine something new. Because SQL describes data in this literary way, it makes sense to me that it breaks down in its utility when dealing with data as a spatial structure.

I think it is much simpler and more intuitive to use R (or Python, I assume – I’ve never tried) to pivot data, because you are calling a function on a data object rather than describing the result in a query. I greatly prefer pivoting in R with a simple function to building a fussy SQL query with multiple CTEs. The function I used here is from the tidyr package, which is R royalty and truly needs no introduction or explanation. It’s a bonus that the function is called pivot_longer (there is also, naturally, pivot_wider) – so intuitive, so good.

Pivoting from Wide to Long with R

Our elimination chart data is wide. There is only one row per baker, and 10 columns, one for each episode. The episode data that I made in the previous GBBO post is long – rather than a single row for each baker and a column for each episode, there is a row for each baker and episode, and one episode column (instead of 10) that tells us which episode the data in each row refers to. To merge the elimination chart data with the episode data, I’ll need to change it from having one column for each episode to having one row for each baker and episode, with one episode column – essentially multiplying the number of rows by 10 and reducing the columns from 10 to 1. Then I’ll be able to add additional variables to hold the favorite, least favorite, star baker and eliminated baker data.

Let’s continue using season 11 as an example. First, we’ll pull the raw data into a table called “result”:

# read the season 11 Wikipedia page and pull every wikitable on it
url <- "https://en.wikipedia.org/wiki/The_Great_British_Bake_Off_(series_11)"
webpage <- read_html(url)
tables <- html_nodes(webpage, 'table.wikitable')
tables <- html_table(tables, header = TRUE)

# the elimination chart is the second wikitable on the page
result <- data.frame(tables[2])

The shape of this table is identical to that of the chart, though the first row wasn’t automatically treated as the column names. That’s fine – we’ll actually use this to our advantage when we pivot.

The code to pivot the result table is as follows:

result.pivot <- tidyr::pivot_longer(subset(result, Elimination.chart != 'Baker'),
                                    cols = starts_with("Elimination.chart."), 
                                    names_to = "episode", 
                                    names_prefix = "Elimination.chart.",
                                    values_to = "result")

I’ve used the pivot_longer function from tidyr and used subset to drop the extra row that holds the column names. The cols argument takes all the columns you want to pivot; I used the starts_with function to include only columns whose names begin with “Elimination.chart.”, so every column except the first (the baker names) gets pivoted. The names_to argument creates a new column (in this case called “episode”) that holds the name of the column each value came from. The names_prefix argument strips the prefix “Elimination.chart.” from those names before they are stored, leaving a nice clean value for the episode number. Finally, the values_to argument creates a new column (“result”) that holds the values from the pivoted columns.

We end up with this:

This looks nice! This looks sleek! This looks like the episodes table! One problem – this doesn’t contain any background color information 🙁

Now, HTML or XML or any markup language is not my strong suit, but that is OK because I am great at googling. I was finally able to find and finesse some code on Stack Overflow to miraculously pull color information into a table that is *close* to the same shape as our pivoted data:

# grab every <td> cell in the elimination chart table and collect its html attributes into a data frame
colors <- webpage %>% html_nodes(xpath='//*[@id="mw-content-text"]/div/table[3]') %>% html_nodes('td')
colors <- bind_rows(lapply(xml_attrs(colors), function(x) data.frame(as.list(x), stringsAsFactors=FALSE)))

We get three columns called align, style and colspan. Each row corresponds to one td tag (one cell) in the data table, read from left to right. If you look at the “Peter” row in the original chart, you’ll see that the left-most cell has no background and is left aligned. That’s row 1 in the table. In week 1, Peter was Star Baker, which has a pale yellow background. That’s row 2 – with style “background:LemonChiffon;”. Peter was a favorite baker in weeks 4 and 5, and you can see that there is one merged cell with a dark blue background for those weeks. This is represented in row 5 with style “background:Cornflowerblue;” and colspan = 2. Merged cells are a more succinct way of recording the HTML table data, but we’ll need to ungroup weeks 4 and 5 to get one row per episode. We’ll also have to remove the rows with no style value, as those don’t contain any episode data.

#in rows where colspan is NA (a single, unmerged cell), replace it with 1
colors$colspan <- replace(colors$colspan, is.na(colors$colspan), 1)

#make a new table with colspan as numeric rather than string
dups <- colors %>% transform(colspan = as.numeric(colspan))

#use the rep function to duplicate each row by the value in colspan - creating one row per episode
colorsduped <- dups[rep(seq_len(nrow(dups)), as.numeric(unlist(c(dups$colspan)))), ]

#keep only rows whose style column contains the string "background"
#(this removes the meaningless left-align rows); %like% comes from data.table
colorsduped <- colorsduped %>% filter(tolower(colorsduped$style) %like% "background")

#now we can make a new column called status that translates the cornflower blue and plum
#colors into "Favorite" and "Least Favorite" bakers of the week
colorsduped <- colorsduped %>% mutate(status =
                                        case_when(tolower(style) %like% "cornflower" ~ "Favorite", 
                                                  tolower(style) %like% "plum" ~ "Least Favorite",
                                                  TRUE ~ "")
)

#reset the row names, which got scrambled by the row duplication
rownames(colorsduped) <- 1:nrow(colorsduped)

Now we have one row per baker per episode. Even though we don’t have a name variable in here, the rows are in the same order as the pivoted text table, and since we reset the row names, we can join this table to the pivoted table on the index using the merge function:

#merge result.pivot and colorsduped on their row names (by = 0) to get all information in one place
final <- merge(result.pivot, colorsduped[,c("style","status")], by = 0)

#sorting for the sake of explanation
final <-final[order(final$baker, as.numeric(final$episode)),]

Beautiful! Now we can do this once for each season, combine all the seasons, and merge the columns we want (result and status) to the episodes dataset using the baker.season.episode variable. Complete code for this can be found on this Github page.
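As a rough sketch of that last step – the object names here are stand-ins I’m making up (results.all for every season’s stacked elimination results, episodes for the dataset from the previous post, each with a baker.season.episode key); the real ones are in the Github code:

# hypothetical names: episodes comes from the previous post,
# results.all is every season's pivoted elimination data stacked together
episodes.full <- merge(episodes,
                       results.all[, c("baker.season.episode", "result", "status")],
                       by = "baker.season.episode",
                       all.x = TRUE)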

Whipping up some Great British Bake Off Data

With R!

Following the “recipe” 😉 for blogs here on SofiaZaidman.com, I will begin with a preamble explaining why on earth I would spend so much of my free time collecting and combing through every crumb 😉 of data I can find on The Great British Bake Off (GBBO).

There was about a year-long period when I commuted from Brooklyn to Columbia’s Morningside campus, during which I would download GBBO episodes to my phone and watch one on the subway during my morning commute and another in the evening. As a result, I have seen every season available on Netflix probably more than three times through. It is hands down my favorite TV show.

Data from the GBBO is ripe for analysis because the show treats baking like a sport. An obvious place to start with machine learning is predicting the winner based on demographic information and performance in challenges. This has been done beautifully by Danny Antaki in a project called DeepBake. I absolutely love this, and maybe one day I will spend enough time learning neural nets to do it myself.

Another fun idea I had was to make a GBBO recipe generator based on past GBBO recipe names. This is something I have desperately wanted to do ever since this Buzzfeed article from the relatively early days of neural net memes, which I still think is one of the most hilarious things ever. Jacqueline Nolis (co-author of Build a Career in Data Science, co-host of the podcast of the same name, which I highly recommend, and a kindred spirit when it comes to fun data science projects) gave a talk on building an AI like this and has a really great tutorial on her Github page.

Before I can do any of these things of course, I have to source the GBBO data. I’ve started enjoying data wrangling in a very nerdy way, so I was excited when I noticed that the data used for DeepBake was pulled directly from each GBBO season’s Wikipedia page using a super cool R package called rvest. I thought I’d take a stab at it myself to learn a new scraping technique in R rather than Python.

Scraping Wikipedia tables in R using rvest

Inspect tables on Wikipedia

The first step in scraping Wikipedia tables is to inspect the Wikipedia page you’d like to scrape. I went to the GBBO master page and clicked on each season’s individual pages. Most season pages have one table per episode with four columns: Baker, Signature, Technical and Showstopper. Here is one example:

Now let’s try rvest

I adapted some all-purpose rvest code to pull all the tables on a Wikipedia page. The code is pretty straightforward and manages to use very few lines to extract the information we want. Basically, the html_nodes function retrieves all the wikitables on the webpage we specify, and the html_table function converts that list of nodes to a list of tibbles.

# open rvest, data.table, stringr and some common packages that may come in handy
library("rvest")
library("data.table")
library("magrittr")
library("dplyr")
library("ggplot2")
library("stringr")

# get season 3 tables from wikipedia
url <- "https://en.wikipedia.org/wiki/The_Great_British_Bake_Off_(series_3)"
webpage <- read_html(url)
table3nodes <- html_nodes(webpage,'table.wikitable')
table3 <- html_table(table3nodes, header = TRUE)

The list of nodes should look like this:

Once the list of nodes is converted to a list of tibbles, each table should look something like this:

Next I constructed a loop for each season to:

  • Pull and label individual episode data from each season’s page (have to check manually on Wikipedia to know which tables will have episode data)
  • Convert the data to data frames
  • Extract signature, technical and showstopper challenge names and save as new variables
  • Merge all episode data into one data frame
#set count
count <- 0

#create empty data frame with column names
season3 <- data.frame(matrix(ncol = 9, nrow = 0))
x <- c('baker','signature','technical.rank','showstopper','episode', 'signature.name','technical.name','showstopper.name', 'season')
colnames(season3) <- x

#build for loop
for (episode in table3[3:12]) {

  ep <- data.frame(episode)
  count = count + 1
  ep['episode'] = count
  
  # pull the challenge names out of the column headers and strip punctuation and prefixes
  signature_name <- str_replace_all(colnames(ep[2]), "[[:punct:]]", " ")
  ep['signature.challenge'] = str_remove(signature_name, 'Signature.')
  
  technical_name <- str_replace_all(colnames(ep[3]), "[[:punct:]]", " ")
  ep['technical.challenge'] = str_remove(technical_name, 'Technical.')
  
  showstopper_name <- str_replace_all(colnames(ep[4]), "[[:punct:]]", " ")
  ep['showstopper.challenge'] = str_remove(showstopper_name, 'Showstopper.')
  
  ep['season'] = '3'
  
  colnames(ep) <- x
  
  season3 <- rbind(season3, ep)
}

This code will create a data frame called season3 with 9 columns and 76 observations – one per baker per episode:

A loop like this can be built and run for each season, and the resulting data frames combined into one master data frame that contains the information from every episode table – a quick sketch of that last step is below. The complete code and final dataset are in this Github repository.
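As a sketch of that combining step (assuming each season’s loop produced a data frame named season2 through season11 – names I’m assuming here to match the season3 example above):

# stack every season's data frame into one master data frame
# (dplyr::bind_rows(...) would also work and is more forgiving about column order)
all_seasons <- rbind(season2, season3, season4, season5, season6,
                     season7, season8, season9, season10, season11)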

I’ll add more to this project soon!

Wrangling an Image Dataset (under the sea)

Follow my process to scrape and wrangle a dataset of Spongebob title card images for eventual GAN processing.

This post covers:

  • Scraping images from the web using Beautiful Soup
  • Converting image data to a format digestible by a Generative Adversarial Network (GAN) algorithm

Preface: I’m your biggest fan

In 1999 I was bombarded with Spongebob promos while watching CatDog, memorized the date and time of the premiere, and made sure I was seated in front of the TV right when it aired. As history can attest, it did not disappoint.

A scientist from birth, I was won over by the extent to which real-world biological details were used to inform Spongebob’s characters and plot lines. Only recently did I learn that the late creator, Stephen Hillenberg, was actually a trained marine biologist.

I was equally enamored with Spongebob’s visual style. The early seasons’ aesthetic elicits the same feeling for me as visiting San Francisco’s Fisherman’s Wharf at the turn of the millennium. It’s something akin to coloring the children’s menu at an old, dusty seafood restaurant with an assortment of crayons pulled at random from the Crayola 60 pack while wearing a Hawaiian print skort from Limited Too.

One of the most delightful aspects of Spongebob is the title cards. Many of these are seared in my mind to this day, paired with the sounds of a ukulele or slide guitar. Each card has a distinct personality – many in the same general style but each unique enough to remember for years.

I recently came across a Spongebob fan website with a comprehensive archive of title cards. I wondered if it might be possible to scrape the title card images into a dataset and wrangle them into a neural network friendly format. I’ve been interested in training a GAN to generate images for a while; is it possible to generate novel Spongebob title cards with this data?

In this first post of the Spongebob title card adventure, I will walk through scraping and wrangling image data into a format compatible with GAN algorithms.

Scraping

I used the Beautiful Soup library to scrape the title card image links and episode names from this website. Detailed information on using Beautiful Soup can be found in this blog post on scraping web data.

import requests
from bs4 import BeautifulSoup
import bleach
import pandas as pd

# request the page and parse the main content area
url = "https://spongebob.fandom.com/wiki/List_of_title_cards#.23"
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find(id="content")

# collect the source url of every image on the page
images = results.find_all('img')
sources = []
titles = []

for image in images:
    source = image.attrs['src']
    sources.append(source)

# episode titles live in the lightbox captions; strip the html tags with bleach
titles_raw = results.find_all(class_= 'lightbox-caption')

for title in titles_raw:
    title = bleach.clean(str(title), tags=[], strip=True).strip()
    titles.append(title)

# drop data: URI entries and remove the thumbnail scaling suffix from the remaining links
sources = [x for x in sources if not x.startswith('data')]
sources = [x.replace('/scale-to-width-down/185', '') for x in sources]

# pair image links with titles in a data frame
spongedata = pd.DataFrame(list(zip(sources, titles)), columns = ['Image_Source','Title'])
spongedata

Downloading Images

Now that we have the image links, we can use the requests module to open and save the images. There are a number of ways to do this – this Stack Overflow thread was particularly helpful in finding the right method.

Essentially, requests is used to download the image from the url into memory. Once downloaded, the shutil module is used to properly store the image to a jpg file.

I imported a random title card below:

import matplotlib.pyplot as plt
import matplotlib.image as mpim
import shutil

response = requests.get(spongedata.Image_Source[390], stream=True) 
with open("sponge1.jpg", 'wb') as out_file:
    shutil.copyfileobj(response.raw, out_file)
del response
img = mpim.imread('sponge1.jpg')
imgplot = plt.imshow(img)

Note that the image is approximately 1000×1400 pixels.

Here is another random title card:

response = requests.get(spongedata.Image_Source[211], stream=True) 
with open("sponge1.jpg", 'wb') as out_file:
    shutil.copyfileobj(response.raw, out_file)
del response
img = mpim.imread('sponge1.jpg')
imgplot = plt.imshow(img)

This title card is approximately 1000×1750 pixels – not quite the same size as the first one we looked at. To avoid issues down the road, I decided to resize each title card to a uniform size:

img = img.resize((300, 200))  # PIL's resize returns a new image; (300, 200) is (width, height)

CIFAR10 Image Structure

Now let’s take a look at the kind of image data that others have used with GANs successfully. I did some research and found an incredibly helpful tutorial on constructing a GAN using the CIFAR10 image database. The article is located here.

The CIFAR10 database consists of 60,000 32×32 color images stored as arrays and split into a 50,000 image train and 10,000 image test dataset.

The train and test datasets are stored as 4D arrays – for example, the shape of the train dataset is (50000, 32, 32, 3). The first dimension is the number of images, the next two are the rows and columns of pixels (32×32), and the deepest dimension holds the 3 RGB values assigned to each pixel. In other words, the training set is 50,000 stacked 32×32 arrays of RGB values.
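If you want to poke at that structure yourself, one way to load CIFAR10 is through the Keras datasets module (this assumes TensorFlow is installed; the tutorial itself may load the data differently):

# load CIFAR10 and inspect the 4D array shapes
from tensorflow.keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape)  # (50000, 32, 32, 3)
print(x_test.shape)   # (10000, 32, 32, 3)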

An image can be converted to a 3D array very simply by calling the numpy.array() function after opening the image file with PIL, the Python Imaging Library:

from PIL import Image
import numpy as np

# open the saved jpg and convert it to a 3D numpy array of pixel values
image = Image.open('sponge1.jpg')
image = np.array(image)
image
image.shape

Now that we know how to get an image into an array format similar to those in the CIFAR10 dataset, we can write code to loop through our scraped data, convert all our title cards to arrays and combine them all into one 4D array like the CIFAR10 data:

title_cards = []
index = 0
for x in spongedata.Image_Source:
    # download each image, save it temporarily, then reload it with PIL
    response = requests.get(spongedata.Image_Source[index], stream=True) 
    with open("sponge1.jpg", 'wb') as out_file:
        shutil.copyfileobj(response.raw, out_file)
    del response
    img = Image.open('sponge1.jpg')
    img = img.convert('RGB')      # make sure every image has exactly 3 RGB channels so the arrays stack
    img = img.resize((300, 200))  # uniform size so every array has the same shape
    img = np.array(img)
    title_cards.append(img)
    index = index + 1
    #print(index)

# stack everything into one 4D array, CIFAR10-style: (n_images, 200, 300, 3)
title_cards = np.array(title_cards)
title_cards.shape

Yay! We’ve managed to get a dataset of Spongebob title cards into something analogous to the CIFAR10 format that we can now try feeding into a GAN. Stay tuned for more Spongebob title card neural network posts!