{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Importing ARCOS Data with Dask" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Last week, we used dask to play with a few datasets to get a feel for how dask works. In order to help us develop code that would run quickly, however, we worked with very small, safe datasets. \n", "\n", "Today, we will continue to work with dask, but this time using much larger datasets. This means that (a) doing things incorrectly may lead to your computer crashing (So save all your open files before you start!), and (b) many of the commands you are being asked run will take several minutes each. \n", "\n", "For familiarity, and so you can see what advantages dask can bring to your workflow, today we'll be working with the DEA ARCOS drug shipment database published by the Washington Post! However, to strike a balance between size and speed, we'll be working with a slightly thinned version that has only the last two years of data, instead of all six." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 1\n", "\n", "Download the thinned ARCOS data [from this link](https://www.dropbox.com/s/o7nc6yvrwog4ozi/arcos_2011_2012.tsv.zip?dl=0). It should be about 2GB zipped, 25 GB unzipped. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 2\n", "\n", "Our goal today is going to be to find the pharmaceutical company that has shipped the most opioids (`MME_Conversion_Factor * CALC_BASE_WT_IN_GM`) in the US.\n", "\n", "When working with large datasets, it is good practice to begin by prototyping your code with a subset of your data. So begin by using `pandas` to read in the first 100,000 lines of the ARCOS data and write pandas code to compute the shipments from each shipper (the group that reported the shipment). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 3\n", "\n", "Now let's turn to dask. Re-write your code for dask, and calculate the total shipments by reporting company. Remember: \n", "\n", "- Activate a conda environment with a clean dask installation.\n", "- Start by spinning up a distributed cluster.\n", "- Dask won't read compressed files, so you have to unzip your ARCOS data. \n", "- Start your cluster in a cell all by itself since you don't want to keep re-running the \"start a cluster\" code. \n", "\n", "If you need to review dask basic code, [check here](https://nickeubank.github.io/practicaldatascience_book/notebooks/PDS_not_yet_in_coursera/30_big_data/70_dask.html).\n", "\n", "As you run your code, make sure to click on the Dashboard link below where you created your cluster:\n", "\n", "![dask_dashboard](images/dask_cluster.png)\n", "\n", "Among other things, the bar across the bottom should give you a sense of how long your task will take:\n", "\n", "![dask_progress](images/dask_progress.png)\n", "\n", "(For context, my computer (which has 10 cores) only took a couple seconds. My computer is fast, but most computers should be done within a couple minutes, tops).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 4\n", "\n", "Now let's calculate, *for each state*, what company shipped the most pills?\n", "\n", "Note you will quickly find that you can't sort in dask -- sorting in parallel is *really* tricky! So you'll have to work around that. Do what you need to do on the big dataset first, then compute it all so you get it as a regular pandas dataframe, then finish. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Does this seem like a situation where a single company is responsible for the opioid epidemic?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 5 \n", "\n", "Now go ahead and try and re-do the chunking you did by hand for your project (with this 2 years of data) -- calculate, for each year, the total morphine equivalents sent to each county in the US. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 6\n", "\n", "Now, re-write your opioid project's initial opioid import using dask. Each person on your team should create a NEW branch to try this. The person who wrote the initial chunking code can help everyone else understand what they did originally and the data, but everyone should write their own code. \n", "\n", "**WARNING:** You will probably run into a lot of type errors (depending on how the ARCOS data has changed since last year). With real world messy data one of the biggest problems with dask is that it struggles if halfway through dataset it discovers that the column it *thought* was floats contains text. That's why, in the dask reading, [I specified the column type for so many columns](https://nickeubank.github.io/practicaldatascience_book/notebooks/PDS_not_yet_in_coursera/30_big_data/70_dask.html#what-can-dask-do-for-me) as `objects` explicitly. Then, because occasionally there data cleanliness issues, I had to do some converting data types by hand. " ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.0" } }, "nbformat": 4, "nbformat_minor": 4 }