{ "cells": [ { "cell_type": "markdown", "source": [ "# Advanced Command Line Exercises\n", "\n", "## Setup \n", "When we finished our last set of [Command Line Exercises](Exercise_CommandLine_1_basics.ipynb), we had moved all the CSVs that had data from Thursdays and Fridays into their own folder. In this exercise, we'll continue to work with that data, and we will assume that the folder currently has the following structure:\n", "\n", "\n", "`NYC_311calls_2018`: \n", "\n", "- `311_SR_Data_Dictionary_2018.xlsx`\n", "- `CE-20170824.pdf`\n", "- `NYC311_column_names.txt`\n", "- `README.md`\n", "- `raw data` [Folder containing CSVs from all days except Thursdays and Fridays]\n", "- `thursdays_and_fridays` [Folder containing all CSVs from Thursdays and Fridays]\n", "\n", "If this is NOT the directory structure that you currently have, then the first exercise is to move your files and folders around until they have this structure!" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Exercise 1: Checking Files\n", "\n", "When doing data work, it is important to always check your assumptions. For example, we *think* that we have data from every Thursday and Friday in the `thursdays_and_fridays` directory. But are we sure?\n", "\n", "To check, let's quickly google \"how many thursdays were there in 2018\" and \"how many fridays were there in 2018\" (there are 52 full weeks in a year, so it's probably 52 of each, but it's important to be sure: 365 is not evenly divisible by 7, so one day of the week occurs 53 times each year). \n", "\n", "Then let's count the number of files we have in the `thursdays_and_fridays` directory. \n", "\n", "Hint: to do this, we need one command to list out the names of all the files in `thursdays_and_fridays` (`ls`), a second command to count the number of files reported (`wc`), and some way to pass data from one to the other. \n", "\n", "Second hint: you probably want to use the `-1` flag for `ls`. The `-1` flag tells `ls` to print only one file name per line.
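\n", "\n", "Putting the hints together, one possible pipeline looks like this (a sketch, assuming your shell is inside `NYC_311calls_2018`):\n", "\n", "```bash\n", "ls -1 thursdays_and_fridays | wc -l\n", "```\n", "\n", "The `|` (pipe) sends the output of `ls` to `wc`, and the `-l` flag tells `wc` to count lines rather than words or characters, so the number printed is the number of files.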
\n", "\n", "If you have trouble interpreting the output you get, you might have to check the `wc` documentation. " ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Exercise 2: Combining files\n", "\n", "Having all these individual files is not very efficient. Instead, we'd like to merge all these files into a single large CSV file we can import into Python and analyze. \n", "\n", "As you may recall, the `cat` command is often used to print the contents of a file out to the screen. But `cat` actually stands for concatenate: if you give the command multiple files, it will join them together one after the other.\n", "\n", "Use `cat` to merge all the files in `thursdays_and_fridays` into a single CSV called `thursdays_and_fridays.csv` saved to disk. " ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Exercise 3: Viewing your Results\n", "\n", "Great! Now we have a single CSV file with all our data for Thursdays and Fridays. But there's a problem: none of the columns are labeled!\n", "\n", "**Take a look at the first few lines of our merged CSV file using either `less` or `head`.** `head` is a new tool! It's like `cat` in that it just prints the contents of a file to the screen, but unlike `cat` it only prints a few lines. There's also a command called `tail` that shows you the last few lines of a file. \n", "\n", "**Try also confirming this by opening `thursdays_and_fridays.csv` in Excel.** As you will discover, it's a very slow process. This is another advantage of the command line: when working with big files, being able to quickly look at a few lines of a file in the command line is very efficient." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Exercise 4: Editing\n", "\n", "In `NYC_311calls_2018` is a file called `NYC311_column_names.txt`. 
It has all the column names for our CSVs.\n", "\n", "We can add these column names by concatenating the column name file to the top of our CSV, but only if it is properly formatted, which it is not -- when the column names were exported, they included square brackets at the start and end of the column names:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 33, "source": [ "cd ~/desktop/NYC_311calls_2018\n", "head NYC311_column_names.txt" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[\"Unique Key\", \"Created Date\", \"Closed Date\", \"Agency\", \"Agency Name\", \"Complaint Type\", \"Descriptor\", \"Location Type\", \"Incident Zip\", \"Incident Address\", \"Street Name\", \"Cross Street 1\", \"Cross Street 2\", \"Intersection Street 1\", \"Intersection Street 2\", \"Address Type\", \"City\", \"Landmark\", \"Facility Type\", \"Status\", \"Due Date\", \"Resolution Action Updated Date\", \"Community Board\", \"Borough\", \"X Coordinate (State Plane)\", \"Y Coordinate (State Plane)\", \"Park Facility Name\", \"Park Borough\", \"Vehicle Type\", \"Taxi Company Borough\", \"Taxi Pick Up Location\", \"Bridge Highway Name\", \"Bridge Highway Direction\", \"Road Ramp\", \"Bridge Highway Segment\", \"Latitude\", \"Longitude\", \"Location\"]" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Since this is a very small editing task, let's fix it using `nano`. Use `nano` to open the column name file, remove the square brackets at the beginning and end of the column names, and save the result. \n", "\n", "Then concatenate your column names with your CSV to create a new file called `thursdays_and_fridays_w_columnnames.csv`. " ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Exercise 5: Cleaning Up\n", "\n", "Once you're done creating this file, let's clean up by deleting the `raw data` folder and all its contents.
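\n", "\n", "One way to do the cleanup (a sketch -- note that `rm -r` deletes permanently, with no trash can to recover from, so double-check the path first):\n", "\n", "```bash\n", "rm -r 'raw data'\n", "```\n", "\n", "Because the folder name contains a space, it has to be quoted (or the space escaped as `raw\\ data`); otherwise `rm` would look for two separate items called `raw` and `data`.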
" ], "metadata": {} } ], "metadata": { "kernelspec": { "display_name": "Bash", "language": "bash", "name": "bash" }, "language_info": { "codemirror_mode": "shell", "file_extension": ".sh", "mimetype": "text/x-sh", "name": "bash" } }, "nbformat": 4, "nbformat_minor": 4 }