Month: April 2014

GETTING CLEAN DATA: Reading local flat files

Reading local CSV files

    if (!file.exists("data")) {
        dir.create("data")
    }

    fileUrl <- "https://web_address"
    download.file(fileUrl, destfile = "./data/cameras.csv", method = "curl")
    dateDownloaded <- date()

So now the data have been downloaded from the website and are sitting locally on my computer.

The most common way flat files are loaded is with the read.table() function:

Loading flat files – read.table():

  • The main function for reading data into R
  • Flexible and robust but requires more parameters
  • Reads the data into RAM – big data can cause problems
  • Important parameters: file, header, sep, row.names, nrows
  • Related: read.csv(), read.csv2()


cameraData <- read.table("./data/cameras.csv", sep = ",", header = TRUE)
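Since read.csv() is simply read.table() with sep = "," and header = TRUE set as defaults, the call above can be written more briefly:

```r
# Equivalent to the read.table() call above
cameraData <- read.csv("./data/cameras.csv")
```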

Some important parameters:

  • quote: tells R whether there are any quoted values; quote = "" means no quotes
  • na.strings: set the character that represents a missing value
  • nrows: how many rows to read of the file
  • skip: number of lines to skip before starting to read
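A minimal sketch combining these parameters (the quote, na.strings, nrows and skip values here are illustrative assumptions, not taken from a real file):

```r
# Read a comma-separated file: treat "NA" and empty strings as missing,
# expect no quoted values, read at most 100 rows, and skip no leading lines
cameraData <- read.table("./data/cameras.csv",
                         sep = ",",
                         header = TRUE,
                         quote = "",
                         na.strings = c("NA", ""),
                         nrows = 100,
                         skip = 0)
```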

Reading Excel files

Download the Excel file to load:


    fileUrl <- "https://web_address"

    download.file(fileUrl, destfile = "./data/cameras.xlsx", method = "curl")

    dateDownloaded <- date()

The R library that is useful for this is the xlsx package.


    cameraData <- read.xlsx("./data/cameras.xlsx", sheetIndex = 1, header = TRUE)

You can read specific rows and specific columns.

    colIndex <- 2:3

    rowIndex <- 1:4

    cameraDataSubset <- read.xlsx("./data/cameras.xlsx", sheetIndex = 1,
                                  colIndex = colIndex, rowIndex = rowIndex)


  • The write.xlsx function will write out an Excel file with similar arguments
  • read.xlsx2  is much faster than read.xlsx but for reading subsets of rows may be slightly unstable
  • The XLConnect package has more options for writing and manipulating Excel files
  • The XLConnect vignette is a good place to start for that package
  • In general it is advised to store your data either in a database or in comma-separated files (.csv) or tab-separated files (.tab/.txt) so they are easier to distribute
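As a sketch of the XLConnect alternative mentioned above (the workbook path is assumed from the earlier example; see the XLConnect vignette for the full API):

```r
library(XLConnect)

# Load the workbook once, then read and write worksheets through it
wb <- loadWorkbook("./data/cameras.xlsx")
cameraData <- readWorksheet(wb, sheet = 1, header = TRUE)

# Write a subset back out to a new sheet and save the workbook
createSheet(wb, name = "subset")
writeWorksheet(wb, cameraData[1:4, 2:3], sheet = "subset")
saveWorkbook(wb)
```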


GETTING CLEAN DATA: Downloading files

Knowing your working directory:

getwd(): gets the working directory, tells you what directory you’re currently in

setwd(): sets a different working directory that you might want to move to

Checking for and creating directories:

file.exists("directoryName"): will check to see if the directory exists

dir.create("directoryName"): will create a directory

example (checking for a “data” directory and creating it if it doesn’t exist):

if (!file.exists("data")) {
    dir.create("data")
}

Getting data from the internet – download.file():

Downloads a file from the internet

Parameters:

  • url: the place that you’re going to be getting the data from
  • destfile: the destination file where the data is going to go
  • method: needs to be specified, particularly when dealing with https

Useful for downloading tab-delimited files, CSV files, and Excel files.

Download a file from the web:

fileUrl <- "https://address"

download.file(fileUrl, destfile = "./data/cameras.csv", method = "curl")



  • If the url starts with http you can use download.file()
  • If the url starts with https on Mac you may need to set method = “curl”
  • If the file is big, this might take a while
  • Be sure to record when you downloaded the data (e.g. with dateDownloaded <- date())





Getting Clean Data: Raw data vs. Tidy data

Definition of data:

    “Data are values of qualitative or quantitative variables, belonging to a set of items.”

    The raw data are the original source of the data. They are often very hard to use for data analysis because they are complicated, hard to parse, or hard to analyze. Data analysis actually includes the processing or cleaning of the data; in fact, a huge component of a data scientist’s job is performing those processing operations. A critical point is that all steps should be recorded, since pre-processing often ends up being the most important component of the analysis in terms of its effect on the downstream data. A careful data scientist understands what is really happening at every stage of the data processing pipeline.

    Raw data

  • The original source of the data
  • Often hard to use for data analyses
  • Data analysis includes processing
  • Raw data may only need to be processed once

    Processed data

  • Data that is ready for analysis
  • Processing can include merging, subsetting, transforming, etc.
  • There may be standards for processing
  • All steps should be recorded

The four things you should have:

  1. The raw data
  2. A tidy data set
  3. A code book describing each variable and its values in the tidy data set
  4. An explicit and exact recipe you used to go from 1 -> 2,3

You know the raw data is in the right format if you:

  1. Ran no software on the data
  2. Did not manipulate any of the numbers in the data
  3. You did not remove any data from the data set
  4. You did not summarize the data in any way

Final form of tidy data:

  1. Each variable you measure should be in one column
  2. Each different observation of that variable should be in a different row
  3. There should be one table for each “kind” of variable
  4. If you have multiple tables, they should include a column in the table that allows them to be linked
  5. Include a row at the top of each file with variable names
  6. Make variable names human readable
  7. In general data should be saved in one file per table
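A tiny illustration of principles 3 and 4, with one table per kind of variable and a shared id column to link them (the data here are invented for illustration):

```r
# Two tables, one per "kind" of variable, linked by an id column
cameras <- data.frame(id = c(1, 2),
                      address = c("GARRISON BLVD", "HILLEN RD"))
violations <- data.frame(id = c(1, 1, 2),
                         count = c(12, 8, 30))

# The shared id column lets the tables be joined when needed
merged <- merge(cameras, violations, by = "id")
```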

The Code Book:

  1. Information about the variables (including units) in the data set not contained in the tidy data
  2. Information about the summary choices you made
  3. Information about the experimental study design you used
  4. Common format: Word/text file
  5. “Study design” section: a thorough description of how you collected the data
  6. “Code book” section: describes each variable and its units

The Instruction List:

  1. Ideally a computer script (R or Python or …)
  2. The input for the script is the raw data
  3. The output is the processed, tidy data
  4. There are no parameters to the script
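A minimal sketch of such a parameter-free script in R (the file names and the cleaning steps are assumptions for illustration, not from the source):

```r
# run_analysis.R: input is the raw data, output is the tidy data, no parameters
raw <- read.csv("./data/raw_cameras.csv")

# Processing: keep complete cases and the variables of interest
tidy <- raw[complete.cases(raw), c("id", "address")]

# Output the processed, tidy data
write.csv(tidy, "./data/tidy_cameras.csv", row.names = FALSE)
```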