Data are becoming the new raw material of business
The Economist

NumPy and pandas – Crucial Tools for Data Scientists

When it comes to scientific computing and data science, two key Python packages are NumPy and pandas. NumPy is a powerful Python library that expands Python’s functionality by allowing users to create multi-dimensional array objects (ndarray). In addition to the creation of ndarray objects, NumPy provides a large set of mathematical functions that can operate quickly on the entries of an ndarray without the need for explicit for loops. Below is an example of NumPy usage: the code creates a random array and calculates the cosine of each entry.

In [23]:
import numpy as np

X = np.random.random((4, 2))  # create random 4x2 array
y = np.cos(X)                 # take the cosine on each entry of X

print y
print "\n The dimension of y is", y.shape
[[ 0.95819067  0.60474588]
 [ 0.78863282  0.95135038]
 [ 0.82418621  0.93289855]
 [ 0.67706351  0.83420891]]

 The dimension of y is (4, 2)

We can easily access entries of an array, call individual elements, and select certain rows and columns.

In [24]:
print y[0, :]   # select the 1st row
print y[:, 1]   # select the 2nd column
print y[2, 1]   # select the entry in the 3rd row, 2nd column
print y[1:2, :] # select the 2nd row, returned as a 2D array
[ 0.95819067  0.60474588]
[ 0.60474588  0.95135038  0.93289855  0.83420891]
0.932898546321
[[ 0.78863282  0.95135038]]

The pandas (PANel + DAta) Python library provides easy and fast data analysis and manipulation tools through its numerical table and time series data structures, called DataFrame and Series, respectively. Pandas was created to do the following:

  • provide data structures that can handle both time-series and non-time-series data
  • allow mathematical operations on the data structures, ignoring their metadata
  • support relational operations like those found in query languages such as SQL (join, group by, etc.)
  • handle missing data (a quick sketch of this follows the list)
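
As a quick sketch of that last point, pandas marks missing entries with NaN and provides methods to detect and fill them; the values below are made up:

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])  # NaN marks a missing entry
print s.isnull()                   # True where the entry is missing
print s.fillna(s.mean())           # fill with the mean of the non-missing entries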

Below is an example of the usage of pandas and some of its capabilities.

In [25]:
import pandas as pd

# create data
states = ['Texas', 'Rhode Island', 'Nebraska'] # string
population = [27.86E6, 1.06E6, 1.91E6]         # float
electoral_votes = [38, 3, 5]                   # integer
is_west_of_MS = [True, False, True]            # Boolean

# create and display DataFrame
headers = ('State', 'Population', 'Electoral Votes', 'West of Mississippi')
data = (states, population, electoral_votes, is_west_of_MS)
data_dict = dict(zip(headers, data))

df1 = pd.DataFrame(data_dict)
df1
Out[25]:
   Electoral Votes  Population         State West of Mississippi
0               38  27860000.0         Texas                True
1                3   1060000.0  Rhode Island               False
2                5   1910000.0      Nebraska                True

In the above code, we created a pandas DataFrame object, a tabular data structure that resembles a spreadsheet like those used in Excel. For those familiar with SQL, you can view a DataFrame as an SQL table. The DataFrame we created consists of four columns, each with entries of a different data type (integer, float, string, and Boolean).

Pandas is built on top of NumPy, relying on ndarray and its fast and efficient array-based mathematical functions. For example, if we want to calculate the mean population across the states, we can run

In [26]:
print df1['Population'].mean()
10276666.6667

Pandas relies on NumPy data types for the entries in the DataFrame. Printing the types of individual entries using iloc shows

In [27]:
print type(df1['Electoral Votes'].iloc[0])
print type(df1['Population'].iloc[0])
print type(df1['West of Mississippi'].iloc[0])
<type 'numpy.int64'>
<type 'numpy.float64'>
<type 'numpy.bool_'>

Another example of the compatibility between pandas and NumPy: if a DataFrame is composed of purely numerical data, we can apply NumPy functions directly to it. For example,

In [28]:
df2 = pd.DataFrame({"times": [1.0, 2.0, 3.0, 4.0], "more times": [5.0, 6.0, 7.0, 8.0]})
df2 = np.cos(df2)
df2.head()
Out[28]:
  more times     times
0   0.283662  0.540302
1   0.960170 -0.416147
2   0.753902 -0.989992
3  -0.145500 -0.653644

Pandas was built to ease data analysis and manipulation. Two important pandas methods are groupby and apply. The groupby method groups the DataFrame by the values of a certain column and applies some aggregating function to the resulting groups. For example, if we want to determine the maximum population among the states grouped by whether they are west or east of the Mississippi River, the syntax is

In [29]:
df1.groupby('West of Mississippi').agg('max')
Out[29]:
                     Electoral Votes  Population         State
West of Mississippi
False                              3   1060000.0  Rhode Island
True                              38  27860000.0         Texas

The apply method accepts a function to apply to all the entries of a pandas Series object. This method is useful for applying a customized function to the entries of a column in a pandas DataFrame. For example, we can create a Series object that tells us if a state’s population is more than two million. The result is a Series object that we can append to our original DataFrame object.

In [30]:
more_than_two_million = df1['Population'].apply(lambda x: x > 2E6)  # create Series object of Boolean values
df1['More than Two Million'] = more_than_two_million                # append Series object to our original DataFrame
df1.head()
Out[30]:
   Electoral Votes  Population         State West of Mississippi More than Two Million
0               38  27860000.0         Texas                True                  True
1                3   1060000.0  Rhode Island               False                 False
2                5   1910000.0      Nebraska                True                 False

Accessing columns is intuitive and returns a pandas Series object.

In [31]:
print df1['Population']
print type(df1['Population'])
0    27860000.0
1     1060000.0
2     1910000.0
Name: Population, dtype: float64
<class 'pandas.core.series.Series'>

A DataFrame is composed of multiple Series. The DataFrame class resembles a collection of NumPy arrays, but with labeled axes and mixed data types across the columns; in fact, each Series stores its values in an underlying NumPy ndarray. While you can achieve the same results as certain pandas methods using NumPy alone, doing so would require more lines of code. Pandas expands on NumPy by providing easy-to-use methods for data analysis that operate on the DataFrame and Series classes, which are built on NumPy’s powerful ndarray class.
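
As an illustration, here is a rough sketch of reproducing the earlier groupby maximum with plain NumPy; the arrays restate the data from df1:

import numpy as np

population = np.array([27.86E6, 1.06E6, 1.91E6])
west = np.array([True, False, True])
print population[west].max()   # maximum population west of the Mississippi (Texas)
print population[~west].max()  # maximum population east of the Mississippi (Rhode Island)

Boolean masks get the job done, but groupby expresses the same intent in a single, labeled line.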

How memory is configured in NumPy

The power of NumPy comes from the ndarray class and how it is laid out in memory. The ndarray class consists of

  • the data type of the entries of the array
  • a pointer to a contiguous block of memory where the data/entries of the array reside
  • a tuple of the array’s shape
  • a tuple of the array’s strides

The shape refers to the dimensions of the array, while the strides are the number of bytes to step in each dimension when traversing the array in memory. With both the strides and the shape, NumPy has sufficient information to access any entry of the array in memory.
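
Here is a minimal sketch of shape and strides for a small array; the stride values assume 8-byte integer entries in the default row-major layout:

import numpy as np

A = np.arange(12, dtype=np.int64).reshape(3, 4)  # 3x4 array of 8-byte integers
print A.shape    # (3, 4)
print A.strides  # (32, 8): 32 bytes to step to the next row, 8 bytes to the next column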

By default, NumPy arranges the data in row-major order, as in C. Row-major order lays out the entries of the array by grouping rows together. The alternative is column-major ordering, as used in Fortran and MATLAB, which uses columns as the grouping. NumPy can implement either ordering scheme via the order keyword when creating an array. The sketch below shows the difference between the schemes.
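
A minimal sketch of the order keyword; the stride values assume 8-byte floating-point entries:

import numpy as np

C = np.zeros((2, 3), dtype=np.float64, order='C')  # row-major (the default)
F = np.zeros((2, 3), dtype=np.float64, order='F')  # column-major
print C.strides  # (24, 8): entries within a row sit next to each other in memory
print F.strides  # (8, 16): entries within a column sit next to each other in memory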

The contiguous memory layout allows NumPy to use the vector processors of modern CPUs for array computations. Array computations are efficient because NumPy knows the location in memory and the data type of every entry, and can therefore loop through the entries quickly. NumPy can also link to established and highly optimized linear algebra libraries such as BLAS and LAPACK. As you can see, the NumPy ndarray offers faster and more efficient computations than the native Python list; no wonder pandas and other Python libraries are built on top of NumPy. However, this infrastructure requires all entries of the ndarray to have the same data type, a restriction that the Python list does not share.
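
As a rough illustration of the speed difference (exact numbers vary by machine), we can time a vectorized multiplication against the equivalent pure-Python list comprehension:

import timeit

setup = "import numpy as np; x = np.arange(100000.0); lst = list(x)"
print timeit.timeit("x * 2", setup=setup, number=100)                 # vectorized ndarray operation
print timeit.timeit("[v * 2 for v in lst]", setup=setup, number=100)  # pure-Python loop over a list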

Heterogeneous data types in pandas

As mentioned earlier, the pandas DataFrame class can store heterogeneous data; each column contains a Series object of a different data type. The DataFrame is stored as several blocks in memory, where each block contains the columns of the DataFrame that have the same data type. For example, a DataFrame with five columns comprised of two columns of floats, two columns of integers, and one Boolean column will be stored using three blocks.

With the data of the DataFrame stored in blocks grouped by data type, operations within blocks are efficient, for the same reasons that NumPy operations are fast. However, operations involving several blocks will not be efficient. Information on these blocks of a DataFrame object can be accessed using ._data.

In [32]:
df1._data
Out[32]:
BlockManager
Items: Index([u'Electoral Votes', u'Population', u'State', u'West of Mississippi',
       u'More than Two Million'],
      dtype='object')
Axis 1: RangeIndex(start=0, stop=3, step=1)
FloatBlock: slice(1, 2, 1), 1 x 3, dtype: float64
IntBlock: slice(0, 1, 1), 1 x 3, dtype: int64
BoolBlock: slice(3, 4, 1), 1 x 3, dtype: bool
ObjectBlock: slice(2, 3, 1), 1 x 3, dtype: object
BoolBlock: slice(4, 5, 1), 1 x 3, dtype: bool

The DataFrame class also allows columns with mixed data types. In these cases, the data type of the column is reported as object. When the data type is object, the data is no longer stored in the NumPy ndarray format, but rather as a contiguous block of pointers, where each pointer references a Python object. Thus, operations on a DataFrame involving Series of data type object will not be efficient.
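
A small sketch of this behavior, with made-up values:

import pandas as pd

mixed = pd.Series([1, 'two', 3.0])  # mixed data types in one column
print mixed.dtype                   # object: each entry is a pointer to a Python object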

Strings are stored in pandas with the Python object data type because strings have variable memory sizes; in contrast, integers and floats have fixed byte sizes. However, if a DataFrame has columns with categorical data, encoding the entries using integers will be more memory- and computation-efficient. For example, a column containing entries of “small”, “medium”, and “large” can be converted to 0, 1, and 2, and the data type of that new column is now an integer.
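
A minimal sketch of such an encoding using pandas’s categorical support; the column here is hypothetical, and note that pandas assigns the integer codes in the sorted order of the category labels by default:

import pandas as pd

sizes = pd.Series(['small', 'medium', 'large', 'small'])
codes = sizes.astype('category').cat.codes  # integer codes instead of string objects
print codes        # 2, 1, 0, 2: codes follow the sorted categories (large, medium, small)
print codes.dtype  # int8: one byte per entry rather than a full Python string object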

The importance of understanding NumPy and pandas

Through this article, we have seen

  • examples of usage of NumPy and pandas
  • how memory is configured in NumPy
  • how pandas relies on NumPy
  • how pandas deals with heterogeneous data types

While knowing how NumPy and pandas work is not necessary to use these tools, knowing the inner workings of these libraries and how they are related enables data scientists to wield them effectively. Effective use of these tools matters most for larger data sets and more complex analyses, where even a small percentage improvement translates to large time savings.


Ranking Popular Distributed Computing Packages for Data Science

At The Data Incubator, we strive to provide the most up-to-date data science curriculum available. Using feedback from our corporate and government partners, we deliver training on the most sought-after data science tools and techniques in industry. We wanted to include a more data-driven approach to developing the curriculum for our corporate data science training and our free Data Science Fellowship program for PhD and master’s graduates looking to get hired as professional data scientists. To achieve this goal, we started by looking at and ranking popular deep learning libraries for data science. Next, we wanted to analyze the popularity of distributed computing packages for data science. Here are the results.

The Rankings

Below is a ranking of the top 20 of 140 distributed computing packages that are useful for data science, based on GitHub and Stack Overflow activity, as well as Google Search results. The table shows standardized scores, where a value of 1 means one standard deviation above average (an average package scores 0). For example, Apache Hadoop is 6.6 standard deviations above average in Stack Overflow activity, while Apache Flink is close to average. See below for methods.
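
For reference, a minimal sketch of the standardization behind these scores, with made-up activity counts:

import numpy as np

counts = np.array([120.0, 45.0, 30.0, 5.0])
z = (counts - counts.mean()) / counts.std()  # standardized scores
print z  # a score of 1 means one standard deviation above the average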


The APIs for Neural Networks in TensorFlow

By Dana Mastropole, Robert Schroll, and Michael Li

TensorFlow has gathered quite a bit of attention as the new hot toolkit for building neural networks. To the beginner, it may seem that the only thing that rivals this interest is the number of different APIs which you can use. In this article we will go over a few of them, building the same neural network each time. We will start with low-level TensorFlow math, and then show how to simplify that code with TensorFlow’s layer API. We will also discuss two libraries built on top of TensorFlow, TFLearn and Keras.

The MNIST database is a collection of handwritten digits. Each is recorded in a $28\times28$ pixel grayscale image. We will build a two-layer perceptron network to classify each image as a digit from zero to nine. The first layer will fully connect the 784 inputs to 64 hidden neurons, using a sigmoid activation. The second layer will connect those hidden neurons to 10 outputs, scaled with the softmax function. The network will be trained with stochastic gradient descent, on minibatches of 64, for 20 epochs. (These values are chosen not because they are the best, but because they produce reasonable results in a reasonable time.)
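
As a preview, here is a rough sketch of that network in Keras, one of the libraries covered in the article; the commented-out training call assumes hypothetical X_train and y_train arrays:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='sigmoid', input_shape=(784,)))  # 784 inputs -> 64 hidden neurons
model.add(Dense(10, activation='softmax'))                      # 64 hidden neurons -> 10 outputs
model.compile(optimizer='sgd', loss='categorical_crossentropy')
# model.fit(X_train, y_train, batch_size=64, epochs=20)  # minibatches of 64, 20 epochs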


Bringing Astronomy Down to Earth: Alumni Spotlight on Tim Weinzirl

At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists. Tim was a Fellow in our Spring 2017 cohort who landed a job with one of our hiring partners, First Republic Bank.

Tell us about your background. How did it set you up to be a great data scientist?

My education includes a B.S. in Physics from Drake University and a Ph.D. in Astronomy from the University of Texas at Austin. After grad school, I went overseas for a Research Fellowship at the University of Nottingham. Astronomers do a lot of coding relative to other fields, and having been coding in Python since 2006 for work, I was very familiar with the Python SciPy stack. Since 2014, I have also been volunteering time to data science and software engineering projects for a people analytics startup. This was extremely useful because it provided references in industry who could vouch for my data science skills.

What do you think you got out of The Data Incubator?

I got several useful things out of The Data Incubator: Strategies for resume writing, experience building and deploying a live web application, and a comprehensive set of IPython notebooks that encapsulate the advanced features of scikit-learn, SQL, and big data tools (Hadoop, Spark).



Using a convolutional neural network to identify anatomically distinct cervical types: Alumni Spotlight on Rachel Allen

At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists.  Rachel was a Fellow in our Spring 2017 cohort and an instructor for our Summer 2017 cohort. 

My background is in neuroscience; specifically, I studied how images are processed in the visual system of biological brains. For my capstone project I knew I wanted to use an artificial neural network to create an image classifier. Intel and Mobile ODT released a large dataset for a medical image classifying competition around the same time I was brainstorming possible projects. Their dataset included thousands of medical images of cervixes that were labeled by medical professionals as one of three types based on anatomy. Healthcare providers often have difficulty determining the anatomical classification of a cervix during an examination. Some types of cervixes require additional screening to determine if pathology is present. Thus, an algorithm-aided decision of cervical type could improve the quality of cervical cancer screening for patients and efficiency for practitioners.

I began my project with some exploration of the images. I used t-SNE (t-distributed stochastic neighbor embedding) in scikit-learn, which is a tool to visualize high-dimensional data. Visualizing each image as a point in a 3-D plot showed that none of the three classes of cervixes clustered together. I also used a hierarchical cluster analysis in seaborn to confirm that the images did not easily group together by their three classes.
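
For reference, a rough sketch of that t-SNE step in scikit-learn, with random data standing in for the flattened images:

import numpy as np
from sklearn.manifold import TSNE

X = np.random.random((200, 784))                  # placeholder for flattened images
embedded = TSNE(n_components=3).fit_transform(X)  # 3-D embedding for plotting
print embedded.shape                              # (200, 3): one 3-D point per image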



How to Catch ‘Em All: Alumni Spotlight on Yina Gu

At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists. Yina was a Fellow in our Winter 2017 cohort who landed a job with one of our hiring partners, Opera Solutions.

Tell us about your background. How did it set you up to be a great data scientist?

I received my PhD from The Ohio State University, majoring in computational chemistry. For my PhD research, I developed multiple predictive models and published web servers to solve various biophysics problems using machine learning and statistical methods in Python, R, and MATLAB. The data science skills and experience I gained in my five years of PhD study not only allowed me to solve fundamental scientific problems effectively and efficiently, but also enabled my transition from academia to industry to solve real-world challenges.

What do you think you got out of The Data Incubator?

The eight-week intensive training at The Data Incubator really helped me go deeper into the data science field and get fully prepared with the essential skills to work in the big data industry with cutting-edge analytics techniques, including programming, machine learning, and data visualization, as well as a business mindset. Last but not least, I believe the networking with other very talented Fellows was the most valuable thing I got out of TDI!



The Many Facets of Artificial Intelligence

When you think of artificial intelligence (AI), do you envision C-3PO or matrix multiplication? HAL 9000 or pruning decision trees? This is an example of ambiguous language, and for a field which has gained so much traction in recent years, it’s particularly important that we think about and define what we mean by artificial intelligence, especially when communicating between managers, salespeople, and the technical side of things. These days, AI is often used as a synonym for deep learning, perhaps because both ideas entered popular tech-consciousness at the same time. In this article I’ll go over the big-picture definition of AI and how it differs from machine learning and deep learning.


Developing a Foundation for Data Science: Alumni Spotlight on Ryan Jadrich

At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists. Ryan was a Fellow in our Fall 2016 cohort who landed a job at Austin-based startup OK Roger.

Tell us about your background. How did it set you up to be a great data scientist?

My PhD and postdoctoral work was in the field of statistical mechanics, with a strong emphasis on the design of new colloidal materials. This research required me to develop a hybrid set of strong analytical math and computational skills, which have been extremely useful for bridging into data science. With the deeper understanding afforded by this mixed skill set, I feel well positioned to leverage existing technologies as well as develop novel alternatives. As an example of the latter, my forays into the fundamentals of machine learning helped me develop a supercomputing application capable of inferring the inter-particle forces an experimentalist must engineer to elicit a desired material property. This required the development of both an analytical framework and an underlying large-scale molecular simulation element. Combining these general technical skills with what I learned at The Data Incubator, I feel well positioned to be successful in a Data Science position.



Predicting Visa Wait Times: Alumni Spotlight on Sudhir Raskutti

At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists. Sudhir was a Fellow in our Fall 2016 cohort who landed a job with one of our hiring partners, Red Owl.

Tell us about your background. How did it set you up to be a great data scientist?

I have a PhD in computational astrophysics from Princeton University, with a background in electrical engineering. Astrophysics gave me both a reasonably strong problem-solving background and the ability to deal with quite terrible data.

What do you think you got out of The Data Incubator?

The Data Incubator was really useful in a few ways. First, I got a broad-brush overview of the tools and technologies most commonly used in industry. Obviously, in eight weeks you’re not going to learn all of the tools and concepts in depth, but the Incubator was good at giving a frame of reference for asking deeper questions. More importantly, it was really good at setting up a network and providing a framework for reaching out to employers. It’s a huge advantage to meet employers face to face before reaching out to them, and to have something to show them and talk about.



4 Data Science Projects That We Can’t Get Enough Of

At The Data Incubator we run a free advanced 8-week fellowship for PhDs looking to enter the industry as data scientists.

As part of the application process, we ask potential fellows to propose and begin working on a data science project to highlight their skills to employers.  Regardless of whether you’re selected to be a fellow, this project will be instrumental in attracting employer interest and highlighting your skills.  Here are some projects that we would love to see, and that we hope to see you take on as well.

 

Multi-Axial Political Analysis  

We often think of American politics in terms of a single axis: left versus right, Democrat versus Republican. In reality, the parties are composed of varying factions with different identities and political priorities, and American politics is actually broken along multiple axes: foreign policy, social issues, regulation, social spending, education, and the Second Amendment, just to name a few.