Tableau vs. Power BI

Featuring Insights From Matt Brems

Read: 2 Minutes

Tableau and Power BI are powerful tools for business intelligence, with capabilities to take loads of big data and create elegant visualizations that convey key insights to stakeholders in easily digestible presentations. Both help organizations leverage business intelligence to become more data-driven in their decision-making process. So which tool is better? We asked a few industry experts their thoughts on the data analysis tools Tableau and Power BI. Here’s what they had to say.

Candace Pereira-Roberts, Data Engineer & GA Data Analytics Instructor

“Anyone who works in data should learn tools that help tell data stories with quality visualizations. Tableau is a wonderful tool for the technical and nontechnical to build these visualizations. I love how we teach the Tableau unit in the Data Analytics bootcamp. I see students who are new to analytics learn Tableau desktop and be able to develop Tableau worksheets, dashboards, and story points in a couple of weeks to do a complete analysis project.”

Iun Chen, GA Instructor & Data Analyst at LinkedIn 

“In my professional capacity, I lead data visualization workshops to share best practices on charting and design theory, with a focus on Tableau. But with the growth of big data analytics, there are more players in the data viz space. Looker, Qlik, Domo, and MicroStrategy are a few with out-of-the-box solutions. Check out other BI and analytics leaders in the marketplace and their reviews at Gartner.

Alternatively, if you are up for the challenge, you can start from scratch and build out completely customized solutions through coding packages, such as the Python plotting libraries Matplotlib, Pandas, and Seaborn.”
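
For a taste of that “from scratch” route, here is a minimal sketch of a custom chart built with Pandas, Seaborn, and Matplotlib; the revenue figures are invented purely for illustration.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Dummy monthly revenue figures, invented for illustration only
sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
    "revenue": [120, 135, 128, 150, 162, 171],
})

sns.set_theme(style="whitegrid")  # Seaborn styling layered on Matplotlib
ax = sns.barplot(data=sales, x="month", y="revenue", color="steelblue")
ax.set(title="Monthly Revenue (Sample Data)", ylabel="Revenue ($K)")
plt.tight_layout()
plt.show()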

Matt Brems, GA Instructor & Data Consultant at BetaVector 

“Most data analyst roles will expect some experience with data visualization. They may prefer your visualization experience be tied to a certain tool like Tableau or Power BI or simply want you to have experience designing graphics or dashboards. As with any platform, the human element is key. A good data analyst is curious and detail-oriented. Diving into the data and spotting anomalies or identifying patterns requires curiosity. Looking at large datasets for long periods of time can invite mistakes, so being detail-oriented ensures you’re interpreting the data correctly.” 

Vish Srivastava, GA Instructor & Product Leader at Evidation Health

“Most teams I’ve seen are not comparing Tableau and Power BI. Instead, it’s more about whether to adopt a business intelligence tool at all, or whether to use Tableau or Power BI in place of Excel. Tableau is a great option when you need to quickly create data visualizations. It’s incredibly powerful because it’s designed for nontechnical users, meaning business users can set up and tweak dashboards and charts without the support of engineering or data science teams.”

Want to learn more about Candace?
https://www.coursereport.com/blog/how-to-become-a-business-intelligence-analyst
https://generalassemb.ly/instructors/candace-roberts/13840
www.linkedin.com/in/candaceproberts

Want to learn more about Iun?
https://www.linkedin.com/in/iunchen 

Want to learn more about Matt?
https://betavector.com/
https://www.linkedin.com/in/matthewbrems

Want to learn more about Vish?
https://www.linkedin.com/in/vishrutps

Today’s Best Data Analytics Tools

Featuring Insights From Matt Brems

Read: 3 Minutes

Our Data-Driven World

We live in a world of data — swimming in statistics, numbers, information — and the amount of data seems to be growing faster than we can keep up. More people are using data points to make decisions large and small. From which restaurant has the highest Yelp rating to which city has the lowest rates of COVID-19, using data to navigate everyday life is now the norm. Indeed, the pandemic has only increased our reliance on data. We have come to expect this tsunami of data to explain, and in some cases solve, many of the most vexing problems faced by society today. But finding key insights takes careful analysis of a staggering amount of data. No small feat.

It’s true that more data is released than ever before. In the U.S., there are currently over 290,000 datasets on data.gov alone. Clearly, there’s a growing need for data analysts and the data analytics tools that help us understand these numbers. From small businesses to the highest levels of government, decisions turn on interpretations of data. Big data can have big consequences.

So how do data analysts find the insights lurking in a database? And what are the best tools to analyze all those numbers? Read on to discover the best data analytics tools on the market.

A data scientist and GA instructor since 2016, Matt Brems currently runs a data science consultancy called BetaVector. We asked him to share his go-to data analysis tools. “People who want to analyze data use many different tools; I like to break these down into three different types,” he says.

Let’s get to it.

Type #1: Tabular Data Tools

Data analysts need to get data out of databases and analyze that information. And to do that, they use tabular data tools. According to Brems, the most important ones to know are Microsoft Excel, Google Sheets, and SQL, or Structured Query Language. Generally considered the best data analysis tool for research, SQL is the most common qualification found in job descriptions for a data analyst.

“Most data that data analysts analyze comes in the form of a table, called tabular data. This just means that data is organized into rows and columns, like a spreadsheet. Most data analysts will use a spreadsheet tool like Microsoft Excel or Google Sheets. When working with significant amounts of data (large tables, many tables, or both), organizations will often use a database. In order to interact with most databases, SQL is by far the language of choice.”
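
To make the SQL piece concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the orders table and its columns are invented for illustration.

import sqlite3

# Build a throwaway in-memory database holding one tabular dataset
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("East", 120.0), ("West", 95.5), ("East", 80.25)],
)

# The kind of aggregation query a data analyst runs every day
for region, total in conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
):
    print(region, total)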

Type #2: Programming Language Tools

Proficiency in a few programming tools, while not a prerequisite for basic data analysis, can give analysts the ability to perform a wide variety of tasks. While the needed programming language tools will vary from company to company and even from job to job, having this skill set as a data analyst is clearly an advantage for job seekers.

“Python and R are the most common programming language tools in data analysis, though Stata and SAS are also used in some industries. These tools can be used to perform automation, statistical modeling, forecasting, and visualization.”
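
As a small illustration of the statistical modeling and forecasting side, here is a sketch that fits a linear trend with NumPy; the sales numbers are made up.

import numpy as np

# Toy example: fit a linear trend to six months of sales,
# then forecast month seven by extrapolating the line
months = np.array([1, 2, 3, 4, 5, 6])
sales = np.array([110, 118, 125, 131, 140, 149])

slope, intercept = np.polyfit(months, sales, deg=1)  # least-squares fit
forecast = slope * 7 + intercept
print(round(forecast, 1))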

Type #3: Data Visualization Tools

Since data analysts are frequently tasked with presenting results to stakeholders, a good data visualization tool is essential. Brems recommends Tableau and Microsoft Power BI.

“While you can visualize data using programming languages, Tableau and Power BI are two standalone tools that are used almost exclusively for the purposes of building static data visualizations and dashboards.”

A Note on Research 

When it comes to research, the most common data analytics tool is SQL — no surprise there. But once you get into more niche industries, that can vary, says Brems.

“In academia, R is probably the most prevalent data analysis tool, though Python is quickly gaining popularity. SAS and Stata are often used in specific industries, though their popularity is diminishing. (R and Python are open source tools, which means, among other things, that they are free.)”

Want to learn more about Matt?

https://betavector.com/

https://www.linkedin.com/in/matthewbrems

Beginner’s Python Cheat Sheet

Do you want to be a data scientist? Data science and machine learning are rapidly becoming vital disciplines for all types of businesses. The ability to extract insight and meaning from a large pile of data is a skill set worth its weight in gold. Due to its versatility and ease of use, Python has become the programming language of choice for data scientists.

In this Python crash course, we will walk you through a couple of examples using two of the most-used data structures: the list and the Pandas DataFrame. The list is self-explanatory; it’s a collection of values set in a one-dimensional array. A Pandas DataFrame is just like a tabular spreadsheet: it has data laid out in columns and rows.

Let’s take a look at a few neat things we can do with lists and DataFrames in Python!
Get the PDF here.

Lists

Creating Lists

Let’s start this Python tutorial by creating lists. Create an empty list, then use a for loop to append new values:

#add two to each value
my_list = []
for x in range(1,11):
    my_list.append(x + 2)

We can also do this in one step using list comprehension:

my_list = [x + 2 for x in range(1,11)]

Creating Lists with Conditionals

As above, we will create a list, but now we will only add 2 to the value if it is even.

#add two, but only if x is even
my_list = []
for x in range(1,11):
    if x % 2 == 0:
        my_list.append(x + 2)
    else:
        my_list.append(x)

Using a list comprehension:

my_list = [x + 2 if x % 2 == 0 else x for x in range(1,11)]

Selecting Elements and Basic Stats

Select elements by index.

#get the first/last element
first_ele = my_list[0]
last_ele = my_list[-1]

Some basic stats on lists:

#get max/min/mean value
biggest_val = max(my_list)
smallest_val = min(my_list)
avg_val = sum(my_list) / len(my_list)

DataFrames

Reading in Data to a DataFrame

We first need to import the pandas module.

import pandas as pd

Then we can read in data from CSV or XLSX files:

df_from_csv = pd.read_csv('path/to/my_file.csv',
                          sep=',',
                          nrows=10)
xlsx = pd.ExcelFile('path/to/excel_file.xlsx')
df_from_xlsx = pd.read_excel(xlsx, 'Sheet1')

Slicing DataFrames

We can slice our DataFrame using conditionals.

df_filter = df[df['population'] > 1000000]
df_france = df[df['country'] == 'France']

Sorting values by a column:

df.sort_values(by='population',
               ascending=False)

Filling Missing Values

Let’s fill in any missing values with that column’s average value.

df['population'] = df['population'].fillna(
    value=df['population'].mean()
)

Applying Functions to Columns

Apply a custom function to every value in one of the DataFrame’s columns.

def fix_zipcode(x):
    '''
    Make sure that zip codes all have leading zeros.
    '''
    return str(x).zfill(5)

df['clean_zip'] = df['zip code'].apply(fix_zipcode)

Ready to take on the world of machine learning and data science? Now that you know what you can do with lists and DataFrames in Python, check out our other Python beginner tutorials and learn about other important concepts of the language.

8 Tips for Learning Python Fast

It’s possible to learn Python fast. How fast depends on what you’d like to accomplish with it and how much time you can allocate to study and practice Python on a regular basis. Before we dive in further, I’d like to establish some assumptions I’ve made about you and your reasons for reading this article.

First, I’ll address how quickly you should be able to learn Python. If you’re interested in learning the fundamentals of Python programming, it could take you as little as two weeks to learn, with routine practice.

If you’re interested in mastering Python in order to complete complex tasks or projects or spur a career change, then it’s going to take much longer. In this article, I’ll provide tips and resources geared toward helping you gain Python programming knowledge in a short timeframe.

If you’re wondering how much it’s going to cost to learn Python, the answer there is also “it depends.” There is a large selection of free resources available online, not to mention the various books, courses, and platforms that have been published for beginners.

Another question you might have is, “how hard is it going to be to learn Python?” That also depends. If you have any experience programming in another language such as R, Java, or C++, it’ll probably be easier to learn Python fast than someone who hasn’t programmed before.

But learning a programming language like Python is similar to learning a natural language, and everyone’s done that before. You’ll start by memorizing basic vocabulary and learning the rules of the language. Over time, you’ll add new words to your repertoire and test out new ways to use them. Learning Python is no different.

By now you’re thinking, “Okay, this is great. I can learn Python fast, cheap, and easily. Just tell me what to read and point me on my way.” Not so fast. There’s a fourth thing you need to consider and that’s how to learn Python.

Research on learning has identified that not all people learn the same way. Some learn best by reading, while others learn best by seeing and hearing. Some people enjoy learning through games rather than courses or lectures. As you review the curated list of resources below, consider your own learning preferences as you evaluate options.

Now let’s dig in. Below are my eight tips to help you learn Python fast.

Data Literacy for Leaders

For years, the importance of data has been echoed in boardroom discussions and listed on company roadmaps. Now, with 99% of businesses reporting active investment in big data and AI, it’s clear that all businesses are beginning to recognize the power of data to transform our world of work.

While all leaders recognize the needs and benefits of becoming data-driven, only 24% have successfully created a data-driven organization. That is because transformation is not considered holistically; instead, leaders focus on business, tools and technology, and talent in silos, usually leaving skill acquisition among leaders and the broader organization for last. It’s no wonder that 67% of leaders say they are not comfortable accessing or using data.

We’ve worked with businesses, such as Bloomberg, to help them gain the skills they need to successfully leverage data within their organizations, and we haven’t left leaders out of the conversation. In fact, we know that leaders are crucial to the success of data transformation efforts, and just like their teams, they need to be equipped with the skills to understand and communicate with data.

Why Should I Train My Leaders on Data?

When embarking on a data transformation, we always recommend that leaders be trained as the first step in company-wide skill acquisition. We recommend this approach for a few reasons:

  • Leaders Need to Understand Their Role in Data Transformation: Analytics can’t be something data team members do in a silo; it needs to be fully incorporated into the business, rather than treated as an afterthought. However, businesses will struggle to make that change if leaders do not understand their responsibility in data transformation.
  • Leadership Training Shows a Commitment to Change: According to New Vantage Partners, 92% of data transformation failures are attributed to the inability of leaders to form a data-driven culture. In order for your employees to truly become data-driven, they have to be able to see a real commitment from leaders to organizational goals and operational change. Training your leaders first sends the message that data is here to stay.
  • Leaders Need to Be Prepared to Work With Data-Driven Teams: Increasingly, leaders are expected to make data-driven decisions that impact the success of the organization. Without data literacy, leaders will continue to feel uncomfortable communicating with and using data to make decisions. This discomfort will trickle down to employees, and real change will never be felt.

Just like your broader organization, leaders cannot be expected to understand the role they play or the importance of data transformation without proper training. 

What Does Data Literacy For Leaders Look Like? 

Leaders need to be able to readily identify opportunities to use data effectively. In order to get there, leaders need to:

Build a Data-Driven Mindset:

While every leader brings a wealth of experience to your org, many leaders are not data natives, and it can be a big leap to make this shift in thinking. Training leaders all at once gives you the opportunity to get your leaders on the same page and build a shared understanding and vocabulary.

So what does building a data-driven mindset look like in practice? To truly have a data-driven mindset, leaders must be aware of the data landscape and the opportunities data presents, be mindful of the biases inherent in data with an eye toward overcoming them, and be curious about how data can influence decisions.

Leaders should walk away from training with a baseline understanding of key data concepts and a shared vocabulary, knowing how data flows through an organization and able to pinpoint where data can have an impact in the org.

Understand the Data Life Cycle

Leaders are responsible for having oversight of every phase of the data life cycle and must be able to help teams weed out bias at any point. Without this foundation, leaders will have a hard time knowing where to invest in a data transformation and how to lead projects and teams.

All leaders should be equipped to think about and ask questions about each phase of the life cycle. For example:

  • Data Identification: What data do we have, and what form is it in? 
  • Data Generation: Where will the data come from and how reliable is the source? 
  • Data Acquisition: How will the data get from the source to us? 

It is not the role of the leader to know where all the data comes from or what gaps exist, but understanding what questions to ask is important for acquiring the insights needed to inform a sound business strategy.

Get to Know the Role of Data Within the Org

In an organization that’s undergoing a data transformation, there’s no shortage of projects that could command a leader’s attention and investment. Leaders must be equipped to understand where to invest to put their plans into action.

Based on the existing structure, leaders need to understand the key data roles, such as data analyst or machine learning engineer: why they are important and how they differ. Once a leader knows the data teams, they will be able to identify the opportunities for data within their own team and role.

Make Better Data-Driven Decisions

Leaders who rely on intuition alone run the huge risk of being left behind by competitors that use data-driven insights. With more and more companies adjusting to this new world order, it’s imperative that leaders become more data literate in order to make important business-sustaining decisions moving forward. 

Getting Started With Leadership Training 

Including data training specifically for your leaders in your data transformation efforts is crucial. While leaders are busy tackling other important business initiatives, they, just like the rest of your organization, must be set up with the right skills to successfully meet the future of work. Investment in data skills for leaders will help you forge a truly data-driven culture and business.

To learn more about how GA equips leaders and organizations to take on data transformation, get in touch with us here.

15 Data Science Projects to Get You Started

When it comes to getting a job in data science, data scientists need to think like Creatives. Yes, that’s correct. Those looking to enter this field need to have a data science portfolio of previously completed data science projects, similar to those in Creative professions. What better way to prove to your future data science team that you’re capable of being a data scientist than proving you can do the work?

A common problem for data science entrants is that employers want candidates with experience, but how do you get experience before anyone gives you the chance to earn it? If you’re looking to get that first foot in the door, it will behoove you to undertake a couple of data science projects to show future employers you’ve got what it takes to use big data to identify opportunities and succeed in the field.

The good news is that we live in a time of open and abundant data. Websites like Kaggle offer a treasure trove of free datasets on everything from crime statistics to Pokemon to Bitcoin and more. However, the wealth of easily accessible data can be overwhelming, which is why we’ve taken it upon ourselves to present 15 data science projects you can execute in Python to showcase and improve your skills in data analytics. Our data science project ideas cover various topics, from Spotify songs to fake news to fraud detection, and techniques such as clustering, regression, and natural language processing.

Before you dive in, be sure to adhere to these four guidelines no matter which data science project idea you choose:

1. Articulate the Problem and/or Scenario

It’s not enough to do a project where you use “X” to predict “Y”; you need to add some context to your work because data science does not occur in a vacuum. Tell us what you’re trying to solve and how data science can address that. Employers want to know if you can turn a problem into a question and a question into a solution. A good place to start is to depict a real-world scenario in which your data project would be useful.

2. Publish & Explain Your Work

Create a GitHub repository where you can upload your Jupyter Notebooks and data. Write a blog post in which you narrate your project from start to finish. Talk about the problem or question at the heart of the project, and explain your decision to clean the data in a certain way or why you decided to use a certain algorithm. Why all this? Potential employers need to understand your methodology.

3. Use Domain Expertise

If you’re trying to break into a specific field such as finance, health, or sports, use your knowledge of this area to enhance your project. This could mean deriving a useful question from a pressing problem or articulating a well-thought-out interpretation of your project’s results. For example, if you’re looking to become a data scientist in the finance sector, it would be worthwhile to show how your methods can generate a return on investment.

4. Be Creative & Different

Anyone can copy and paste code that trains a machine learning algorithm. If you want to stand out, review existing data science projects that use the same data and fill in the gaps left by them. If you’re working on a prediction project, try coming up with an unexpected variable that you think would be beneficial.

Data Science Projects

1. Titanic Data

Working on the Titanic dataset is a rite of passage in data science. It’s a useful dataset that beginners can work with to improve their feature engineering and classification skills. Try using a decision tree to visualize the relationships between the features and the probability of surviving the Titanic.
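
As a minimal sketch of that approach, assuming the usual Kaggle-style titanic.csv with Survived, Pclass, Sex, and Age columns, you might fit a shallow tree with scikit-learn:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical file name; columns follow the common Kaggle Titanic layout
df = pd.read_csv("titanic.csv")
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})  # simple encoding
df["Age"] = df["Age"].fillna(df["Age"].median())     # basic cleaning

X = df[["Pclass", "Sex", "Age"]]
y = df["Survived"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tree = DecisionTreeClassifier(max_depth=3)  # a shallow tree is easy to visualize
tree.fit(X_train, y_train)
print(tree.score(X_test, y_test))           # accuracy on held-out data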

2. Spotify Data

Spotify has an amazing API that provides access to rich data on their entire catalog of songs. You can grab cool attributes such as a song’s acousticness, danceability, and energy. The great thing about this data source is that the project possibilities are almost endless. You can use these features to try to predict genre or popularity. One fun idea would be to better understand your own music by training a machine learning classifier on two sets of songs: songs you like and songs you do not.

3. Personality Data Clustering

You’ve probably heard the phrase, “There are X types of people.” Well, now you can find out how many types of people there really are. Using this dataset of almost 20k responses to the Big Five Personality Test, you can actually answer this question. Throw the data into a clustering algorithm such as KMeans to sort respondents into K groups. Once you decide on the optimal number of clusters, it’s incumbent on you to define each cluster. Come up with labels that add meaning to each group, and don’t be afraid to use plenty of charts and graphs to support your interpretation.
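
Here is a rough sketch of that clustering step with scikit-learn; the file name is hypothetical, and we assume it contains only the numeric survey items.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical file of Big Five survey responses, one row per respondent
answers = pd.read_csv("big_five_responses.csv")

scaled = StandardScaler().fit_transform(answers)  # put all items on one scale

kmeans = KMeans(n_clusters=5, random_state=42)    # try K=5 first, then vary K
labels = kmeans.fit_predict(scaled)
print(pd.Series(labels).value_counts())           # size of each cluster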

4. Fake News

If you are interested in natural language processing, building a classifier to differentiate between fake and real news is a great way to demonstrate that. Fake news is a problem that social media platforms have been struggling with for the past several years, and a project that tackles it is a great way to show you care about solving real-world problems. Use your classifier to identify interesting insights about the patterns in fake versus real news; for example, tell us which words or phrases are most associated with fake news articles.
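
A minimal sketch of such a classifier using scikit-learn’s TF-IDF vectorizer and logistic regression; the four-headline corpus and its labels are invented purely for illustration.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented corpus; a real project would load thousands of labeled articles
texts = ["shocking miracle cure discovered", "senate passes budget bill",
         "celebrity secretly a lizard person", "local council approves new park"]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Words with the largest positive coefficients lean toward the "fake" class
top = np.argsort(clf.coef_[0])[-3:]
print([vec.get_feature_names_out()[i] for i in top])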

5. COVID-19 Dataset

There probably isn’t a more relevant use of data science than a project analyzing COVID-19. This dataset provides a wealth of information related to the pandemic. It provides a great opportunity to show off your exploratory data analysis chops. Take a deep dive into this data, and through data visualization unearth patterns about the rate of COVID infection by county, state, and country.
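
A sketch of what that exploratory step might look like in Pandas; the file name and column names here are assumptions, not the dataset’s actual layout.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names modeled on common COVID-19 case data
covid = pd.read_csv("covid_cases.csv", parse_dates=["date"])

# Aggregate new cases by state and chart the five hardest-hit states
by_state = covid.groupby("state")["new_cases"].sum().nlargest(5)
by_state.plot(kind="bar", title="Total Cases, Top Five States")
plt.show()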

6. Telco Customer Churn

If you’re looking for a straightforward project that is extremely applicable to the business world, then this one’s for you. Use this dataset to train a classifier that predicts customer churn. If you can show employers you know how to prevent customers from leaving their business, you’ll most definitely grab their attention. Pro tip: this is a great project to show your understanding of classification metrics besides accuracy, such as precision and recall.

7. Lending Club Loans

Like the Telco project, the Lending Club loan dataset is extremely relevant to the business world. Here you can train a classifier that predicts whether or not a Lending Club loanee will pay back a loan using a wealth of information such as credit score, loan amount, and loan purpose. There are a lot of variables at your disposal, so I’d recommend starting with a handful of features and working your way up from there. See how far you can get with just the basics.

Also, this is a fairly untidy dataset that will require extensive cleaning and feature engineering, which is a good thing because that is often the case with real-world data. Be sure to explain your methodology behind preparing your dataset for the machine learning algorithm — this informs the audience of your domain expertise.

8. Breast Cancer Detection

This dataset provides a simpler classification scenario in which you can use health-related variables to predict instances of breast cancer. If you’re looking to apply your data science skills to the medical field, this is certainly worth a shot.

9. Housing Regression

If classification isn’t your thing, then might I recommend this ready-made regression project in which you can predict home prices using variables like square footage, number of bedrooms, and year built. A project such as this can help you understand the factors driving home sales and let you get creative in your feature engineering. Try to involve outside data that can serve as proxies for quality of life, education, and other things that might influence home prices. And if you want to show off your scraping skills, you can always create your dataset by scraping Zillow.

10. Seeds Clustering

The seeds dataset from UCI provides a simple opportunity to use clustering. Use the seven attributes to sort the 210 seeds into K groups. If you’re looking to go beyond KMeans, try hierarchical clustering, which works well for this dataset because the low number of samples can be easily visualized with a dendrogram.
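
A minimal sketch of the hierarchical approach with SciPy; the file name is hypothetical, and we assume it holds just the seven numeric measurements.

import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical local copy of the UCI seeds data: 210 rows, 7 measurements
seeds = pd.read_csv("seeds.csv")

Z = linkage(seeds, method="ward")  # agglomerative (bottom-up) clustering
dendrogram(Z)                      # tree of merges; cut it to choose K
plt.show()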

11. Credit Card Fraud Detection

Another project idea for those of you intent on using business world data is to train a classifier to predict instances of credit card fraud. The value of this project comes from the fact that it’s an imbalanced dataset, meaning that one class vastly outweighs the other (in this case, non-fraudulent transactions versus fraudulent). A model that scores 99% accuracy simply by predicting “not fraud” every time is essentially useless, so it’s up to you to use non-accuracy metrics to demonstrate the success of your model.
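
To see why accuracy misleads here, consider this toy sketch with scikit-learn’s metrics; the labels are invented so that fraud is rare.

from sklearn.metrics import classification_report, confusion_matrix

# Toy labels where fraud (1) is rare, mimicking an imbalanced dataset
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))  # precision and recall per class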

12. AutoMPG

This is a great beginner regression project in which you can use car features to predict fuel efficiency. Given that this data is from the past, an interesting idea is to see how well the model does on data from recent cars, which can show how fuel efficiency has evolved over the years.

13. World Happiness

Can data science unlock what’s behind happiness? Maybe you can find out with this dataset on world happiness rankings. You can go a number of ways with this project: use regression to predict happiness scores, cluster countries based on socio-economic characteristics, or visualize the change in happiness throughout the world from 2015 to 2019.

14. Political Identity

The Nationscape Data Set is an absolute goldmine of data on the demographics and political identities of Americans. If you’re a politics junkie, it’ll be sure to satisfy your fix. Their most recent round of data features over 300,000 responses collected from extensive surveys of Americans. If you’re interested in using demographic information to model political ideology or party identification, this is the dataset for you. It’s an especially great project for flexing your domain expertise in study design, research, and drawing conclusions. Political analysis is replete with shoddy interpretations that lack empirical data analysis, and you could use this dataset to either confirm or dispel them. But be warned that this data will require plenty of cleaning, which you’ll need to get used to, given that cleaning is the majority of the job.

15. Box Office Prediction

If you’re a movie buff, then we’ve got you covered with the TMDB dataset. See if you can build a workable box office revenue prediction model trained on 5,000 movies’ worth of data. Does genre actually correlate with box office success? Can runtime and language help explain the variation in revenue? Find out the answers to those questions and more with this project.

How to Get a Job in Data Science Fast

You want to get a data science job fast. Obviously, no one wants to get a job slowly. But the time it takes to find a job is relative to you and your situation. When I was seeking my first data science job, I had normal bills and things to budget for, plus a growing family who was hoping I’d get a job fast. This was different from some of my classmates, while others had their own versions of why they needed a job fast, too. I believe that when writing a how-to guide on getting a data science job quickly, we should really acknowledge that we’re talking about getting you, the reader, a job faster. Throughout this article, we’ll discuss how to get a job as a data scientist faster than you might otherwise, all things considered.

Getting a job faster is not an easy task in any industry, and getting a job faster as a data scientist has additional encumbrances. Some jobs, extremely well-paying jobs, require a nebulous skill set that most adults could acquire after several years in the professional working world. Data science is not one of those jobs. For all the talk about what a data scientist actually does, there’s a definite understanding that the set of skills necessary to successfully execute any version of the job are markedly technical, a bit esoteric, and specialized. This has pros and cons, which we’ll discuss. The community of people who aspire to join this field, as well as people already in the field, is fairly narrow, which also has pros and cons.

Throughout this article, we’ll cover two main ways to speed up the time it takes to get a data science job: becoming aware of the wealth of opportunities, and increasing the likelihood that you could be considered employable.

Becoming Aware of the Wealth of Opportunities

Data science is a growing, in-demand field. See for yourself in Camm, Bowers, and Davenport’s article, “The Recession’s Impact on Analytics and Data Science” and “Why data scientist is the most promising job of 2019” by Alison DeNisco Rayome. It’s no secret, however, that these reports often only consider formal data science job board posts. You may have heard or already know that there exists a hidden job market. It stands to reason that if this hidden job market exists, there may also be a number of companies who have not identified their need for a data scientist yet, but likely need some portion of data science work. Here’s your action plan, assuming you already have the requisite skills to be a data scientist:

1. Find a company local to your region. This is easier if you know someone at that company, but if you don’t know anyone, just think through the industries that you’d like to build a career in. Search for several companies in those fields and consider a list of problems that might be faced by that organization, or even those industries at large.

2. Do some data work. Try to keep the scope of the project limited to something you could accomplish in one to two weekends. The idea here is not to create a thesis on some topic, but rather to add to your list of projects you can comfortably talk about in a future interview. This also does not have to be groundbreaking, bleeding-edge work. Planning, setting up, and executing a hypothesis test for a company that is considering two discount rates for an upcoming sale will give you far more fodder for interviews than a half-baked computer vision model with no clear deliverable or impact on a business.

3. You have now done data science work. If you didn’t charge money for your services on the first run, shame on you. Charge more next time.

4. Repeat this process. The nice thing about these mini projects is that you can queue up your next potential projects while you execute the work for your current project at the same time.

Alternatively, you could consider jobs that are what I call the “yeah but there’s this thing…” type jobs. For example, let’s say you’re setting up a database for a non-profit and really that’s all they need. The thing is… it’s really your friend’s non-profit, all they need is their website to log some info into a database, and they can’t pay you. Of course you should not do things that compromise your morals or leave you feeling as though you’ve lowered your self worth in any way. Of course you’d help out your friend. Of course you would love some experience setting up a database, even if you don’t get to play with big data. Does that mean that you need to explain all of those in your next job interview? Of course not! Take the job and continue to interview for others. Do work as a data engineer. Almost everyone’s jobs have a “yeah but” element to them; it’s about whether the role will help increase your likelihood of being considered employable in the future.

Increasing the Likelihood That You Could Be Considered Employable

Thought experiment: a CTO comes to you with a vague list of Python libraries, deep learning frameworks, and several models which seem relevant to some problems your company is facing, and tasks you with finding someone who can help solve those issues. Who would you turn to if you had to pick a partner in this scenario? I’ll give you a hint — you picked the person who satisfied three, maybe four of the criteria on that list.

Recruiting in the real world is no different. Recruiters are mitigating their risk of hiring someone who won’t be able to perform the duties of the position. The way they execute is by figuring out the skills (usually indicated by demonstrated use of a particular library) necessary for the position, then finding the person who seems like they can execute on the highest number of the listed skills. In other words, a recruiter is looking to check a lot of boxes that limit the risk of you as a candidate. As a candidate, the mindset shift you need to come to terms with is that they want and need to hire someone. The recruiter is trying to find the lowest-risk person, because the CTO likely has some sort of bearing on that recruiter’s position. You basically need to become the least risky, and therefore the best, hire amongst a pool of candidates.

There are several ways to check these boxes if you’re the recruiter. The first is obvious: find out where a group of people who successfully complete the functions of the job were trained, and then hire them. In data science, we see many candidates with training from a bootcamp, a master’s program, or PhDs. Does that mean that you need these degrees to successfully perform the function of the job? I’d argue no — it just means that people who are capable of attaining those relevant degrees are less risky to hire. Attending General Assembly is a fantastic way to show that you have acquired the relevant skills for the job.

Instead of having your resume alone speak to your skill, you can have someone in your network speak to your skills. Building a community of people who recognize your value in the field is incredibly powerful. While joining other pre-built networks is great, and opens doors to new opportunities, I’ve personally found that the communities I co-created are the strongest for me when it comes to finding a job as a data scientist. These have taken two forms: natural communities (making friends), and curated communities. Natural communities are your coworkers, friends, and fellow classmates. They become your community who can eventually speak up and advocate for you when you’re checking off those boxes. Curated communities might be a Meetup group that gathers once a month to talk about machine learning, or an email newsletter of interesting papers on Arxiv, or a Slack group you start with former classmates and data scientists you meet in the industry. In my opinion, the channel matters less, as long as your community is in a similar space as you.

Once you have the community, you can rely on them to pass things your way, and you can do the same. Another benefit of General Assembly is its focus on turning thinkers into a community of creators. It’s almost guaranteed that someone in your cohort, or at a workshop or event, has similar interests to yours. I’ve made contacts that passed along gig opportunities, and I’ve met my cofounder inside the walls of General Assembly! It’s all there, just waiting for you to act.

Regardless of what your job hunt looks like, it’s important to remember that it’s your job hunt. You might be looking for a side gig to last while you live nomadically, a job that’s a stepping stone, or a new career as a data scientist. You might approach the job hunt with a six-pack of post-graduate degrees; you might be switching from a dead-end role or industry, or you might be trying out a machine learning bootcamp after finishing your PhD. Regardless of your unique situation, you’ll get a job in data science fast as long as you acknowledge where you’re currently at and work ridiculously hard to move forward.

What is Data Science?

It’s been anointed “the sexiest job of the 21st century”, companies are rushing to invest billions of dollars into it, and it’s going to change the world — but what do people mean when they mention “data science”? There’s been a lot of hype about data science and deservedly so, but the excitement has helped obfuscate the fundamental identity of the field. Anyone looking to involve themselves in data science needs to understand what it actually is and is not.

In this article, we’ll lay out a deep definition of the field, complete with descriptions of the data science workflow and the data science tasks used in the real world. We hope that any would-be entrants into this line of work will come away from reading this article with a nuanced understanding of data science that can help them decide whether to enter, and how to navigate, this exciting line of work.

So What Actually is Data Science?

A quick definition of data science might be: an interdisciplinary field that primarily uses statistics and computer programming to derive insights from, and base decisions on, a collection of information represented as numerical figures. The “science” part in data science is quite apt because data science very much follows a scientific process that involves formulating a hypothesis and using a specific toolset to confirm or dispel that hypothesis. At the end of the day, data science is about turning a problem into a question and a question into an answer and/or solution.

Tackling the meaning of data science also means interrogating the meaning of data. Data can be easily described as “information encoded as numbers,” but that doesn’t tell us why it’s important. The value of data stems from the notion that data is a tangible manifestation of the intangible. Data provides solid support to aid our interpretations of the world. For example, a weather app can tell you it’s cold outside, but telling you that the temperature is 38 degrees Fahrenheit gives you a stronger, more specific understanding of the weather.

Data comes in two forms: qualitative and quantitative.

Qualitative data is categorical data that does not naturally come in the form of numbers, such as demographic labels that you can select on a census form to indicate gender, state, and ethnicity.

Quantitative data is numerical data that can be processed through mathematical functions; for example stock prices, sports stats, and biometric information.

Quantitative data can be subdivided into smaller categories such as ordinal, discrete, and continuous.

Ordinal: A sort of qualitative-quantitative hybrid in which the values have a hierarchical ranking. Any star rating system for reviews is a perfect example: we know that a four-star review is greater than a three-star review, but we can’t say for sure that a four-star review is twice as good as a two-star review.

Discrete: These are countable, finite values that often appear in the form of integers. Examples include the number of franchises owned by a company and the number of votes cast in an election; counts like these can only take whole-number values and can never be negative.

Continuous: Unlike discrete variables, continuous variables can take decimal values and have an infinite range of possibilities. Things like company profit, temperature, and weight can all be described as continuous.

What Does Data Science Look Like?

Now that we’ve established a base understanding of data science, it’s time to delve into what data science actually looks like. To answer this question, we need to go over the data science workflow, which encapsulates what a data science project looks like from start to finish. We’ll touch on typical questions at the heart of data science projects and then examine an example data science workflow to see how data science was used to achieve success.

The Data Science Checklist

A good data science project is one that satisfies the following criteria:

Specificity: Derive a hypothesis and/or question that’s specific and to the point. Having a vague approach can often lead to a waste of time with no end product.

Attainability: Can your questions be answered? Do you have access to the required data? It’s easy to come up with an interesting question, but if it can’t be answered, it has no value. The same goes for data, which is only useful if you can get your hands on it.

Measurability: Can what you’re applying data science to be quantified? Can the problem you’re addressing be represented in numerical form? Are there quantifiable benchmarks for success? 

As previously mentioned, a core aspect of data science is the process of deriving a question, especially one that is specific and achievable. Typical data science questions ask things like: Does X predict Y? What are the distinct groups in our data? To get a sense of data science questions, let’s take a look at some business-world-appropriate ones:

  • What is the likelihood that a customer will buy this product?
  • Did we observe an increase in sales after implementing a new policy?
  • Is this a good or bad review?
  • How much demand will there be for my service tomorrow?
  • Is this the cheapest way to deliver our goods?
  • Is there a better way to segment our marketing strategies?
  • What groups of products are customers purchasing together?
  • Can we automate this simple yes/no decision?

All eight of these questions are excellent examples of how businesses use data science to advance themselves. Each question addresses a problem or issue in a way that can be answered using data science.

The Data Science Workflow

Once we’ve established our hypothesis and questions, we can move on to what I like to call the data science workflow: a step-by-step description of a typical data science project process.

After asking a question, the next steps are:

  1. Get and Understand the Data. We obviously need to acquire data for our project, but sometimes that can be more difficult than expected if you need to scrape for it or if privacy issues are involved. Make sure you understand how the data was sampled and the population it represents. This will be crucial in the interpretation of your results.
  2. Data Cleaning and Exploration. The dirty secret of data science is that data is often quite dirty, so you can expect to do significant cleaning, which often involves constructing your variables in a way that makes your project doable. Get to know your data through exploratory data analysis. Establish a base understanding of the patterns in your dataset through charts and graphs.
  3. Modeling. This represents the main course of the data science process; it’s where you get to use the fancy, powerful tools. In this part, you build a model that can help you answer a question, such as whether you can predict future sales of a product from your dataset.
  4. Presentation. Now it’s time to present the results of your findings. Did you confirm or dispel your hypothesis? What are the answers to the questions you started off with? How do your results advance our understanding of the issue at hand? Articulate your project in a clear and concise manner that makes it digestible for your audience, which could be another team in your company or your company’s executives.

Data Science Workflow Example: Predicting Neonatal Infection

Now let’s walk through an example of how data science can create meaningful real-world impact, taken from the book Big Data: A Revolution That Will Transform How We Live, Work, and Think.

We start with a problem: Children born prematurely are at high risk of developing infections, many of which are not detected until after a child is sick.

Then we turn that problem into a question: Can we detect patterns in the data that accurately predict infection before it occurs?

Next, we gather relevant data: variables such as heart rate, respiration rate, blood pressure, and more.

Then we decide on the appropriate tool: a machine learning model that uses past data to predict future outcomes.

Finally, what impact do our methods have? The model is able to predict the onset of infection before symptoms appear, thus allowing doctors to administer treatment earlier in the infection process and increasing the chances of survival for patients.

This is a fantastic example of data science in action because every step in the process has a clear and easily understandable function towards a beneficial outcome.

Data Science Tasks

Data scientists are basically Swiss Army knives, in that they possess a wide range of abilities — it’s why they’re so valuable. Let’s go over the specific tasks that data scientists typically perform on the job.

Data acquisition: For data scientists, this usually involves querying databases set up by their companies to provide easy access to reams of data. Data scientists frequently write SQL queries to retrieve data. Outside of querying databases, data scientists can use APIs or web scraping to acquire data.

Data cleaning: We touched on this before, but it can’t be emphasized enough that data cleaning will take up the vast majority of your time. Cleaning often means dealing with null values, dropping irrelevant variables, and feature engineering, which means transforming data so that it can be processed by a model.

Data visualization: Crafting and presenting visually appealing and understandable charts is a hugely valuable skill. Visualization has an uncanny ability to communicate important bits of information from a mass of data. Good data scientists will use data visualization to help themselves and their audiences better understand what’s going on.

Statistical analysis: Statistical tests are used to confirm and/or dispel a data scientist’s hypothesis. A t-test or chi-square are used to evaluate the existence of certain relationships. A/B testing is a popular use case of statistical analysis; if a team wants to know which of two website designs leads to more clicks, then an A/B test is the right solution.
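
As a minimal sketch of that A/B scenario with SciPy (the click numbers are invented):

from scipy import stats

# Hypothetical clicks per visitor under two page designs
design_a = [12, 15, 11, 14, 13, 16, 12, 15]
design_b = [17, 18, 16, 15, 19, 17, 18, 16]

# Two-sample t-test: is the difference in means likely due to chance?
t_stat, p_value = stats.ttest_ind(design_a, design_b)
print(p_value)  # a small p-value suggests a real difference between designs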

Machine learning: This is where data scientists use models that make predictions based on past observations. If a bank wants to know which customers are likely to pay back loans, then they can use a machine learning model trained on past loans to answer that question.

Computer science: Data scientists need adequate computer programming skills because many of the tasks they undertake involve writing code. In addition, some data science roles require data scientists to function as software engineers because data scientists have to implement their methodologies into their company’s backend servers.

Communication: You can be a math and computer whiz, but if you can’t explain your work to a novice audience, your talents might as well be useless. A great data scientist can distill digestible insights from complex analyses for a non-technical audience, translating how a p-value or correlation score is relevant to a part of the company’s business. If your company is going to make a potentially costly or lucrative decision based on your data science work, then it’s incumbent on you to make sure they understand your process and results as much as possible.

Conclusion

We hope this article helped to demystify this exciting and increasingly important line of work. It’s pertinent to anyone who’s curious about data science — whether it’s a college student or an executive thinking about hiring a data science team — that they understand what this field is about and what it can and cannot do.

Designing a Dashboard in Tableau for Business Intelligence

Tableau is a data visualization platform that focuses on business intelligence. It has become very popular in recent years because of its flexibility and beauty. Clients love the way Tableau presents data and how easy it makes performing analyses. It is one of my favorite analytical tools to work with.

A simple way to define a Tableau dashboard is as an at-a-glance view of a company’s key performance indicators, or KPIs. There are different kinds of dashboards available — it all depends on the business questions being asked and the end user. Is this for an operational team (like one at a distribution center) that needs to see the number of orders by hour and whether sales goals are being met? Or is this for a CEO who would like to measure the productivity of different departments and products against forecast? The first case will require the data to be updated every 10 minutes, almost in real time. The second doesn’t require the same cadence; once a day will be enough to track company performance.

Over the past few years, I’ve built many dashboards for different types of users, including department heads, business analysts, and directors, and helped many mid-level managers with data analysis. If you are looking for Tableau dashboard examples, you have come to the right place. Here are some best practices for creating Tableau dashboards I’ve learned throughout my career.

Why Use Data Visualization?

A data visualization tool is one of the most effective ways to analyze data from any business process (sales, returns, purchase orders, warehouse operations, customer shopping behavior, etc.).

Below we have a grid report and bar chart that contain the same data source information. Which is easier to interpret?

Grid report vs. bar chart.

That’s right — it’s quicker to identify the category with the lowest sales, Tops, using the chart.

Many companies previously used grid reports to operate and make decisions, and many departments still do today, especially in retail. I once went to a trading meeting on a Monday morning where team members printed pages of Excel reports with rows and rows of sales and stock data by product and took them to a meeting room with a ruler and a highlighter to analyze sales trends. Some of these reports took at least two hours to prepare and required combining data from different data sources with VLOOKUPs — a function that allows users to search through columns in Excel. After the meeting, they threw the papers away (a waste of paper and ink), and then the following Monday it all started again.

Wouldn’t it be better to have an effective dashboard and reporting tool in which the company’s KPIs were updated daily and presented in an interactive dashboard that could be viewed on tablets and laptops and digitally sliced and diced? That’s where a tool like Tableau comes in. You can drill down into details and answer questions raised in the meeting in real time – something you couldn’t do with paper copies.

How to Design a Dashboard in Tableau

Step 1: Identify who will use the dashboard and with what frequency.

Tableau dashboards can be used for many different purposes, such as measuring different KPIs, and therefore will be designed differently for each circumstance. This means that, before you can begin designing a new dashboard, you need to know who is going to use it and how often.

Step 2: Define your topic.

The stakeholder (i.e., director, sales manager, CEO, business analyst, buyer) should be able to tell you what kind of business questions need to be answered and the decisions that will be made based on the dashboard.

Here, I am going to use a dataset from a fictional retail company to build my Tableau dashboard example, reporting on monthly sales.

The commercial director would like to know 1) the countries to which the company’s products have been shipped, 2) which categories are performing well, and 3) sales by product. The option of browsing products is a plus, so the Tableau dashboard should include as much detail as possible.

Step 3: Make sure you have all of the necessary data available to answer the questions your new dashboard needs to address.

Clarify how often you will get the data, the format in which you will receive the data (inside a database or in loose files), the cleanliness of the data, and if there are any data quality issues. You need to evaluate all of this before you promise a delivery date.

Step 4: Create your dashboard.

When it comes to dashboard design, it’s best practice to present data from top to bottom when in presentation mode. The story should go from left to right, like a comic book, where you start at the top left and finish at the bottom right.

Let’s start by adding the data set to Tableau. For this demo, the data is contained in an Excel file generated by software I developed myself. It’s all dummy data.

To connect to an Excel file from Tableau, select “Excel” from the Connect menu. The tables are on separate Excel sheets, so we’re going to use Tableau to join them, as shown in the image below. Once the tables are joined, go to the bottom and select Sheet 1 to create your first visualization.

Joining Excel sheets in Tableau.

We have two columns in the Order Details table: Quantity and Unit Price. The sales amount is Quantity x Unit Price, so we’re going to create a new metric, “Sales Amount.” Right-click on the measures and select Create > Calculated Field, then define it as [Quantity] * [Unit Price].

Creating a Map in Tableau

We can use maps to visualize data with a geographical component and compare values across geographical regions. To answer our first question — “To which countries have the company’s products been shipped?” — we’ll create a map view of sales by country (a short pandas sketch of the underlying aggregation follows the steps below).

1. Add Ship Country to the rows and Sales Amount to the columns.

2. Change the view to a map.

Map
Visualizing data across geographical regions.

3. Drag Sales Amount to Color. Darker colors mean higher sales amounts aggregated by country.

4. You can choose to make the size of the bubbles proportional to the Sales Amount. To do this, drag the Sales Amount measure to the Size area.

5. Finally, rename the sheet “Sales by Country.”
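
Behind the scenes, the map boils down to a simple aggregation: total Sales Amount per Ship Country, which drives both the color and the bubble size. A small, self-contained pandas sketch with dummy data makes that explicit:

    import pandas as pd

    # Tiny hypothetical stand-in for the joined order data behind the dashboard.
    joined = pd.DataFrame({
        "Ship Country": ["UK", "UK", "France", "Germany"],
        "Sales Amount": [120.0, 80.0, 95.5, 60.0],
    })

    # The map visual encodes exactly this aggregation.
    sales_by_country = joined.groupby("Ship Country")["Sales Amount"].sum()
    print(sales_by_country)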

Creating a Bar Chart in Tableau

Now, let’s visualize the second request: “Which categories are performing well?” We’ll need to create a second sheet. The best way to analyze this data is with bar charts, as they make it easy to compare values across categories. Pie charts work in a similar way, but in this case we have too many categories (more than four), so a pie chart wouldn’t be effective. (A Python sketch of the same chart follows the steps below.)

1. To create a bar chart, add Category Name to the rows and Sales Amount to the columns.

2. Change the visualization to a bar chart.

3. Swap the columns and rows, sort the bars in descending order, and show the mark labels so users can see the exact value each bar represents.

4. Drag Category Name to “Color.”

5. Now, rename the sheet to “Sales by Category.”

Sales category bar chart
Our Sales by Category breakdown.
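
As a point of comparison, the same sorted, labeled bar chart can be sketched in Python with Pandas and Matplotlib. The category names and sales figures below are dummy data, not values from the demo workbook:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Dummy category-level sales data.
    sales = pd.DataFrame({
        "Category Name": ["Beverages", "Dairy", "Produce", "Seafood", "Snacks"],
        "Sales Amount": [4200.0, 3100.0, 2800.0, 1900.0, 1500.0],
    })

    # Sort so the largest bar ends up on top of the horizontal chart.
    sales = sales.sort_values("Sales Amount", ascending=True)
    ax = sales.plot(x="Category Name", y="Sales Amount", kind="barh", legend=False)

    # Label each bar with its exact value, like showing mark labels in Tableau.
    for y, value in enumerate(sales["Sales Amount"]):
        ax.text(value, y, f"{value:,.0f}", va="center")

    plt.xlabel("Sales Amount")
    plt.title("Sales by Category")
    plt.show()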

Assembling a Dashboard in Tableau

Finally, the commercial director would like to see the details of the products sold by each category.

Our last page will be the product detail page. Add Product Name and Image to the rows and Sales Amount to the columns. Rename the sheet as “Products.”

We are now ready to create our first dashboard! Rearrange the charts on the dashboard so that it looks similar to the example below. To display the images, drag the Web Page object next to the Products grid.

Dashboard Assembly
Assembling our dashboard.

Additional Actions in Tableau

Now, we’re going to add actions to the dashboard so that when we click on a country, we’ll see both the categories of products and a list of individual products sold.

1. Go to Dashboard > Actions.

2. Add Action > Filter.

3. Our “Sales by Country” chart is going to filter Sales by Category and Products.

4. Add a second action: Sales by Category will filter Products.

5. Add a third action, this time selecting URL.

6. Select Products as the source sheet, set <Image> as the URL, and click Test Link to check the image’s URL.

What we have now is an interactive dashboard with a worldwide sales view. To analyze a specific country, we click on the corresponding bubble on the map and Sales by Category will be filtered to what was sold in that country.

When we select a category, we can see the list of products sold for that category. And, when we hover on a product, we can see an image of it.

In just a few steps, we have created a simple dashboard from which any department head would benefit.

Dashboard
The final product.

Dashboards in Tableau at General Assembly

In GA’s Data Analytics course, students get hands-on training with the versatile Tableau platform. Students learn the ins and outs of the data visualization tool and create dashboards to solve real-world problems in a 1-week accelerated or 10-week part-time format — on campus and online. You can also get a taste of our interactive Tableau training with these classes and workshops.

Meet Our Expert

Samanta Dal Pont is a business intelligence and data analytics expert in retail, eCommerce, and online media. With an educational background in software engineering and statistics, her great passion is transforming businesses to make the most of their data. Responsible for analytics, reporting, and visualization in a global organization, Samanta has been an instructor for Data Analytics courses and SQL bootcamps at General Assembly London since 2016.

Samanta Dal Pont, Data Analytics Instructor, General Assembly London

How is Python Used in Data Science?

By

Python is a popular programming language used by both developers and data scientists. But what makes it so popular and why are so many data scientists choosing Python over other programming languages? In this article, we’ll explore the advantages of Python programming and why it’s useful for data science.

What is Python?

No, we’re not talking about the giant tropical snake. Python is a general-purpose, high-level programming language. It supports object-oriented, structured, and functional programming paradigms.

Python was created in the late 1980s by the Dutch programmer Guido van Rossum, who wanted a project to fill his time over the holiday break. His goal was to create a programming language that was a descendant of the ABC programming language but would appeal to Unix/C hackers. Van Rossum writes that he chose the name Python for this language, “being in a slightly irreverent mood (and a big fan of Monty Python’s Flying Circus).”

Python went through many updates and iterations, and by 2008, Python 3.0 was released. It was designed to fix many of the design flaws in the language, with an emphasis on removing redundant features. While the update caused some growing pains because it was not backwards compatible, it paved the way for Python as we know it today. The language continues to be well maintained and supported as a popular, open source programming language.

In “The Zen of Python,” developer Tim Peters summarizes van Rossum’s guiding principles for writing code in Python:

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren’t special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one– and preferably only one –obvious way to do it.
Although that way may not be obvious at first unless you’re Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it’s a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea — let’s do more of those!
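
The list isn’t just folklore: “The Zen of Python” ships with the interpreter as an Easter egg, so you can print it yourself from any Python session:

    # Prints "The Zen of Python" in full.
    import this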

These principles touch on some of the advantages of Python in data science. Python is designed to be readable, simple, explicit, and explainable. Even the first principle states that Python code should be beautiful. In general, Python is a great programming language for many tasks and is becoming increasingly popular for developers. But now you may be wondering, why learn Python for data science?

Why Python for Data Science?

The first of many benefits of Python in data science is its simplicity. While some data scientists come from a computer science background or know other programming languages, many come from backgrounds in statistics, mathematics, or other technical fields and may not have as much coding experience when they enter the field of data science. Python syntax is easy to follow and write, which makes it a simple programming language to get started with and learn quickly. 
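
As a small illustration of that readability, here is a sketch that counts word frequencies using nothing but core language features; even a newcomer can follow it line by line:

    # Count how often each word appears in a sentence.
    sentence = "the quick brown fox jumps over the lazy dog the end"

    counts = {}
    for word in sentence.split():
        counts[word] = counts.get(word, 0) + 1

    print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, ...}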

In addition, there are plenty of free resources available online to learn Python and get help if you get stuck. Python is an open source language, meaning the language is open to the public and freely available. This is beneficial for data scientists looking to learn a new language because there is no up-front cost to start learning Python. This also means that there are a lot of data scientists already using Python, so there is a strong community of both developers and data scientists who use and love Python.

The Python community is large, thriving, and welcoming. Python is the fourth most popular language among all developers based on a 2020 Stack Overflow survey of nearly 65,000 developers. Python is especially popular among data scientists. According to SlashData, there are 8.2 million active Python users with “a whopping 69% of machine learning developers and data scientists now us[ing] Python (compared to 24% of them using R).” A large community brings a wealth of available resources to Python users. Not only are there numerous books and tutorials available, but there are also conferences such as PyCon where Python users across the world can come together to share knowledge and connect. Python has created a supportive and welcoming community of data scientists willing to share new ideas and help one another.

If the sheer number of people using Python doesn’t convince you of the importance of Python for data science, maybe the libraries available to make your data science coding easier will. A library in Python is a collection of modules with pre-built code to help with common tasks. They essentially allow us to benefit from and build on top of the work of others. In other languages, some data science tasks would be cumbersome and time consuming to code from scratch. There are countless libraries like NumPy, Pandas, and Matplotlib available in Python to make data cleaning, data analysis, data visualization, and machine learning tasks easier. Some of the most popular libraries include:

  • NumPy: NumPy is a Python library that provides support for many mathematical tasks on large, multidimensional arrays and matrices.
  • Pandas: The Pandas library is one of the most popular and easy-to-use libraries available. It allows for easy manipulation of tabular data for data cleaning and data analysis.
  • Matplotlib: This library provides simple ways to create static or interactive boxplots, scatterplots, line graphs, and bar charts. It’s useful for simplifying your data visualization tasks.
  • Seaborn: Seaborn is another data visualization library built on top of Matplotlib that allows for visually appealing statistical graphs. It allows you to easily visualize beautiful confidence intervals, distributions, and other graphs.
  • Statsmodels: This statistical modeling library supports building statistical models and running statistical tests, including linear regression, generalized linear models, and time series analysis models.
  • SciPy: SciPy is a library used for scientific computing that helps with linear algebra, optimization, and statistical tasks.
  • Requests: This library provides a user-friendly way to send HTTP requests, which makes it handy for pulling data from websites and APIs.
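
To show how these libraries complement each other, here is a short sketch (with made-up numbers) that uses Pandas for the table, NumPy for the math, and Matplotlib for the chart:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # A small table of dummy monthly sales.
    df = pd.DataFrame({
        "month": ["Jan", "Feb", "Mar", "Apr"],
        "sales": [120, 135, 160, 150],
    })

    # NumPy handles the math: standardize the sales figures as z-scores.
    df["sales_zscore"] = (df["sales"] - np.mean(df["sales"])) / np.std(df["sales"])

    # Matplotlib (via the Pandas plotting interface) handles the visualization.
    df.plot(x="month", y="sales", kind="bar", legend=False)
    plt.ylabel("Sales")
    plt.title("Monthly sales (dummy data)")
    plt.show()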

In addition to all of the general data manipulation libraries available in Python, a major advantage of Python in data science is the availability of powerful machine learning libraries. These machine learning libraries make data scientists’ lives easier by providing robust, open source libraries for any machine learning algorithm desired. These libraries offer simplicity without sacrificing performance. You can easily build a powerful and accurate neural network using these frameworks. Some of the most popular machine learning and deep learning libraries in Python include:

  • Scikit-learn: This popular machine learning library is a one-stop-shop for all of your machine learning needs with support for both supervised and unsupervised tasks. Some of the machine learning algorithms available are logistic regression, k-nearest neighbors, support vector machine, random forest, gradient boosting, k-means, DBSCAN, and principal component analysis.
  • TensorFlow: TensorFlow is a library for building and training neural networks. Because its core is written in C++, it provides the simplicity of Python without sacrificing power and performance. However, working with raw TensorFlow is not well suited for beginners.
  • Keras: Keras is a popular high-level API that acts as an interface for the TensorFlow library. It’s a tool for building neural networks using a TensorFlow backend that’s extremely user friendly and easy to get started with.
  • PyTorch: PyTorch is another deep learning framework, created by Facebook’s AI research group. It provides more flexibility and speed than Keras, but because its API is lower-level, it is more complex and can be a little less beginner friendly than Keras.
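
As a taste of how little code a model takes, here is a minimal scikit-learn sketch that trains one of the algorithms mentioned above, a random forest, on the library’s built-in iris dataset:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Load a small built-in dataset and hold out a test set.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Fit a random forest and evaluate it in a few lines.
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    print(accuracy_score(y_test, model.predict(X_test)))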

What Other Programming Languages are Used for Data Science?

Python is the most popular programming language for data science. If you’re looking for a new job as a data scientist, you’ll find that Python is also required in most job postings for data science roles. Jeff Hale, a General Assembly data science instructor, scraped job postings from popular job posting sites to see what was required for jobs with the title of “Data Scientist.” Hale found that Python appears in nearly 75% of all job postings. Python libraries including TensorFlow, scikit-learn, Pandas, Keras, PyTorch, and NumPy also appear in many data science job postings.

Image source: The Most In-Demand Tech Skills for Data Scientists by Jeff Hale

R, another popular programming language for data science, appeared in roughly 55% of the job postings. While R is a useful tool for data science, with strengths in data cleaning, data visualization, and statistical analysis, Python continues to become more popular and preferred among data scientists for a majority of tasks. In fact, the average percentage of job postings requiring R dropped by about 7% between 2018 and 2019, while the percentage requiring Python increased. This isn’t to say that learning R is a waste of time; data scientists who know both languages can draw on the strengths of each for different purposes. However, since Python is becoming increasingly popular, there’s a high chance that your team uses Python, and it’s important to use the language your team is comfortable with and prefers.

What is the Future of Python for Data Science?

As Python continues to grow in popularity and as the number of data scientists continues to increase, the use of Python for data science will inevitably continue to grow. As we advance machine learning, deep learning, and other data science tasks, we’ll likely see these advancements available for our use as libraries in Python. Python has been well-maintained and continuously growing in popularity for years, and many of the top companies use Python today. With its continued popularity and growing support, Python will be used in the industry for years to come.

Whether you’ve been a data scientist for years or you are just beginning your data science journey, you can benefit from learning Python for data science. The simplicity, readability, support, community, and popularity of the language — as well as the libraries available for data cleaning, visualization, and machine learning — all set Python apart from other programming languages. If you aren’t already using Python for your work, give it a try and see how it can simplify your data science workflow.