
Designing a Dashboard in Tableau for Business Intelligence


Tableau is a data visualization platform that focuses on business intelligence. It has become very popular in recent years because of its flexibility and beauty. Clients love the way Tableau presents data and how easy it makes performing analyses. It is one of my favorite analytical tools to work with.

A simple way to define a Tableau dashboard is as an at-a-glance view of a company’s key performance indicators, or KPIs. There are different kinds of dashboards available — it all depends on the business questions being asked and the end user. Is this for an operational team (like one at a distribution center) that needs to see the number of orders by hour and whether sales goals are being met? Or is this for a CEO who would like to measure the productivity of different departments and products against forecast? The first case will require the data to be updated every 10 minutes, almost in real time. The second doesn’t require the same cadence; once a day will be enough to track the company’s performance.

Over the past few years, I’ve built many dashboards for different types of users, including department heads, business analysts, and directors, and helped many mid-level managers with data analysis. If you are looking for Tableau dashboard examples, you have come to the right place. Here are some best practices for creating Tableau dashboards I’ve learned throughout my career.

Why Use Data Visualization?

A data visualization tool is one of the most effective ways to analyze data from any business process (sales, returns, purchase orders, warehouse operations, customer shopping behavior, etc.).

Below we have a grid report and bar chart that contain the same data source information. Which is easier to interpret?

Grid report vs. bar chart.

That’s right — it’s quicker to identify the category with the lowest sales, Tops, using the chart.

Many companies previously used grid reports to operate and make decisions, and many departments still do today, especially in retail. I once went to a trading meeting on a Monday morning where team members printed pages of Excel reports with rows and rows of sales and stock data by product and took them to a meeting room with a ruler and a highlighter to analyze sales trends. Some of these reports took at least two hours to prepare and required combining data from different data sources with VLOOKUPs — a function that allows users to search through columns in Excel. After the meeting, they threw the papers away (a waste of paper and ink), and then the following Monday it all started again.

Wouldn’t it be better to have an effective dashboard and reporting tool in which the company’s KPIs were updated daily and presented in an interactive view that could be pulled up on tablets or laptops and digitally sliced and diced? That’s where Tableau dashboards come in. You can drill down into details and answer questions raised in the meeting in real time – something you couldn’t do with paper copies.

How to Design a Dashboard in Tableau

Step 1: Identify who will use the dashboard and with what frequency.

Tableau dashboards can be used for many different purposes, such as measuring different KPIs, and therefore will be designed differently for each circumstance. This means that, before you can begin designing a new dashboard, you need to know who is going to use it and how often.

Step 2: Define your topic.

The stakeholder (i.e., director, sales manager, CEO, business analyst, buyer) should be able to tell you what kind of business questions need to be answered and the decisions that will be made based on the dashboard.

Here, I am going to build a Tableau dashboard example using a dataset from a fictional retail company to report on monthly sales.

The commercial director would like to know 1) the countries to which the company’s products have been shipped, 2) which categories are performing well, and 3) sales by product. The option of browsing products is a plus, so the Tableau dashboard should include as much detail as possible.

Step 3: Make sure you have all of the necessary data available to answer the questions your new dashboard needs to address.

Clarify how often you will get the data, the format in which you will receive the data (inside a database or in loose files), the cleanliness of the data, and if there are any data quality issues. You need to evaluate all of this before you promise a delivery date.

Step 4: Create your dashboard.

When it comes to dashboard design, it’s best practice to present data from top to bottom when in presentation mode. The story should go from left to right, like a comic book: you start at the top left and finish at the bottom right.

Let’s start by adding the data set to Tableau. For this demo, the data is contained in an Excel file generated by software I developed myself. It’s all dummy data.

To connect to an Excel file from Tableau, select “Excel” from the Connect menu. The tables are on separate Excel sheets, so we’re going to use Tableau to join them, as shown in the image below. Once the tables are joined, go to the bottom and select Sheet 1 to create your first visualization.

Joining Excel sheets in Tableau.

We have two columns in the Order Details table: Quantity and Unit Price. The sales amount is Quantity x Unit Price, so we’re going to create a new calculated field, “Sales Amount.” Right-click in the Measures pane, select Create > Calculated Field, and enter the formula [Quantity] * [Unit Price].

Creating a Map in Tableau

We can use maps to visualize data with a geographical component and compare values across geographical regions. To answer our first question — “Which countries have the company’s products been shipped to?” — we’ll create a map view of sales by country.

1. Add Ship Country to the rows and Sales Amount to the columns.

2. Change the view to a map.

Visualizing data across geographical regions.

3. Add Sales Amount to Color on the Marks card. Darker colors mean higher sales amounts aggregated by country.

4. You can choose to make the size of the bubbles proportional to the Sales Amount. To do this, drag the Sales Amount measure to the Size area.

5. Finally, rename the sheet “Sales by Country.”

Creating a Bar Chart in Tableau

Now, let’s visualize the second request, “Which categories are performing well?” We’ll need to create a second sheet. The best way to analyze this data is with a bar chart, as bar charts make it easy to compare data across categories. Pie charts work in a similar way, but in this case we have too many categories (more than four), so a pie chart wouldn’t be effective.

1. To create a bar chart, add Category Name to the rows and Sales Amount to the columns.

2. Change the visualization to a bar chart.

3. Swap the columns and rows, sort the bars in descending order, and show the values so users can see the exact value each bar represents.

4. Drag the category name to “Color.”

5. Now, rename the sheet to “Sales by Category.”

Our Sales by Category breakdown.

Assembling a Dashboard in Tableau

Finally, the commercial director would like to see the details of the products sold by each category.

Our last page will be the product detail page. Add Product Name and Image to the rows and Sales Amount to the columns. Rename the sheet as “Products.”

We are now ready to create our first dashboard! Rearrange the charts on the dashboard so that it appears similar to the example below. To display the images, drag the Web Page object next to the Products grid.

Assembling our dashboard.

Additional Actions in Tableau

Now, we’re going to add some actions on the dashboard such that when we click on a country, we’ll see both the categories of products and a list of individual products sold.

1. Go to Dashboard > Actions.

2. Add Action > Filter.

3. Our “Sales by Country” chart is going to filter Sales by Category and Products.

4. Add a second action: Sales by Category will filter Products.

5. Add a third action, this time selecting URL.

6. Select Products as the source sheet, set the URL to <Image>, and click Test Link to test the image’s URL.

What we have now is an interactive dashboard with a worldwide sales view. To analyze a specific country, we click on the corresponding bubble on the map and Sales by Category will be filtered to what was sold in that country.

When we select a category, we can see the list of products sold for that category. And, when we hover on a product, we can see an image of it.

In just a few steps, we have created a simple dashboard from which any department head would benefit.

The final product.

Dashboards in Tableau at General Assembly

In GA’s Data Analytics course, students get hands-on training with the versatile Tableau platform. Students will learn the ins and outs of the data visualization tool and create dashboards to solve real-world problems in 1-week accelerated or 10-week part-time course formats — on campus and online. You can also get a taste of our interactive Tableau training with these classes and workshops.

Meet Our Expert

Samanta Dal Pont is a business intelligence and data analytics expert in retail, eCommerce, and online media. With an educational background in software engineering and statistics, her great passion is transforming businesses to make the most of their data. Responsible for the analytics, reporting, and visualization in a global organization, Samanta has been an instructor for Data Analytics courses and SQL bootcamps at General Assembly London since 2016.

Samanta Dal Pont, Data Analytics Instructor, General Assembly London

How is Python Used in Data Science?


Python is a popular programming language used by both developers and data scientists. But what makes it so popular and why are so many data scientists choosing Python over other programming languages? In this article, we’ll explore the advantages of Python programming and why it’s useful for data science.

What is Python?

No, we’re not talking about the giant, tropical snake. Python is a general-purpose, high-level programming language. It supports object-oriented, structured, and functional programming paradigms.

Python was created in the late 1980s by the Dutch programmer Guido van Rossum who wanted a project to fill his time over the holiday break. His goal was to create a programming language that was a descendant of the ABC programming language but would appeal to Unix/C hackers. Van Rossum writes that he chose the name Python for this language, “being in a slightly irreverent mood (and a big fan of Monty Python’s Flying Circus).”

Python went through many updates and iterations, and in 2008, Python 3.0 was released. This release was designed to fix many of the design flaws in the language, with an emphasis on removing redundant features. While the update had some growing pains, as it was not backwards compatible, the new changes made way for Python as we know it today. It continues to be well-maintained and supported as a popular, open source programming language.

In “The Zen of Python,” developer Tim Peters summarizes van Rossum’s guiding principles for writing code in Python:

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren’t special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one– and preferably only one –obvious way to do it.
Although that way may not be obvious at first unless you’re Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it’s a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea — let’s do more of those!

These principles touch on some of the advantages of Python in data science. Python is designed to be readable, simple, explicit, and explainable. Even the first principle states that Python code should be beautiful. In general, Python is a great programming language for many tasks and is becoming increasingly popular for developers. But now you may be wondering, why learn Python for data science?

Why Python for Data Science?

The first of many benefits of Python in data science is its simplicity. While some data scientists come from a computer science background or know other programming languages, many come from backgrounds in statistics, mathematics, or other technical fields and may not have as much coding experience when they enter the field of data science. Python syntax is easy to follow and write, which makes it a simple programming language to get started with and learn quickly. 

In addition, there are plenty of free resources available online to learn Python and get help if you get stuck. Python is an open source language, meaning the language is open to the public and freely available. This is beneficial for data scientists looking to learn a new language because there is no up-front cost to start learning Python. This also means that there are a lot of data scientists already using Python, so there is a strong community of both developers and data scientists who use and love Python.

The Python community is large, thriving, and welcoming. Python is the fourth most popular language among all developers based on a 2020 Stack Overflow survey of nearly 65,000 developers. Python is especially popular among data scientists. According to SlashData, there are 8.2 million active Python users with “a whopping 69% of machine learning developers and data scientists now us[ing] Python (compared to 24% of them using R).” A large community brings a wealth of available resources to Python users. Not only are there numerous books and tutorials available, there are also conferences such as PyCon where Python users across the world can come together to share knowledge and connect. Python has created a supportive and welcoming community of data scientists willing to share new ideas and help one another.

If the sheer number of people using Python doesn’t convince you of the importance of Python for data science, maybe the libraries available to make your data science coding easier will. A library in Python is a collection of modules with pre-built code to help with common tasks. They essentially allow us to benefit from and build on top of the work of others. In other languages, some data science tasks would be cumbersome and time consuming to code from scratch. There are countless libraries like NumPy, Pandas, and Matplotlib available in Python to make data cleaning, data analysis, data visualization, and machine learning tasks easier. Some of the most popular libraries include:

  • NumPy: NumPy is a Python library that provides support for many mathematical tasks on large, multidimensional arrays and matrices.
  • Pandas: The Pandas library is one of the most popular and easy-to-use libraries available. It allows for easy manipulation of tabular data for data cleaning and data analysis.
  • Matplotlib: This library provides simple ways to create static or interactive boxplots, scatterplots, line graphs, and bar charts. It’s useful for simplifying your data visualization tasks.
  • Seaborn: Seaborn is another data visualization library built on top of Matplotlib that allows for visually appealing statistical graphs. It allows you to easily visualize beautiful confidence intervals, distributions, and other graphs.
  • Statsmodels: This statistical modeling library supports building statistical models and running statistical tests, including linear regression, generalized linear models, and time series analysis models.
  • Scipy: Scipy is a library used for scientific computing that helps with linear algebra, optimization, and statistical tasks.
  • Requests: This is a useful library for scraping data from websites. It provides a user-friendly way to configure and send HTTP requests.
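To make this concrete, here’s a minimal sketch that uses three of these libraries together: NumPy to generate some dummy numbers, Pandas to organize them into a table, and Matplotlib to plot them. The column names and values below are invented purely for illustration.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Generate twelve months of dummy sales figures with NumPy.
rng = np.random.default_rng(seed=42)
sales = rng.integers(low=100, high=1000, size=12)

# Organize the figures into a Pandas DataFrame for easy manipulation.
df = pd.DataFrame({"month": range(1, 13), "sales": sales})
print(df.describe())  # quick summary statistics

# Visualize the result with Matplotlib.
plt.bar(df["month"], df["sales"])
plt.xlabel("Month")
plt.ylabel("Sales")
plt.title("Monthly Sales (dummy data)")
plt.show()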

In addition to all of the general data manipulation libraries available in Python, a major advantage of Python in data science is the availability of powerful machine learning libraries. These libraries make data scientists’ lives easier by providing robust, open source implementations of just about any machine learning algorithm you could want, and they offer simplicity without sacrificing performance. You can easily build a powerful and accurate neural network using these frameworks. Some of the most popular machine learning and deep learning libraries in Python include:

  • Scikit-learn: This popular machine learning library is a one-stop-shop for all of your machine learning needs with support for both supervised and unsupervised tasks. Some of the machine learning algorithms available are logistic regression, k-nearest neighbors, support vector machine, random forest, gradient boosting, k-means, DBSCAN, and principal component analysis.
  • Tensorflow: Tensorflow is a high-level library for building neural networks. Since it was mostly written in C++, this library provides us with the simplicity of Python without sacrificing power and performance. However, working with raw Tensorflow is not suited for beginners.
  • Keras: Keras is a popular high-level API that acts as an interface for the Tensorflow library. It’s a tool for building neural networks using a Tensorflow backend that’s extremely user friendly and easy to get started with.
  • Pytorch: Pytorch is another framework for deep learning created by Facebook’s AI research group. It provides more flexibility and speed than Keras, but since it has a low-level API, it is more complex and may be a little bit less beginner friendly than Keras. 
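As a quick illustration of how simple these libraries make things, here’s a bare-bones scikit-learn sketch that trains and scores one of the algorithms mentioned above, logistic regression, on the library’s built-in iris dataset. It shows the typical fit/score workflow rather than a production-ready pipeline.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and split it into training and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a logistic regression model and evaluate it on held-out data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on the test set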

What Other Programming Languages are Used for Data Science?

Python is the most popular programming language for data science. If you’re looking for a new job as a data scientist, you’ll find that Python is also required in most job postings for data science roles. Jeff Hale, a General Assembly data science instructor, scraped job postings from popular job posting sites to see what was required for jobs with the title of “Data Scientist.” Hale found that Python appears in nearly 75% of all job postings. Python libraries including Tensorflow, Scikit-learn, Pandas, Keras, Pytorch, and Numpy also appear in many data science job postings.

Image source: The Most In-Demand Tech Skills for Data Scientists by Jeff Hale

R, another popular programming language for data science, appeared in roughly 55% of the job postings. While R is a useful tool for data science and has many benefits, including data cleaning, data visualization, and statistical analysis, Python continues to become more popular and preferred among data scientists for a majority of tasks. In fact, the percentage of job postings requiring R dropped by about 7% between 2018 and 2019, while the percentage requiring Python increased. This isn’t to say that learning R is a waste of time; data scientists who know both languages can benefit from the strengths of each for different purposes. However, since Python is becoming increasingly popular, there’s a high chance that your team uses Python, and it’s important to use the language that your team is comfortable with and prefers.

What is the Future of Python for Data Science?

As Python continues to grow in popularity and as the number of data scientists continues to increase, the use of Python for data science will inevitably continue to grow. As we advance machine learning, deep learning, and other data science tasks, we’ll likely see these advancements available for our use as libraries in Python. Python has been well-maintained and continuously growing in popularity for years, and many of the top companies use Python today. With its continued popularity and growing support, Python will be used in the industry for years to come.

Whether you’ve been a data scientist for years or you are just beginning your data science journey, you can benefit from learning Python for data science. The simplicity, readability, support, community, and popularity of the language — as well as the libraries available for data cleaning, visualization, and machine learning — all set Python apart from other programming languages. If you aren’t already using Python for your work, give it a try and see how it can simplify your data science workflow.

Data at Work: 3 Real-World Problems Solved by Data Science


At first glance, data science seems to be just another business buzzword — something abstract and ill-defined. While data science can, in fact, be both of these things, it’s anything but a buzzword. Data science and its applications have been steadily changing the way we do business and live our day-to-day lives — and considering that 90% of all of the world’s data has been created in the past few years, there’s a lot of growth ahead for this exciting field.

While traditional statistics and data analysis have always focused on using data to explain and predict, data science takes this further. It uses data to learn — constructing algorithms and programs that collect data from various sources and apply hybrids of mathematical and computer science methods to derive deeper, actionable insights. Whereas traditional analysis uses structured data sets, data science dares to ask further questions, looking at unstructured “big data” derived from millions of sources and nontraditional mediums such as text, video, and images. This allows companies to make better decisions based on their customer data.

So how is this all manifesting in the market? Here, we look at three real-world examples of how data science drives business innovation across various industries and solves complex problems.


How to Quickly Get an Internship in Data Science


After studying statistics, probability, programming, algorithms, and data structures for long hours, putting all that knowledge into action is essential. An internship at a great company is a great way to practice your skills, but landing one is also one of the most difficult tasks, especially with such vast competition.

Nowadays, many other opportunities are branded as “internship experiences” but they’re not actually internships. A key distinction is as follows: if you’re asked to pay for an internship, then it’s not an internship. An internship is a free opportunity to work in a specific industry for a short period of time, usually shadowing an existing employee or team.

This article will provide you with five tips to help you secure your first data science internship. However, first we’ll discuss what exactly data science is and what the job entails.

What is data science?

Data science focuses on obtaining actionable insights from data, both raw and unstructured, often in large quantities. This big data is often so complex that it can’t be interpreted with off-the-shelf software or manual analysis, which is precisely why it takes a data scientist to make sense of it.

Ultimately, data science is concerned with providing solutions to problems we don’t yet know are problems or concerns. It’s essentially about looking into the future and finding fixes for things that may happen or might be implemented. On the other hand, a data analyst’s role is to investigate current data and how it impacts the present.

What is the role of a data scientist?

As a data science intern, you will be responsible for collecting, cleaning, and analyzing various datasets to gather valuable insights. Later, with the help of other data scientists, these insights will be shared with the company in an effort to contribute to business strategies or product development. Within the role of a data scientist, you will be expected to be independent in your work collecting and cleaning data, finding patterns, building algorithms, and even conducting your own experiments and sharing these with your team.

5 Tips to Finding Your First Data Science Internship

Now that you know what data science is and what a data science intern does, you may be wondering how to get a data science internship. Here are five actionable tips to land your first data science internship, beginning with a more obvious one: acquiring the right skills.

1.   Acquire the right skills

As a data scientist, you’re expected to possess a variety of complex skills. Therefore, you should begin learning these now to set yourself apart from your competition and increase the likelihood of landing a data science internship.

In fact, regardless of your internship role, you should be actively learning new skills all of the time, preferably skills that are related to your industry (e.g., data science). There’s no set formula for acquiring skills; there are numerous ways to get started, such as online data science courses (some of which are free), additional university modules, or conducting some data science work yourself, perhaps in your free time.

The more relevant data science skills you have, the more appealing you’ll be to employers looking for a data science intern. So, start learning now and distinguish yourself from your competition; you won’t regret it.

2.   Customize each data science application

A common mistake many graduates make when applying for internships online is bulk-applying with the same CV and cover letter for every application. This approach rarely pays dividends.

Instead, you should customize each data science application to the company or organization you’re applying to. Not all data science jobs are the same; requirements differ across industries and across each company’s goals and beliefs. To increase your likelihood of landing a data science internship, you need to be genuinely interested in the company you are applying to, and show this in your application. Be sure to read through their website and look at their previous work, initiatives, goals, and beliefs. And finally, make sure that the companies you are applying to are places you actually want to work, or else the lack of sincerity in your application may show through, even if you don’t realize it.

3.   Create a portfolio

To stand out in such a saturated market, it’s essential to create your very own portfolio. Ideally, your portfolio should consist of one or several of your own projects where you collect your own data. It’s good to indicate you have the experience on paper, but showing this to potential employers first-hand shows that you’re willing to go above and beyond, and that you truly do understand datasets and other data scientist tasks.

Your portfolio project(s) should be demonstrable, covering all typical steps of machine learning and general data science tasks such as collecting and cleaning data, looking for outliers, building models, evaluating models, and drawing conclusions based on your data and findings.  Furthermore, go ahead and create a short brief to explain your project(s), to include as a preface to your portfolio.

4.   Practicing for interviews is crucial

While your application may land you an interview, the interview itself is the final deciding factor in whether or not you get the data science internship. Therefore, it’s essential to prepare as best you can.

There are several things you can do to prepare:

●  Research what to expect in the interview.

●  Know your project and portfolio like the back of your hand.

●  Research common interview questions and company information.

●  Practice interview questions and scenarios with a friend or family member. 

Let’s break down each of these points further.

Research what to expect in the interview.

Every interview is different, but you can research roughly what to expect. For example, you could educate yourself on the company’s latest policies and events, ongoing initiatives, or their plans for the coming months. Taking the time to research the company will come through in your interview and show the interviewer that you’re dedicated and willing to do the work.

Know your project and portfolio like the back of your hand.

To show your competence and expertise, it’s essential to have a deep and thorough understanding of your project and portfolio. You’ll need to be able to answer any questions your interviewer asks, and provide detailed and knowledgeable answers.

Prior to the interview, familiarize yourself with your project, revisiting past data, experiments, and conclusions. The more you know, the better equipped you’ll be.

Research common interview questions and company information.

Most data science internship interviews follow a similar series of questions. Before your interview, research these, create a list of the most popular and difficult questions, and prepare your answers for each question. Even if these exact questions may not come up, similar ones are likely to. Preparing thoughtful answers in advance provides you with the best opportunity to express professional and knowledgeable answers that are sure to impress your interviewers.

This leads us to our next point: practicing these questions.

Practice interview questions and scenarios with a friend or family member.

Once you’ve researched a variety of different questions, try answering these with a friend or family member, ideally in a similar environment as the interview. Practicing your answers to these questions will help you be more confident and less nervous. 

Be sure to go over the more difficult questions, just in case they come up in your actual data science internship interview.

Ask whoever is interviewing you (the friend or family member, for example) to ask some of their own questions, too, catching you off guard and forcing you to think on your feet. This also helps you get ready for the real interview, since it’s likely to happen regardless of how well you prepare.

5.   Don’t be afraid to ask for feedback

You’re not going to get every data science internship you apply for. Even if you did, you wouldn’t be able to take them all. Therefore, we recommend asking for feedback on your interview and application in general.

If you didn’t land the internship the first time, you can use this feedback and perhaps re-apply at a future date. Most organizations and companies will be happy to offer feedback unless they have policies in place preventing them. With clear feedback, you’ll be able to work on potential weaknesses in your application and interview and identify areas of improvement for next time.

Over time, after embracing and implementing this feedback, you’ll become more confident and better suited to the interview environment — a skill that will undoubtedly help you out later in life.

Frequently Asked Questions

What do data analyst interns do?

Data analyst interns are responsible for collecting and analyzing data and creating visualizations of this data, such as written reports, graphs, and presentations.

How do I get a data science job with no experience?

Getting a data science job with no experience will be very difficult. Therefore, we recommend obtaining a degree in a relevant subject (e.g. computer science) if possible and creating your own portfolio to showcase your expertise to potential employers.

What does a data science intern do?

Data science interns perform very similar roles and tasks to full-time data scientists. However, the main difference here is that interns often shadow or work with another data scientist, not alone. As an intern, you can expect to collect and clean data, create experiments, find patterns in data, build algorithms, and more.

To Conclude

Data science internships are few and far between, and landing one can be difficult. But it’s not impossible and the demand for these roles is slowly increasing as the field becomes more popular.

The role of a data scientist intern includes analyzing data, creating experiments, building algorithms, and utilizing machine learning, amongst a variety of other tasks. To successfully get a data science internship, you should begin acquiring the right skills now, customize each application, create your very own portfolio and project, practice for interviews, and don’t be afraid to ask for feedback on unsuccessful applications.

Best of luck to all those applying, and remember: preparation is key.


3 Tips for Preparing for a Data Science Interview


Hello, intrepid data scientist! First off, I’d like to congratulate you; you’re likely reading this post because you’re preparing to interview for a data science job. This means I’ll assume that: (a) you’re the type of person that researches ways to improve and level up in your career, and (b) you’ve reached the interview stage — congrats!

As a data science instructor, I’m often asked for advice on how to prepare for a data science interview. In response, I usually bring up three major themes. You need to:

1. Have a background that includes sufficient knowledge of the field of data science to fulfill the job’s tasks.

2. Have implemented that knowledge in some way that the community recognizes.

3. Be able to convince your interviewer of your knowledge and abilities.

1. Knowledge of Data Science

I’ve taken part in interviewing many data scientists and have also been interviewed. Through being on both sides of the table, I’ve seen that there are usually three-ish areas of knowledge that an interviewer is looking for: prerequisite knowledge of data science at large, which includes mathematics[1], coding[2], databases[3], and the ability to communicate findings and insights[4]; knowledge of the company and its vertical; and knowledge of the tech stack of that company.

If you’re reading this article with a fairly long time horizon and not trying to cram, then you can prepare ahead of time for the knowledge of data science at large by taking a look at this blog post, which has a long list of curated resources. If you’re trying to prepare for a data science interview on a short time horizon, this article and this article have lists of questions with answers to get you in the zone.

Knowledge of the company is going to come from research of that company. Read up on the company and if you have time, find second and third degree connections through LinkedIn or people you know and reach out. As a General Assembly alum, I’ve found it incredibly helpful to go to a company’s LinkedIn page, check out who the fellow alumni are, and connect through a LinkedIn message or offering to buy them coffee. Reading up on the company usually takes the form of doing research about the company itself (founding principles, place in the market, investment stage, etc.), but it also takes the form of looking up who you’d be working alongside if you started working there. What does the data team look like? Are there data engineers or other data scientists?[5]

During a data science interview, your background will likely speak to your knowledge of the vertical you’re applying to. In the absence of that, some portfolio projects are a great second option to show your domain expertise.

Thomas Hughes, Manager of Data Science and Machine Learning at Etsy, shared this bit of advice on striking a balance between generalized skills, specific skills, and knowledge in a vertical:

“Companies who do not have much experience in data work generally look for candidates who specialize in their industry vertical. Since they don’t know what they’re looking for, they often will say, ‘I’m looking for someone who has solved problems similar to my problems, which I’m assuming means they have to be coming from my industry.’

More mature companies, with experience in the data space, recognize that many of the techniques are applicable across industries and don’t require industry specific knowledge, and furthermore, someone who’s deeply trained in a specific technique often adds more value than someone who’s just familiar with an industry vertical.”

Theodore Villacorta, Executive Director of Analytics at Warner Brothers, shared with me that, “regarding vertical, your background matters less; it’s more about skills to get data from a database and how you can perform with it.”

Lastly, you need to be fairly well versed in the tech stack that the company primarily uses. Villacorta offers: “Since knowledge of one of the two main open source languages is a strong requisite, along with the ability to use the corresponding SQL packages for those languages, it might be a great idea to showcase those in a portfolio piece. Most organizations have some form of SQL database.” At minimum, be prepared to answer questions about any tech stack that the company uses within the realm of data science and especially be prepared to answer questions about any tech that your resume lists. I usually like to do two things in preparation, to get an idea of what’s being used: first, I’ll head to stackshare.io and see if the company is listed. Second, I’ll look at the skills that current employees list on LinkedIn.

2. Community Recognition

The second piece is the community piece, especially if you have plenty of time before the data science interview. Community is purposefully a fairly amorphous term here. You can attend in-person events like meetups or conferences, or you can also have a community of coworkers, or a community of social media followers. I suggest laying the groundwork naturally. Networking can feel uncomfortable, but finding people you genuinely like being around in this field is usually pretty easy (didn’t anyone tell you that data scientists are the coolest people in any room?). If you don’t find a community that you’re into, try building one: set up a talk featuring other data scientists. Think like a starfish here, not a spider. You’re trying to create interactions and connections that continue to build new interactions in your absence; not interactions and connections that fall into a void once you’re no longer making them happen.

3. Convince Your Interviewer

In your data science interview, you need to convince the interviewer of your capabilities of both areas above. Interviewers are looking to make sure that you’re someone that generally fits into the puzzle board of other employees that make up the company culture. Show them that you’re great at the community thing through past coworkers or your involvement in open source projects online, engagements with people on Twitter, your writing style on blog posts, and the like. As Villacorta mentions, “For everyone, regardless of how cross functional of a role, I think it’s important to find someone who has an ability to collaborate, share resources…I’ll usually ask behavioral questions like ‘tell me a time when…’ in order to get a sense of a candidate’s abilities in this area.”

Hughes explains, “Senior level positions generally need to be providing leadership and influence over non-technical stakeholders. So they need experience explaining how the work they and their team is doing is valuable in non-technical ways.” Demonstrating your knowledge in an interview comes down to staying open. You’ve done the studying, now just get out of your own way.

I like employing the beginner’s mind here. Take every question in as though you’re uncovering the answer alongside the interviewer. In other words, think of it kind of like an archeological dig, rather than a tennis match. When you get an interview question like, “what’s a P value?” you can respond with, “are you curious about calculating and interpreting P values in the context of hypothesis testing in a project? Because I had a great project I worked on [insert teaser to a project here]… or are you looking for a definition?” This gives your interviewer a ton more fodder to work with and opens you up to answer questions in the Situation, Task, Action, Results (STAR) format, especially as it relates to former projects and jobs.

Regardless of where you are in the interviewing process, know that there is a position and great fit for a company for you somewhere. I think it’s helpful to consider the process of interviewing through the lens of a company — they’ve been looking for you! Don’t let your own ego get in the way of letting a genuine interaction take place during the data science interview. Interviews aren’t something you’re “stuck with” having to put up with on your march towards another job. In fact, they can be incredibly rewarding moments to find new areas to learn about in this fascinating field we’re in. Good luck, and let me know how it went!


[1] Stats questions are incredibly popular fodder for data science interviews. Linear Algebra is less often questioned in interviews, but more helpful on the job.

[2] You should be fluent in at least one of the two major open source languages: Python or R.

[3] Data lives in databases, unless it lives in dozens of Excel files on a Shared Drive. You don’t want to work at places without a database though.

[4] This is actually really difficult to gauge in an interview because everyone gives candidates leeway for being nervous. Often you can pass this test by being affable and confident in your answer.  

[5] Note that if the answer to either of these questions is “no”, then you’re going to be playing both roles.

How to Run a Python Script


As a budding Python developer who has just written some Python code, you’re immediately faced with the important question, “How do I run it?” Before answering that question, let’s back up a little to cover one of the fundamental elements of Python.

An Interpreted Language

Python is an interpreted programming language, meaning Python code must be run using the Python interpreter.

Traditional programming languages like C/C++ are compiled, meaning that before it can be run, the human-readable code is passed into a compiler (special program) to generate machine code – a series of bytes providing specific instructions to specific types of processors. However, Python is different. Since it’s an interpreted programming language, each line of human-readable code is passed to an interpreter that converts it to machine code at run time.

So to run Python code, all you have to do is point the interpreter at your code.

Different Versions of the Python Interpreter

It’s critical to point out that there are different versions of the Python interpreter. The major Python version you’ll likely see is Python 2 or Python 3, but there are sub-versions (e.g., Python 2.7, Python 3.5, Python 3.7, etc.). Sometimes these differences are subtle. Sometimes they’re dramatically different. It’s important to always know which Python version is compatible with your Python code.

Run a script using the Python interpreter

To run a script, we have to point the Python interpreter at our Python code…but how do we do that? There are a few different ways, and there are some differences between how Windows and Linux/Mac operating systems do things. For these examples, we’re assuming that both Python 2.7 and Python 3.5 are installed.

Our Test Script

For our examples, we’re going to start by using this simple script called test.py.

test.py
print("Aw yeah!")

How to Run a Python Script on Windows

The py Command

The default Python interpreter is referenced on Windows using the command py. Using the Command Prompt, you can use the -V option to print out the version.

Command Prompt
> py -V
Python 3.5

You can also specify the version of Python you’d like to run. For Windows, you can just provide an option like -2.7 to run version 2.7.

Command Prompt
> py -2.7 -V
Python 2.7

On Windows, the .py extension is registered to run a script file with that extension using the Python interpreter. However, the version of the default Python interpreter isn’t always consistent, so it’s best to always run your scripts as explicitly as possible.

To run a script, use the py command to specify the Python interpreter followed by the name of the script you want to run with the interpreter. To avoid using the full file path to your script (e.g., X:\General Assembly\test.py), make sure your Command Prompt is in the same directory as your Python script file. For example, to run our script test.py, run the following command:

Command Prompt
> py -3.5 test.py
Aw yeah!

Using a Batch File

If you don’t want to have to remember which version to use every time you run your Python program, you can also create a batch file to specify the command. For instance, create a batch file called test.bat with the contents:

test.bat
@echo off
py -3.5 test.py

This file simply runs your py command with the desired options. It includes an optional line “@echo off” that prevents the py command from being echoed to the screen when it’s run. If you find the echo helpful, just remove that line.

Now, if you want to run your Python program test.py, all you have to do is run this batch file.

Command Prompt
> test.bat
Aw yeah!

How to Run a Python Script on Linux/Mac

The python Command

Linux/Mac references the Python interpreter using the command python. Similar to the Windows py command, you can print out the version using the -V option.

Terminal
$ python -V
Python 2.7

For Linux/Mac, specifying the version of Python is a bit more complicated than Windows because the python commands are typically a bunch of symbolic links (symlinks) or shortcuts to other commands. Typically, python is a symlink to the command python2, python2 is a symlink to a command like python2.7, and python3 is a symlink to a command like python3.5. One way to view the different python commands available to you is using the following command:

Terminal
$ ls -1 $(which python)* | egrep 'python($|[0-9])' | egrep -v config
/usr/bin/python
/usr/bin/python2
/usr/bin/python2.7
/usr/bin/python3
/usr/bin/python3.5

To run our script, you can use the Python interpreter command and point it to the script.

Terminal
$ python3.5 test.py
Aw yeah!

However, there’s a better way of doing this.

Using a shebang

First, we’re going to modify the script so it has an additional line at the top starting with ‘#!’ and known as a shebang (shebangs, shebangs…).

test.py
#!/usr/bin/env python3.5
print("Aw yeah!")

This special shebang line tells the computer how to interpret the contents of the file. If you executed the file test.py without that line, it would look for special instruction bytes and be confused when all it finds is a text file. With that line, the computer knows that it should run the contents of the file as Python code using the Python interpreter.

You could also replace that line with the full file path to the interpreter:

#!/usr/bin/python3.5

However, different versions of Linux might install the Python interpreter in different locations, so this method can cause problems. For maximum portability, I always use the line with /usr/bin/env that looks for the python3.5 command by searching the PATH environment variable, but the choice is up to you.

Next, we’re going to set the permissions of this file to make it executable with this command:

Terminal
$ chmod +x test.py

Now we can run the program using the command ./test.py!

Terminal
$ ./test.py
Aw yeah!

Pretty sweet, eh?

Run the Python Interpreter Interactively

One of the awesome things about Python is that you can run the interpreter in an interactive mode. Instead of using your py or python command pointing to a file, run it by itself, and you’ll get something that looks like this:

Command Prompt
> py
Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 21:26:53) [MSC v.1916 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>

Now you get an interactive command prompt where you can type in individual lines of Python!

Command Prompt (Python Interpreter)
>>> print("Aw yeah!")
Aw yeah!

What’s great about using the interpreter in interactive mode is that you can test out individual lines of Python code without writing an entire program. It also remembers what you’ve done, just like in a script, so things like functions and variables work the exact same way.

Command Prompt (Python Interpreter)
>>> x = "Still got it."
>>> print(x)
Still got it.

How to Run a Python Script from a Text Editor

Depending on your workflow, you may prefer to run your Python program or Python script file directly from your text editor. Different text editors provide fancy ways of doing the same thing we’ve already done — pointing the Python interpreter at your Python code. To help you along, I’ve provided instructions on how to do this in four popular text editors.

  1. Notepad++
  2. VSCode
  3. Sublime Text
  4. Vim

1. Notepad++

Notepad++ is my favorite general purpose text editor to use on Windows. It’s also super easy to run a Python program from it.

Step 1: Press F5 to open up the Run… dialogue

Step 2: Enter the py command like you would on the command line, but instead of entering the name of your script, use the variable FULL_CURRENT_PATH like so:

py -3.5 -i "$(FULL_CURRENT_PATH)"

You’ll notice that I’ve also included a -i option to our py command to “inspect interactively after running the script”. All that means is it leaves the command prompt open after it’s finished, so instead of printing “Aw yeah!” and then immediately quitting, you get to see the Python program’s output.

Step 3: Click Run

2. VSCode

VSCode is a cross-platform text editor designed specifically to work with code, and I’ve recently become a big fan of it. Running a Python program from VSCode is a bit complicated to set up, but once you’ve done that, it works quite nicely.

Step 1: Go to the Extensions section by clicking this symbol or pressing CTRL+SHIFT+X.

Step 2: Search and install the extensions named Python and Code Runner, then restart VSCode.

Step 3: Right click in the text area and click the Run Code option or press CTRL+ALT+N to run the code.

Note: Depending on how you installed Python, you might run into an error here that says ‘python’ is not recognized as an internal or external command. By default, Python only installs the py command, but VSCode is quite intent on using the python command which is not currently in your PATH. Don’t worry, we can easily fix that.

Step 3.1: Locate your Python installation binary or download another copy from www.python.org/downloads. Run it, then select Modify.

Step 3.2: Click next without modifying anything until you get to the Advanced Options, then check the box next to Add Python to environment variables. Then click Install, and let it do its thing.

Step 3.3: Go back to VSCode and try again. Hopefully, it should now look a bit more like this:

A screenshot of a code editor showing how to run a Python script.

3. Sublime Text

Sublime Text is a popular text editor to use on Mac, and setting it up to run a Python program is super simple.

Step 1: In the menu, go to Tools → Build System and select Python.

A screenshot of a code editor showing how to run a Python script.

Step 2: Press Command+B, or in the menu, go to Tools → Build.

4. Vim

Vim is my text editor of choice when it comes to developing on Linux/Mac operating systems, and it can also be used to easily run a Python program.

Step 1: Enter the command :w !python3 and hit enter.

A terminal window showing how to run a Python script.

Step 2: Profit.

A terminal window showing how to run a Python script.

Now that you can successfully run your Python code, you’re well on your way to speaking parseltongue!


A Beginner’s Guide to Learn Python Programming



What Is Python? An Introduction

Python is one of the most popular and user-friendly programming languages out there. As a developer who’s learned a number of programming languages, Python is one of my favorites due to its simplicity and power. Whether I’m rapidly prototyping a new idea or developing a robust piece of software to run in production, Python is usually my language of choice.

The Python programming language is ideal for folks first learning to program. It abstracts away many of the more complicated elements of computer programming that can trip up beginners, and this simplicity gets you up-and-running much more quickly!

For instance, the classic “Hello world” program (it just prints out the words “Hello World!”) looks like this in C:
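#include <stdio.h>

int main(void) {
    printf("Hello World!\n");
    return 0;
}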

However, to understand everything that’s going on, you need to understand what #include means (am I excluding anyone?), how to declare a function, why there’s an “f” appended to the word “print,” etc., etc.
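By contrast, the same program in Python is a single line:

print("Hello World!")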

Not only is this an easier starting point, but as the complexity of your Python programming grows, this simplicity will make sure you’re spending more time writing awesome code and less time tracking down bugs! 

Since Python is popular and open-source, there’s a thriving community of Python application developers online with extensive forums and documentation for whenever you need help. No matter what your issue is, the answer is usually only a quick Google search away.

If you’re new to programming or just looking to add another language to your arsenal, I would highly encourage you to join our community.

What Type of Language is Python?

Named after the classic British comedy troupe Monty Python, Python is a general-purpose, interpreted, object-oriented, high-level programming language with dynamic semantics. That’s a bit of a mouthful, so let’s break it down.

General-Purpose

Python is a general-purpose language which means it can be used for a wide variety of development tasks. Unlike a domain-specific language that can only be used for specific types of applications (think JavaScript and HTML/CSS for web development), a general-purpose language like Python can be used for:

Web applications: Popular web frameworks like Django and Flask are written in Python.

Desktop applications: The Dropbox client is written in Python.

Scientific and numeric computing: Python is the top choice for data science and machine learning.

Cybersecurity: Python is excellent for data analysis, writing system scripts that interact with an operating system, and communicating over network sockets.
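To give a small taste of the web side, here’s a minimal Flask application; it assumes Flask has been installed with pip install flask.

from flask import Flask

app = Flask(__name__)  # create the application object


@app.route("/")  # map the root URL to this function
def hello():
    return "Hello World!"


if __name__ == "__main__":
    app.run()  # start a local development server

Run the script, and Flask serves the page locally (by default at http://127.0.0.1:5000).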

Interpreted

Python is an interpreted language, meaning Python program code must be run using the Python interpreter.

Traditional programming languages like C/C++ are compiled, meaning that before it can be run, the human-readable code is passed into a compiler (special program) to generate machine code — a series of bytes providing specific instructions to specific types of processors. However, Python is different. Since it’s an interpreted programming language, each line of human-readable code is passed to an interpreter that converts it to machine code at run time.

In other words, instead of having to go through the sometimes complicated and lengthy process of compiling your code before running it, you just point the Python interpreter at your code, and you’re off!

Part of what makes an interpreted language great is how portable it is. Compiled languages must be compiled for the specific type of computer they’re run on (i.e. think your phone vs. your laptop). For Python, as long as you’ve installed the interpreter for your computer, the exact same code will run almost anywhere!

Object-Oriented

Python is an Object-Oriented Programming (OOP) language, which means that all of its elements are broken down into things called objects. This object-oriented structure is very useful for software architecture and often makes it simpler to write large, complicated applications.

High-Level

Python is a high-level language which really just means that it’s simpler and more intuitive for a human to use. Low-level languages such as C/C++ require a much more detailed understanding of how a computer works. With a high-level language, many of these details are abstracted away to make your life easier.

For instance, say you have a list of three numbers — 1, 2, and 3 — and you want to append the number 4 to that list. In C, you have to worry about how the computer uses memory, understands different types of variables (i.e., an integer vs. a string), and keeps track of what you’re doing.

Implementing this in C code is rather complicated:
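#include <stdlib.h>

int main(void) {
    /* A sketch of one way to do it: manage a dynamic array by hand. */
    int *list = malloc(3 * sizeof(int));  /* memory for three integers */
    if (list == NULL) return 1;
    list[0] = 1;
    list[1] = 2;
    list[2] = 3;

    /* "Appending" the number 4 means reallocating the array. */
    int *bigger = realloc(list, 4 * sizeof(int));
    if (bigger == NULL) {
        free(list);
        return 1;
    }
    list = bigger;
    list[3] = 4;

    free(list);  /* and we have to remember to clean up */
    return 0;
}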

However, implementing this in Python code is much simpler:
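my_list = [1, 2, 3]  # a list object holding three numbers
my_list.append(4)    # append the number 4, and that's it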

Since a list in Python is an object, you don’t need to specifically define what the data structure looks like or explain to the computer what it means to append the number 4. You just say “list.append(4)”, and you’re good.

Under the hood, the computer is still doing all of those complicated things, but as a developer, you don’t have to worry about them! Not only does that make your code easier to read, understand, and debug, but it means you can develop more complicated programs much faster.

Dynamic Semantics

Python uses dynamic semantics, meaning that a variable’s type is determined while the program runs and can change as new values are assigned. Essentially, it’s just another aspect of Python being a high-level language.

In the list example above, a low-level language like C requires you to statically define the type of a variable. So if you define an integer x, set x = 3, and then set x = “pants”, the computer will get very confused. However, if you use Python to set x = 3, Python knows x is an integer. If you then set x = “pants”, Python knows that x is now a string.

In other words, Python lets you assign variables in a way that makes more sense to you than it does to the computer. It’s just another way that Python programming is intuitive.

It also gives you the ability to create a list whose elements have different types, like [1, 2, “three”, “four”]. Defining that in a language like C would be a nightmare, but in Python, that’s all there is to it.
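
Here’s a quick sketch of both ideas in action:

    x = 3
    print(type(x))     # <class 'int'>

    x = "pants"
    print(type(x))     # <class 'str'> (same variable, new type, no complaints)

    mixed = [1, 2, "three", "four"]    # elements of different types
    print(mixed)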

Being so powerful, flexible, and user-friendly, the Python language has become incredibly popular. Python’s popularity is important for a few reasons.

Python Programming is in Demand

If you’re looking for a new skill to help you land your next job, learning Python is a great move. Because of its versatility, Python is used by many top tech companies. Netflix, Uber, Pinterest, Instagram, and Spotify all build their applications using Python. It’s also a favorite programming language of folks in data science and machine learning, so if you’re interested in going into those fields, learning Python is a good first step. With all of the folks using Python, it’s a programming language that will still be just as relevant years from now.

Dedicated Community

Python developers have tons of support online. It’s open source with extensive documentation, and there are tons of articles and forum posts dedicated to it. As a professional Python developer, I rely on this community every day to get my code up and running as quickly and easily as possible.

There are also numerous Python libraries readily available online! If you ever need more functionality, someone on the internet has likely already written a library to do just that. All you have to do is download it, write the line “import <library>”, and off you go. Part of Python’s popularity in data science and machine learning is the widespread use of its libraries such as NumPy, Pandas, SciPy, and TensorFlow.
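
For instance, here’s what that workflow looks like with NumPy (assuming you’ve installed it first):

    # After downloading the library (for NumPy: "pip install numpy"):
    import numpy as np

    data = [1, 2, 3, 4]
    print(np.mean(data))    # 2.5, courtesy of someone else's hard work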

Conclusion

Python is a great way to start programming and a great tool for experienced developers. It’s powerful, user-friendly, and enables you to spend more time writing badass code and less time debugging it. With all of the libraries available, it will do almost anything you want it to.

The final answer to the question “What is Python?” Awesome. Python is awesome.

SQL: Using Data to Boost Business and Increase Efficiency

By

In today’s digital age, we’re constantly bombarded with information about new apps, transformative technologies, and the latest and greatest artificial intelligence systems. While these technologies may serve very different purposes in our lives, all of them share one thing in common: They rely on data. More specifically, they all use databases to capture, store, retrieve, and aggregate data. This raises the question: How do we actually interact with databases to accomplish all of this? The answer: We use Structured Query Language, or SQL (pronounced “sequel” or “ess-que-el”).

Put simply, SQL is the language of data — it’s a programming language that enables us to efficiently create, alter, request, and aggregate data from those mysterious things called databases. It gives us the ability to make connections between different pieces of information, even when we’re dealing with huge data sets. Modern applications are able to use SQL to deliver really valuable pieces of information that would otherwise be difficult for humans to keep track of independently. In fact, pretty much every app that stores any sort of information uses a database. This ubiquity means that developers use SQL to log, record, alter, and present data within the application, while analysts use SQL to interrogate that same data set in order to find deeper insights.

Finding SQL in Everyday Life

Think about the last time you looked up the name of a movie on IMDB. I’ll bet you quickly noticed an actress on the cast list and thought something like, “I didn’t realize she was in that,” then clicked a link to read her bio. As you were navigating through that app, SQL was responsible for returning the information you “requested” each time you clicked a link. This sort of capability is something we’ve come to take for granted these days.

Let’s look at another example that truly is cutting-edge, this time at the intersection of local government and small business. Many metropolitan cities are supporting open data initiatives in which public data is made easily accessible by opening up the databases that store it. As an example, consider Los Angeles building permit data, business listings, and census data.

Imagine you work at a real estate investment firm and are trying to find the next up-and-coming neighborhood. You could use SQL to combine the permit, business, and census data in order to identify areas that are undergoing a lot of construction, have high populations, and contain a relatively low number of businesses. This might be a great opportunity to purchase property in a soon-to-be thriving neighborhood! For the first time in history, it’s easy for a small business to leverage quantitative data from the government in order to make a highly informed business decision.
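
In SQL, that kind of analysis might look something like the sketch below. The table and column names here are hypothetical, since every open data portal organizes its data differently:

    -- Hypothetical tables: permits, businesses, and census, all keyed by ZIP code
    SELECT c.zip_code,
           c.population,
           COUNT(DISTINCT p.permit_id)   AS new_permits,
           COUNT(DISTINCT b.business_id) AS business_count
    FROM census AS c
    LEFT JOIN permits AS p    ON p.zip_code = c.zip_code
    LEFT JOIN businesses AS b ON b.zip_code = c.zip_code
    GROUP BY c.zip_code, c.population
    ORDER BY new_permits DESC, business_count ASC;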

Leveraging SQL to Boost Your Business and Career

There are many ways to harness SQL’s power to supercharge your business and career, in marketing and sales roles, and beyond. Here are just a few:

  • Increase sales: A sales manager could use SQL to compare the performance of various lead-generation programs and double down on those that are working.
  • Track ads: A marketing manager responsible for understanding the efficacy of an ad campaign could use SQL to compare the increase in sales before and after running the ad.
  • Streamline processes: A business manager could use SQL to compare the resources used by various departments in order to determine which are operating efficiently.

SQL at General Assembly

At General Assembly, we know businesses are striving to transform their data from raw facts into actionable insights. The primary goal of our data analytics curriculum, from workshops to full-time courses, is to empower people to access this data in order to answer their own business questions in ways that were never possible before.

To accomplish this, we give students the opportunity to use SQL to explore real-world data such as Firefox usage statistics, Iowa liquor sales, or Zillow’s real estate prices. Our full-time Data Science Immersive and part-time Data Analytics courses help students build the analytical skills needed to turn the results of those queries into clear and effective business recommendations. On a more introductory level, after just a couple of hours in one of our SQL workshops, students are able to query multiple data sets with millions of rows.

Meet Our Expert

Michael Larner is a passionate leader in the analytics space who specializes in using techniques like predictive modeling and machine learning to deliver data-driven impact. A Los Angeles native, he has spent the last decade consulting with hundreds of clients, including 50-plus Fortune 500 companies, to answer some of their most challenging business questions. Additionally, Michael empowers others to become successful analysts by leading trainings and workshops for corporate clients and universities, including General Assembly’s part-time Data Analytics course and SQL/Excel workshops in Los Angeles.

“In today’s fast-paced, technology-driven world, data has never been more accessible. That makes it the perfect time — and incredibly important — to be a great data analyst.”

– Michael Larner, Data Analytics Instructor, General Assembly Los Angeles

Harnessing the Power of Data for Disaster Relief

By

Data is the engine driving today’s digital world. From major companies to government agencies to nonprofits, business leaders are hunting for talent that can help them collect, sort, and analyze vast amounts of data — including geodata — to tackle the world’s biggest challenges.

In the case of emergency management, disaster preparedness, response, and recovery, this means using data to expertly identify, manage, and mitigate the risks of destructive hurricanes, intense droughts, raging wildfires, and other severe weather and climate events. And the pressure to make smarter data-driven investments in disaster response planning and education isn’t going away anytime soon — since 1980, the U.S. has suffered 246 weather and climate disasters whose losses each topped $1 billion, according to the National Centers for Environmental Information.

Employing creative approaches for tackling these pressing issues is a big reason why New Light Technologies (NLT), a leading company in the geospatial data science space, joined forces with General Assembly’s (GA) Data Science Immersive (DSI) course, a hands-on intensive program that fosters job-ready data scientists. Global Lead Data Science Instructor at GA, Matt Brems, and Chief Scientist and Senior Consultant at NLT, Ran Goldblatt, recognized a unique opportunity to test-drive a collaboration between DSI students and NLT’s consulting work for the Federal Emergency Management Agency (FEMA) and the World Bank.

The goal for DSI students: build data solutions that address real-world emergency preparedness and disaster response problems using leading data science tools and programming languages that drive visual, statistical, and data analyses. The partnership has so far produced three successful cohorts with nearly 60 groups of students across campuses in Atlanta, Austin, Boston, Chicago, Denver, New York City, San Francisco, Los Angeles, Seattle, and Washington, D.C., who learn and work together through GA’s Connected Classroom experience.

Taking on Big Problems With Smart Data

DSI students present at NLT’s Washington, D.C. office.

“GA is a pioneering institution for data science, so many of its goals coincide with ours. It’s what also made this partnership a unique fit. When real-world problems are brought to an educational setting with students who are energized and eager to solve concrete problems, smart ideas emerge,” says Goldblatt.

Over the past decade, NLT has supported the ongoing operation, management, and modernization of information systems infrastructure for FEMA, providing the agency with support for disaster response planning and decision-making. The World Bank, another NLT client, faces similar obstacles in its efforts to provide funding for emergency prevention and preparedness.

These large-scale issues served as the basis for the problem statements NLT presented to DSI students, who were challenged to use their newfound skills — from developing data algorithms and analytical workflows to employing visualization and reporting tools — to deliver meaningful, real-time insights that FEMA, the World Bank, and similar organizations could deploy to help communities impacted by disasters. Working in groups, students dived into problems that focused on a wide range of scenarios, including:

  • Using tools such as Google Street View to retrieve pre-disaster photos of structures, allowing emergency responders to easily compare pre- and post-disaster aerial views of damaged properties.
  • Optimizing evacuation routes for search and rescue missions using real-time traffic information.
  • Creating damage estimates by pulling property values from real estate websites like Zillow.
  • Extracting drone data to estimate the quality of building rooftops in Saint Lucia.

“It’s clear these students are really dedicated and eager to leverage what they learned to create solutions that can help people. With DSI, they don’t just walk away with an academic paper or fancy presentation. They’re able to demonstrate they’ve developed an application that, with additional development, could possibly become operational,” says Goldblatt.

Students who participated in the engagements received the opportunity to present their work — using their knowledge in artificial intelligence and machine learning to solve important, tangible problems — to an audience that included high-ranking officials from FEMA, the World Bank, and the United States Agency for International Development (USAID). The students’ projects, which are open source, are also publicly available to organizations looking to adapt, scale, and implement these applications for geospatial and disaster response operations.

“In the span of nine weeks, our students grew from learning basic Python to being able to address specific problems in the realm of emergency preparedness and disaster response,” says Brems. “Their ability to apply what they learned so quickly speaks to how well-qualified GA students and graduates are.”

Here’s a closer look at some of those projects, the lessons learned, and students’ reflections on how GA’s collaboration with NLT impacted their DSI experience.

Leveraging Social Media to Map Disasters

The NLT engagements feature student work that uses social media to identify “hot spots” for disaster relief.

During disasters, one of the biggest challenges for disaster relief organizations is not only mapping and alerting users about the severity of disasters but also pinpointing hot spots where people require assistance. While responders employ satellite and aerial imagery, ground surveys, and other hazard data to assess and identify affected areas, communities on the ground often turn to social media platforms to broadcast distress calls and share status updates.

Cameron Bronstein, a former botany and ecology major from New York, worked with group members to build a model that analyzes and classifies social media posts to determine where people need assistance during and after natural disasters. The group collected tweets related to Hurricane Harvey of 2017 and Hurricane Michael of 2018, which inflicted billions of dollars of damage in the Caribbean and Southern U.S., as test cases for their proof-of-concept model.

“Since our group lacked premium access to social media APIs, we sourced previously collected and labeled text-based data,” says Bronstein. “This involved analyzing and classifying several years of text language — including data sets that contained tweets, and transcribed phone calls and voice messages from disaster relief organizations.”

Reflecting on what he enjoyed most while working on the NLT engagement, Bronstein states, “Though this project was ambitious and open to interpretation, overall, it was a good experience and introduction to the type of consulting work I could end up doing in the future.”

Quantifying the Economic Impact of Natural Disasters

Students use interactive data visualization tools to compile and display their findings.

Prior to enrolling in General Assembly’s DSI course in Washington, D.C., Ashley White learned early in her career as a management consultant how to use data to analyze and assess difficult client problems. “What was central to all of my experiences was utilizing the power of data to make informed strategic decisions,” states White.

It was White’s interest in using data for social impact that led her to enroll in DSI, where she could be exposed to real-world applications of data science principles and best practices. Her DSI group’s task: developing a model for quantifying the economic impact of natural disasters on the labor market. The group selected Houston, Texas as its test case for defining and identifying reliable data sources to measure the economic impact of natural disasters such as Hurricane Harvey.

As they tackled their problem statement, the group focused on NLT’s intended goal, while effectively breaking their workflow into smaller, more manageable pieces. “As we worked through the data, we discovered it was hard to identify meaningful long-term trends. As scholarly research shows, most cities are pretty resilient post-disaster, and the labor market bounces back quickly as the city recovers,” says White.

The team compiled their results using the analytics and data visualization tool Tableau, incorporating compelling visuals and story taglines into a streamlined, dynamic interface. For version control, White and her group used GitHub to manage and store their findings, and share recommendations on how NLT could use the group’s methodology to scale their analysis for other geographic locations. In addition to the group’s key findings on employment fluctuations post-disaster, the team concluded that while natural disasters are growing in severity, aggregate trends around unemployment and similar data are becoming less predictable.

Cultivating Data Science Talent in Future Engagements

Due to the success of the partnership’s three engagements, GA and NLT have taken steps to formalize future iterations of their collaboration with each new DSI cohort. Additionally, mutually beneficial partnerships with leading organizations such as NLT present a unique opportunity to uncover innovative approaches for managing and understanding the numerous ways data science can support technological systems and platforms. The partnership has also granted aspiring data scientists real-world experience and visibility with key decision-makers who are at the forefront of emergency and disaster management.

“This is only the beginning of a more comprehensive collaboration with General Assembly,” states Goldblatt. “By leveraging GA’s innovative data science curriculum and developing training programs for capacity building that can be adopted by NLT clients, we hope to provide students with essential skills that prepare them for the emerging, yet competitive, geospatial data job market. Moreover, students get the opportunity to better understand how theory, data, and algorithms translate to actual tools, as well as create solutions that can potentially save lives.”

***

New Light Technologies, Inc. (NLT) provides comprehensive information technology solutions for clients in government, commercial, and non-profit sectors. NLT specializes in DevOps enterprise-scale systems integration, development, management, and staffing and offers a unique range of capabilities from Infrastructure Modernization and Cloud Computing to Big Data Analytics, Geospatial Information Systems, and the Development of Software and Web-based Visualization Platforms.

In today’s rapidly evolving technological world, successfully developing and deploying digital geospatial software technologies and integrating disparate data across large complex enterprises with diverse user requirements is a challenge. Our innovative solutions for real-time integrated analytics lead the way in developing highly scalable virtualized geospatial microservices solutions. Visit our website to find out more and contact us at https://NewLightTechnologies.com.

A Machine Learning Guide for Beginners

By

Ever wonder how apps, websites, and machines seem to be able to predict the future? Like how Amazon knows what your next purchase may be, or how self-driving cars can safely navigate a complex traffic situation?

The answer lies in machine learning.

Machine learning is a branch of artificial intelligence (AI) that often leverages Python to build systems that can learn from and make decisions based on data. Instead of explicitly programming the machine to solve the problem, we show it how it was solved in the past and the machine learns the key steps that are required to do the same task on its own.

Machine learning is revolutionizing every industry by bringing greater value to companies’ years of saved data. Leveraging machine learning enables organizations to make more precise decisions instead of following intuition.

There’s an explosive amount of innovation around machine learning within organizations, especially given that the technology is still in its early days. Many companies have invested heavily in building recommendation and personalization engines for their customers. But machine learning is also being applied to a huge variety of back-office use cases, like forecasting sales, identifying production bottlenecks, building efficient traffic routing systems, and more.

Machine learning algorithms fall into two broad categories: supervised and unsupervised learning.

Supervised Learning

Supervised learning tries to predict a future value by relying on training from past data. For instance, Netflix’s movie-recommendation engine is most likely supervised. It uses a user’s past movie ratings to train the model, then predicts what their rating would likely be for movies they haven’t seen and recommends the ones that score highly.

Supervised learning enjoys more commercial success than unsupervised learning. Some common use cases include fraud detection, image recognition, credit scoring, product recommendation, and malfunction prediction.
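
As a toy illustration, here’s a minimal sketch in Python using scikit-learn (assuming it’s installed), with made-up ratings data in the spirit of the Netflix example:

    from sklearn.linear_model import LinearRegression

    # Made-up training data: each row is one user's ratings of two movies;
    # y holds that user's rating of a third movie we want to predict.
    X = [[5, 4], [1, 2], [4, 5], [2, 1]]
    y = [5, 1, 4, 2]

    model = LinearRegression().fit(X, y)    # train on past ratings
    print(model.predict([[5, 5]]))          # predict a rating for a new user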

Unsupervised Learning

Unsupervised learning is about uncovering hidden structures within data sets. It’s helpful in identifying segments or groups, especially when there is no prior information available about them. These algorithms are commonly used in market segmentation. They enable marketers to identify target segments in order to maximize revenue, create anomaly detection systems to identify suspicious user behavior, and more.

For instance, Netflix may know how many customers it has, but wants to understand what kind of groupings they fall into in order to offer services targeted to them. The streaming service may have 50 or more different customer types, or segments, but its data team doesn’t know this yet. If the company knows that most of its customers are in the “families with children” segment, it can invest in building specific programs to meet those customer needs. But, without that information, Netflix’s data experts can’t create a supervised machine learning system.

So, they build an unsupervised machine learning algorithm instead, which identifies and extracts various customer segments within the data and allows them to identify groups such as “families with children” or “working professionals.”
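
A minimal sketch of that kind of segmentation, again with scikit-learn and made-up viewing data:

    from sklearn.cluster import KMeans

    # Made-up customer data: [hours watched per week, kids' titles watched]
    customers = [[20, 15], [22, 18], [5, 0], [6, 1], [40, 2], [38, 3]]

    # Ask for three clusters; no labels are provided up front.
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
    print(model.labels_)    # the segment each customer was assigned to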

How Python, SQL, and Machine Learning Work Together

To understand how SQL, Python, and machine learning relate to one another, let’s think of them as a factory. As a concept, a factory can produce anything if it has the right tools. More often than not, the tools used in factories are pretty similar (e.g., hammers and screwdrivers).

What’s amazing is that there can be factories that use those same tools but produce completely different products (e.g., tables versus chairs). The difference between these factories is not the tools, but rather how the factory workers use their expertise to leverage these tools and produce a different result.

In this case, our goal would be to produce a machine learning model, and our tools would be SQL and Python. We can use SQL to extract data from a database and Python to shape the data and perform the analyses that produce a machine learning model. Your knowledge of machine learning will ultimately enable you to achieve your goal.

To round out the analogy, an app developer, with no understanding of machine learning, might choose to use SQL and Python to build a web app. Again, the tools are the same, but the practitioner uses their expertise to apply them in a different way.
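
To make the factory analogy concrete, here’s a condensed sketch that uses Python’s built-in sqlite3 module to run SQL against a hypothetical sales database, then hands the results to scikit-learn to produce a model:

    import sqlite3
    from sklearn.linear_model import LinearRegression

    # SQL extracts the raw data (the "sales" table here is hypothetical).
    conn = sqlite3.connect("company.db")
    rows = conn.execute("SELECT ad_spend, revenue FROM sales").fetchall()
    conn.close()

    # Python shapes the data...
    X = [[ad_spend] for ad_spend, _ in rows]
    y = [revenue for _, revenue in rows]

    # ...and machine learning turns it into a model.
    model = LinearRegression().fit(X, y)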

Machine Learning at Work

A wide variety of roles can benefit from machine learning know-how. Here are just a few:

  • Data scientist or analyst: Data scientists or analysts use machine learning to answer specific business questions for key stakeholders. They might help their company’s user experience (UX) team determine which website features most heavily drive sales.
  • Machine learning engineer: A machine learning engineer is a software engineer specifically responsible for writing code that leverages machine learning models. For example, they might build a recommendation engine that suggests products to customers.
  • Research scientist: A machine learning research scientist develops new technologies like computer vision for self-driving cars or advancements in neural networks. Their findings enable data professionals to deliver new insights and capabilities.

Machine Learning in Everyday Life: Real-World Examples

While machine learning-powered innovations like voice-activated robots seem ultra-futuristic, the technology behind them is actually widely used today. Here are some great examples of how machine learning impacts your daily life:

  • Recommendation engines: Think about how Spotify makes music recommendations. The recommendation engine peeks at the songs and albums you’ve listened to in the past, as well as tracks listened to by users with similar tastes. It then starts to learn the factors that influence your music preferences and stores them in a database, recommending similar music that you haven’t listened to — all without writing any explicit rules!
  • Voice-recognition technology: We’ve seen the emergence of voice assistants like Amazon’s Alexa and Google’s Assistant. These interactive systems are based entirely on voice-recognition technology powered by machine learning models.
  • Risk mitigation and fraud prevention: Insurers and creditors use machine learning to make accurate predictions on fraudulent claims based on previous consumer behavior, rather than relying on traditional analysis or human judgement. They also can use these analyses to identify high-risk customers. Both of these analyses help companies process requests and claims more quickly and at a lower cost.
  • Photo identification via computer vision: Machine learning is common among photo-heavy services like Facebook and the home-improvement site Houzz. Each of these services uses computer vision — an aspect of machine learning — to automatically tag objects in photos without human intervention. For Facebook, these tend to be faces, whereas Houzz seeks to identify individual objects and link to a place where users can purchase them.

Why You and Your Business Need to Understand Data Science

As the world becomes increasingly data-driven, learning to leverage key technologies like machine learning — along with the programming languages Python (which helps power machine learning algorithms) and SQL — will create endless possibilities for your career and your organization. There are many pathways into this growing field, as detailed by our Data Science Standards Board, and now’s a great time to dive in.

In our paper A Beginner’s Guide to SQL, Python, and Machine Learning, we break down these three technologies. These skills go beyond data to bring delight, efficiency, and innovation to countless industries. They empower people to drive businesses forward with a speed and precision previously unknown.

Individuals can use data know-how to improve their problem-solving skills, become more cross-functional, build innovative technology, and more. For companies, leveraging these technologies means smarter use of data. This can lead to greater efficiency, employees who are empowered to use data in innovative ways, and business decisions that drive revenue and success.

Download the paper to learn more.

Boost your business and career acumen with data.
Find out why machine learning, Python, and SQL are the top technologies to know.