
Data visualisation for predictive analytics

Author: Achyuthuni Sri Harsha
Data visualisation can be performed in many ways. There are infinite ways to visualise data, and what works depends on the patterns in the data. In this post, we try to categorise the visualisation of data for regression and classification problems.
Every regression, classification and clustering problem has some or all of the following assumptions:
1. A change in the independent variables changes the dependent variable. In other words, there is a relationship between the dependent variable and the independent variables. Before building a model, it is advisable to visualise this relationship.
2. Assumptions on the distribution of the dependent or independent variables. For example, for a Gaussian Naive Bayes classifier, the independent variables should follow a normal distribution.
3. Assumptions about relationships between the independent variables. For example, for linear regression, the independent variables should not be correlated with one another.
4. The dataset is not too unbalanced: the frequency of the smaller class should be significant when compared to the frequency of the larger class.
5. The time series of data/features is stationary.
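
Some of these assumptions can also be checked numerically. Below is a minimal sketch, assuming a DataFrame df like the one loaded further down, with a categorical target column; the check_assumptions helper is hypothetical, not part of the original analysis:

# A minimal sketch of numerical checks for assumptions 3-5
# (check_assumptions is a hypothetical helper)
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def check_assumptions(df, target):
    # Assumption 4: class balance of the target variable
    print(df[target].value_counts(normalize=True))
    # Assumption 3: pairwise correlation among the numeric independent variables
    print(df.select_dtypes('number').corr())
    # Assumption 5: augmented Dickey-Fuller test for stationarity
    # (only meaningful if the rows are ordered in time)
    for col in df.select_dtypes('number').columns:
        p_value = adfuller(df[col].dropna())[1]
        print(col, 'stationary' if p_value < 0.05 else 'possibly non-stationary')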

Apart from validating the assumptions and identifying trends in the data, data visualisation can also be used for gathering insights and feature engineering.
The below example is from the marketing department of a consulting firm. The problem is to identify the projects that they can win.

# Importing the necessary libraries
import pandas as pd
import numpy as np
import os 
import matplotlib.pyplot as plt
from statsmodels.graphics.mosaicplot import mosaic
import seaborn as sns
%matplotlib inline
# loading the data
path="data/marketing dept.csv"
df = pd.read_csv(path)
df.head()
  reporting_status product industry region  strength_in_segment  profit_for_customer  sales_value  profit_perc  joint_bid_portion
0             Lost       F      Cap    Oth                   57                1.225          6.5           64                 59
1             Lost       L      Def     UK                   51                1.469          9.9           56                 58
2             Lost      Lo      Cli     UK                   79                0.887          7.0           59                 48
3             Lost       G      Fin     UK                   55                1.316          8.9           34                 41
4              Won       G      Sec     UK                   32                1.010          5.7           43                 63
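
Before plotting, it helps to confirm how pandas has typed each column, since the helper functions later in this post split columns by dtype (object columns are treated as categorical, numeric columns as continuous):

# Check column dtypes and basic summary statistics
print(df.dtypes)
df.describe()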

Univariate analysis

The univariate analysis deals with EDA on one variable alone. In describing or characterising the observations of an individual variable, three basic properties are of interest:
1. The location of observations, or how large or small the values of the individual observations are
2. The dispersion (sometimes called scale or spread) of the observations
3. The distribution of the observations

Univariate plots provide one way to find out about those properties. There are two basic kinds of univariate plots:
1. Enumerative plots, or plots that show every observation
2. Summary plots, which generalise the data into a simplified representation

Enumerative plots

Index Plot/Univariate Scatter Diagram

The most common enumerative plot is the index plot. It displays the values of a single variable for each observation using symbols plotted relative to the observation number.

plt.plot(df.sales_value, 'o', color='black')
plt.title("Index plot")
plt.xlabel('Observation number')
plt.ylabel('Sales Value');

[Index plot of sales_value]

From the above plot, we can infer that there are around 3000 observations of sales_value, and that they appear randomly scattered across the observation index.

Strip Plot/Strip Chart (univariate scatter diagram)

A strip plot displays the values of a single variable as symbols plotted along a line. It is a basic plot from which we can see the spread of the data.

ax = sns.stripplot(x=df.sales_value)
ax.set(xlabel = 'Sales value', title = 'Strip chart');

[Strip chart of sales_value]

Dot Plot/Dot Chart

A dot plot displays the values of a single variable plotted along a line, generally after sorting the observations. It can help us determine the distribution of the data, and also identify its continuity.

plt.plot(df.sort_values(by = 'sales_value').reset_index().sales_value, 'o', color='black')
plt.title("Dot plot")
plt.ylabel('Sales Value');

[Dot plot of sorted sales_value]

Looking at the plot, most of the data lies within 6-12, while the frequency of the data decreases as we move away from the mean. The graph is also symmetric. This indicates that the distribution could be normal.

Univariate Summary Plots

Summary plots give a more concise expression of the location, dispersion, and distribution of a variable than an enumerative plot, but this comes at the expense of some loss of information: in a summary plot, it is no longer possible to retrieve individual data values. This loss is usually offset by the gain in understanding that results from the efficient representation of the data. Summary plots generally prove much better than enumerative plots at revealing the distribution of the data.

Box plot

A box plot represents statistical data by drawing a rectangle that spans the interquartile range (from the first to the third quartile), usually with a vertical line inside to indicate the median value. Whiskers extend on either side of the rectangle to show the range of the rest of the data, with points beyond them drawn as outliers.

ax = sns.boxplot(x = df.sales_value)
ax.set(xlabel = 'Sales value', title = 'Box Chart');

[Box plot of sales_value]

Histograms

Two other common summary plots are:

Histograms: a type of bar chart that displays the counts or relative frequencies of values falling in different class intervals or ranges.
Density plots: a plot of the local relative frequency or density of points along the number line or x-axis of a plot. Where points occur more frequently, the local density will be greater.

# For continuous data
ax = df.sales_value.plot.hist(alpha = 0.75)
# Overlay a crude frequency trace: counts of each distinct sales_value
df.groupby('sales_value')['sales_value'].count().plot()
ax.set(xlabel = 'Sales value', title = 'Histogram');

[Histogram of sales_value with frequency trace]
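
The overlaid line above is a crude frequency trace built from value counts. As an alternative sketch, seaborn's kdeplot draws a smoothed kernel density estimate directly:

# A smoothed kernel density estimate of sales_value
ax = sns.kdeplot(df.sales_value)
ax.set(xlabel = 'Sales value', title = 'Density plot');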

Q-Q plot

In statistics, a Q-Q (quantile-quantile) plot is a probability plot, which is a graphical method for comparing two probability distributions by plotting their quantiles against each other.

If the two distributions being compared are similar, the points in the Q-Q plot will approximately lie on the line y = x. If the distributions are linearly related, the points in the Q-Q plot will approximately lie on a line, but not necessarily on the line y = x. Q-Q plots can also be used as a graphical means of estimating parameters in a location-scale family of distributions.

A Q-Q plot is used to compare the shapes of distributions, providing a graphical view of how properties such as location, scale, and skewness are similar or different in the two distributions.

Below is a Q-Q plot of the sales data against a normal distribution:

from scipy import stats
stats.probplot(df.sales_value, plot=plt);

[Q-Q plot of sales_value against a normal distribution]

From the above plot, the points lie close to the straight line, so the distribution is approximately normal.
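
As a numerical companion to the Q-Q plot, the Shapiro-Wilk test from scipy checks the same hypothesis; a large p-value means we cannot reject normality:

# Shapiro-Wilk normality test on sales_value
statistic, p_value = stats.shapiro(df.sales_value)
print('p-value: {:.3f}'.format(p_value))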

Bar chart

Whereas the above plots are applicable to continuous data, a simple bar chart can help us with categorical data.

df.groupby('region')['region'].count().plot.bar().set(xlabel = 'Region', title = 'Bar chart');

[Bar chart of counts by region]

Combining the univariate EDA plots

The below code will do the following for all the columns in the dataset:
1. For continuous data, it will plot the index plot, box plot, histogram and Q-Q plot against a normal distribution
2. For categorical data, it will plot the bar chart

def univariate_analysis(dataset):
    # For categorical data: bar chart of counts per category
    for i in dataset.select_dtypes(exclude=['int', 'int64', 'float', 'float64']).columns:
        dataset.groupby(i)[i].count().plot.bar()
        plt.title(i)
        plt.show()

    # For continuous data: index plot, Q-Q plot, box plot and histogram
    for i in dataset.select_dtypes(include=['int', 'int64', 'float', 'float64']).columns:
        # Index plot
        plt.subplot(221)
        plt.plot(dataset[i], 'o', color='black')
        plt.xlabel(i)

        # Q-Q plot against a normal distribution
        plt.subplot(222)
        stats.probplot(dataset[i], plot=plt)
        plt.title(i)

        # Box plot
        plt.subplot(223)
        sns.boxplot(x=dataset[i])

        # Histogram with frequency trace
        plt.subplot(224)
        dataset[i].plot.hist(alpha=0.75)
        dataset.groupby(i)[i].count().plot()

        plt.tight_layout()
        plt.show()
univariate_analysis(df)

[Univariate plots: bar charts for each categorical column; index, Q-Q, box and histogram panels for each continuous column]

Bivariate analysis

The bivariate analysis deals with visualisations between two variables. It is used to identify the relationship between the dependent variable and an independent variable. The dependent and independent variables can be of the following types:

Problem          Independent var   Dependent var
Classification   Categorical       Categorical
Classification   Continuous        Categorical
Regression       Categorical       Continuous
Regression       Continuous        Continuous

For all four types, we want to identify the relationship between the dependent variable and the independent variable.

Classification Visualisations

First, let us consider the classification problem. Let's say we have to predict the reporting status of the bid. We have three categorical independent variables and five continuous independent variables.

Joint Histograms

The five continuous variables are:
1. Strength in segment
2. Profit for customer
3. Sales value
4. Profit percentage
5. Joint bid portion
For these variables, we can look at joint histograms: histograms of the same variable for each class, drawn on common axes. What we are trying to see is the overlap between the distributions of the two classes. If the overlap is small, that variable can be a good predictor, and vice versa.

bi_con_cat = df.groupby(['reporting_status'])['strength_in_segment'].plot.hist(alpha = 0.5)
plt.xlabel('strength_in_segment')
plt.legend(df.groupby(['reporting_status'])['strength_in_segment'].count().axes[0].tolist())
plt.title('Joint histogram');

[Joint histogram of strength_in_segment by reporting_status]

bi_con_cat = df.groupby(['reporting_status'])['profit_for_customer'].plot.hist(alpha = 0.5)
plt.xlabel('profit_for_customer')
plt.legend(df.groupby(['reporting_status'])['profit_for_customer'].count().axes[0].tolist())
plt.title('Joint histogram');

[Joint histogram of profit_for_customer by reporting_status]

From the above graphs, we can see that profit_for_customer explains the status of the bid better than strength_in_segment. We can also compare the mean, variance and distribution of each independent variable between the classes.

In a decision tree, the tree could split with profit_for_customer > 1 classified as 'Lost' and profit_for_customer < 1 as 'Won'. In logistic regression, the pseudo R-squared will be greater for profit_for_customer than for strength_in_segment. Similar thinking can be applied to SVM, Naive Bayes classifiers, etc.
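
A quick numerical summary by class can back up what the joint histograms show; if the class-wise means differ by much more than the spread within each class, the variable is likely a useful predictor:

# Class-wise summary statistics for the two variables compared above
df.groupby('reporting_status')[['strength_in_segment', 'profit_for_customer']].describe()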

Mosaic Plots

The three categorical variables are:
1. Product
2. Industry
3. Region
For these variables, a mosaic plot will be useful. In a mosaic plot, the area of each rectangle is proportional to the frequency of that combination of classes. On the x-axis we have the categorical independent variable, and on the y-axis we have the dependent variable. Using this, we can see the relative frequencies of 'Won' and 'Lost' within each category of the independent variable.

# from statsmodels.graphics.mosaicplot import mosaic
mosaic(df, ['product', 'reporting_status']);

[Mosaic plot of product by reporting_status]

For example, the ratio of 'Lost' to 'Won' cases is about the same for products 'G', 'Li' and 'P'. Product 'F' has more wins than average, while product 'L' has more losses than average. The products 'C' and 'Lo' have too few observations to draw statistically significant conclusions.

Intuitively, in logistic regression, the products 'G', 'Li' and 'P' could be treated as base classes, with 'F' having a positive coefficient and 'L' a negative coefficient. In decision trees, the products 'G', 'Li' and 'P' would fall in one branch while products 'L' and 'F' fall in different branches. Similar thinking can be applied to SVM, Naive Bayes classifiers, etc.
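
The proportions read off the mosaic plot can also be tabulated with a crosstab, which makes the win rate per product explicit:

# Win/loss proportions per product: the numerical view of the mosaic plot
pd.crosstab(df['product'], df['reporting_status'], normalize='index')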

Combining the classification EDA plots

The below code will do the following for all the columns in the dataset:
1. For continuous data, it will plot the joint histograms
2. For categorical data, it will plot the mosaic plot

def classification_bivariate_analysis(dataset, dependant_variable):
    # For continuous data
    for i in (dataset.select_dtypes(include=['int', 'int64', 'float', 'float64']).columns):
        bi_con_cat = dataset.groupby([dependant_variable])[i].plot.hist(alpha = 0.5)
        plt.xlabel(i)
        plt.legend(dataset.groupby([dependant_variable])[i].count().axes[0].tolist())
        plt.title(i)
        plt.show();

    # For categorical data
    for i in (dataset.select_dtypes(exclude=['int', 'int64', 'float', 'float64']).columns):
        if(i != dependant_variable):
            mosaic(dataset, [i, dependant_variable]);
            plt.show();
classification_bivariate_analysis(df, 'reporting_status')

[Joint histograms for each continuous column and mosaic plots for each categorical column against reporting_status]

Bivariate Regression Visualisations

Let us consider the regression problem. Let's say we have to predict sales_value of the successful bids. We have three categorical independent variables and four continuous independent variables.

successful_bids = df[df['reporting_status'] == 'Won']

Scatter plots

There are four continuous variables:
1. Strength in segment
2. Profit for customer
3. Profit percentage
4. Joint bid portion
Scatter plots show how much, and in what way, one variable is affected by another. We can use them to identify how changing the independent variable changes the dependent variable, and whether we need to apply any transformations to the variables.

plt.scatter(successful_bids['joint_bid_portion'], successful_bids['sales_value'])
plt.xlabel('joint_bid_portion')
plt.ylabel('sales_value')
plt.title('Scatter plot');

[Scatter plot of sales_value against joint_bid_portion]

In the above plot, there seems to be no relationship between joint_bid_portion and sales_value. We can also observe a change in the behaviour of joint_bid_portion beyond 80.
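
The visual impression of 'no relationship' can be checked numerically with the sample correlation; a value close to zero supports it:

# Pearson correlation between joint_bid_portion and sales_value
successful_bids['joint_bid_portion'].corr(successful_bids['sales_value'])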

Box plots

The three categorical variables are:
1. Product
2. Industry
3. Region
For these variables, a box plot will be useful. While showing the relative medians among the classes, it also lets us visualise the variation and distribution of the data.

bi_variate_boxplot = sns.boxplot(x="industry", y="sales_value", data=successful_bids)
bi_variate_boxplot.set(title = 'Box Chart');

[Box plot of sales_value by industry]

From the above plot, the median sales for 'Sec', 'Air', 'Ban', 'Cap', 'Con', 'Oth', 'Def' and 'Agr' are similar, with similar distributions. The medians of the 'Ins', 'OG', 'Gov', 'Hea' and 'Whi' classes seem to be higher, and the medians of 'Mob', 'Fin' and 'Tel' are lower. In a linear regression, the industries 'Sec', 'Air', 'Ban', 'Cap', 'Con', 'Oth', 'Def' and 'Agr' could be treated as base classes, while 'Ins', 'OG', 'Gov', 'Hea' and 'Whi' would have positive coefficients and 'Mob', 'Fin' and 'Tel' negative ones.
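
Sorting the class-wise medians gives the same ordering numerically, which is a useful sanity check before encoding the industries for a regression:

# Median sales_value per industry, sorted: the numerical view of the box plot
successful_bids.groupby('industry')['sales_value'].median().sort_values()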

Combining the bivariate regression EDA plots

The below code will do the following for all the columns in the dataset:
1. For continuous data, it will plot the scatter plots
2. For categorical data, it will plot the box plots

def regression_bivariate_analysis(dataset, dependant_variable):
    # For continuous data: scatter plot against the dependent variable
    for i in dataset.select_dtypes(include=['int', 'int64', 'float', 'float64']).columns:
        if(i != dependant_variable):
            plt.scatter(dataset[i], dataset[dependant_variable])
            plt.xlabel(i)
            plt.ylabel(dependant_variable)
            plt.show()

    # For categorical data: box plot of the dependent variable per class
    for i in dataset.select_dtypes(exclude=['int', 'int64', 'float', 'float64']).columns:
        bi_variate_boxplot = sns.boxplot(x=i, y=dependant_variable, data=dataset)
        bi_variate_boxplot.set(title = i)
        plt.show()
regression_bivariate_analysis(successful_bids, 'sales_value')

[Scatter plots for each continuous column and box plots for each categorical column against sales_value]
