Tag: data analysis


Convert Jupyter notebooks to PDF

Jupyter Lab is the next-generation web-based UI for Jupyter notebook users. It provides a tab-based programming interface that is highly extensible and interactive, and it supports 40+ programming languages. We have already discussed how we can use Jupyter notebooks for interactive data analysis with SQL Server. With the help of Jupyter notebooks, we can keep headings, comments, code, output, and advanced charts and visuals in a single document in an orderly fashion, which helps Data Scientists and Data Analysts deliver highly interactive presentations. In case you have already installed Jupyter notebooks and want to know how to change their home directory, visit the blog “Change Jupyter Notebook startup folder on Windows and Mac OS”. Let’s discuss how we can convert Jupyter notebooks to PDF documents directly from the web browser or by using the nbconvert command from the command prompt.
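
As a quick illustration, below is a minimal sketch of the same conversion done through nbconvert’s Python API rather than the command prompt; the notebook file name is a hypothetical placeholder, and a LaTeX distribution must already be installed for PDF export to work.

```python
# Minimal sketch: convert a notebook to PDF through nbconvert's Python API.
# Assumes nbconvert, nbformat, and a LaTeX distribution are installed.
# "analysis.ipynb" is a hypothetical file name, not one from the original post.
import nbformat
from nbconvert import PDFExporter

nb = nbformat.read("analysis.ipynb", as_version=4)            # load the notebook
pdf_bytes, _resources = PDFExporter().from_notebook_node(nb)  # render via LaTeX

with open("analysis.pdf", "wb") as f:                         # write the PDF output
    f.write(pdf_bytes)
```

The equivalent command-line call is `jupyter nbconvert --to pdf analysis.ipynb`.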

Convert Jupyter notebooks to PDF

During … More


Interactive Data Analysis with SQL Server using Jupyter Notebooks

In this post, “Interactive Data Analysis with SQL Server using Jupyter Notebooks”, we will demonstrate how we can use Jupyter Notebooks for interactive data analysis with SQL Server. Jupyter notebooks are one of the most useful tools for any Data Scientist/Data Analyst. They support 40+ programming languages and provide a web-based interactive programming environment. We can put comments, headings, code, and output in one single document. This document maintains the context of the original data source, which means we can re-execute the code whenever we need to. This lets Data Scientists/Data Analysts play with the code during presentations. These notebooks are also very handy to share and can be distributed easily across teams.
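
As an illustrative example (not code from the post), here is a minimal sketch of querying SQL Server from a notebook cell using pyodbc and pandas; the server, database, and table names are hypothetical placeholders.

```python
# Minimal sketch: run a SQL Server query from a Jupyter notebook cell.
# Assumes pyodbc, pandas, and the Microsoft ODBC driver are installed.
# Server, database, and table names are hypothetical placeholders.
import pyodbc
import pandas as pd

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SampleDB;Trusted_Connection=yes;"
)
df = pd.read_sql("SELECT TOP 10 * FROM dbo.SalesOrders", conn)
df.head()   # the result renders as a table right in the notebook
```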

What is Jupyter Lab

Jupyter Lab is the next-generation web-based tool for Jupyter notebooks. It enables a tab-based programming model that is highly extensible. We can arrange multiple windows … More


Building Decision Tree model in python from scratch – Step by step

In the previous post, we created our first Machine Learning model using Logistic Regression to solve a classification problem. We used the “Wisconsin Breast Cancer dataset” for demonstration purposes. Now, in this post, “Building Decision Tree model in python from scratch – Step by step”, we will be using the IRIS dataset, a standard dataset that comes with the Scikit-learn library. Let’s have a quick look at the IRIS dataset.

The IRIS dataset

The IRIS dataset is a multi-class classification dataset introduced by the British statistician and biologist Ronald Fisher in 1936. It has 150 observations, consisting of 50 samples of each of the three species of Iris flower: “setosa”, “versicolor”, and “virginica”. It is a standard, cleansed, and preprocessed multivariate dataset which comes preloaded with the Scikit-learn library. Each sample has four input features (see the sketch after this list), which are:

  1. Sepal length (cm)
  2. Sepal width (cm)
  3. Petal length (cm)
More
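
For reference, here is a minimal sketch (not the post’s step-by-step implementation) of loading the IRIS dataset from Scikit-learn and fitting a decision tree; the train/test split ratio and max_depth value are illustrative assumptions.

```python
# Minimal sketch: load the IRIS dataset and fit a decision tree with Scikit-learn.
# The split ratio and max_depth are illustrative choices, not the post's.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)   # 150 samples, 4 features, 3 classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

model = DecisionTreeClassifier(max_depth=3, random_state=42)
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```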

Exploratory Data Analysis (EDA) using Python – Second step in Data Science and Machine Learning

In the previous post, “Tidy Data in Python – First Step in Data Science and Machine Learning”, we discussed the importance of tidy data and its principles. In a Machine Learning project, once we have a tidy dataset in place, it is always recommended to perform EDA (Exploratory Data Analysis) on the underlying data before fitting it into a Machine Learning model. Let’s start by understanding the importance of EDA and some basic EDA techniques which are very useful.

What is Exploratory Data Analysis (EDA)

Exploratory Data Analysis, or EDA, is the process of organizing, plotting, and summarizing the data to find trends, patterns, and outliers using statistical and visual methods. It takes input data in a tabular format and represents it graphically, which makes it easier for humans to interpret. It is an important step in a Machine Learning/Data Science project which should be performed before … More
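
As a taste of what the post covers, here is a minimal EDA sketch using pandas and matplotlib; the file name “data.csv” is a hypothetical placeholder rather than a dataset from the post.

```python
# Minimal EDA sketch with pandas and matplotlib.
# "data.csv" is a hypothetical placeholder, not a file from the original post.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")

print(df.shape)        # number of rows and columns
df.info()              # column dtypes and missing-value counts
print(df.describe())   # summary statistics for numeric columns

df.hist(figsize=(10, 6))   # histograms to spot skew and outliers
plt.tight_layout()
plt.show()
```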


Tidy Data in Python – First Step in Data Science and Machine Learning

Most Data Science / Machine Learning projects follow the Pareto principle: we spend almost 80% of the time on data preparation and the remaining 20% on choosing and training an appropriate ML model. Usually, the datasets we get for building Machine Learning models are messy and cannot be fed into a model directly. We need to perform some data cleaning steps in order to get a dataset which can then be fitted into the model. We need to make sure that the data we are feeding into the model is tidy data; indeed, this is the first step in a Machine Learning / Data Science project. We may need to repeat the data cleaning process many times as we face new challenges and problems while cleaning the data. Data cleaning is one of the most important and time-consuming processes a Data Scientist performs before … More
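
To make the idea of tidy data concrete, here is a minimal sketch of one common tidying step in pandas, reshaping a wide table into the long, tidy form; the example table is made up for illustration only.

```python
# Minimal sketch of one common tidying step: wide-to-long reshaping with pandas.
# The example table is made up for illustration only.
import pandas as pd

wide = pd.DataFrame({
    "country": ["A", "B"],
    "cases_2019": [100, 250],
    "cases_2020": [140, 300],
})

# Each row of the tidy frame holds one observation: (country, year, cases).
tidy = wide.melt(id_vars="country", var_name="year", value_name="cases")
tidy["year"] = tidy["year"].str.replace("cases_", "", regex=False).astype(int)
print(tidy)
```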