Data Science Jobs
Data Science Salaries in 2023
The aim of this project is to analyse trends in the data science industry. Employees can be classified by job role, experience level, residence, company, mode of work, and so on. The goal is to observe trends by grouping employees along these dimensions: for example, how salary varies with work experience or job role. This will give us good insight into the state of the field.
The dataset has been taken from the following link:
https://www.kaggle.com/datasets/arnabchaki/data-science-salaries-2023
For the analysis, I will be using a Google Colab notebook, with Python as the programming language. The libraries used are the following:
- NumPy for array operations.
- Pandas for converting the dataset into a dataframe and modifying and performing operations on it.
- Matplotlib, Plotly and Seaborn for plotting visual data.
- country_converter for world-map plotting.
- and a few other useful libraries.
The standard technique for data analysis is to first clean, edit and rearrange the data. This involves removing rows with null entries or replacing null values with a computed estimate, removing unnecessary columns, adding extra columns derived from the existing ones, splitting or merging the dataset, and so on. The goal is a dataframe that can be analysed smoothly, without errors.
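As a small sketch of these cleaning steps with Pandas (the column names below are assumptions modelled on the salaries dataset, not its exact schema):

```python
import pandas as pd
import numpy as np

# Hypothetical sample resembling the salaries dataset (column names assumed)
df = pd.DataFrame({
    "work_year": [2021, 2022, 2023, 2023],
    "experience_level": ["EN", "MI", "SE", None],
    "salary_in_usd": [60000, np.nan, 150000, 120000],
    "salary_currency": ["USD", "USD", "USD", "USD"],  # redundant column to drop
})

# Replace a missing salary with the column median instead of dropping the row
df["salary_in_usd"] = df["salary_in_usd"].fillna(df["salary_in_usd"].median())

# Drop rows whose categorical fields are still null
df = df.dropna(subset=["experience_level"])

# Remove a column that adds no information for this analysis
df = df.drop(columns=["salary_currency"])

print(df.shape)  # (3, 3)
```

The same pattern (fill, drop rows, drop columns) scales to the full dataset once it is loaded with `pd.read_csv`.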
Next, we rearrange the dataframe and perform computations on it to extract useful insights.
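A typical computation of this kind is a group-by aggregation; here is a minimal sketch on toy data (the column names are assumptions in the spirit of the salaries dataset):

```python
import pandas as pd

# Toy frame; the real analysis would use the Kaggle dataset (columns assumed)
df = pd.DataFrame({
    "experience_level": ["EN", "EN", "SE", "SE", "MI"],
    "salary_in_usd": [50000, 70000, 140000, 160000, 100000],
})

# Mean salary per experience level, sorted so the highest-paid group comes first
summary = (
    df.groupby("experience_level")["salary_in_usd"]
      .mean()
      .sort_values(ascending=False)
)
print(summary)  # SE first (highest mean), then MI, then EN
```

`groupby` followed by an aggregation and a sort is the workhorse behind most of the "salary by X" questions this project asks.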
Finally, we present these insights visually: a line graph to show how one column varies with another, a bar plot to compare counts across categories, a heat map to spot where the peak and low values of a column lie, and many more.
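For instance, a line graph of the salary trend over the years could be sketched like this with Matplotlib (the numbers below are made-up placeholders, not results from the dataset):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import pandas as pd

# Placeholder aggregated values; the real figures come from the groupby step
df = pd.DataFrame({
    "work_year": [2020, 2021, 2022, 2023],
    "mean_salary": [95000, 100000, 112000, 125000],
})

fig, ax = plt.subplots()
ax.plot(df["work_year"], df["mean_salary"], marker="o")
ax.set_xlabel("Year")
ax.set_ylabel("Mean salary (USD)")
ax.set_title("Average salary by year")
fig.savefig("salary_trend.png")
```

Swapping `ax.plot` for `ax.bar`, or handing the aggregated frame to Seaborn or Plotly, gives the bar plots and interactive charts mentioned above.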
The course 'Data Analysis with Python: Zero to Pandas', offered by Jovian, helped me learn data analysis from scratch; everything described above was learnt from that course.
How to run the code
This is an executable Jupyter notebook hosted on Jovian.ml, a platform for sharing data science projects. You can run and experiment with the code in a couple of ways: using free online resources (recommended) or on your own computer.
Option 1: Running using free online resources (1-click, recommended)
The easiest way to start executing this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on mybinder.org, a free online service for running Jupyter notebooks. You can also select "Run on Colab" or "Run on Kaggle".
Option 2: Running on your computer locally
- Install Conda by following these instructions. Add the Conda binaries to your system `PATH`, so you can use the `conda` command on your terminal.
- Create a Conda environment and install the required libraries by running these commands on the terminal:
conda create -n zerotopandas -y python=3.8
conda activate zerotopandas
pip install jovian jupyter numpy pandas matplotlib seaborn opendatasets --upgrade
- Press the "Clone" button above to copy the command for downloading the notebook, and run it on the terminal. This will create a new directory and download the notebook. The command will look something like this:
jovian clone notebook-owner/notebook-id
- Enter the newly created directory using `cd directory-name` and start the Jupyter notebook:
jupyter notebook
You can now access Jupyter's web interface by clicking the link that shows up on the terminal, or by visiting http://localhost:8888 in your browser. Click on the notebook file (it has a `.ipynb` extension) to open it.
Downloading the Dataset
We can download the dataset directly using the `opendatasets` library, which takes the dataset URL as an argument and downloads the files from it.
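A minimal sketch of that download step (the call is wrapped in a helper and left commented out because `od.download` prompts for your Kaggle username and API key):

```python
def fetch_dataset(url: str) -> None:
    """Download a Kaggle dataset into the current directory using opendatasets."""
    import opendatasets as od  # imported here so the helper stays optional
    # od.download prompts for Kaggle credentials on first run, then saves
    # the files into a folder named after the dataset
    od.download(url)

dataset_url = "https://www.kaggle.com/datasets/arnabchaki/data-science-salaries-2023"
# fetch_dataset(dataset_url)  # uncomment to download (requires Kaggle credentials)
```

After the download, the CSV inside the created folder can be loaded with `pd.read_csv` for the cleaning steps described earlier.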