
Web Scraping of Top Insurance Companies

Scraping data on the world's top insurance companies by market capitalization


Data is a collection of facts!

Web scraping is a technique for automatically extracting large amounts of data from websites and saving it to a file or database. The scraped data is usually in a tabular or spreadsheet format (e.g., a CSV file).

In this project, we'll scrape data on the world's top insurance companies from a listing webpage.

We'll use the Python libraries requests and beautifulsoup4 to scrape the webpage.

Here's an outline of the steps we'll follow:

  1. Download the webpage using requests
  2. Parse the HTML source code using beautifulsoup4
  3. Extract company names, CEOs, world ranks, market capitalizations, annual revenues, numbers of employees, and company URLs
  4. Compile the extracted information into Python lists and dictionaries
  5. Extract and combine data from multiple pages
  6. Save the extracted information to a CSV file.
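Steps 1 and 2 can be sketched as a small helper that downloads a page and parses it. This is a minimal sketch, not the project's actual code, and the commented-out URL at the bottom is purely hypothetical:

```python
# Sketch of steps 1-2: download a page with requests, parse it with beautifulsoup4.
import requests
from bs4 import BeautifulSoup

def get_page(url):
    """Download the webpage at `url` and return a parsed BeautifulSoup document."""
    response = requests.get(url)
    # requests does not raise on HTTP errors by default, so check the status code
    if response.status_code != 200:
        raise Exception(f'Failed to load page {url} (status {response.status_code})')
    return BeautifulSoup(response.text, 'html.parser')

# doc = get_page('https://example.com/top-insurance-companies')  # hypothetical URL
```

Once parsed, `doc.find` and `doc.find_all` can locate the tags that hold each company's details.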

By the end of the project, we'll save the extracted information as a CSV file.
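Step 6 can be sketched with the standard-library csv module. The column names below mirror the attributes listed in the outline, and the sample row is purely illustrative:

```python
# Sketch of step 6: writing the compiled list of dictionaries to a CSV file.
# Field names follow the attributes extracted in step 3; the values are made up.
import csv

companies = [
    {'name': 'Example Insurance Co', 'ceo': 'Jane Doe', 'world_rank': 1,
     'market_cap': '$100 billion', 'annual_revenue': '$50 billion',
     'employees': 10000, 'url': 'https://example.com'},
]

with open('insurance_companies.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=list(companies[0].keys()))
    writer.writeheader()   # first row: column names
    writer.writerows(companies)
```

`csv.DictWriter` keeps each dictionary key aligned with its column, so adding a new attribute in step 3 only requires adding it to the dictionaries.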


How to Run the Code

You can execute the code using the "Run" button at the top of this page and selecting "Run on Binder". You can make changes and save your own version of the notebook to Jovian by executing the following cells:

!pip install jovian --upgrade --quiet
import jovian
# Execute this to save new versions of the notebook
jovian.commit()