Best Open Source Web Scraping Frameworks and Tools

Open Source has fueled a massive part of the technology boom we are all experiencing. Even in the world of web scraping, open source tools play a large part in helping gather data from the Internet. We will walk through open source web scraping frameworks and tools that are great for crawling, scraping the web, and parsing out the data.

Here is a list of the best open source web scraping frameworks and tools, grouped by the language or platform they are based on:

Web Scraping Frameworks

  1. Based on Python
    1. Scrapy
    2. MechanicalSoup
    3. PySpider
    4. Portia
  2. Based on JavaScript
    1. Nodecrawler
  3. Browser Based
    1. Selenium WebDriver
    2. Puppeteer
    3. Webscraper.io

Here is a comparison chart showing the important features of all the web scraping frameworks and tools that we will go through in this post:

web-scraping-tool-chart


Based on Python

Scrapy


Scrapy is an open source web scraping framework in Python used to build web scrapers. It gives you all the tools you need to efficiently extract data from websites, process it as you want, and store it in your preferred structure and format. One of its main advantages is that it's built on top of Twisted, an asynchronous networking framework. If you have a large web scraping project and want to make it as efficient as possible, with a lot of flexibility, then you should definitely use Scrapy.

Scrapy has a couple of handy built-in export formats such as JSON, XML, and CSV. It's built for extracting specific information from websites and lets you focus on the data extraction using CSS selectors and XPath expressions. Scraping web pages with Scrapy is much faster than with other open source tools, so it's ideal for extensive, large-scale scraping. It can also be used for a wide range of purposes, from data mining to monitoring and automated testing.

What stands out about Scrapy is its ease of use and detailed documentation. If you are familiar with Python you’ll be up and running in just a couple of minutes. It runs on Linux, Mac OS, and Windows systems.
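As a minimal sketch of what a Scrapy spider looks like (the target site and the selectors below are illustrative assumptions, not something the article prescribes), the following spider extracts items with CSS and XPath selectors and follows pagination links:

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    """Minimal example spider; quotes.toscrape.com is a public practice site."""

    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one item per quote block, mixing CSS and XPath selectors
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.xpath(".//small[@class='author']/text()").get(),
            }

        # Follow the pagination link, if present, reusing the same callback
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Running it with `scrapy runspider quotes_spider.py -o quotes.json` would write the results in one of the built-in export formats mentioned above.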

Scrapy is under BSD license.

Requires Version – Python 2.7, 3.4+

Available Selectors – CSS, XPath

Available Data Formats – CSV, JSON, XML

Pros

  • Suitable for broad crawling
  • Easy setup and detailed documentation
  • Active Community

Cons

  • Since it is a full-fledged framework, it is not beginner-friendly
  • Does not handle JavaScript

MechanicalSoup

MechanicalSoup is a Python library designed to simulate the behavior of a human using a web browser, built around the parsing library BeautifulSoup. If you need to scrape data from simple sites, or if heavy scraping is not required, MechanicalSoup is a simple and efficient option. It automatically stores and sends cookies, follows redirects, and can follow links and submit forms.

It's best to use MechanicalSoup when interacting, outside of a browser, with a website that doesn't provide a web service API. If the website provides a web service API, then you should use that API and you don't need MechanicalSoup. If the website relies on JavaScript, then you probably need a fully-fledged browser tool like Selenium.
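A minimal sketch of that kind of session (the target page, the form id, and the field name are assumptions for illustration) might look like this:

```python
import mechanicalsoup

# StatefulBrowser keeps cookies and follows redirects automatically
browser = mechanicalsoup.StatefulBrowser()
browser.open("https://en.wikipedia.org/wiki/Main_Page")

# Select the search form by CSS selector, fill it in, and submit it
browser.select_form("form#searchform")
browser["search"] = "web scraping"
browser.submit_selected()

# The resulting page is a BeautifulSoup object, so CSS selectors work on it
page = browser.get_current_page()
print(page.title.text)
for heading in page.select("h2"):
    print(heading.get_text(strip=True))
```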

MechanicalSoup is licensed under MIT.

Requires Version – Python 3.0+

Available Selectors – CSS

Available Data Formats – CSV, JSON, XML

Pros

  • Preferred for fairly simple websites

Cons

  • Does not handle JavaScript

PySpider

PySpider is a web crawler written in Python. It supports JavaScript pages and has a distributed architecture, so you can run multiple crawlers at once. PySpider can store the data on a backend of your choosing such as MongoDB, MySQL, or Redis, and can use RabbitMQ, Beanstalk, and Redis as message queues.

One of the advantages of PySpider is its easy-to-use UI, where you can edit scripts, monitor ongoing tasks, and view results. If you prefer working with a web-based user interface, PySpider is the scraper to consider. It also supports AJAX-heavy websites. To learn more about PySpider, you can check out its documentation and community resources. It's currently licensed under Apache License 2.0.
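For a rough idea of what a PySpider script looks like (the start URL is an illustrative assumption; the structure mirrors the default handler template that the web UI generates):

```python
# Mirrors the default script template PySpider generates in its web UI
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {}

    @every(minutes=24 * 60)
    def on_start(self):
        # Seed the crawl once a day; matching pages flow to index_page
        self.crawl("http://quotes.toscrape.com/", callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        # Queue every outbound link on the page for detailed scraping
        for each in response.doc("a[href^='http']").items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        # The returned dict is written to the configured result backend
        return {
            "url": response.url,
            "title": response.doc("title").text(),
        }
```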

Requires Version – Python 2.6+, Python 3.3+

Available Selectors – CSS, XPath

Available Data Formats – CSV, JSON

Pros

  • Facilitates more comfortable and faster scraping
  • Powerful UI

Cons

  • Difficult to deploy

Portia 

Portia is a visual scraping tool created by Scrapinghub that does not require any programming knowledge. If you are not a developer, it's best to go straight to Portia for your web scraping needs. You can try Portia for free without installing anything; all you need to do is sign up for an account at Scrapinghub and you can use their hosted version.

Making a crawler in Portia and extracting web content is very simple even if you do not have programming skills. You won't need to install anything, as Portia runs in the web browser. With Portia, you can use basic point-and-click tools to annotate the data you wish to extract, and based on these annotations Portia will understand how to scrape data from similar pages. Once similar pages are detected, Portia will create a sample of the structure you have annotated. Actions such as click, scroll, and wait are all simulated by recording and replaying user actions on a page.

Portia is great for crawling AJAX-powered websites (when subscribed to Splash) and should work fine with heavy JavaScript frameworks like Backbone, Angular, and Ember. It filters the pages it visits for an efficient crawl. It's currently licensed under the BSD license.

Requirements – If you are using Linux, you will need Docker installed; if you are using a Windows or Mac OS machine, you will need boot2docker.

Available Selectors – CSS, XPath

Available Data Formats – CSV, JSON, XML

Pros

  • Defines CSS or XPath selectors
  • Filters the pages it visits

Cons

  • Quite time-consuming as compared to other open source tools
  • Navigating websites is difficult to control. You always need to start the crawl with the target pages; otherwise, Portia will visit unnecessary pages, which may lead to unwanted results

Based on JavaScript

NodeCrawler


Nodecrawler is a popular web crawler for Node.js, making it a very fast crawling solution. If you prefer coding in JavaScript, or you are dealing with a mostly JavaScript project, Nodecrawler will be the most suitable web crawler to use. Its installation is pretty simple too. It uses JSDOM and Cheerio for server-side DOM handling and HTML parsing, with JSDOM being the more robust option.

Requires Version – Node v4.0.0 or greater

Available Selectors – CSS, XPath

Available Data Formats – CSV, JSON, XML

Pros

  • Easy installation

Cons

  • It has no Promise support

Browser Based

Selenium WebDriver


When it comes to websites that use very complex and dynamic code, it's better to have all the page content rendered using a browser first. Selenium WebDriver uses a real web browser to access the website, so its activity doesn't look any different from that of a real person accessing information in the same way. When you load a page using WebDriver, the browser loads all the web resources and executes the JavaScript on the page. At the same time, it stores all the cookies created by websites and sends complete HTTP headers, as all browsers do. This makes it very hard to determine whether the website is being accessed by a real person or a bot.

Although it's mostly used for testing, WebDriver can be used for scraping dynamic web pages. It is the right choice if you want to test whether a website works properly in various browsers, or to scrape JavaScript-heavy websites. Using WebDriver makes web scraping easier, but the scraping process is much slower compared to making simple HTTP requests: the browser waits until the whole page is loaded, and only then can you access its elements. Selenium has a very large and active community, which is great for beginners.
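As a short sketch using the Python bindings (the URL points to a public practice page that renders its content with JavaScript, and the selectors are assumptions about that page's markup), scraping a dynamic page with WebDriver looks roughly like this:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Launches a real Chrome browser; a matching chromedriver must be installed
driver = webdriver.Chrome()
try:
    # The browser fetches the page, runs its JavaScript, and builds the DOM
    driver.get("http://quotes.toscrape.com/js/")

    # Only after rendering can we locate elements, here via CSS selectors
    for quote in driver.find_elements(By.CSS_SELECTOR, "div.quote"):
        text = quote.find_element(By.CSS_SELECTOR, "span.text").text
        author = quote.find_element(By.CSS_SELECTOR, "small.author").text
        print(author + ": " + text)
finally:
    driver.quit()
```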

Requires Version – Python 2.7 and 3.5+; bindings are also provided for JavaScript, Java, C#, Ruby, and Python.

Available Selectors – CSS, XPath

Available Data Formats – Customizable

Pros

  • Suitable for scraping heavy Javascript websites
  • Large and active community
  • Detailed documentation, making it easy to grasp for beginners

Cons

  • Hard to maintain when there are any changes in the website structure
  • High CPU and memory usage

Puppeteer


Puppeteer is a Node library which provides a powerful but simple API that allows you to control Google’s headless Chrome browser. A headless browser means you have a browser that can send and receive requests but has no GUI. It works in the background, performing actions as instructed by an API. You can truly simulate the user experience, typing where they type and clicking where they click.

The best case for using Puppeteer for web scraping is when the information you want is generated using a combination of API data and JavaScript code. A headless browser is also a great tool for automated testing and for server environments where you don't need a visible UI shell. For example, you may want to run some tests against a real web page, create a PDF of it, or just inspect how the browser renders a URL. Puppeteer can also be used to take screenshots of web pages as they appear by default when you open a web browser.

Puppeteer's API is very similar to Selenium WebDriver's, but it works only with Google Chrome, while WebDriver works with most popular browsers. Puppeteer has more active support than Selenium, so if you are working with Chrome, Puppeteer is your best option for web scraping.

Requires Version – Node v6.4.0 or greater (Node v7.6.0 or greater for async/await)

Available Selectors – CSS

Available Data Formats – JSON

Pros

  • With its full-featured API, it covers a majority of use cases
  • The best option for scraping Javascript websites on Chrome

Cons

  • Only available for Chrome
  • Supports only JSON format

Webscraper.io


Web Scraper, a standalone Chrome extension, is a great web scraping tool for extracting data from dynamic web pages. Using the extension, you create a sitemap that describes how the website should be traversed and what data should be extracted. With these sitemaps you can navigate the site the way you want, and the data can later be exported as CSV or into CouchDB.

The advantage of Webscraper.io is that you only need basic coding skills. If you aren't proficient with programming and don't need large volumes of data to be scraped, Webscraper.io will make the job easier for you. The extension requires Chrome 31+ and has no OS limitations.

You can download and add the extension to Chrome using the link – https://chrome.google.com/webstore/detail/web-scraper/jnhgnonknehpejjnehehllkliplmbmhn?hl=en

Required Version – Chrome 31+

Available Selectors – CSS

Available Data Formats – CSV

Pros

  • Best Google Chrome extension for basic web scraping from websites into CSV format
  • Easy to install, learn and understand

Cons

  • It cannot be used for complex web scraping scenarios such as bypassing CAPTCHAs, submitting forms, etc.

_______________________________________________________________________________________

These are just some of the open source web scraping frameworks and tools you can use for your web scraping projects. If you have greater scraping requirements or would like to scrape on a much larger scale, it's better to use web scraping services.

If you aren't proficient with programming, your needs are complex, or you need large volumes of data to be scraped, there are great web scraping services that will suit your requirements and make the job easier for you.

You can save time and get clean, structured data by trying us out instead – we are a full-service provider that doesn't require the use of any tools; all you get is clean data without any hassles.

You can also get data delivered to you as a service from us. Interested?

Turn websites into meaningful and structured data through our web data extraction service

 

 
