Best Free and Paid Web Scraping Tools and Software

Web scraping tools automate web-based data collection. They generally fall into two categories: tools you install on your computer or in your browser (Chrome or Firefox), and self-service websites and applications. Web scraping tools (free or paid) and self-service sites can be a good choice if your data requirements are small and the source websites aren’t complicated.

However, if the websites you want to scrape are complicated, or you need a lot of data from one or more sites, these tools do not scale well. Their cost is small compared to the time and effort required to build scrapers with them and the complexity of maintaining and running them. For such cases, a full-service provider is a better and more economical option.

In this post, we will first give a brief description of each tool and then walk through how they work, so that you can quickly evaluate whether they fit your needs.

The best web scraping tools

  1. Web Scraper (Chrome Extension)
  2. Scrapy
  3. Data Scraper (Chrome Extension)
  4. Scraper (Chrome Extension)
  5. ParseHub
  6. OutWitHub
  7. FMiner
  8. Dexi.io
  9. Octoparse
  10. WebHarvy
  11. PySpider
  12. Apify SDK
  13. Content Grabber
  14. Mozenda
  15. Cheerio

Web Scraper

Web Scraper, a standalone Chrome extension, is a great tool for extracting data from web pages. Using the extension you can create a sitemap that defines how the website should be traversed and what data should be extracted. With sitemaps you can navigate the site the way you want, and the data can later be exported as a CSV. You can download and add the extension to Chrome from the Chrome Web Store.

Scrapy

Scrapy is an open source web scraping framework in Python used to build web scrapers. It gives you all the tools you need to efficiently extract data from websites, process it as you want, and store it in your preferred structure and format. One of its main advantages is that it is built on top of Twisted, an asynchronous networking framework. If you have a large web scraping project and want to make it as efficient and flexible as possible, Scrapy is an excellent choice. It can also be used for a wide range of purposes, from data mining to monitoring and automated testing. You can export data into JSON, CSV and XML formats. What stands out about Scrapy is its ease of use, detailed documentation, and active community. If you are familiar with Python you’ll be up and running in just a couple of minutes. It runs on Linux, Mac OS, and Windows. To learn how to scrape websites using Scrapy, check out our tutorial.
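To give a sense of what a Scrapy spider looks like, here is a minimal sketch that crawls the public practice site quotes.toscrape.com, yields each quote and author, and follows the pagination links. The site, selectors, and field names are purely illustrative; a real project would target your own pages.

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    # A minimal spider: crawl quotes.toscrape.com, yield one item per quote,
    # and follow the "Next" link until there are no more pages.
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow pagination, if a "Next" link is present
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Save it as quotes_spider.py and run `scrapy runspider quotes_spider.py -o quotes.json` to get the results as JSON; Scrapy can write CSV or XML the same way.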

Data Scraper

Data Scraper is a simple web scraping tool for extracting data from a single page into CSV and XLS files. It is a personal browser extension that helps you transform data into a clean table format. You will need to install the plugin in the Google Chrome browser. The free version lets you scrape 500 pages per month; if you want to scrape more pages you have to upgrade to a paid plan. You can download the extension from the Chrome Web Store.

Scraper

Scraper is a Chrome extension for scraping simple web pages. It is easy to use and will help you scrape a website’s content and upload the results to Google Docs. It can extract data from tables and convert it into a structured format. You can download the extension from the Chrome Web Store.

ParseHub

ParseHub is a web-based scraping tool built to crawl single and multiple websites, with support for JavaScript, AJAX, cookies, sessions, and redirects. The application can analyze and grab data from websites and transform it into meaningful output. It uses machine learning technology to recognize even complicated documents and generates output files in JSON, CSV or Google Sheets. ParseHub is available as a desktop app for Windows, Mac, and Linux and also works as a Firefox extension. The user-friendly web app runs in the browser and has well-written documentation. It supports advanced features like pagination, infinite scrolling pages, pop-ups, and navigation, and you can even visualize the extracted data in Tableau. The free version has a limit of 5 projects with 200 pages per run. With a paid subscription you get 20 private projects with 10,000 pages per crawl and IP rotation.

OutWitHub

OutWitHub is a data extractor built into a web browser. If you wish to use it as an extension you have to download it from the Firefox add-ons store; if you want the standalone application you just need to follow the instructions and run it. OutWitHub can help you extract data from the web with no programming skills at all, and it is great for harvesting data that might not otherwise be easily accessible. It is a free tool and a great option if you need to scrape some data from the web quickly. With its automation features, it browses automatically through a series of web pages and performs extraction tasks. You can export the data into numerous formats (JSON, XLSX, SQL, HTML, CSV, etc.).

FMiner

FMiner is a visual data extraction tool for web scraping and screen scraping. Its intuitive user interface lets you quickly harness the software’s powerful data mining engine to extract data from websites. In addition to basic web scraping features it also supports AJAX/JavaScript processing and CAPTCHA solving. It runs on both Windows and Mac OS and does the scraping using an internal browser. It offers a 15-day free trial so you can evaluate it before deciding on a paid subscription.

Dexi.io

Dexi (formerly known as CloudScrape) supports data collection from any website and requires no download. The application provides different types of robots to scrape data: Crawlers, Extractors, Autobots, and Pipes. Extractor robots are the most advanced, as they let you choose every action the robot needs to perform, such as clicking buttons and taking screenshots. The application offers anonymous proxies to hide your identity. Dexi.io also offers a number of integrations with third-party services: you can save the data directly to Box.net and Google Drive or export it in JSON or CSV format. Dexi.io stores your data on its servers for 2 weeks before archiving it. If you need to scrape on a larger scale you can get the paid version.

Octoparse

Octoparse is a visual scraping tool that is easy to understand. Its point-and-click interface allows you to easily choose the fields you need to scrape from a website. The scraper can handle both static and dynamic websites with AJAX, JavaScript, cookies, etc. The application also offers a cloud-based platform that allows you to extract large amounts of data. You can export the scraped data in TXT, CSV, HTML or XLSX formats. The free version allows you to build up to 10 crawlers, while the paid subscription plans add features such as an API and many anonymous IP proxies that speed up extraction and let you fetch large volumes of data in real time.

WebHarvy

WebHarvy’s visual web scraper has an inbuilt browser that allows you to scrape data from web pages. It has a point-and-click interface which makes selecting elements easy, and the advantage of this scraper is that you do not have to write any code. The data can be saved into CSV, JSON or XML files, or stored in a SQL database. WebHarvy has a multi-level category scraping feature that can follow each level of category links and scrape data from listing pages. The tool also allows you to use regular expressions, offering more flexibility, and you can set up proxy servers to maintain a level of anonymity, by hiding your IP, while extracting data from websites.
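Regular expressions come in handy when the text you capture contains more than the value you want. As a generic illustration of the idea (this is plain Python, not WebHarvy’s own configuration syntax, and the input string and pattern are made up), the sketch below pulls a numeric price out of a scraped string:

```python
import re

scraped_text = "Price: $1,299.00 (free shipping)"  # illustrative input

# Capture digits and commas (plus an optional decimal part) after a dollar sign
match = re.search(r"\$([\d,]+(?:\.\d{2})?)", scraped_text)
if match:
    price = float(match.group(1).replace(",", ""))
    print(price)  # 1299.0
```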

PySpider

PySpider is a web crawler written in Python. It supports JavaScript pages and has a distributed architecture, so you can run multiple crawlers. PySpider can store the data in a backend of your choosing, such as MongoDB, MySQL or Redis, and can use RabbitMQ, Beanstalk, and Redis as message queues. One of the advantages of PySpider is its easy-to-use UI, where you can edit scripts, monitor ongoing tasks and view results. The data can be saved in JSON and CSV formats. If you prefer working with a web-based user interface, PySpider is the crawler to consider. It also supports AJAX-heavy websites.
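To give a sense of what a PySpider script looks like, here is a minimal sketch modelled on the default script template that PySpider’s web UI generates. The start URL (the public practice site quotes.toscrape.com) and the selectors are purely illustrative:

```python
from pyspider.libs.base_handler import *  # PySpider's default script template uses this star import


class Handler(BaseHandler):
    crawl_config = {}

    @every(minutes=24 * 60)
    def on_start(self):
        # Seed the crawl with the start URL (re-queued daily by the scheduler)
        self.crawl("http://quotes.toscrape.com/", callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        # Queue every link found on the listing page for detail scraping
        for each in response.doc("a[href^='http']").items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        # The returned dict is written to whichever result backend you configured
        return {
            "url": response.url,
            "title": response.doc("title").text(),
        }
```

You paste a script like this into PySpider’s web UI (started with the pyspider command), run it, and monitor tasks and results from the dashboard.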

Apify

Apify SDK is a Node.js library which, much like Scrapy, positions itself as a universal web scraping library in JavaScript, with support for Puppeteer, Cheerio and more. With features like RequestQueue and AutoscaledPool, you can start with several URLs, recursively follow links to other pages, and run your scraping tasks at the maximum capacity of the system. Available data formats are JSON, JSONL, CSV, XML, XLSX and HTML, and it supports CSS selectors. It works with any type of website and has built-in support for Puppeteer. The Apify SDK requires Node.js 8 or later.

Content Grabber

Content Grabber is a visual web scraping tool with a point-and-click interface for choosing elements easily. Its interface supports pagination, infinite scrolling pages, and pop-ups. In addition, it offers AJAX/JavaScript processing, CAPTCHA solving, regular expressions, and IP rotation (using Nohodo). You can export data in CSV, XLSX, JSON, and PDF formats. Intermediate programming skills are needed to use this tool.

Mozenda

Mozenda is an enterprise cloud-based web scraping platform. It has a point-and-click interface and a user-friendly UI. It has two parts: an application to build the data extraction project and a Web Console to run agents, organize results and export data. They also provide API access to the data and have inbuilt storage integrations like FTP, Amazon S3, Dropbox and more. You can export data into CSV, XML, JSON or XLSX formats. Mozenda is good for handling large volumes of data, but you will need more than basic coding skills to use it, as it has a fairly steep learning curve.

Cheerio

Cheerio is a library that parses HTML and XML documents and allows you to use the syntax of jQuery while working with the downloaded data. If you are writing a web scraper in JavaScript, Cheerio is a fast option which makes parsing, manipulating, and rendering efficient. It does not interpret the result the way a web browser does: it does not produce a visual rendering, apply CSS, load external resources, or execute JavaScript. If you require any of these features, you should consider projects like PhantomJS or JSDom.

Quick overview of how to use these tools

Web Scraper

After installing the Web Scraper Chrome extension, you’ll find it in Chrome’s developer tools as a new tab named ‘Web Scraper’. Open that tab, click ‘Create new sitemap’, and then ‘Create sitemap’. A sitemap is the Web Scraper extension’s name for a scraper: a sequence of rules for how to extract data by proceeding from one extraction to the next. We will set the start page to the cell phone category on Amazon.com and click ‘Create Sitemap’.

Navigating from root to category pages

At this point the Web Scraper tool is open at _root with an empty list of child selectors. Click ‘Add new selector’ to add the selector that takes us from the main page to each category page. Give it the id category, with its type set to Link. We want to get multiple links from the root, so check the Multiple box. The ‘Select’ button gives us a tool for visually selecting elements on the page to construct a CSS selector; ‘Element preview’ highlights the matching elements on the page, and ‘Data preview’ pops up a sample of the data that would be extracted by the specified selector. Click Select on one of the category links and a specific CSS selector is filled in on the left of the selection tool. Click one of the other (unselected) links and the CSS selector is adjusted to include it. Keep clicking on the remaining links until all of them are selected.

A selector graph is a collection of selectors: the content to extract, elements within the page, and links to follow to continue the scraping. Each selector has a parent selector that defines the context in which it is applied. In the final scraper (selector graph) for our Amazon cell phone scraper, the root represents the starting URL, the main page for Amazon cell phones. From there the scraper follows a link to each category page, and for each category it extracts a set of product elements. From each product element it extracts a single name, review, rating, and price. Since there are multiple pages, we also need a next selector so the scraper visits every page available.

Running the scraper

Click Sitemap to get a drop-down menu and click Scrape. The scrape pane gives us some options for how slowly Web Scraper should perform its scraping, both to avoid overloading the web server with requests and to give the browser time to load pages. We are fine with the defaults, so click ‘Start scraping’. A window will pop up in which the scraper does its browsing. After the scrape, you can download the data by clicking ‘Export data as CSV’ or save it to a database.
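The extension can also export and import sitemaps as JSON, which is a convenient way to back up or share a scraper. As a rough illustration of the structure, here is a sketch of a simplified sitemap, written as a Python dict so it can be written out with json. The ids, start URL, and CSS selectors are made up, and a sitemap exported by the extension may contain additional fields.

```python
import json

# Illustrative only: the ids, start URL and CSS selectors below are placeholders.
sitemap = {
    "_id": "amazon-cellphones",
    "startUrl": ["https://www.amazon.com/"],
    "selectors": [
        {
            "id": "category",
            "type": "SelectorLink",        # follow a link to each category page
            "parentSelectors": ["_root"],
            "selector": "a.category-link",
            "multiple": True,
        },
        {
            "id": "product",
            "type": "SelectorElement",     # one element per product listing
            "parentSelectors": ["category"],
            "selector": "div.product",
            "multiple": True,
        },
        {
            "id": "name",
            "type": "SelectorText",        # a text field inside each product element
            "parentSelectors": ["product"],
            "selector": "h2",
            "multiple": False,
        },
    ],
}

# Write the sitemap to a file that could be loaded via the extension's import option
with open("amazon-cellphones-sitemap.json", "w") as f:
    json.dump(sitemap, f, indent=2)
```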

Data Scraper

We’ll show you how to extract data from Amazon.com using the Data Scraper Chrome extension. First, download the extension from the Chrome Web Store and open the website you need to extract data from; we’ll scrape the product details of air conditioners under the appliances category on Amazon.com. Right-click on the web page and click on the option ‘Get Similar (Data Miner)’. You’ll see a list of saved templates on the left side. You can choose any one of them, or create your own and run it. To create your own template, click on ‘New Recipe’ or choose one of the generic templates under ‘Public’. Data Scraper is user-friendly and shows you how to create your own template step by step. The output is presented as a table; click Download to export the data in CSV or XLS format.

Scraper Chrome Extension

After installing the extension, open the website you want to scrape and highlight a part of the page that is similar to what you want to scrape. Right-click, and you’ll see an option called ‘Scrape similar’. The scraper console will open in a new window showing the initial results, with the scraped content in a table format. The ‘Selector’ section lets you change which page elements are scraped; you can specify the query as either a jQuery selector or an XPath expression. You can export the table by clicking ‘Export to Google Docs’ to save the content as a Google Spreadsheet. You may also customize the columns of the table and give them names; after making customizations, press the ‘Scrape’ button to update the results in the table.

ParseHub

All you need to do is enter the website you want to scrape and click ‘Start Project’. Then click the ‘+’ button to select a page element or title. After selecting and naming all the fields you need, you will see a sample result in CSV/XLSX or JSON. Click ‘Get Data’ and ParseHub will scrape the website and fetch your data. When the data is ready you will see CSV and JSON options to download your results.

FMiner

We’ll show you how to extract a table from Wikipedia using FMiner, using the page https://en.wikipedia.org/wiki/List_of_National_Football_League_Olympians. First, download the application from http://www.fminer.com/download/. When you open the application, enter the URL and press the ‘Record’ button to record your actions. What we need to extract is the table of Olympic players. To create the table, click on the ‘+’ sign that says Table. Then select a row by clicking ‘Target Select’; you’ll see one whole row of the table selected. To expand the selection to the whole table, click ‘Multiple Targets’. Once the whole table is highlighted you can add your new fields by clicking the ‘+’ sign. After you have created the table, click ‘Scrape’. You’ll get a notification when the scrape has finished; just click ‘Export’ to save the data as a CSV or XLS file.

Dexi.io

To start, sign up and create an account at dexi.io; it will then take you to the app at https://app.dexi.io/#/. From there you can start by clicking ‘Create New Robot’. It might take a while to get the hang of it, but there are tutorials on how to create your first robot, and their knowledge base can help if you get stuck. Dexi.io has a simple user interface: all you need to do is choose the type of robot you need, enter the website you would like to extract data from, and start building your scraper.
Even though these web scraping tools extract data from web pages with ease, they come with their limitations. In the long run, programming is the best way to scrape data from the web, as it provides more flexibility and better results. If you aren’t proficient with programming, your needs are complex, or you need large volumes of data to be scraped, there are great web scraping services that will suit your requirements and make the job easier for you. You can save time and get clean, structured data by trying us out instead: we are a full-service provider that doesn’t require the use of any tools, and all you get is clean data without any hassles.


Note: All the features, prices, etc. are current as of the time of writing this article. Please check the individual websites for current features and pricing.
