Web scraping tools automate web-based data collection. They generally fall into two categories: tools that you install on your computer or in your browser (Chrome or Firefox), and self-service web applications. Web scraping tools (free or paid) and self-service websites/applications can be a good choice if your data requirements are small and the source websites aren’t complicated.
However, if the websites you want to scrape are complicated, or you need a lot of data from one or more sites, these tools do not scale well. The cost of the tools themselves pales in comparison to the time and effort required to build scrapers with them and the complexity of maintaining and running them. In such cases, a full-service provider is a better and more economical option.
In this post, we will walk through how these tools work so that you can evaluate whether they fit your needs.
Here are the best web scraping tools:
| Name | Pricing | Type | Handles Large Volumes? |
| --- | --- | --- | --- |
| Data Scraper | Free | Chrome Extension | No |
| Web Scraper | Free | Chrome Extension | Yes |
| ParseHub | Paid | Firefox Extension/Desktop Application | Yes |
| OutwitHub | Free | Firefox Extension/Desktop Application | No |
| Dexi.io | Paid | Web-Based Scraping Application | Yes |
| Webhose.io | Paid | Web-Based Scraping Application | Yes |
| Octoparse | Paid | Web-Based Scraping Application | Yes |
Browser Extensions and Desktop Applications
We’ll show you how to extract data from Amazon.com using the Data Scraper extension.
Open the website that you need to extract data from. We’ll scrape the product details of air conditioners under the appliances category on Amazon.com. Right-click on the web page and click the option ‘Get Similar (Data Miner)’. You’ll see a list of saved templates on the left side. You can choose any one of them, or create your own, and run the template.
To create your own template click on the option ‘New Recipe’ or choose from the generic templates under the option ‘Public’.
Data Scraper is user-friendly: it walks you through creating your own template step by step. You’ll get the output presented as a table:
Then click ‘Download’ to export the data in CSV or XLS format.
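That table-to-CSV export step is easy to reproduce in code. Here is a minimal Python sketch using only the standard library; the product rows are made-up sample data, not output from the extension:

```python
import csv
import io

# Hypothetical rows mimicking the table the extension displays.
rows = [
    ["name", "price", "rating"],
    ["LG 1.5 Ton Window AC", "$329.99", "4.1"],
    ["Frigidaire 8,000 BTU", "$279.00", "4.4"],
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerows(rows)  # one CSV line per row, quoting fields as needed

csv_text = buffer.getvalue()
print(csv_text)
```

In a real script you would write to a file (`open("output.csv", "w", newline="")`) instead of an in-memory buffer.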
Open your browser’s developer tools and you’ll see a new tab named ‘Web Scraper’. Activate the tab and click ‘Create new sitemap’, then ‘Create sitemap’. A sitemap is the Web Scraper extension’s name for a scraper: a sequence of rules for how to extract data by proceeding from one extraction to the next. We will set the start page as the cell phone category on Amazon.com – https://www.amazon.com/s/ref=sr_hi_1?fst=p90x%3A1&rh=n%3A2335752011%2Ck%3Acellphones&keywords=cellphones&ie=UTF8&qid=1523426607 – and click ‘Create Sitemap’. The GIF illustrates how to create a sitemap:
Navigating from root to category pages
Right now, we have the Web Scraper tool open at the _root with an empty list of child selectors.
Click ‘Add new selector’. We will add the selector that takes us from the main page to each category page. Let’s give it the id ‘category’, with its type as ‘Link’. We want to get multiple links from the root, so we check the ‘Multiple’ box below. The ‘Select’ button gives us a tool for visually selecting elements on the page to construct a CSS selector; ‘Element preview’ highlights the matched elements on the page, and ‘Data preview’ pops up a sample of the data the specified selector would extract.
Click ‘Select’ on one of the category links and a specific CSS selector will be filled in on the left of the selection tool. Click one of the other (unselected) links and the CSS selector will be broadened to include it. Keep clicking the remaining links until all of them are selected. The GIF below shows the whole process of adding a selector to a sitemap:
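The generalization the tool performs, from the one link you clicked to every link that matches the same CSS selector, can be sketched in plain Python using the standard library’s `html.parser`. The HTML fragment and the class name `category-link` are made-up stand-ins for Amazon’s real markup:

```python
from html.parser import HTMLParser

# Made-up fragment standing in for a category sidebar.
SAMPLE_HTML = """
<ul>
  <li><a class="category-link" href="/cellphones">Cell Phones</a></li>
  <li><a class="category-link" href="/cases">Cases</a></li>
  <li><a class="other" href="/help">Help</a></li>
</ul>
"""

class LinkCollector(HTMLParser):
    """Collects the href of every <a> carrying a given class,
    i.e. the generalized selector 'a.category-link'."""

    def __init__(self, cls):
        super().__init__()
        self.cls = cls
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("class") == self.cls:
            self.hrefs.append(attrs["href"])

collector = LinkCollector("category-link")
collector.feed(SAMPLE_HTML)
print(collector.hrefs)  # all matching links, not just the one clicked
```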
A selector graph consists of a collection of selectors: the content to extract, elements within the page, and links to follow to continue the scraping. Each selector has a parent selector defining the context in which it is applied. This is the visual representation of the final scraper (selector graph) for our Amazon cell phone scraper:
Here the root represents the starting URL, the main cell phone page on Amazon. From there the scraper follows a link to each category page, and for each category it extracts a set of product elements. From each product element it extracts a single name, review, rating, and price. Since the results span multiple pages, we need the ‘next’ element so the scraper visits every page available.
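Web Scraper can export a sitemap as JSON, which gives a compact textual view of this graph. The sketch below is illustrative: the field names follow the extension’s exported format as we understand it, but the CSS selectors and the start URL are placeholders, not the real Amazon ones:

```python
import json

# Hypothetical sitemap mirroring the selector graph described above.
# Selector strings and the start URL are placeholders.
sitemap = {
    "_id": "amazon-cellphones",
    "startUrl": ["https://www.amazon.com/example-cellphones-page"],
    "selectors": [
        {"id": "category", "type": "SelectorLink",
         "parentSelectors": ["_root"], "selector": "a.category",
         "multiple": True},
        {"id": "product", "type": "SelectorElement",
         "parentSelectors": ["category"], "selector": "div.s-result-item",
         "multiple": True},
        {"id": "name", "type": "SelectorText",
         "parentSelectors": ["product"], "selector": "h2",
         "multiple": False},
        {"id": "next", "type": "SelectorLink",
         "parentSelectors": ["category"], "selector": "a.pagination-next",
         "multiple": False},
    ],
}

print(json.dumps(sitemap, indent=2))
```

Note how every selector names its parent, which is exactly the graph structure shown in the visual representation.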
Running the scraper
Click ‘Sitemap’ to get a drop-down menu, then click ‘Scrape’, as shown below.
The scrape pane offers options controlling how slowly Web Scraper performs its scraping, to avoid overloading the web server with requests and to give the browser time to load pages. We are fine with the defaults, so click ‘Start scraping’. A window will pop up where the scraper does its browsing. After scraping, you can download the data by clicking ‘Export data as CSV’ or save it to a database.
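The request-delay setting the scrape pane exposes is worth imitating whenever you scrape programmatically. A minimal throttle sketch in Python (the two-second default here is an assumption, pick whatever delay suits the site):

```python
import time

def throttle(last_request_time, min_delay=2.0):
    """Sleep just long enough so consecutive requests are at least
    min_delay seconds apart, then return the new timestamp."""
    wait = min_delay - (time.monotonic() - last_request_time)
    if wait > 0:
        time.sleep(wait)
    return time.monotonic()

# Usage: call before each page fetch.
last = time.monotonic() - 10  # pretend the previous request was long ago
last = throttle(last, min_delay=0.1)  # no sleep needed in this case
```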
Open the website and highlight a part of the page similar to what you want to scrape. Right-click and you’ll see an option called ‘Scrape similar’. The scraper console will open in a new window showing the initial results, with the scraped content in a table format.
The “Selector” section lets you change which page elements are scraped. You can specify the query as either a jQuery selector or in XPath.
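You can experiment with the same kind of XPath query outside the browser as well. Python’s standard-library `xml.etree.ElementTree` supports a limited XPath subset, enough to try queries against well-formed fragments (real pages usually need a tolerant HTML parser such as lxml). The table below is made-up sample data:

```python
import xml.etree.ElementTree as ET

# Made-up, well-formed table fragment; ElementTree requires
# well-formed XML and supports only a subset of XPath.
TABLE = """
<table>
  <tr><td>iPhone X</td><td>$899</td></tr>
  <tr><td>Pixel 2</td><td>$649</td></tr>
</table>
"""

root = ET.fromstring(TABLE)
# Similar in spirit to an XPath query like //tr/td[1]
names = [row.find("td").text for row in root.findall(".//tr")]
print(names)
```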
You can export the table by clicking ‘Export to Google Docs’ to save the content as a Google Spreadsheet. You may also customize the columns of the table and name them as you like. After making customizations, press the ‘Scrape’ button to update the results in the table.
All you need to do is enter the website you need to scrape and click ‘Start Project’. Then click the ‘+’ button to select a page element or title. After selecting and naming all the fields you need, you will get a CSV/Excel or JSON sample result.
Click on ‘Get Data’ and ParseHub will scrape the website and fetch your data. When the data is ready you will see CSV and JSON options to download your results.
We’ll show you how to extract a table from Wikipedia using FMiner. We’ll use https://en.wikipedia.org/wiki/List_of_National_Football_League_Olympians. First, download the application from http://www.fminer.com/download/
When you open the application, enter the URL and press the button ‘Record’ to record your actions. What we need to extract is the table of Olympic players.
To create the table, click the ‘+’ sign labeled ‘Table’. Then select a row by clicking ‘Target Select’; you’ll see one whole row of the table selected. To expand the selection to the whole table, click ‘Multiple Targets’. Once the whole table is highlighted, you can add your new fields by clicking the ‘+’ sign (shown in the image below).
After you have created the table click on ‘Scrape’. You’ll get a notification that the scrape has finished. Just click on ‘Export’ to save the data as a CSV or XLS file.
Web-Based Scraping Applications and Services
Dexi.io has a simple user interface. All you need to do is choose the type of robot you need, enter the website you would like to extract data from and start building your scraper.
The application offers anonymous proxies to hide your identity, and Dexi.io offers a number of integrations with third-party services. You can download the data directly to Box.net or Google Drive, or export it in JSON or CSV format. Dexi.io stores your data on its servers for 2 weeks before archiving it. If you need to scrape at a larger scale, you can always get the paid version.
You can get started by navigating to the webhose.io homepage and clicking ‘Use for free’. Once you enter your email address and set a password, you can open a free account where you can see your activity and monthly query quota usage.
Even though these web scraping tools extract data from web pages with ease, they come with limits. In the long run, programming is the best way to scrape data from the web, as it provides more flexibility and better results.
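To give a sense of that programmatic approach, here is a deliberately minimal Python sketch. The extraction is naive on purpose (a regex over `<h2>` tags on an inline sample so the example runs without network access); a real scraper would fetch the page with `urllib` or `requests` and parse it with a proper HTML parser such as lxml or BeautifulSoup:

```python
import re

def extract_titles(html):
    """Very naive extraction: pull the text of every <h2>...</h2>.
    Real scrapers should use a proper HTML parser instead of
    regular expressions."""
    return re.findall(r"<h2[^>]*>(.*?)</h2>", html, re.S)

# Inline sample so the sketch is self-contained; in practice you
# would fetch the page first, e.g.:
#   from urllib.request import urlopen
#   html = urlopen(url).read().decode("utf-8")
sample = "<h2>Product A</h2><p>...</p><h2>Product B</h2>"
print(extract_titles(sample))
```

The point is control: with code you decide how pages are fetched, retried, throttled, parsed, and stored, which is exactly where point-and-click tools run out of room.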
If you aren’t proficient with programming or your needs are complex, or you need large volumes of data to be scraped, there are great web scraping services that will suit your requirements to make the job easier for you.
You can save time and get clean, structured data by trying us out instead. We are a full-service provider that doesn’t require the use of any tools; all you get is clean data without any hassles.
Turn websites into meaningful and structured data through our web data extraction service
Need some professional help with scraping data? Let us know
Note: All features, prices etc are current at the time of writing this article. Please check the individual websites for current features and pricing.