All Articles

Apparel and Accessory Closures in US – Store Closure Report

ScrapeHero Data Store monitors the location data of 53 apparel brands in the USA. There were a total of 351 apparel and accessory store closures in the US during August 2020. Read more in the report.

Grocery and Supermarket Closures in US – Store Closure Report

ScrapeHero Data Store has the location data of 89 grocery and supermarket chains based in the USA. From this store data, we found that 17 grocery and supermarket chains had store closures in August 2020. Read more in the report.

How to scrape Google without Coding | ScrapeHero Cloud

This tutorial will show you how to scrape Google data for free using ScrapeHero Cloud. Using these crawlers, we will scrape the Google Search Results Page, Google Maps, and Google Reviews.

What’s in your Data Sausage?

The world runs on data, but very few people care to follow the flow of data, exploring where it originates and how it ends up in the data products they consume. The process of sausage making is somewhat similar: people don’t know what meat from which animal goes into the sausages they enjoy eating. Data […]

Scrape Glassdoor Job Data using the ScrapeHero Cloud

This tutorial will help you scrape job data from any Glassdoor domain using the Glassdoor Job Listings Crawler in ScrapeHero Cloud. The crawler accepts multiple search URLs and filters. You can scrape job data such as job title, salary, company, address, industry, revenue, website, and more.

Social Media Scraping

Scraping social media data involves extracting data from social media websites like Instagram and Twitter. A social media scraping tool like ScrapeHero Cloud lets businesses scrape these websites themselves easily.

How to fake and rotate User Agents using Python 3

When scraping many pages from a website, using the same user agent for every request makes your scraper easy to detect. A way to bypass that detection is to fake your user agent and change it with every request you make to a website. In this tutorial, we will show you how to fake user agents and randomize them to prevent getting blocked while scraping websites.
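
As a minimal sketch of the idea (not the tutorial's exact code), the Python snippet below picks a random user-agent string from a small pool on every request. The user-agent strings are illustrative examples, and https://httpbin.org/headers is used only because it echoes back the headers it receives.

```python
import random
import requests

# A small pool of browser user agents to rotate through.
# These strings are illustrative examples, not an exhaustive list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/14.1 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0",
]

def fetch(url):
    # Pick a random user agent for each request so that no single
    # user-agent string appears across all of your requests.
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # httpbin echoes the headers it received, handy for verifying rotation.
    print(fetch("https://httpbin.org/headers"))
```

In practice you would want a much larger pool of user agents, kept consistent with the other headers a real browser would send.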

How To Rotate Proxies and change IP Addresses using Python 3

When scraping many pages from a website, making every request from the same IP address will lead to getting blocked. A way to avoid this is to rotate IP addresses, which keeps your scrapers from being disrupted. In this tutorial, we will show you how to rotate IP addresses to prevent getting blocked while scraping.
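
As a minimal sketch of the technique (the proxy addresses below are placeholders, not working proxies), each request is routed through a randomly chosen proxy from a pool:

```python
import random
import requests

# Replace these with proxies you actually control or have access to;
# the addresses below are placeholders, not working proxies.
PROXY_POOL = [
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
]

def fetch_via_random_proxy(url):
    # Route each request through a randomly chosen proxy so that
    # requests do not all originate from the same IP address.
    proxy = random.choice(PROXY_POOL)
    proxies = {"http": proxy, "https": proxy}
    try:
        response = requests.get(url, proxies=proxies, timeout=10)
        response.raise_for_status()
        return response.text
    except requests.RequestException:
        # Dead proxies are common; a real scraper would retry with
        # another proxy or drop the bad one from the pool.
        return None

if __name__ == "__main__":
    # httpbin reports the caller's IP, useful for verifying rotation.
    print(fetch_via_random_proxy("https://httpbin.org/ip"))
```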

Grocery Chains Offering Curbside Pickup in the US

Walmart (4,590) has the largest number of stores offering curbside pickup. Whole Foods (160) and Food Lion (159) have the lowest number of curbside pickup stores.

How to scrape websites without getting blocked

Most websites do not have anti-scraping mechanisms, but some sites block scraping because they do not believe in open data access. In this article, we will talk about how to scrape websites without getting blocked by anti-scraping or bot detection tools.
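
One of the simplest techniques in this area is randomizing the delay between requests, so your scraper does not hit the site at a fixed, machine-like cadence. A minimal sketch (the delay bounds and the example URL are arbitrary choices for illustration):

```python
import random
import time
import requests

def polite_get(url, min_delay=2.0, max_delay=6.0):
    # Sleep a random interval before each request; a fixed request
    # cadence is one of the patterns bot detection tools look for.
    time.sleep(random.uniform(min_delay, max_delay))
    return requests.get(url)

if __name__ == "__main__":
    for page in range(1, 4):
        # Example URL pattern; substitute the site you are scraping.
        response = polite_get(f"https://example.com/page/{page}")
        print(page, response.status_code)
```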

How To Scrape Amazon Product Data and Prices using Python 3

How To Scrape Amazon Product Data and Prices using Python 3

A quick and easy tutorial on building an Amazon scraper to extract product information and pricing. This tutorial will teach you how to build a web scraper and run it to collect data by providing product URLs.
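
As a rough sketch of what such a scraper looks like (not the tutorial's actual code), the snippet below fetches a product page with requests and parses it with BeautifulSoup. The CSS selectors such as #productTitle reflect Amazon's markup at the time of writing; they are assumptions that may break as the page changes.

```python
import requests
from bs4 import BeautifulSoup

HEADERS = {
    # Amazon tends to block the default requests user agent, so send a
    # browser-like one (an illustrative example string).
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
    ),
}

def scrape_product(url):
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "lxml")
    # These selectors are assumptions about Amazon's page structure
    # and are likely to change over time.
    title = soup.select_one("#productTitle")
    price = soup.select_one("#priceblock_ourprice")
    return {
        "title": title.get_text(strip=True) if title else None,
        "price": price.get_text(strip=True) if price else None,
    }
```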

Best Open Source Javascript Web Scraping Tools and Frameworks in 2020

Best Open Source Javascript Web Scraping Tools and Frameworks in 2020

We will walk through open source Javascript tools and frameworks that are great for web crawling, web scraping, parsing, and extracting data.
