Web Scraping Using RegEx


Are you wondering whether you can perform web scraping using RegEx? Yes, you can. However, using RegEx is more error-prone, so dedicated parsing libraries, such as BeautifulSoup, are generally preferred.

But there is no harm in learning how to use RegEx for web scraping. It can solidify your RegEx and web scraping knowledge.

This article shows you how to use RegEx for web scraping without using any parser.

How RegEx works

Before using RegEx for web scraping, let’s be clear on the fundamentals. 

RegEx, or regular expressions, work by searching for a pattern in a string. For example, suppose you want to find emails in a string. Then the pattern could be

\S+@\S+\.\S+

Image showing the parts of email selected by the RegEx characters

Here, 

  • \S is a non-whitespace character
  • + tells that the previous character should repeat one or more times
  • @ matches the character itself
  • \. matches the period. The period is a special character, requiring a backslash to escape it. 

In short, the above pattern searches for a string that has non-whitespace characters before and after the character ‘@’, followed by a period and another set of non-whitespace characters.
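Here is a quick demonstration of the email pattern using Python’s re module; the sample text and addresses are made up for illustration.

```python
import re

# Sample text containing two made-up email addresses
text = "Contact alice@example.com or bob@mail.example.org for details."

# \S+@\S+\.\S+ : non-whitespace characters, '@', more non-whitespace
# characters, a literal period, and a final run of non-whitespace characters
emails = re.findall(r"\S+@\S+\.\S+", text)
print(emails)  # ['alice@example.com', 'bob@mail.example.org']
```

Note that this pattern is deliberately loose; it matches anything email-shaped, not only strictly valid addresses.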

Here is a RegEx cheat sheet you can use while using RegEx for web scraping.

Cheat sheet showing RegEx syntax

Data Scraped Using RegEx

This tutorial shows web scraping using regular expressions with Python. The code uses RegEx to scrape three data points of eBay products from a search results page:

  1. Name
  2. Price
  3. URL

Use the browser’s inspect tool to find the HTML source code of these data points. Right-click on a data point and click ‘Inspect’.

Browsers inspect panel showing the product URL and Name

Browsers inspect panel showing the product price

Web Scraping using RegEx: The Environment

The code in this tutorial uses three Python packages.

  1. The re module: This module enables you to use RegEx
  2. The json module: This module allows you to write the extracted data to a JSON file
  3. Python requests: This library has methods to manage HTTP requests

The re and json modules come with the Python standard library, so you don’t need to install them.

However, you must install the requests library; you can do that using pip.

pip install requests

Web Scraping using RegEx: The Code

Import the packages mentioned above; you can do that with a single code line.

import re, requests, json

Make an HTTP request using the Python requests package to the eBay search results page; the request’s response will contain the HTML source code. You can use the get() method of Python requests to make the HTTP request.

response = requests.get("https://www.ebay.com/sch/i.html?_from=R40&_trksid=p4432023.m570.l1313&_nkw=smartphones&_sacat=0")

Extract all the div elements containing the product details from the response text. From these div elements, you can then extract the name, URL, and price. The findall() method of the re module can help you find the div elements.

The findall() method takes two arguments: a pattern and a string. It checks for the pattern in the string and returns the matched values. Here, the pattern matches a string that

  • Starts with ‘<div class=”s-item__wrapper’
  • Contains ‘<span class=s-item__price’
  • Ends with ‘</div>’
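The step described above can be sketched as follows. The HTML below is a simplified, hypothetical stand-in for response.text, built only from the class names mentioned above; real eBay markup is more complex and may differ.

```python
import re

# Simplified, hypothetical HTML standing in for response.text: two product
# wrapper divs, each containing a price span
html = ('<div class="s-item__wrapper x">'
        '<span class=s-item__price>$10</span></div>'
        '<div class="s-item__wrapper y">'
        '<span class=s-item__price>$20</span></div>')

# Non-greedy .+? keeps each match within a single product div; re.DOTALL
# lets . also match newlines in case the HTML spans multiple lines
div_pattern = r'<div class="s-item__wrapper.+?<span class=s-item__price.+?</div>'
products = re.findall(div_pattern, html, re.DOTALL)
print(len(products))  # 2
```

In the tutorial’s code, you would call re.findall() with this pattern on response.text instead of the sample string.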
Extracting Name

The name will be inside a span element with the role ‘heading’.

name_pattern = r'<span role="heading.+?"><.+?>(.+?)<.+?></span>'

Extracting Price

The price will be inside a span element with the class ‘s-item__price’.

price_pattern = r'<span class=s-item__price>(.+?)</span>'

Extracting URL

The product URL will be the value of the href attribute of the product’s anchor tag.

url_pattern = r'href=(https:.+?) .+?>'

Note: The above patterns are specific to the eBay search results page. Analyze the HTML source code to determine the appropriate RegEx patterns in each project.

You can use the above patterns to extract data from each div element. Iterate through the extracted div elements, and in each iteration:

1. Extract name, price, and URL

name = re.search(name_pattern, product).group(1)
price = re.search(price_pattern, product).group(1)
url = re.search(url_pattern, product).group(1)

2. Store them in a dict and append it to a list; initialize the list, nameAndUrl, as an empty list before the loop. Here, the patterns also match strings that are not required, so use a conditional statement while appending; specifically, do not append the values if the name is ‘Shop on eBay’ or contains the character ‘<’.

if name != 'Shop on eBay' and '<' not in name:
    nameAndUrl.append(
        {
            "Name": name,
            "Price": price,
            "URL": url
        }
    )

Finally, you can save the list as a JSON file using the json module’s dump() method.

with open("regEx.json", "w", encoding="utf-8") as f:
    json.dump(nameAndUrl, f, indent=4, ensure_ascii=False)
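Putting the extraction step together, here is a self-contained sketch run against simplified, hypothetical HTML rather than real eBay markup; the real page’s structure may differ, so treat the sample string as illustrative only.

```python
import re

# Simplified, hypothetical HTML standing in for one extracted div element;
# the structure mirrors what the patterns above expect
product = ('<div class="s-item__wrapper">'
           '<a href=https://www.example.com/item/1 class=link>'
           '<span role="heading" aria-level="3"><h3>Phone A</h3></span></a>'
           '<span class=s-item__price>$99.99</span></div>')

name_pattern = r'<span role="heading.+?"><.+?>(.+?)<.+?></span>'
price_pattern = r'<span class=s-item__price>(.+?)</span>'
url_pattern = r'href=(https:.+?) .+?>'

# group(1) returns the text captured by the parenthesized group
name = re.search(name_pattern, product).group(1)
price = re.search(price_pattern, product).group(1)
url = re.search(url_pattern, product).group(1)

print(name, price, url)  # Phone A $99.99 https://www.example.com/item/1
```

Testing the patterns against a small sample like this is a quick way to debug them before running the scraper against the live page.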

Code Limitations

The code shown in this tutorial is efficient only if the HTML source code is simple and well-structured. For complex, highly nested HTML, web scraping using RegEx can become slow.

Moreover, a slight change in the HTML code can break the code. For example, a change in spacing or the order of attributes may render the code unusable even if the attributes and the tag names of the data points remain unchanged.
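The attribute-order fragility is easy to demonstrate with a toy pattern; the tag and attribute names below are made up for illustration.

```python
import re

# This pattern hard-codes the attribute order: class before id
pattern = r'<div class="item" id="p1">'

# Same tag, same attributes; only the attribute order differs in the second string
print(bool(re.search(pattern, '<div class="item" id="p1">')))  # True
print(bool(re.search(pattern, '<div id="p1" class="item">')))  # False
```

A dedicated HTML parser treats both strings as the same element, which is why parsers are more robust than RegEx for this job.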

The code does not bypass anti-scraping measures. Hence, it is not appropriate for large-scale web scraping, as the massive number of requests makes your scraper more susceptible to these measures.

Why Code Yourself? Use ScrapeHero’s Web Scraping Service

The code in this tutorial scrapes three data points from an eBay search results page, demonstrating web scraping with RegEx in Python.

However, maintaining RegEx-based code can be challenging, as slight changes to the HTML can break it. Moreover, scraping additional data points requires more complex RegEx, which can slow down the process.

Therefore, it is better to use a professional web scraping service, like ScrapeHero, for large-scale projects where scalability is important.

ScrapeHero’s web scraping service can build enterprise-grade web scrapers and crawlers according to your specifications. This way, you can focus on using the data to derive insights rather than gathering it. Contact ScrapeHero now for high-quality data.
