Web Scraping is a viable option to keep track of real estate listings available for sellers and agents. Being in possession of extracted information from real estate sites such as Zillow.com can help adjust prices of listings on your site or help you create a database for your business.
In this tutorial, we will scrape Zillow.com, an online real estate database to extract real estate listings available. This real estate scraper will extract details of property listings based on zip code.
Here are the details we will be extracting:
- Street Name
- Zip Code
- Facts and Features
- Real Estate Provider
Below is a screenshot of some of the data fields we will be extracting.
- Construct the URL of the search results page from Zillow. For example, here is the one for Boston: https://www.zillow.com/homes/02126_rb/. We have to create this URL manually to scrape results from that page.
- Download the HTML of the search results page using Python Requests. This is quite easy once you have the URL; we use Python Requests to download the entire HTML of the page.
- Parse the page using LXML. LXML lets you navigate the HTML tree structure using XPaths. We have predefined the XPaths for the details we need in the code.
- Save the data to a CSV file.
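The parse-and-save steps above can be sketched as follows. To keep the example self-contained, an inline HTML snippet stands in for a downloaded search results page; the class names and XPaths here are illustrative placeholders, not Zillow's actual markup, which changes over time.

```python
import csv
from lxml import html

# Illustrative HTML standing in for the HTML that Python Requests
# would download; real Zillow markup differs.
page = """
<div class="list-card">
  <address class="list-card-addr">12 Main St, Boston, MA 02126</address>
  <div class="list-card-price">$450,000</div>
</div>
"""

# Parse the HTML tree with LXML
tree = html.fromstring(page)
listings = []
for card in tree.xpath('//div[@class="list-card"]'):
    # These XPaths are hypothetical stand-ins for the predefined ones
    address = card.xpath('.//address/text()')
    price = card.xpath('.//div[@class="list-card-price"]/text()')
    listings.append({
        "address": address[0] if address else None,
        "price": price[0] if price else None,
    })

# Save the extracted rows to a CSV file
with open("properties-sample.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["address", "price"])
    writer.writeheader()
    writer.writerows(listings)
```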
Install Python 3 and Pip
Here is a guide to install Python 3 in Linux – http://docs.python-guide.org/en/latest/starting/install3/linux/
Mac Users can follow this guide – http://docs.python-guide.org/en/latest/starting/install3/osx/
Windows Users go here – https://www.scrapehero.com/how-to-install-python3-in-windows-10/
For this web scraping tutorial using Python 3, we will need some packages for downloading and parsing the HTML. Below are the package requirements:
- PIP to install the following packages in Python (https://pip.pypa.io/en/stable/installing/ )
- Python Requests, to make requests and download the HTML content of the pages ( http://docs.python-requests.org/en/master/user/install/).
- Python LXML, for parsing the HTML Tree Structure using Xpaths ( Learn how to install that here – http://lxml.de/installation.html )
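Once Pip is available, both packages can be installed from the command line:

```shell
# Install the two third-party packages used in this tutorial
pip3 install requests lxml
```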
You can download the code from the link here https://gist.github.com/scrapehero/5f51f344d68cf2c022eb2d23a2f1cf95 if the embed does not work.
If you would like the code in Python 2.7, you can check out the link at https://gist.github.com/scrapehero/2dd61d0f1bd5222a4c9ae76465990cbd
Running the Scraper
Assume the script is named zillow.py. If you run the script in a command prompt or terminal with the -h flag, it prints the usage:
    usage: zillow.py [-h] zipcode sort

    positional arguments:
      zipcode
      sort        available sort orders are:
                  newest : Latest property details
                  cheapest : Properties with cheapest price

    optional arguments:
      -h, --help  show this help message and exit
You must run the script using Python with arguments for zip code and sort order. The sort argument accepts 'newest' or 'cheapest'. For example, to find the newest properties up for sale in Boston, Massachusetts, we would run the script as:
python3 zillow.py 02126 newest
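The usage text above suggests argument handling along these lines; this is a sketch of how the two positional arguments could be wired up with argparse, not necessarily how the actual script implements it. The hard-coded argument list stands in for a real command line.

```python
import argparse

# A sketch of the argument parsing the usage text implies
parser = argparse.ArgumentParser(description="Zillow search results scraper")
parser.add_argument("zipcode", help="zip code to search, e.g. 02126")
parser.add_argument(
    "sort",
    choices=["newest", "cheapest"],
    help="newest: latest property details, cheapest: lowest-priced listings",
)

# Simulate: python3 zillow.py 02126 newest
args = parser.parse_args(["02126", "newest"])
# Each sort option would map to a different Zillow results URL
print(args.zipcode, args.sort)
```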
This will create a CSV file called properties-02126.csv in the same folder as the script. Here is some sample data extracted from Zillow.com for the command above.
This script should be able to scrape real estate listings for most zip codes provided. If you would like to scrape the details of thousands of pages, you should read Scalable do-it-yourself scraping – How to build and run scrapers on a large scale and How to prevent getting blacklisted while scraping.
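When scaling up, two basic politeness measures help avoid transient failures: sending a browser-like User-Agent header and retrying failed requests with backoff. Here is a minimal sketch using a Requests session; the User-Agent string is a hypothetical example, and the retry settings are illustrative, not a guarantee against blocking.

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
session.headers.update({
    # Hypothetical browser-like User-Agent string
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0",
    "Accept-Language": "en-US,en;q=0.9",
})

# Retry transient failures (rate limits, server errors) with backoff
retries = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503])
session.mount("https://", HTTPAdapter(max_retries=retries))

# session.get(url) would now send the headers above and retry automatically
```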
If you need some professional help with scraping complex websites, you can fill out the form below.
Disclaimer: Any code provided in our tutorials is for illustration and learning purposes only. We are not responsible for how it is used and assume no liability for any detrimental usage of the source code. The mere presence of this code on our site does not imply that we encourage scraping or scrape the websites referenced in the code and accompanying tutorial. The tutorials only help illustrate the technique of programming web scrapers for popular internet websites. We are not obligated to provide any support for the code, however, if you add your questions in the comments section, we may periodically address them.