Web scraping real estate data is a practical way to keep track of the listings available to sellers and agents. Data extracted from real estate sites such as Zillow.com can help you adjust the prices of listings on your own site or build a database for your business. In this tutorial, we will scrape Zillow data using Python and show you how to extract real estate listings based on zip code.
Here are the steps to scrape Zillow (a short fetch sketch follows this list):
- Construct the URL of the search results page from Zillow. Example – https://www.zillow.com/homes/02126_rb/
- Download HTML of the search result page using Python Requests.
- Parse the page using LXML – LXML lets you navigate the HTML Tree Structure using Xpaths.
- Save the data to a CSV file.
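As a rough illustration of the first two steps, here is a minimal sketch that builds the search results URL for a zip code and downloads the page with Python Requests. The header values are just examples of browser-like headers and are not necessarily the ones used in the full script linked below.

import requests

def get_search_page(zipcode):
    # Step 1: construct the search results URL for the zip code
    url = "https://www.zillow.com/homes/{}_rb/".format(zipcode)
    # Step 2: download the HTML with browser-like headers so the request
    # looks less like a bot (illustrative values only)
    headers = {
        "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                       "AppleWebKit/537.36 (KHTML, like Gecko) "
                       "Chrome/61.0.3163.100 Safari/537.36"),
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8",
    }
    response = requests.get(url, headers=headers)
    print("status code received:", response.status_code)
    return response.text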
We will be extracting the following data from Zillow:
- Title
- Street Name
- City
- State
- Zip Code
- Price
- Facts and Features
- Real Estate Provider
- URL
Read More – Learn to scrape Yelp business data
Required Tools
Install Python 3 and Pip
Here is a guide to install Python 3 in Linux – http://docs.python-guide.org/en/latest/starting/install3/linux/
Mac Users can follow this guide – http://docs.python-guide.org/en/latest/starting/install3/osx/
Windows Users go here – https://www.scrapehero.com/how-to-install-python3-in-windows-10/
Packages
For this web scraping tutorial using Python 3, we will need some packages for downloading and parsing the HTML. Below are the package requirements (an example install command follows the list):
- PIP to install the following packages in Python (https://pip.pypa.io/en/stable/installing/ )
- Python Requests, to make requests and download the HTML content of the pages ( http://docs.python-requests.org/en/master/user/install/).
- Python LXML, for parsing the HTML Tree Structure using Xpaths ( Learn how to install that here – http://lxml.de/installation.html )
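With pip set up, both packages can usually be installed with a single command (depending on your installation the command may be pip or pip3):

pip3 install requests lxml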
The Code
We first have to construct the search results page URL to scrape results from it. For example, here is the URL for the Boston zip code 02126 – https://www.zillow.com/homes/02126_rb/.
The full code is available in this gist – https://gist.github.com/scrapehero/5f51f344d68cf2c022eb2d23a2f1cf95
If you would like the Python 2.7 version of the code to scrape Zillow listings, it is available at https://gist.github.com/scrapehero/2dd61d0f1bd5222a4c9ae76465990cbd
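To give a sense of how the parsing and saving steps fit together, here is a simplified sketch of steps 3 and 4 using LXML and the csv module. The XPaths below are placeholders for illustration – Zillow changes its markup frequently, so inspect the live page and adjust the selectors; the full script in the gist also extracts more fields, such as city, state, facts and features, and the real estate provider.

from lxml import html
import csv

def parse_properties(page_html):
    # Step 3: build an HTML tree and navigate it with XPaths
    parser = html.fromstring(page_html)
    properties = []
    # NOTE: placeholder selectors - adjust them to match Zillow's current markup
    for card in parser.xpath('//article[contains(@class, "list-card")]'):
        price = card.xpath('.//div[contains(@class, "list-card-price")]//text()')
        address = card.xpath('.//address//text()')
        url = card.xpath('.//a[contains(@class, "list-card-link")]/@href')
        properties.append({
            "address": " ".join(address).strip(),
            "price": " ".join(price).strip(),
            "url": url[0] if url else "",
        })
    return properties

def save_to_csv(properties, zipcode):
    # Step 4: write the parsed rows to properties-<zipcode>.csv
    with open("properties-{}.csv".format(zipcode), "w", newline="", encoding="utf-8") as fp:
        writer = csv.DictWriter(fp, fieldnames=["address", "price", "url"])
        writer.writeheader()
        writer.writerows(properties)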
Running the Zillow Scraper
Assume the script is named zillow.py. If you run it from a command prompt or terminal with the -h flag, it prints the usage information:

usage: zillow.py [-h] zipcode sort

positional arguments:
  zipcode
  sort        available sort orders are :
              newest : Latest property details
              cheapest : Properties with cheapest price

optional arguments:
  -h, --help  show this help message and exit
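That help output suggests an argparse setup roughly like the one below (a sketch inferred from the usage text, not copied verbatim from the gist):

import argparse

argparser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)
argparser.add_argument('zipcode', help='zip code to search')
argparser.add_argument('sort', help='''available sort orders are :
newest : Latest property details
cheapest : Properties with cheapest price''')
args = argparser.parse_args()
zipcode, sort = args.zipcode, args.sort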
Run the Zillow scraper using Python with arguments for zip code and sort order. The sort argument accepts 'newest' or 'cheapest'. As an example, to find the newest properties up for sale in Boston, Massachusetts, we would run the script as:
python3 zillow.py 02126 newest
This will create a CSV file called properties-02126.csv in the same folder as the script, containing the data fields listed above for each listing returned by the command.
Known Limitations
This Zillow scraper should be able to scrape real estate listings for most zip codes. To learn more about real estate data management, you can go through this post – Real Estate and Quality Challenges
If you would like to scrape the details of thousands of Zillow listings, you should read Scalable do-it-yourself scraping – How to build and run scrapers on a large scale and How to prevent getting blacklisted while scraping.
If you need professional help with web scraping real estate data, you can fill out the form below.
Disclaimer: Any code provided in our tutorials is for illustration and learning purposes only. We are not responsible for how it is used and assume no liability for any detrimental usage of the source code. The mere presence of this code on our site does not imply that we encourage scraping or scrape the websites referenced in the code and accompanying tutorial. The tutorials only help illustrate the technique of programming web scrapers for popular internet websites. We are not obligated to provide any support for the code, however, if you add your questions in the comments section, we may periodically address them.
Responses
Excellent tutorial! Is it possible to have the code search individual properties? For example if I had a list of 600 addresses in excel, could we perform the scrape in this article on each of these properties? Can it be modified to use the specific address as a dynamic user input into the website and then automatically return each property in order?
Thank you in advance for your time.
Hi Alex,
Thanks for your feedback.
The tutorials provide a starting point for your own specific use cases.
Yes, you should be able to do what you need by using Excel macros and building a scraping API endpoint.
Thanks
Great script! And very easy to use. However, it only reaches the first page of the Zillow results — in other words, 25 houses. Is there a way to get it to pull all houses available in the zipcode searched?
Hi Howard – that is an exercise for the reader – a key point of the tutorial. Have fun coding!
Hey Howard, you should look into selenium. That will enable you to create a script that will allow you to press the button that will go to the next page and allow you to scrape more data.
Nice article. It seems Zillow will only show 500 results (25/page * 20 pages), so to fetch all the data in some super hot zip codes we might need to add filters to the search. I think price range (0-10k, 10k-20k) might be a useful attribute to narrow down the data size. The problem with price range is that it hugely depends on the area – if an area only has houses priced from 3m – 4m, we will go through a bunch of unnecessary requests, which increases the probability of getting blocked. What do you think is the best attribute to limit results to within 500?
Ran the script, but it gets detected and Zillow is passing back a captcha now. “Please verify you’re a human to continue”
You could use some techniques listed here to get around it – https://www.scrapehero.com/how-to-prevent-getting-blacklisted-while-scraping/
was anyone able to get around the “Please verify you’re a human to continue”?
Awesome article. When attempting to run it on the agents page it returns an empty string using multiple classes. For example, this page: https://www.zillow.com/malibu-ca/real-estate-agent-reviews/?sortBy=None&page=3&showAdvancedItems=False&regionID=12520&locationText=Malibu%20CA
I see there is another head and body loading at the end of the page – is this a trick to prevent it from being scraped? Do you have any workaround for this?
Nice scraping logic. A few tweaks and you can use the code on other real estate sites. Thanks!
If scraping a Zillow page, does the output CSV capture the property feature codes (e.g. a RESO-compliant set of listing features and descriptions)?
I just don't get it. Why is my CSV empty except for the field names?
I’m also getting empty csv
You are most probably getting blocked by Zillow. You could try some of the techniques here https://www.scrapehero.com/how-to-prevent-getting-blacklisted-while-scraping/
hi, same here, can’t get any csv output beyond the defined headers!
what to do?
thanks,
kurt
also, running on win 7, python 3.6.4
You are most probably getting blocked by Zillow. Can you please check if the zip code you searched for has results on Zillow?
We just tested the script again and it worked for 20005.
python3 zillow.py 20005 newest
Would anyone be able to assist me? I am experiencing an 'invalid syntax' error. I have the script saved in the script folder in Python. Using Windows 10, Python 3.7.0
Unfortunately, if you get such basic errors you will need to read up on python programming to be able to use this code.
You can just use Zillow APIs – they are free
I tried this code... I am not getting any error message, but there is no output.
When I run it in cmd it only shows the path:
C:\Users\Dell\Desktop\Python1>zillow.py 20005 newest
C:\Users\Dell\Desktop\Python1>
Running it again:
C:\Users\Dell\Desktop\Python1>zillow.py 20005 newest
C:\Users\Dell\Desktop\Python1>
Can you please run the script using the following command: python zillow.py 20005 newest . It works for me.
And how legal is it to scrape from Zillow that sources data from other MLSs? Also to what scale can one do this? Nationwide?
Will I be able to use this code to gather first and last names as well as phone numbers and emails?
Sure David, you can do anything with code. It is provided for free so that you can modify it to achieve your objective.
I am receiving this error:
File "C:\Users\smith\Zillow.py", line 2, in <module>
from lxml import html
File "C:\ProgramData\Anaconda3\lib\site-packages\lxml\html\__init__.py", line 54, in <module>
from .. import etree
ImportError: DLL load failed: The specified module could not be found.
I installed the lxml. Any ideas on why this is happening?
Thanks!
Sounds like an installation error – sorry, we can't help with that.
Nice job putting this together.
I was able to modify your code to do a few more cool things. I used regex to extract the Zillow property IDs from the URL. Then I registered for a ZWSID Zillow API Key and used that in conjunction with the property ID to run API queries against properties. I am able to get things like the Zestimate and rent estimate.
Brett,
Glad to see you expanding and combining scraping and the API to create a hybrid solution.
It would be great if you would publish the code (without your API key please) so the community can benefit.
We will gladly link to it.
Could you link the code?
I’m running the program, but the csv file returns blank and the command line indicates that only 200 requests were made. Do you know of a solution to this issue?
Thanks,
Vish
I’m getting the exact same
We have tried a few zip codes and it works fine. Could you please share the zip code you tried? We will look into this issue.
Looks like zillow.com is performing A/B tests on their site. We’ve updated the code accordingly.
Excellent code...
I got an issue after running "python zillow.py 20850 newest" multiple times.
First try:
python zillow.py 20850 newest
Fetching data for 20850
https://www.zillow.com/homes/for_sale/20850/0_singlestory/days_sort
status code received: 200
Traceback (most recent call last):
File “zillow.py”, line 187, in <module>
scraped_data = parse(zipcode, sort)
File “zillow.py”, line 118, in parse
response = get_response(url)
File “zillow.py”, line 69, in get_response
save_to_file(response)
File “zillow.py”, line 44, in save_to_file
fp.write(response.text)
File “C:\Program Files\Python37\lib\encodings\cp1252.py”, line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: ‘charmap’ codec can’t encode character ‘\u0100’ in position 7647: character maps to <undefined>
Second try:
Fetching data for 20850
https://www.zillow.com/homes/for_sale/20850/0_singlestory/days_sort
status code received: 200
parsing from html page
Writing data to output file
And it happens randomly!
I know it's because of encoding, but I don't know how to solve it
or where the problem is...
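For what it's worth, this cp1252 UnicodeEncodeError usually comes from writing the downloaded HTML to disk with Windows' default encoding. Opening the output file with an explicit UTF-8 encoding in the save_to_file function should avoid it; a minimal sketch (the file name here is just a placeholder):

def save_to_file(response):
    # Specify UTF-8 explicitly so Windows does not fall back to cp1252,
    # which cannot encode some characters present in the page
    with open("response.html", "w", encoding="utf-8") as fp:
        fp.write(response.text)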
Is it normal to get sent to a Captcha when running this code?
Yes John – that is likely
Anyone else having this issue when running the code?
status code received: 200
Traceback (most recent call last):
File “C:/Users/matto/PycharmProjects/Real_Estate_Scraping/zillow.py”, line 185, in <module>
scraped_data = parse(zipcode, sort)
File “C:/Users/matto/PycharmProjects/Real_Estate_Scraping/zillow.py”, line 129, in parse
return get_data_from_json(raw_json_data)
File “C:/Users/matto/PycharmProjects/Real_Estate_Scraping/zillow.py”, line 74, in get_data_from_json
cleaned_data = clean(raw_json_data).replace(‘“, “”)
AttributeError: ‘NoneType’ object has no attribute ‘replace’
I just tried this script. It looks like zillow implemented a Captcha to prevent automated harvesting of their data. Here is a snippet from the response I got:
response = get_response(url)
….function handleCaptcha(response)….
Maybe take a look at https://www.scrapehero.com/how-to-solve-simple-captchas-using-python-tesseract/
Yes, I am receiving the same error message. It appears to stem from the variable “raw_json_data” being empty. Maybe a problem with the parser.xpath() call?
I ended up installing Tesseract to handle Captchas and reran zillow.py. Still no luck.
Did anyone figure out how to do this?
Follow the advice of “Xiyu-1 commented on Mar 7” on the gist page – https://gist.github.com/scrapehero/5f51f344d68cf2c022eb2d23a2f1cf95
Here Xiyu describes how the script needs to be modified to return the results and complete creation of the csv file.
Cheers,
-diytechy