How To Scrape eBay using Python and LXML

In this article, we will show you how to scrape eBay and extract data such as the names and prices of all products listed under a brand. Web scraping eBay can help with keyword monitoring, price monitoring, brand monitoring, and pricing intelligence. Scraping eBay listings at regular intervals is a useful way to check product details and compare them with competitor sites.

Here are the steps to scrape eBay:

  1. Construct the URL of the search results page to scrape eBay.
  2. Download HTML of the search result page using Python Requests.
  3. Parse the page using LXML – LXML lets you navigate the HTML tree structure using XPaths.
  4. Save the scraped eBay product data to a CSV file.
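As a sketch of step 1, a search results URL can be built from eBay's public search endpoint. Note that the endpoint path and the _nkw query parameter here are assumptions based on eBay's visible search URLs, not part of the original tutorial:

```python
from urllib.parse import urlencode

def build_search_url(keyword):
    # eBay's search results page takes the search query in the _nkw parameter
    return 'https://www.ebay.com/sch/i.html?' + urlencode({'_nkw': keyword})

print(build_search_url('apple'))  # https://www.ebay.com/sch/i.html?_nkw=apple
```

Using urlencode also takes care of escaping spaces and special characters in multi-word keywords.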

Below is a screenshot of the data to extract using our eBay scraping tutorial.


You could also scrape details such as the number of products sold or the ratings given by consumers, but for now, we will keep this eBay scraper simple and extract only the product name, price, and URL.

Requirements for eBay Scraping

Install Python 3 and Pip

If you do not have them installed already, follow the official installation guides for Python 3 and Pip on Linux, Mac, or Windows.


For this eBay scraping tutorial using Python 3, we will need a few packages for downloading and parsing the HTML: Python Requests to download the search results page, LXML to parse the HTML and extract data with XPaths, and unicodecsv to write the extracted data to a CSV file. All three can be installed with pip (pip3 install requests lxml unicodecsv).

The Code

Since we will be monitoring prices by brand, here is the script, using Apple as the example brand:
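If the embedded script does not load, here is a minimal sketch of its download-and-parse core, consistent with the steps above. The XPath expressions (the s-item class names) are assumptions based on eBay's search-results markup, not taken from the original script, and they will need updating whenever eBay changes its HTML; this kind of markup drift is the likely cause of the "No data scraped" issue reported in the comments below.

```python
import requests
from lxml import html

def parse_listings(page_html):
    """Step 3: parse the page with LXML and pull out name, price, and URL.
    The s-item class names are assumed from eBay's markup and may change."""
    parser = html.fromstring(page_html)
    product_listings = parser.xpath('//li[contains(@class, "s-item")]')
    scraped_data = []
    for product in product_listings:
        name = product.xpath('.//*[contains(@class, "s-item__title")]//text()')
        price = product.xpath('.//span[contains(@class, "s-item__price")]//text()')
        url = product.xpath('.//a[contains(@class, "s-item__link")]/@href')
        if name and price and url:
            scraped_data.append({'name': name[0].strip(),
                                 'price': price[0].strip(),
                                 'url': url[0]})
    return scraped_data

def scrape_ebay_listings(brand):
    """Step 2: download the first page of search results for a brand."""
    url = 'https://www.ebay.com/sch/i.html?_nkw={}'.format(brand)
    # A User-Agent header makes the request look like a normal browser visit
    headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64)'}
    response = requests.get(url, headers=headers)
    return parse_listings(response.text)
```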

You can also download the code directly if the embed above does not work.

If you would like the code in Python 2.7, a separate version is available as well.

Running the Python eBay Scraper

If you run the script in a terminal or command prompt with a -h flag, it prints the usage information:

usage: [-h] brand

positional arguments:
  brand       Brand Name

optional arguments:
  -h, --help  show this help message and exit
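The help text above is what Python's standard argparse module prints for a parser with a single positional argument; a minimal sketch of that argument handling:

```python
import argparse

argparser = argparse.ArgumentParser()
argparser.add_argument('brand', help='Brand Name')

# parse_args() normally reads sys.argv; a list is passed here for illustration
args = argparser.parse_args(['apple'])
print(args.brand)  # apple
```

Running the script without the brand argument makes argparse exit with an error, which is why the brand is required on the command line.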


The brand argument can be any brand available on eBay, such as Samsung, Canon, or Dell, and the script must be run with it. For example, to find all of the products Apple currently lists on eBay, we would run the scraper like this:

python3 apple

In this article we are only scraping the product’s name, price, and URL from the first page of results, so a CSV file is enough to hold all the data. If you would like to extract details in bulk, a JSON file is preferable. You can read more about choosing a data format, just to be sure.
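As a sketch of the save step (step 4), the same rows can be written as either CSV or JSON. The original script imports the unicodecsv package (as the comments below show); on Python 3 the standard csv module handles Unicode as well, so this sketch uses the standard library. The file name follows the <brand>-ebay-scraped-data.csv pattern used in this tutorial:

```python
import csv
import json

rows = [{'name': 'Apple iPhone', 'price': '$499.00',
         'url': 'https://www.ebay.com/itm/123'}]

# Flat, page-sized extracts like this fit comfortably in a CSV file
with open('apple-ebay-scraped-data.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=['name', 'price', 'url'])
    writer.writeheader()
    writer.writerows(rows)

# For bulk or nested details, JSON is the better fit
with open('apple-ebay-scraped-data.json', 'w', encoding='utf-8') as f:
    json.dump(rows, f, indent=2)
```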

This will create a CSV file named apple-ebay-scraped-data.csv that will be in the same folder as the script. Here are some of the data extracted from eBay in a CSV file from the command above.



We would love to know how this scraper worked for you. Let us know in the comments below.

Why Scrape eBay?

eBay scraping using the code above can be useful for a number of reasons:

  1. eBay Keyword monitoring – You can easily monitor eBay for specific keywords using this tutorial.
  2. Brand monitoring – You can replace the search term in this tutorial to include a brand name and easily monitor which brand products are being sold more often on eBay.
  3. Price monitoring – eBay is one of the largest marketplaces in the world. Scraping eBay prices and comparing them with pricing data from an Amazon scraper and a Walmart scraper can help create an efficient price monitoring system.

Known Limitations to eBay Scraping

This code should be able to scrape eBay prices and details for most brands available. If you want to extract the details of thousands of products for each brand and check competitor prices periodically (on an hourly basis), then you should read Scalable do-it-yourself scraping – How to build and run scrapers on a large scale and How to prevent getting blacklisted while scraping.

Disclaimer: Any code provided in our tutorials is for illustration and learning purposes only. We are not responsible for how it is used and assume no liability for any detrimental usage of the source code. The mere presence of this code on our site does not imply that we encourage scraping or scrape the websites referenced in the code and accompanying tutorial. The tutorials only help illustrate the technique of programming web scrapers for popular internet websites. We are not obligated to provide any support for the code, however, if you add your questions in the comments section, we may periodically address them.




frdscave June 7, 2018

i m a newbie in python trying to install it steps by steps, however i m getting error msg like below

/usr/local/lib/python3.6/site-packages/requests/ RequestsDependencyWarning: urllib3 (1.23) or chardet (3.0.4) doesn’t match a supported version!
Traceback (most recent call last):
File “”, line 4, in
import unicodecsv as csv
ModuleNotFoundError: No module named ‘unicodecsv’

grateful if you can give some advice, thanks!

    Anders Henricsson July 18, 2018

    I had a similar problem. It seems that unicodecsv is not installed by default, so you’ll have to install it yourself. For example using:
    git clone
    cd python-unicodecsv/
    pip install .

      stephenl April 30, 2019

      pip install unicodecsv

Jena June 6, 2020

Hello everyone,
I have tried to run the code for Samsung (and a few other companies) and I keep on getting the same error. For Samsung, it says “Found 52,503 results for Samsung for Samsung”, however, it says “No data scraped” right below it and no csv file is made. I tried printing out the product_listings variable and it came up empty so the code never enters the “for product in product_listings:” loop. Does anyone have suggestions of what I am doing wrong?
Thank you.

Comments are closed.
