Package to help Python developers query popular search engines and scrape result titles, links and descriptions from within their code. Multiple search engines are supported (e.g. Google, Yahoo, Bing, DuckDuckGo), with planned support for others.

Programming language: Python
License: MIT License
Tags: Search, Google, Yahoo, Bing, DuckDuckGo, Scraping-tool, Search-engines
Latest version: v0.5.4



Search Engine Parser

"If it is a search engine, then it can be parsed" - some random guy



search-engine-parser is a package that lets you query popular search engines and scrape result titles, links, descriptions and more. It aims to scrape the widest possible range of search engines.

Popular Supported Engines

Popular search engines supported include:

  • Google
  • DuckDuckGo
  • GitHub
  • StackOverflow
  • Baidu
  • YouTube

View all supported engines [here.](docs/supported_engines.md)


Installation

Install from PyPI:

    # install only the core package
    pip install search-engine-parser
    # install with the `pysearch` CLI tool
    pip install "search-engine-parser[cli]"

or from master:

    pip install git+https://github.com/bisoncorps/search-engine-parser


Development

Clone the repository:

    git clone [email protected]:bisoncorps/search-engine-parser.git

Then create a virtual environment and install the required packages:

    mkvirtualenv search_engine_parser
    pip install -r requirements/dev.txt

Code Documentation

Code docs can be found on Read the Docs.

Running the tests




Usage

Query results can be scraped from popular search engines, as shown in the snippet below.

    import pprint

    from search_engine_parser.core.engines.bing import Search as BingSearch
    from search_engine_parser.core.engines.google import Search as GoogleSearch
    from search_engine_parser.core.engines.yahoo import Search as YahooSearch

    search_args = ('preaching to the choir', 1)
    gsearch = GoogleSearch()
    ysearch = YahooSearch()
    bsearch = BingSearch()
    gresults = gsearch.search(*search_args)
    yresults = ysearch.search(*search_args)
    bresults = bsearch.search(*search_args)
    a = {
        "Google": gresults,
        "Yahoo": yresults,
        "Bing": bresults,
    }

    # pretty print the result from each engine
    for engine, results in a.items():
        print(f"------------- {engine} -------------")
        for result in results:
            pprint.pprint(result)

    # print first title from google search
    print(gresults["titles"][0])
    # print 10th link from yahoo search
    print(yresults["links"][9])
    # print 6th description from bing search
    print(bresults["descriptions"][5])

    # print first result containing links, descriptions and title
    print(gresults[0])

For localization, pass the url keyword with a localized URL. The localized URL is queried and parsed with the same engine's parser:

  # Use google.de instead of google.com
  results = gsearch.search(*search_args, url="google.de")

If you need results in a specific language, pass the hl keyword with a two-letter language code:

  # Use 'it' to receive italian results
  results = gsearch.search(*search_args, hl="it")


Results are cached automatically for engine searches. You can bypass the cache by passing cache=False to the search or async_search method, or clear the engine's cache before searching:

    from search_engine_parser.core.engines.github import Search as GitHub
    github = GitHub()
    # bypass the cache
    github.search("search-engine-parser", cache=False)

    # clear cache before search
    github.clear_cache()
    github.search("search-engine-parser")
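The caching behaviour above can be sketched with a plain dictionary. This toy version is not the library's internals; it only illustrates what cache=False and clear_cache amount to:

```python
class CachedSearch:
    """Toy illustration of per-engine result caching (not the real implementation)."""

    def __init__(self):
        self._cache = {}

    def _fetch(self, query):
        # Stand-in for the real network request and HTML parsing.
        return [f"result for {query}"]

    def search(self, query, cache=True):
        if cache and query in self._cache:
            return self._cache[query]       # serve cached results
        results = self._fetch(query)
        self._cache[query] = results        # store for subsequent searches
        return results

    def clear_cache(self):
        self._cache.clear()

engine = CachedSearch()
first = engine.search("search-engine-parser")                # fetched and cached
second = engine.search("search-engine-parser")               # served from cache
fresh = engine.search("search-engine-parser", cache=False)   # bypasses the cache
```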

To use a proxy, pass the proxy details to the search function:

    from search_engine_parser.core.engines.github import Search as GitHub
    github = GitHub()
    github.search("search-engine-parser",
                  # only http proxies are supported
                  proxy="http://localhost:8080",  # example proxy address
                  proxy_auth=("username", "password"))


search-engine-parser supports async:

   results = await gsearch.async_search(*search_args)
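Note that await is only valid inside a coroutine, so the call above must be driven by an event loop. A minimal sketch of that pattern, with a stub coroutine standing in for an engine's async_search (the stub and its return value are illustrative, not the library's API):

```python
import asyncio

async def fake_async_search(query, page):
    """Stub standing in for an engine's async_search coroutine."""
    await asyncio.sleep(0)  # yield control, as a real network call would
    return [f"{query} (page {page})"]

async def main():
    # In real code this would be: results = await gsearch.async_search(*search_args)
    return await fake_async_search("preaching to the choir", 1)

results = asyncio.run(main())
print(results)
```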


Searching returns a SearchResult object holding all results for the query:

  >>> results = gsearch.search("preaching to the choir", 1)
  >>> results
  <search_engine_parser.core.base.SearchResult object at 0x7f907426a280>
  # the object supports retrieving individual results by iteration or by type (links, descriptions, titles)
  >>> results[0] # returns the first <SearchItem>
  >>> results[0]["description"] # gets the description of the first item
  >>> results[0]["link"] # gets the link of the first item
  >>> results["descriptions"] # returns a list of all descriptions from all results

It can be iterated like a normal list to return individual SearchItems.
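The dual indexing shown above — an integer for one SearchItem, a string key for a list gathered across all results — can be illustrated with a toy container. This is a sketch of the behaviour, not the library's actual SearchResult class:

```python
class ToySearchResult:
    """Illustrates SearchResult-style indexing: int -> one item, str key -> list."""

    def __init__(self, items):
        self._items = items  # list of dicts with "title", "link", "description"

    def __getitem__(self, key):
        if isinstance(key, int):
            return self._items[key]                       # results[0]
        singular = key.rstrip("s")                        # "descriptions" -> "description"
        return [item[singular] for item in self._items]   # results["descriptions"]

    def __iter__(self):
        # allows iterating like a normal list of items
        return iter(self._items)

results = ToySearchResult([
    {"title": "t1", "link": "l1", "description": "d1"},
    {"title": "t2", "link": "l2", "description": "d2"},
])
print(results[0]["description"])  # first item's description
print(results["links"])           # all links
```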

Command line

search-engine-parser ships with a CLI tool called pysearch. For example:

    pysearch --engine bing --type descriptions "Preaching to the choir"

Result:

    'Preaching to the choir' originated in the USA in the 1970s. It is a variant of the earlier 'preaching to the converted', which dates from England in the late 1800s and has the same meaning. Origin - the full story 'Preaching to the choir' (also sometimes spelled quire) is of US origin.


usage: pysearch [-h] [-V] [-e ENGINE] [--show-summary] [-u URL] [-p PAGE]
                [-t TYPE] [-cc] [-r RANK] [--proxy PROXY]
                [--proxy-user PROXY_USER] [--proxy-password PROXY_PASSWORD]
                query


positional arguments:
  query                 Query string to search engine for

optional arguments:
  -h, --help            show this help message and exit
  -V, --version         show program's version number and exit
  -e ENGINE, --engine ENGINE
                        Engine to use for parsing the query e.g google, yahoo,
                        bing,duckduckgo (default: google)
  --show-summary        Shows the summary of an engine
  -u URL, --url URL     A custom link to use as base url for search e.g
  -p PAGE, --page PAGE  Page of the result to return details for (default: 1)
  -t TYPE, --type TYPE  Type of detail to return i.e. full, links, descriptions
                        or titles (default: full)
  -cc, --clear-cache    Clear cache of engine before searching
  -r RANK, --rank RANK  ID of Detail to return e.g 5 (default: 0)
  --proxy PROXY         Proxy address to make use of
  --proxy-user PROXY_USER
                        Proxy user to make use of
  --proxy-password PROXY_PASSWORD
                        Proxy password to make use of

Code of Conduct

Make sure to adhere to the [code of conduct](CODE_OF_CONDUCT.md) at all times.


Contributing

Before making any contributions, please read the [contribution guide](CONTRIBUTING.md).

License (MIT)

This project is licensed under the [MIT License](LICENSE), which allows very broad use for both academic and commercial purposes.

Contributors ✨

Thanks goes to these wonderful people (emoji key):

<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section --> <!-- prettier-ignore-start --> <!-- markdownlint-disable --> Ed Luff💻 Diretnan Domnan🚇 ⚠️ 🔧 💻 MeNsaaH🚇 ⚠️ 🔧 💻 Aditya Pal⚠️ 💻 📖 Avinash Reddy🐛 David Onuh💻 ⚠️ Panagiotis Simakis💻 ⚠️ reiarthur💻 Ashokkumar TA💻 Andreas Teuber💻 mi096684🐛 devajithvs💻 Geg Zakaryan💻 🐛 Hakan Boğan🐛 NicKoehler🐛 💻 ChrisLin🐛 💻 Pietro💻 🐛

<!-- markdownlint-restore --> <!-- prettier-ignore-end -->


This project follows the all-contributors specification. Contributions of any kind welcome!
