Web Scraping

Performing Efficient Broad Crawls with the AOPIC Algorithm

This article explains how the Adaptive On-Line Page Importance Computation (AOPIC) algorithm works. AOPIC is useful for performing efficient broad crawls of large slices of the internet. The key idea behind the algorithm is that pages are crawled based on a continuously improving estimate of page importance. This effectively allows the user of the algorithm to spend the bulk of their limited bandwidth on the most important pages that their crawler encounters.
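
To make the idea concrete, here is a minimal, in-memory sketch of the cash-and-history bookkeeping that drives AOPIC. The toy `graph` dictionary stands in for actually fetching pages and extracting links, and the full article covers refinements such as the virtual page used to handle dangling links.

```python
# Toy sketch of the AOPIC idea: every page starts with the same amount of
# "cash"; crawling a page banks its cash into a history counter and spreads
# the cash evenly over its outlinks. Pages that accumulate cash quickly get
# crawled sooner, and (history + cash) serves as an improving importance
# estimate.

def aopic_crawl(graph, start_pages, iterations=1000):
    """`graph` maps each URL to a list of outlink URLs (a stand-in for fetching)."""
    cash = {url: 1.0 / len(start_pages) for url in start_pages}
    history = {url: 0.0 for url in start_pages}

    for _ in range(iterations):
        # Greedily crawl the page that currently holds the most cash.
        url = max(cash, key=cash.get)
        outlinks = graph.get(url, [])

        history[url] = history.get(url, 0.0) + cash[url]
        share = cash[url] / len(outlinks) if outlinks else 0.0
        cash[url] = 0.0

        # Distribute the crawled page's cash to the pages it links to,
        # discovering new pages along the way.
        for link in outlinks:
            cash[link] = cash.get(link, 0.0) + share
            history.setdefault(link, 0.0)

    total = sum(history.values()) or 1.0
    return {url: (history[url] + cash.get(url, 0.0)) / total for url in history}
```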

Continue reading

User-Agents — Generating random user agents using Google Analytics and CircleCI

If you’re in a hurry, you can head straight to the user-agents repository for installation and usage instructions! While web scraping, it’s usually a good idea to create traffic patterns consistent with those that a human user would produce. This of course means being respectful and rate-limiting requests, but it often also means concealing the fact that the requests have been automated. Doing so helps you avoid getting blocked by overzealous DDoS protection services, and allows you to successfully scrape the data that you’re interested in while keeping site operators happy.
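
As a rough illustration (a generic sketch, not the user-agents package’s actual API), producing human-consistent traffic mostly comes down to two things: spacing requests out and sending realistic headers.

```python
import random
import time

import requests  # any HTTP client works; requests is assumed here

# A small hand-picked pool of realistic User-Agent strings; the user-agents
# package generates these dynamically with realistic frequencies instead.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/16.5 Safari/605.1.15",
]

def polite_get(url, min_delay=2.0, max_delay=5.0):
    """Fetch a URL with a randomized User-Agent and a randomized delay."""
    time.sleep(random.uniform(min_delay, max_delay))  # rate-limit requests
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers, timeout=30)
```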

Continue reading

How F5Bot Slurps All of Reddit

In this guest post, Lewis Van Winkle talks about F5Bot, a free service that emails you when selected keywords are mentioned on Reddit, Hacker News, or Lobsters. He explains in detail how F5Bot is able to process millions of comments and posts from Reddit every day on a single VPS. You can check out more of Lewis Van Winkle’s writing at codeplea.com, and his open source contributions at github.com/codeplea.

Continue reading

A Slack Community for Developers to Discuss Web Scraping

Want to link up with other developers interested in web scraping? Join the Web Scrapers Slack channel to chat about Selenium, Puppeteer, Scrapy, or anything else related to web scraping. The last few years have been a very exciting time for web scraping. In that period, both Chrome and Firefox have introduced memory-efficient headless modes which allow them to run on Linux servers without requiring X11 and a virtual framebuffer like xvfb.
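
To give a sense of how little ceremony that involves now, here is a minimal Selenium sketch for driving headless Chrome from Python (it assumes Chrome and a matching chromedriver are installed).

```python
from selenium import webdriver

# Headless Chrome needs no X11 server or xvfb wrapper on a Linux box.
options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # or "--headless" on older Chrome builds
options.add_argument("--no-sandbox")    # commonly needed inside containers

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")
    print(driver.title)
finally:
    driver.quit()
```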

Continue reading

Analyzing One Million robots.txt Files

The idea for this article actually started as a joke. We do a lot of web scraping here at Intoli and we deal with robots.txt files, overzealous IP bans, and all that jazz on a daily basis. A while back, I was running into some issues with a site that had a robots.txt file which was completely inconsistent with their banning policies, and I suggested that we should do an article on analyzing robots.txt files.
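
For a single site, Python’s standard library already makes it easy to read a robots.txt file and see what it allows; the article looks at what a million of these files say in aggregate. A minimal check (the user agent and URLs below are placeholders):

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse a site's robots.txt, then check whether a given path is
# allowed for a particular crawler.
parser = RobotFileParser("https://example.com/robots.txt")
parser.read()

print(parser.can_fetch("MyCrawler/1.0", "https://example.com/some/page"))
print(parser.crawl_delay("MyCrawler/1.0"))  # None if no Crawl-delay directive
```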

Continue reading

Scraping User-Submitted Reviews from the Steam Store

This article was originally published as a guest post on ScrapingHub’s blog. ScrapingHub is the company that wrote Scrapy, which this article is about, so read on to see why they liked it! The Steam game store is home to more than ten thousand games and just shy of four million user-submitted reviews. While all kinds of Steam data are available either through official APIs or other bulk-downloadable data dumps, I could not find a way to download the full review dataset.
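
As a taste of the Scrapy side of things, a review spider starts out looking roughly like this; the start URL and CSS selectors below are illustrative placeholders rather than the ones used in the article.

```python
import scrapy

class SteamReviewsSpider(scrapy.Spider):
    """Skeleton spider; the real start URLs and selectors live in the article."""
    name = "steam_reviews"
    start_urls = ["https://store.steampowered.com/app/316790/"]  # example product page

    def parse(self, response):
        # Illustrative selectors; the real review markup is more involved.
        for review in response.css(".review_box"):
            yield {
                "recommended": review.css(".title::text").get(),
                "text": " ".join(review.css(".content::text").getall()).strip(),
            }
```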

Continue reading

Finding Pareto Optimal Blogs on Hacker News

I’ve been doing a lot of technical writing recently and, with that experience, I’ve grown to more deeply appreciate the writing of others. It’s easy to take the effort behind an article for granted when you’ve grown accustomed to there being new high-quality content posted every day on Hacker News and Twitter. The truth is that a really good article can take days or more to put together and it isn’t easy to write even one article that really takes off, let alone a steady stream of them.

Continue reading

Scraping and Parsing Sitemaps in Bash

A wise man once said that sitemaps are the window into a website’s soul, and I’m not inclined to disagree. Without a sitemap, a website is just a labyrinthine web of links between pages. It’s certainly possible to scrape sites by crawling those links, but things become much easier with a sitemap that lays out a site’s content in clear and simple terms. Sites which provide sitemaps are quite literally asking to be scraped; it’s a direct indication that the site operators intend for bots to visit the pages listed in the sitemaps.
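
The article does the scraping and parsing with shell tools, but the core task is simple enough to sketch in a few lines of Python: fetch the sitemap and pull out every <loc> entry (the sitemap URL below is a placeholder).

```python
import urllib.request
import xml.etree.ElementTree as ET

# Sitemap files use the http://www.sitemaps.org/schemas/sitemap/0.9 namespace.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_url):
    """Fetch a sitemap (or sitemap index) and return every <loc> URL it lists."""
    with urllib.request.urlopen(sitemap_url) as response:
        tree = ET.parse(response)
    return [loc.text.strip() for loc in tree.iter(f"{SITEMAP_NS}loc")]

# Hypothetical example; substitute a real sitemap URL.
for url in sitemap_urls("https://example.com/sitemap.xml"):
    print(url)
```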

Continue reading