Introduction:
Some call it “screen scraping”, others “web scraping”, and still others simply “data extraction” from documents that may be online or offline. These terms are often used interchangeably, but they have slightly different meanings, and so the tactics used for extracting the data differ slightly as well. For the course of this discussion, we will concentrate on “web scraping”, which means extracting data from websites (i.e., HTML documents). Later on, we will take a peek at how to extract data from Microsoft Word and Excel documents and PDF files.
Here, we will explore web scraping using Python. When firms hire a Python developer from us, we often have to perform such tasks, so we have hands-on experience with them, and here I will share some of that expertise.
Web Scraping using urllib, urllib2, and BeautifulSoup:
Let us dive straight into the topic of “web scraping”. There are multiple ways of doing this in Python, and we will take a look at each of them briefly, but our main focus will be on the following modules: urllib, its half-brother urllib2, and BeautifulSoup (3.2.1). You may use BeautifulSoup version 4 too; it is called bs4, and I don't like the name for obvious reasons, so I will stick with the 3.2.1 version. It is not very different from bs4, so if you want to use that, please go ahead; almost all of the code we write against version 3.2.1 will work with it.
Now, the actual workhorse here is the urllib2 module: it makes the HTTP(S) connections, retrieves the content (be it HTML or a Word or XLS or PDF file), and stores it in a variable in your program. In the case of an MS Word, Excel, or PDF document, it will download the entire document as a whole, and you will store it somewhere; you then need a Python module to extract the content from it, and later in this discussion we will see how to do that.
Let’s get to the code now. For now, you might not understand every line of it, but don’t worry, by the end of this blog I will explain everything I put down in the code and you will be able to grasp it.
[Note: the code here is taken from my repository of Python and Perl code at https://github.com/supmit13, which contains code I have written over the past few years, so you can go ahead and have a look at the other code in that repo. Not all of it is production-grade, since it is mostly code I write to test stuff, but some of it is in production. However, I own the code and it is in the public domain, so you are free to make use of it, even without informing me. Just fork a repo if you want and you can start doing your stuff; however, please do not change anything in the original versions.]
Listing #1
import os, sys, re, time # we might not use all of them in our code here, but it is a good practice to have the basic tools imported at the outset
import urllib, urllib2 # urllib2 does the heavy lifting of making the HTTP(S) requests; urllib helps with things like URL-encoding form data
import gzip, StringIO # used a little later to decode gzip-encoded responses
# Let’s start with something simple, like yellowpages.
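# The original listing defines the target URL and a dictionary of HTTP request headers at this
# point; the values below are representative placeholders, not the exact ones used in the original code.
url = "https://www.yellowpages.com/"  # placeholder - the original code targeted a specific yellowpages search URL
httpHeaders = {
    'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0',
    'Accept' : 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language' : 'en-US,en;q=0.5',
    'Accept-Encoding' : 'gzip, deflate',
    'Connection' : 'keep-alive',
}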
You don’t need to specify this many header parameters, but it is a good thing to do if you are trying to make any log parser on the server side think that you are not a bot. Please note that the ‘User-Agent’ key has a value that is a fingerprint of the Firefox browser, so a person looking at the server logs will not be able to say that your program was a bot. Of course, there are other measures that you need to take to fool the guy on the server side, but more on that as we move on.
pageRequest = urllib2.Request(url, None, httpHeaders)
# In the above line, we create an HTTP request object that we are going to use to scrape the yellowpages website.
# The second parameter is the data we want to send the website in the request, and since
# we are making a GET request, we are not sending any explicit data. So it is None for now.
# Later, we will see how to make POST requests, and in those requests, we will see what we
# send in the ‘data’ param.
try:
    pageResponse = urllib2.urlopen(pageRequest)
except:
    print "Error retrieving page pointed to by URL: %s Error: %s"%(url, sys.exc_info()[1].__str__())
# Now, at this point we should be able to see what the content of the pageResponse variable
# is, but it will be in a gzip-encoded format. Still, let us see what it contains.
The raw content read from pageResponse will look like gzip-encoded gibberish rather than readable HTML. For our purposes, this is garbage and we can’t do anything useful with it as-is. So, in order to get the actual content (in English, not gibberish), we need to add the following code:
Listing #2
# Remember we imported StringIO – here is where we use it.
pageContent = pageResponse.read()  # the raw bytes of the response (gzip-encoded in this case)
responseStream = StringIO.StringIO(pageContent)
try:
    gzipper = gzip.GzipFile(fileobj=responseStream)
    decodedContent = gzipper.read()
except:
    # Maybe this isn't gzipped content after all....
    decodedContent = pageContent
print decodedContent
# This will print the contents in English
Now, this is the type of code (the decoding part) we need on a routine basis. Hence it is best to create a function out of it.
Listing #3
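The function itself is not reproduced in the text; based on the decoding code above, a minimal sketch of decodeGzippedContent would be:

def decodeGzippedContent(encodedContent):
    responseStream = StringIO.StringIO(encodedContent)
    decodedContent = ""
    try:
        gzipper = gzip.GzipFile(fileobj=responseStream)
        decodedContent = gzipper.read()
    except:
        # Maybe this isn't gzipped content after all....
        decodedContent = encodedContent
    return decodedContent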
Next time we encounter this scenario, we will call decodeGzippedContent with the encoded content as a param. Having done this, let us now concentrate on the extraction of data. We will need BeautifulSoup here.
BeautifulSoup has enormous data-extraction capabilities, and it would not be possible to show them all within the narrow scope of this document. For example, it can extract data based on a tag name and an attribute of that tag. Let us suppose you want to get all the data contained in all “div” tags in an HTML document, but you want to consider only those div tags that have their “class” attribute set to “news”. To do that, you could write the following code:
Listing #4
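The listing is not reproduced here; with BeautifulSoup 3.2.1 (the variable names are my own), a minimal sketch of this kind of extraction would be:

from BeautifulSoup import BeautifulSoup  # for bs4, this would be: from bs4 import BeautifulSoup

soup = BeautifulSoup(decodedContent)  # decodedContent is the HTML we obtained earlier
newsDivs = soup.findAll("div", attrs={"class" : "news"})
for div in newsDivs:
    divText = ''.join(div.findAll(text=True))  # concatenate all the text nodes inside the div
    print divText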
To know more about BeautifulSoup, I would suggest you take a look at their documentation, which is exhaustive; only that can give you a precise idea of how handy it is in your daily scraping tasks. The link to the docs is: https://www.crummy.com/software/BeautifulSoup/bs4/doc/
Please note that this is the version 4 documentation, but if you look around a bit you will find the docs for version 3.2.1 as well.
Now let us get back to urllib and urllib2 for a more in-depth discussion. As you might have noticed, in our last example we used the “urlopen” method of the urllib2 module to make a request. “urlopen” happens to use the default opener object, and you can replace that opener with one that suits your requirements. For example, in the following code, I am going to declare a class called “NoRedirectHandler” which intercepts the redirects that the default opener would otherwise follow automatically. Sometimes the automatic behaviour is convenient (you don't need to worry about a page redirection since it happens silently), but in certain conditions you might want to know what exactly is being done when the redirect happens. So here is the code below, and I will walk you through it.
Listing #5
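The full listing is not reproduced here; a minimal sketch of the same idea (the target URL and headers are placeholders) looks like this:

import urllib, urllib2

class NoRedirectHandler(urllib2.HTTPRedirectHandler):
    # Instead of silently following the redirect, hand the 3xx response back to the caller.
    def http_error_302(self, req, fp, code, msg, headers):
        infourl = urllib.addinfourl(fp, headers, req.get_full_url())
        infourl.status = code
        infourl.code = code
        return infourl
    http_error_301 = http_error_303 = http_error_307 = http_error_302

opener = urllib2.build_opener(NoRedirectHandler())
pageRequest = urllib2.Request("http://www.example.com/somepage", None, {'User-Agent' : 'Mozilla/5.0'})
pageResponse = opener.open(pageRequest)
if pageResponse.code in (301, 302, 303, 307):
    print "Redirected to:", pageResponse.headers.get('Location')
else:
    print pageResponse.read()[:200]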
The point to note above is the definition of the NoRedirectHandler and its use in pulling data. Using the “urllib2.build_opener()” method, you can create a custom opener object for yourself and use it. Yes, you need to write quite a bit of code for that, but on the positive side it gives you flexibility and frees you from the cut-and-dried behaviour that might not suit your purpose.
Other Libraries:
Python has quite a few libraries that allow you to do the same things in much less code, but that comes with a fair number of constraints. First, you need to learn the library and figure out what it does and how it does it, and then implement your solution accordingly. There are frameworks like “Scrapy”, and libraries like “requests” and “mechanize”, that handle a lot of the plumbing for you. Again, I can only give you some pointers on basic usage of these modules/frameworks, and I am also going to list a few advantages and disadvantages of each.
Let's look at Scrapy first: This is a framework that was designed explicitly for web scraping.
Listing #6
for link in link_extractor.extract_links(response):
    # Do something with the links.
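Listing #6 only shows the loop over the extracted links; a small self-contained spider along the same lines (the spider name and start URL are hypothetical) might look like this:

import scrapy
from scrapy.linkextractors import LinkExtractor

class LinkSpider(scrapy.Spider):
    name = "linkspider"                        # hypothetical spider name
    start_urls = ["https://www.example.com/"]  # placeholder start page

    def parse(self, response):
        link_extractor = LinkExtractor()
        for link in link_extractor.extract_links(response):
            # Do something with the links - here we just yield the URL and the anchor text.
            yield {"url" : link.url, "text" : link.text}

You would typically run such a spider with “scrapy runspider <filename>.py”.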
As you can see, Scrapy hides the technical details and provides the programmer with an infrastructure where she/he can focus on the functionality of the application under consideration. Again, however, the downside is that Scrapy doesn't take care of everything you need. For instance, if you are extracting links from an HTML document and you want to go 5 levels below the target page to extract links, Scrapy will efficiently do that for you. However, if you want to know which links came from which level, Scrapy plays dumb: it doesn't keep track of the layer at which a certain link was found. And that can be a serious problem if you are trying to assign a score reflecting the relevance of a link, where links on the top pages carry more weight than those on the lower levels.
You may also use the “requests” library, which is very easy to use:
Listing #7
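The listing is not reproduced here; a minimal sketch using requests (the URL is a placeholder) would be something like:

import requests  # pip install requests

response = requests.get("https://www.example.com/", headers={'User-Agent' : 'Mozilla/5.0'})
print response.status_code
print response.text[:200]  # requests handles gzip decoding and character encoding for us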
# You can actually make any type of request using this module – like POST, PUT, HEAD...
Now, let us go back to urllib2 one more time and see how POST requests are made. A POST request typically carries data, and that data can be quite large. Uploading it might take time, so you might want to increase the timeout on your request so that all the data gets through before the connection is dropped.
Let's get into the code:
Listing #8
pageRequest = urllib2.Request(requestUrl, encodedPageFormData, httpHeaders)
The variable “encodedPageFormData” contains data in the following format:
param1=val1&param2=val2&param3=val3....
Now, what you can do is first collect your data and place it in a dictionary, like so:
d = {'param1' : 'val1', 'param2' : 'val2', 'param3': 'val3'...}
In order to get the data in the above-mentioned format, you can use urllib.urlencode(d):
encodedPageFormData = urllib.urlencode(d)
The subsequent code is similar to the code we explained above.
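Putting the pieces together, a sketch of the whole POST flow (the URL, field names, and values are placeholders) looks like this:

import sys, urllib, urllib2

requestUrl = "https://www.example.com/search"  # placeholder URL
d = {'param1' : 'val1', 'param2' : 'val2', 'param3' : 'val3'}
encodedPageFormData = urllib.urlencode(d)  # 'param1=val1&param2=val2&param3=val3'
httpHeaders = {'User-Agent' : 'Mozilla/5.0', 'Content-Type' : 'application/x-www-form-urlencoded'}
pageRequest = urllib2.Request(requestUrl, encodedPageFormData, httpHeaders)
try:
    pageResponse = urllib2.urlopen(pageRequest, timeout=120)  # a generous timeout for a large upload
    pageContent = pageResponse.read()
except:
    print "Error making POST request to %s: %s"%(requestUrl, sys.exc_info()[1].__str__())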
Scraping newspaper articles and their metadata (like the date on which the article appeared in the newspaper, the name of the author, his/her occupation, etc.) can be achieved using a module called “newspaper”. You can easily install it using “pip install newspaper”. Once that is done, you may write the following code to extract the content.
Listing #9
from newspaper import Article
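The rest of the listing is not shown; continuing from that import, a minimal sketch (the article URL is a placeholder) is:

articleUrl = "https://www.example.com/news/some-article.html"  # placeholder URL
article = Article(articleUrl)
article.download()
article.parse()
print article.title
print article.authors       # a list of author names
print article.publish_date
print article.text[:500]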
Scraping Sites that are behind an Authentication Mechanism:
To scrape content from a website that is behind an authentication mechanism (meaning you have to log in using your username and password), you need to send the login URL your username, your password, and any cookies the server has set. In such a case, you need to keep track of the cookie(s) sent back to the scraper/bot with each HTTP response and include them in your next HTTP request. Given below is a piece of code that demonstrates it. It logs into a Facebook account, but you need to put in the appropriate credentials as well as install the dependencies. It will not run as-is, since it is part of a larger project which has a layered architecture, but I think it will be sufficient to give you an idea as to how this thing is done:
Listing #10
Specifically, take a look at the lines starting at line #21. The username and the password are being populated, and then a POST request is made with some other parameters that Facebook wants. This code was written about three years back and hence it is somewhat outdated, but if you want to log into a website through an authentication mechanism, this is the way to go.
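Since that listing cannot be reproduced in full, here is a generic sketch of the cookie-handling pattern with urllib2 and cookielib; the login URL and form field names below are placeholders, not Facebook's actual ones:

import urllib, urllib2, cookielib

cookieJar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookieJar))
loginUrl = "https://www.example.com/login"  # placeholder login URL
formData = urllib.urlencode({'username' : 'myuser', 'password' : 'mypassword'})  # placeholder field names
httpHeaders = {'User-Agent' : 'Mozilla/5.0'}
loginRequest = urllib2.Request(loginUrl, formData, httpHeaders)
loginResponse = opener.open(loginRequest)
# The CookieJar now holds the session cookie(s); any further request made through
# the same opener sends them back automatically.
accountPage = opener.open("https://www.example.com/account")  # placeholder authenticated page
print accountPage.read()[:200]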
Scraping Documents other than HTML:
Retrieving documents other than HTML is pretty easy, as they tend to sit at a specific link on a page. For example, if you have to download the PDF file at “http://www.pdf995.com/samples/pdf.pdf”, the code below does it. Just take a look.
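A minimal sketch of that download code (the output file name is my own choice) would be:

import urllib2

pdfUrl = "http://www.pdf995.com/samples/pdf.pdf"
pdfRequest = urllib2.Request(pdfUrl, None, {'User-Agent' : 'Mozilla/5.0'})
pdfResponse = urllib2.urlopen(pdfRequest)
pdfContent = pdfResponse.read()
outFile = open("sample.pdf", "wb")  # write the raw bytes to a local file
outFile.write(pdfContent)
outFile.close()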
Similar logic applies to MS Word documents and MS Excel sheets.
Now, the main issue with these documents is parsing them and getting the data out. While MS Word and MS Excel have reasonably good Python modules for parsing and data extraction (like xlrd for xls(x) files, and python-docx and a few others for Word documents), data extraction from PDF can be very tricky; it depends on the specific document and its format. Libraries such as PDFMiner and PyPDF2 are capable of doing this (ReportLab, in contrast, is geared towards generating PDFs rather than reading them), but the process can get complex. Hence PDF data extraction needs to be looked into on a case-by-case basis.
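As a quick illustration of the Excel side, a sketch using xlrd (the file name is a placeholder) could look like this:

import xlrd  # pip install xlrd

workbook = xlrd.open_workbook("downloaded.xls")  # placeholder file name
sheet = workbook.sheet_by_index(0)
for rownum in range(sheet.nrows):
    print sheet.row_values(rownum)  # each row comes back as a list of cell values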
How Do You Scrape the Darknet:
Using the methods above, you would not be able to crawl and scrape content from the dark web. You would either need the Tor browser (to do it partially manually) or you would need socks5h proxies. Below is a sample of the code that performs this task. A lot of it had to be stripped out, as it was part of a classified project, but I am sure you will get the idea of how it is done.
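The stripped-down listing is not shown here, but as an illustration of the socks5h-proxy approach (assuming Tor is running locally on its default SOCKS port 9050 and requests is installed with SOCKS support), the general idea is:

import requests  # pip install requests[socks]

proxies = {
    'http'  : 'socks5h://127.0.0.1:9050',  # socks5h resolves DNS through the proxy,
    'https' : 'socks5h://127.0.0.1:9050',  # which is essential for .onion addresses
}
onionUrl = "http://exampleonionaddressxyz.onion/"  # placeholder .onion address
response = requests.get(onionUrl, proxies=proxies, timeout=60)
print response.status_code
print response.text[:200]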
Conclusion:
Well, we have not been able to cover every area of web scraping and web crawling, but this is a part of data mining, and data mining is a big topic. I have tried to explain the concepts discussed above to the best of my abilities, but I am sure that in some places I have fallen short. Anyway, if you have any questions for me, please write to the address mentioned in the blog and I should get a notification. Thanks for your patience.