I have previously written a post on scraping Google with Python. As I am starting to write more Golang, I thought I should write the same tutorial using Golang to scrape Google. Why not scrape Google search results using Google’s home-grown programming language?
Imports & Setup
package googlescraper

import (
	"fmt"
	"net/http"
	"strings"

	"github.com/PuerkitoBio/goquery"
)
This example will only be using one external dependency. While it is possible to parse HTML using Go’s standard library, doing so involves writing a lot of code. So instead we are going to use the very popular Golang library Goquery, which supports jQuery-style selection of HTML elements.
Defining What To Return
We can get a variety of different information from Google, but we typically want to return a result’s position, URL, title and description. In Golang it makes sense to create a struct representing the data we want our scraper to gather.
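A struct along these lines captures those four fields; the field names here are one reasonable choice, matching the positional literal used in the parser further down:

// GoogleResult represents a single organic search result.
type GoogleResult struct {
	ResultRank  int
	ResultURL   string
	ResultTitle string
	ResultDesc  string
}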

// googleDomains maps two-letter country codes to localised Google search URLs.
var googleDomains = map[string]string{
	"com": "https://www.google.com/search?q=",
	"uk":  "https://www.google.co.uk/search?q=",
	"ru":  "https://www.google.ru/search?q=",
}
This will allow us to pass a two-letter country code to our scraping function and scrape results from that particular version of Google. Using the different base domains in combination with a language code allows us to scrape results as they appear in the country in question.
Building Google Search URLs
To scrape Google results we have to make a request to Google using a URL containing our search parameters. Google allows you to pass a number of different parameters to a search query. In this particular example we are going to write a function that will generate a search URL with our desired parameters.
func buildGoogleUrl(searchTerm string, countryCode string, languageCode string) string {
	searchTerm = strings.TrimSpace(searchTerm)
	searchTerm = strings.Replace(searchTerm, " ", "+", -1)
	if googleBase, found := googleDomains[countryCode]; found {
		return fmt.Sprintf("%s%s&num=100&hl=%s", googleBase, searchTerm, languageCode)
	} else {
		return fmt.Sprintf("%s%s&num=100&hl=%s", googleDomains["com"], searchTerm, languageCode)
	}
}
We then write a function that allows us to build a Google search URL. The function takes in three arguments, all of the string type, and returns a URL, also a string. We first trim the search term to remove any leading or trailing white-space. We then replace any of the remaining spaces with ‘+’; the -1 in this line of code means that we replace every single remaining instance of white-space with a plus.
We then look up the country code passed as an argument against the map we defined earlier. If the countryCode is found in our map, we use the respective URL from the map, otherwise we use the default ‘.com’ Google site. We then use the format package’s Sprintf function to format a string made up of our base URL, our search term and language code. We don’t check the validity of the language code, which is something we might want to do if we were writing a more fully featured scraper.
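For example, given the domain map above, a call such as the following produces a UK results URL:

buildGoogleUrl("web scraping", "uk", "en")
// "https://www.google.co.uk/search?q=web+scraping&num=100&hl=en"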
func googleRequest(searchURL string) (*http.Response, error) {
	baseClient := &http.Client{}
	req, _ := http.NewRequest("GET", searchURL, nil)
	req.Header.Set("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36")
	res, err := baseClient.Do(req)
	if err != nil {
		return nil, err
	} else {
		return res, nil
	}
}
We can now write a function to make a request. Go has a very easy to use and powerful “net/http” library which makes it relatively easy to make HTTP requests. We first get a client to make our request with. We then start building a new HTTP request which will eventually be executed using our client. This allows us to set custom headers to be sent with our request. In this instance we are replicating the User-Agent header of a real browser.
We then execute this request, with the client’s Do method returning us a response and error. If something went wrong with the request we return a nil value and the error. Otherwise we simply return the response object and a nil value to show that we did not encounter an error.
Parsing the Result
Now we move onto parsing the result of the request. Compared with Python, the options when it comes to HTML parsing libraries are not as robust, with nothing coming close to the ease of use of BeautifulSoup. In this example, we are going to use the very popular Goquery package which uses jQuery-style selectors to allow users to extract data from HTML documents.
func googleResultParser(response *http.Response) ([]GoogleResult, error) {
	doc, err := goquery.NewDocumentFromResponse(response)
	if err != nil {
		return nil, err
	}
	results := []GoogleResult{}
	sel := doc.Find("div.g")
	rank := 1
	for i := range sel.Nodes {
		item := sel.Eq(i)
		linkTag := item.Find("a")
		link, _ := linkTag.Attr("href")
		titleTag := item.Find("h3.r")
		descTag := item.Find("span.st")
		desc := descTag.Text()
		title := titleTag.Text()
		if link != "" && link != "#" {
			result := GoogleResult{
				rank,
				link,
				title,
				desc,
			}
			results = append(results, result)
			rank += 1
		}
	}
	return results, nil
}
We generate a goquery document from our response, and if we encounter any errors we simply return the error and a nil value object. We then create an empty slice of Google results which we will eventually append results to. On a Google results page, each organic result can be found in a ‘div’ block with the class of ‘g’. So we can simply use the jQuery selector “div.g” to pick out all of the organic links.
We then loop through each of these found ‘div’ tags, finding the link and its href attribute, as well as extracting the title and meta description information. Providing the link isn’t an empty string or a navigational reference, we then create a GoogleResult struct holding our information. This can then be appended to the slice of structs which we defined earlier. Finally, we increment the rank so we can tell the order in which the results appeared on the page.
Wrapping It All Up
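The main program below relies on a GoogleScrape helper that chains the three functions together; a minimal version, consistent with how it is called, looks like this:

func GoogleScrape(searchTerm string, countryCode string, languageCode string) ([]GoogleResult, error) {
	googleURL := buildGoogleUrl(searchTerm, countryCode, languageCode)
	res, err := googleRequest(googleURL)
	if err != nil {
		return nil, err
	}
	return googleResultParser(res)
}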
package main

import (
	"fmt"
	"time"

	"./googlescraper"
)

var keywords = []string{"edmund martin", "python programming", "web scraping"}

func main() {
	for _, keyword := range keywords {
		res, _ := googlescraper.GoogleScrape(keyword, "uk", "en")
		fmt.Println(keyword)
		for _, item := range res {
			fmt.Println(item)
		}
		time.Sleep(time.Second * 30)
	}
}
The above program makes use of our GoogleScrape function by working through a list of keywords and scraping search results. After each scrape we wait a total of 30 seconds, which should help us avoid being banned. Should we want to scrape a larger set of keywords, we would want to randomise our User-Agent and change up the proxy we were using with each request. Otherwise we are very likely to run into a Google captcha which would prevent us from gathering any results.
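As a sketch of that idea, something along these lines would do; the agent strings and proxy address are placeholders rather than part of the original script, and it assumes the math/rand and net/url packages are imported:

// userAgents is a placeholder pool of browser User-Agent strings to rotate through.
var userAgents = []string{
	"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36",
	"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36",
}

// randomUserAgent picks one of the agents above at random,
// to be set on each request in place of the fixed header.
func randomUserAgent() string {
	return userAgents[rand.Intn(len(userAgents))]
}

// proxiedClient returns an http.Client that routes requests through the
// given proxy, e.g. "http://127.0.0.1:8080", for use inside googleRequest.
func proxiedClient(proxyAddr string) (*http.Client, error) {
	proxyURL, err := url.Parse(proxyAddr)
	if err != nil {
		return nil, err
	}
	return &http.Client{Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)}}, nil
}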
The full Google scraping script can be found here. Feel free to play with it and think about some of the additional functionality that could be added. You might for instance want to scrape the first few pages of Google, or pass in a custom number of results to be returned by the script.
If you’re here, you probably already know what web scraping is. But on the off chance that you just happened to stumble upon this article, let’s start with a quick refresher on web scraping, and then we’ll move on to goquery.
Web Scraping – a quick introduction
Web scraping is the automated extraction of human-readable data from a website. The data of interest is gathered and copied into a central local database for later retrieval or analysis. The Go standard library can fetch and parse HTML web pages on its own, but many websites employ measures to deter scraping, since scraping can cause a denial-of-service, incur bandwidth costs for you or the website provider, overload log files, or otherwise stress computing resources.
However, web scraping techniques such as DOM parsing, computer vision and NLP can simulate human browsing of web page content.
GoQuery is a library created by Martin Angers and brings a syntax and a set of features similar to jQuery to the Go language.
jQuery is a fast, small, and feature-rich JavaScript library. It makes things like HTML document traversal and manipulation, event handling, animation, and Ajax much simpler with an easy-to-use API that works across a multitude of browsers.
– jquery

GoQuery makes it easier to parse HTML websites than the default net/html package, using DOM (Document Object Model) parsing.
Installing goquery
Let’s download the package using “go get“.
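go get github.com/PuerkitoBio/goquery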
A concise manual can be brought up by using the “go doc goquery” command.
GoLang Web Scraping using goquery
Create a new .go document in your preferred IDE or text editor. Mine’s titled “goquery_program.go”, and you may choose to do the same:
We’ll begin by importing json and goquery, along with ‘log‘ to log any errors. We create a struct called Article with the Title, URL, and Category as metadata of the article.
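A minimal sketch of that setup; the field names and the exact import list are assumptions based on the description, and json plus a few of these imports are only used in later steps of the program:

package main

import (
	"encoding/json" // used once the collected articles are serialised
	"fmt"
	"io"
	"log"
	"net/http"
	"os"

	"github.com/PuerkitoBio/goquery"
)

// Article holds the metadata we want for each post.
type Article struct {
	Title    string
	URL      string
	Category string
}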
Within the function main(), dispatch a GET client request to the URL journaldev.com for scraping the HTML.
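A sketch of that request using the standard library client; the snippets that follow all continue inside main():

func main() {
	// Send a GET request for the page we want to scrape.
	resp, err := http.Get("https://www.journaldev.com")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// ... the steps below continue here ...
}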
We have already fetched the full HTML source code from the website. We can dump it to our terminal using the “os” package.
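One way to do that, assuming io.Copy, whose byte count matches the output quoted below:

	// Copy the raw HTML straight to the terminal; io.Copy reports bytes written.
	n, err := io.Copy(os.Stdout, resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Number of bytes copied to STDOUT:", n)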
This will output the whole HTML file along with all tags in the terminal. I’m working on Linux Ubuntu 20.04, so the output display may vary with your system.
The output also included a secondary print statement, along with a notification that the page was optimized by LiteSpeed Cache:
Number of bytes copied to STDOUT: 151402
Now, let’s feed this response into goquery through a reader:
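A sketch of that step; note that resp.Body can only be read once, so skip the terminal dump above (or make a fresh request) before parsing:

	// Parse the response body into a goquery document.
	doc, err := goquery.NewDocumentFromReader(resp.Body)
	if err != nil {
		log.Fatal(err)
	}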
Now we need to use the Find() function, which takes in a tag, and chain Each() onto the result. The Each function takes a callback with an argument i int and the selection for the specified tag. On clicking “inspect” on the JournalDev website, I saw that my content was in <p> tags. So I defined my Find with only the name of the tag:
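A minimal version of that loop; the “next” marker ties in with the debugging note below:

	// Walk every <p> tag, printing its index and its text.
	doc.Find("p").Each(func(i int, s *goquery.Selection) {
		fmt.Printf("%d: %s\n", i, s.Text())
		fmt.Println("next") // debug marker between paragraphs
	})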
- The “fmt” library has been used to print the text.
- The “next” was just to check if the output was being received (for debugging), but I think it looks good with the final output.
- The “%d” and “%s” are string format specifiers for Printf.
Web Scraping Example Output
The best thing about coding is the satisfaction when your code outputs exactly what you need, and I think this was to my utmost satisfaction:
Golang Web Scraping Xpath
I tried to keep this article as generalised as possible when dealing with websites. This method should work for you no matter what website you’re trying to parse!
With that, I will leave you…until next time.