Can we predict the crawling of the Google-Bot?

Log file analysis gives website owners deep insight into how Google and other search engines crawl their pages. How often does a bot visit a page? Which pages are rarely or never visited by the bots? For which pages does the bot get errors? All these questions and more can be answered by a log file analysis.

Unfortunately, a log file analysis is often extremely time-consuming: especially in large organizations, very few employees have access to the log files. And once you have managed to access this treasure trove of data, evaluating the usually huge files can still cause headaches.

To collect our logs we used log-hero.com, a tool that stores the logs directly in Google Analytics. This lets us query the data there via the API and, wherever the analysis calls for it, compare it directly with the user data from the main account.

Get the Bot Data from Google Analytics

To get to know the paths the Google crawler takes through our site, we first need the data provided through Log Hero. We can easily access the data through an API call to Google Analytics. We like to use the rga package, but you may prefer googleAnalyticsR. Both deliver the data in the same format, so it’s mostly a matter of personal preference. It is very important though to use the rga package from https://github.com/skardhamar/rga and not its capital-letter cousin RGA.
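
A minimal sketch of such a call with rga could look like the following; the view ID and date range are placeholders for your own Log Hero property:

```r
# devtools::install_github("skardhamar/rga")
library(rga)

# Authenticate once; this creates an object called "ga" in the workspace
rga.open(instance = "ga")

# Pull the crawler hits recorded by Log Hero (placeholder view ID and dates)
bot_data <- ga$getData(
  ids        = "ga:12345678",          # hypothetical Log Hero view ID
  start.date = "2018-01-01",
  end.date   = "2018-03-31",
  metrics    = "ga:pageviews",
  dimensions = "ga:pagePath,ga:date",
  batch      = TRUE                    # fetch more than 10,000 rows if needed
)

head(bot_data)
```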


Follow the Googlebot

We do some data wrangling to put the steps of the crawler into a format that is accepted by the clickstream package. Now we know the paths the crawler is taking and can look for structure within these paths. A natural model to apply here is a Markov chain, which is easily fitted and plotted.
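
As a sketch with the clickstream package (the three crawl paths below are made-up stand-ins for the paths extracted from the Log Hero data):

```r
library(clickstream)

# Each line is one crawl path of the bot, pages separated by commas --
# the format readClickstreams() expects
tmp <- tempfile(fileext = ".csv")
writeLines(c(
  "/ohrenschmalz/,/otoskop/,/testsieger/",
  "/ohren-reinigen/ohrenkerzen/,/ohrenschmalz/,/reinigungsanleitung/",
  "/ohrenschmalz/,/ohren-reinigen/ohrenreiniger/,/otoskop/"
), tmp)

crawl_paths <- readClickstreams(tmp, header = FALSE)

# Fit a first-order Markov chain on the crawl paths
mc <- fitMarkovChain(crawl_paths, order = 1)
mc          # transition probabilities between pages
plot(mc)    # transition graph
```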

Estimating the hits of a single page

Now, if we add a new page or otherwise want to predict how often a single page will be visited by the crawler, we need a prediction model. For this, we will use four sources of data:

  1. The Loghero data – Where was the crawler in the past and how often?

  2. The standard Google Analytics Data – Where and how long do human users stay?

  3. Screaming Frog page data – Meta data of each page (text ratio, word count, size…)

  4. Screaming Frog link data – (Unique) inlinks and outlinks of the page

So let us gather all this data. We will be using the googleAnalyticsR package this time.
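
A sketch of how gathering these four sources could look; the view IDs, the date range and the Screaming Frog file names are placeholders for your own setup:

```r
library(googleAnalyticsR)

ga_auth()                                    # interactive OAuth login

dates <- c("2018-01-01", "2018-03-31")       # placeholder date range

# 1. Log Hero view: crawler hits per page (placeholder view ID)
bot_hits <- google_analytics(viewId = 12345678,
                             date_range = dates,
                             metrics    = "pageviews",
                             dimensions = "pagePath")

# 2. Main view: behaviour of human users (placeholder view ID)
user_data <- google_analytics(viewId = 87654321,
                              date_range = dates,
                              metrics    = c("pageviews", "avgTimeOnPage", "bounceRate"),
                              dimensions = "pagePath")

# 3. + 4. Screaming Frog exports; file names and columns depend on your
# Screaming Frog version and export settings
page_meta <- read.csv("internal_all.csv", stringsAsFactors = FALSE)
links     <- read.csv("all_inlinks.csv",  stringsAsFactors = FALSE)
```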

To merge all our data together we have to do some basic data wrangling.
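
For instance, something along these lines with dplyr, assuming the (hypothetical) column names from the sketch above:

```r
library(dplyr)

# Column names (pagePath, Address, Source, Destination, pageviews, ...) are
# assumptions; adjust them to your actual exports.

# Screaming Frog stores full URLs; reduce them to paths so they join on pagePath
page_meta <- page_meta %>%
  mutate(pagePath = sub("^https?://[^/]+", "", Address))

# Count (unique) inlinks per page from the link export
inlink_counts <- links %>%
  mutate(pagePath = sub("^https?://[^/]+", "", Destination)) %>%
  group_by(pagePath) %>%
  summarise(inlinks = n(), unique_inlinks = n_distinct(Source))

page_data <- bot_hits %>%
  rename(crawler_visits = pageviews) %>%
  left_join(user_data,     by = "pagePath") %>%
  left_join(page_meta,     by = "pagePath") %>%
  left_join(inlink_counts, by = "pagePath") %>%
  mutate_if(is.numeric, ~ replace(., is.na(.), 0))
```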

We will now use two basic models, which we will train on a subset of the pages and test on a held-out test set. First, we will use a random forest for a basic regression, ignoring the fact that the visit counts are discrete but enjoying the non-linearity. In the second model, we will use a generalized linear model with a Poisson link, thus incorporating the fact that we are predicting a count variable but losing the non-linearity in the process.
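
A sketch of the two models, assuming the merged page_data frame from above with crawler_visits as the target and a few hypothetical feature columns:

```r
library(randomForest)

set.seed(42)

# Train/test split
idx   <- sample(nrow(page_data), size = floor(0.7 * nrow(page_data)))
train <- page_data[idx, ]
test  <- page_data[-idx, ]

# Feature names are assumptions based on the merged data sketched above
features <- crawler_visits ~ pageviews + avgTimeOnPage + Word.Count +
  Text.Ratio + Size + unique_inlinks

# Model 1: random forest regression (non-linear, ignores the count nature)
rf_fit  <- randomForest(features, data = train, ntree = 500)

# Model 2: Poisson GLM (respects the count nature, but linear in the predictors)
glm_fit <- glm(features, data = train, family = poisson(link = "log"))

# Predict crawler visits for pages the models have never seen
test$pred_rf  <- predict(rf_fit,  newdata = test)
test$pred_glm <- predict(glm_fit, newdata = test, type = "response")
```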

We have trained the models and predicted the number of crawler visits for pages the models have never seen. Let’s compare the predicted values with the actual ones, first as raw counts and then as ranks:

Crawler visits per page (actual vs. predicted counts):

Page                                            Actual  Random forest  GLM
/ohrenschmalz/                                     197            126    5
/ohren-reinigen/ohrenkerzen/                       195             74    5
/ohren-reinigen/ohren-reinigen-bei-haustieren/     127            120    6
/ohren-reinigen/ohrenreiniger/                      79             72    4
/otoskop/                                           69             41    4
/reinigungsanleitung/                               63             44    5
/ohren-reinigen/wattestaebchen/                     47             97    4
/testsieger/                                        38             53    2

Ranks of the pages by number of visits:

Page                                            Actual  Random forest  GLM
/ohrenschmalz/                                       1              1    2
/ohren-reinigen/ohrenkerzen/                         2              4    2
/ohren-reinigen/ohren-reinigen-bei-haustieren/       3              2    1
/ohren-reinigen/ohrenreiniger/                       4              5    5
/otoskop/                                            5              8    5
/reinigungsanleitung/                                6              7    2
/ohren-reinigen/wattestaebchen/                      7              3    5
/testsieger/                                         8              6    8

As we can see, the random forest is far more capable of dealing with the large differences in scale between the pages than the GLM. But if we only consider the rank of the pages, both models actually work pretty well. From a user perspective, the concrete number of predicted visits is not as important as knowing which pages are visited more often. The GLM, of course, also lets us take a look at the beta values:
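
For example, continuing with the hypothetical glm_fit from above:

```r
# Estimated coefficients of the Poisson GLM with standard errors and p-values
summary(glm_fit)$coefficients

# Exponentiated coefficients: multiplicative effects on the expected
# number of crawler visits per unit increase of each feature
exp(coef(glm_fit))
```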


With this information you can further tune and adjust your website and find out what the crawler is actually doing.

Further thoughts on predicting the Googlebot

This article shows the first experiments we can do with the Log Hero data for individual pages. Beyond that, we managed to produce viable proof-of-concept models for crawler behaviour prediction within a few dozen lines of code. Investing more time and thought may lead to much more sophisticated models directly suited to your needs.

The idea now is to add more and more data points to the prediction and to find out which factors are most important for the prediction: page speed, behaviour and traffic of real users, or inlinks and outlinks?

We would be very happy if you shared your results with us, ran similar analyses (or even the same ones with the code above), or gave us further tips on potentially relevant data points.
