We would like to add our voice to those who have been pleading with Google for some time to give advertisers the ability to differentiate bids by search partner on Google’s syndication network. Industry leaders such as George Michie have in the past presented data showing that traffic from the Google search engine itself is of significantly higher quality than traffic from search partners. Here we present some of our own data confirming that the traffic we get via search partners is (with some exceptions) of lower quality than the traffic we get directly from Google’s search engine. We also show that there are geographical differences in traffic value, and that the value of brand and non-brand traffic can differ considerably across search partners. We do not dispute that traffic from the syndication partners adds value; we simply want the ability to pay the right price for that traffic based on its inherent value.
The proportion of Google search traffic that comes from syndication partners varies quite a bit from client to client. On average, we see about 20-25% of traffic coming from search partners for US clients, while the figure tends to be closer to 10% for Australasian clients. Data for collections of US and Australasian clients show that search partner traffic is generally of considerably lower quality. In figures 1 and 2 we sort referring domains by click volume across a set of US and Australasian clients, respectively. We then calculate the ratio of each domain’s average conversion rate to the client’s overall average conversion rate. This comparative conversion rate data reveals the differences between Google and other referring domains. The color of the bar label for each domain distinguishes three groups: red (conversion rate at least 10% below the overall conversion rate), green (conversion rate at least 10% above it) and black (conversion rate within 10% of it).
Figure 1: Relative traffic value by search partner for a selection of US clients
Figure 2: Relative traffic value by search partner for a selection of AUS clients
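To make the bucketing behind the figures concrete, here is a minimal Python sketch of the calculation. The domain names and click/conversion counts are invented for illustration; the real analysis was run on client account data.

```python
# Hypothetical per-domain click and conversion counts (illustrative only).
domains = {
    "google.com": {"clicks": 120_000, "conversions": 4_800},
    "aol.com":    {"clicks": 15_000,  "conversions": 390},
    "ask.com":    {"clicks": 9_000,   "conversions": 210},
}

total_clicks = sum(d["clicks"] for d in domains.values())
total_convs = sum(d["conversions"] for d in domains.values())
overall_cr = total_convs / total_clicks  # overall average conversion rate

def relative_value(clicks, conversions):
    """Ratio of a domain's conversion rate to the overall conversion rate."""
    return (conversions / clicks) / overall_cr

def colour(ratio):
    """Bar-label colour: red/green if at least 10% below/above overall, else black."""
    if ratio <= 0.9:
        return "red"
    if ratio >= 1.1:
        return "green"
    return "black"

# Sort by click volume, as in the figures.
for name, d in sorted(domains.items(), key=lambda kv: -kv[1]["clicks"]):
    r = relative_value(d["clicks"], d["conversions"])
    print(f"{name}: relative CR {r:.2f} ({colour(r)})")
```

The same three-way split drives the label colours in figures 1 and 2.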
In both the US and Australasia, most search partners bring in traffic that is well below Google.com traffic in quality. There are some differences in relative traffic value between the two regions, notably for Amazon traffic. Out of interest, we repeated the above analysis after first excluding all brand traffic. The results show that the relative traffic value of the search partners changes considerably when we do this. In figure 3 we show data for the set of Australasian clients above after excluding all brand traffic. eBay is an example where the exclusion of brand traffic has increased the relative value of its traffic considerably. Volumes do become quite thin, so the inherent statistical variability should be kept in mind when drawing inferences. It is clear that the quality of traffic from most search partners is quite a bit lower than that of direct search traffic from Google, so the ability to bid differently by search partner would add quite a bit of value. AdWords advertisers can already bid differently for the Google domain and the rest of the network by splitting every campaign into a Google-only version and an exact copy targeted at Google + search partners. The bids in the Google-only campaigns are set higher because conversion rates are higher, so on Google.com only the Google-only campaigns are in play. The Google + search partner campaigns therefore effectively serve ads only on the syndication network, and their bids can be depressed accordingly. This is a rough workaround and not ideal; the ability to bid differently by domain would be much better.
Figure 3: Relative non-brand traffic value by search partner for a selection of AUS clients
In Bill Tancer’s book Click, he gives some examples of how near real-time Internet data provides a time advantage over traditional leading economic indicators. These indicators are typically only available with a time lag: the data for a particular month is generally released about halfway through the next month. I found this concept quite interesting when I read the book a year ago, but never really pursued it analytically myself until I recently discovered a nice interface for querying Google Trends data from within R, the leading freely available open-source statistical software package. The R package RGoogleTrends (developed under the Omegahat Project) provides a very useful tool for extracting and analyzing Google query data efficiently. Its documentation states that its development was inspired by a blog post by Google’s chief economist, Hal Varian, published on Google’s Research Blog. There, the authors illustrate some simple forecasting methods and encourage readers to undertake their own analyses; by their own admission it is possible to build more sophisticated forecasting methods. We decided to take up the challenge, because at Clicks2Customers we are always keen for an analytical challenge, especially if it comes from the mighty Google.
R has a wide range of sophisticated time series packages, which we decided to put to the test to see whether incorporating query data can indeed improve the estimation and forecasting of leading economic indicators. In this post we focus on the monthly home sales data released at the end of each month by the US Census Bureau and the US Department of Housing and Urban Development, which was also used in Google’s study. To make our results comparable to Google’s, we use the same January 2004 to July 2008 time window.
Our aims are two-fold:
- Verify that a more sophisticated time series modeling approach improves accuracy compared to Google’s relatively simple models
- Verify that the inclusion of query data in models improves the accuracy of estimates
In figures 1 and 2 we show the raw and seasonally adjusted home sales data downloaded from the US Census Bureau. As in the Google study, we start our modeling process with the seasonally adjusted sales figures. This aids comparison of our results with Google’s, although the seasonal component could easily be modeled directly. Google Trends provides an index of the volume of Google queries by geographic location and category. The query index of a search term reflects the query volume for that term in a given geographical region, divided by the total number of queries in that region at a point in time. This index is then normalized relative to January 1, 2004, so the index at a later date reflects a percentage deviation from January 1, 2004. Google Trends data is also available at the category and sub-category level. Figure 3 shows the search index data for the ‘Real Estate’ category and 5 of its sub-categories: Real Estate Agencies, Home Financing, Home Inspections & Appraisal, Property Management, and Rental Listings & Referrals.
Figure 1: Raw home sales data
Figure 2: Seasonally adjusted home sales data
Figure 3: Google query volumes for the Real Estate category and 5 of its sub-categories
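The index normalization described above can be sketched in a few lines. This is an illustrative Python reconstruction of the definition, with invented query counts; the real index is computed by Google internally.

```python
# Sketch of the Google Trends index: the share of queries for a term,
# normalized against the base-period (January 1, 2004) share.
# All query counts below are made up for demonstration.

weekly_term_queries  = [1_000, 1_150, 980, 1_400]            # queries for the term
weekly_total_queries = [500_000, 520_000, 490_000, 560_000]  # all queries in region

# Share of the term within all queries at each point in time.
shares = [t / total for t, total in zip(weekly_term_queries, weekly_total_queries)]
base = shares[0]  # the January 1, 2004 share

# Percentage deviation from the base period.
index = [100 * (s / base - 1) for s in shares]
```

Dividing by the total query volume removes overall growth in Google usage, so the index tracks relative interest in the term rather than raw volume.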
The Google study fits simple autoregressive models using standard linear model fitting functions. A closer investigation of these models shows that they do not adequately capture the correlation structure in the data. We will instead follow a more classical time series approach based on autoregressive integrated moving average (ARIMA) models. We first model the home sales data on its own, in order to establish a performance benchmark; thereafter, we incorporate query data to test whether its inclusion improves the prediction of home sales. We evaluate the different models by making a series of one-month-ahead predictions and computing the prediction error, measured by the mean absolute error (MAE) as defined in the Google study. Each forecast uses only the information available at the time the forecast is made, which is one week into the month in question.
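The walk-forward evaluation can be sketched as follows. This is a minimal Python illustration of the scheme (our actual analysis was done in R); the naive last-value forecaster and the sales figures are placeholders standing in for the real models and data.

```python
def rolling_one_step_mae(series, forecast, start):
    """Walk forward through the series: at each month t >= start, forecast
    using only the history up to t-1, then compare against the actual value.
    Returns the mean absolute error as a percentage of the actual values."""
    errors = []
    for t in range(start, len(series)):
        history = series[:t]       # information available at forecast time
        pred = forecast(history)   # one-month-ahead prediction
        errors.append(abs(series[t] - pred) / series[t])
    return 100 * sum(errors) / len(errors)

# A trivial baseline forecaster: predict next month equals last month.
naive = lambda history: history[-1]

sales = [100, 104, 99, 103, 101, 98, 102]  # made-up monthly sales figures
mae = rolling_one_step_mae(sales, naive, start=3)
```

Each model in the comparison is scored by exactly this kind of loop, refitted at every step on the data available at that point.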
The simplest time series model, and the one closest to the null model (Model 0) presented by Google, is an ARIMA(1,1,0) model. The difference is that our model takes a lag-1 difference of the log-transformed data to reduce it to a stationary series, which is a necessary prerequisite. This model provides a reasonable fit to the data and gives a prediction error of 6.03%, lower than the 6.91% of Google’s null model. There is some suggestion in the data that a higher-order autoregressive model may provide a better fit, and we found that an ARIMA(7,2,0) model does result in an improved fit and a significantly reduced prediction error of 4.04%. This model already outperforms Google’s more advanced model (Model 1), which incorporates query data and house prices and has a prediction error of 6.08%. Next we take it up a notch by incorporating the Google query data described above and fitting a multivariate time series model, using the query data from the first week of each month. We experimented with different combinations of the query indices and found that the Property Management index gives the lowest prediction error, 3.7%. The model we fit is a vector autoregressive model with a lag of 3, using the R package dse. The monthly one-step-ahead prediction errors for the above models are plotted in figure 4.
Figure 4: One-step-ahead prediction errors for US home sales data
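For readers curious about the mechanics of the ARIMA(1,1,0) step, here is a stripped-down Python sketch: log-transform, lag-1 difference, a hand-rolled least-squares AR(1) fit on the differences, and a one-step-ahead forecast. The data and the estimator are illustrative only; the models in this post were fitted with R’s time series routines.

```python
import math

# Made-up monthly sales series (the real data is the seasonally adjusted
# home sales series from the US Census Bureau).
sales = [950, 930, 900, 880, 870, 840, 830, 810]

# Difference the logged series once to get a (roughly) stationary series.
logs = [math.log(s) for s in sales]
diffs = [b - a for a, b in zip(logs, logs[1:])]  # lag-1 differences

# Least-squares AR(1) coefficient: phi = sum(x_t * x_{t-1}) / sum(x_{t-1}^2).
num = sum(x1 * x0 for x0, x1 in zip(diffs, diffs[1:]))
den = sum(x0 * x0 for x0 in diffs[:-1])
phi = num / den

# One-step-ahead forecast: predict the next log-difference, then undo
# the differencing and the log transform.
next_diff = phi * diffs[-1]
forecast = math.exp(logs[-1] + next_diff)
```

The log transform stabilizes the variance, and differencing removes the trend; the AR term then models the remaining serial correlation.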
Let us return to our stated aims. It seems we have verified both: a more formal time series approach improves considerably on the models presented by Google, and the inclusion of query data can further improve one-step-ahead prediction in the case of the home sales data. Our best-performing model improves on the best model in the Google study by about 39% in terms of one-step-ahead prediction accuracy, without yet incorporating the house price data used by Google. There does seem to be potential in using Google query data to forecast economic data.
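The 39% figure follows directly from the quoted error rates:

```python
# Google's best model (Model 1) had a one-step-ahead prediction error of
# 6.08%; our VAR model with the Property Management index achieved 3.7%.
google_mae = 6.08
our_mae = 3.7

# Relative improvement in prediction error, in percent.
improvement = 100 * (google_mae - our_mae) / google_mae  # about 39%
```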
This is a single example, and a proper study would have to apply a more sophisticated modeling approach to a much wider range of data sets. The Google study also illustrates the use of Google Trends data in predicting travel visits, using data from the Hong Kong Tourism Board. We intend to perform a similar study using monthly tourism data released by Statistics South Africa in conjunction with Google Trends data for the period building up to the 2010 FIFA World Cup. This should make for an interesting case study for the use of Google Trends data. Keep an eye on our blog for the results sometime in the future!