The arrival of Google Instant created great excitement in the search community. There has been a lot of speculation about the impact that Google Instant will have on paid and natural search. Will it help or hurt the long tail? What will its impact be on conversion rates? How will click-through dynamics change on the page? Will big brands benefit from an increased traffic share? Some initial data has started surfacing at SMX East and on some blogs. The consensus so far seems to be that there are no dramatic changes yet. Here we present our own data on the paid search impact we have observed so far.
IMPRESSIONS AND AVERAGE CPC
We looked at data for a set of our US retail clients and compared metrics after the launch of Google Instant on September 8th, up to October 6th, against a similar-length period before the launch. In figure 1 we show the distribution of percentage changes in overall impressions and average CPCs at a campaign level, before and after the launch of Google Instant. We focus on a reasonably large set of campaigns where overall spend stayed fairly stable over the period. In each boxplot the horizontal black line represents the median change for each metric. An earlier post explains how to interpret these boxplots. For impressions there was a median 2.9% decrease after the launch of Google Instant, and for CPCs a median 3.2% increase. Given the variability in the data, it would be very premature to read too much into these changes. The data really suggest that nothing has changed significantly in terms of impression volumes and CPCs. A change may still come as people alter their search behavior over a longer period and become familiar with Google Instant.
Figure 1: Changes in impressions and average CPCs before and after the launch of Google Instant.
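The campaign-level comparison above boils down to computing a percentage change per campaign and summarizing with the median. A minimal sketch, using made-up numbers rather than our client data:

```python
# Hypothetical campaign-level metrics before/after the launch (illustrative only).
before = {"camp_a": {"impressions": 12000, "cpc": 0.50},
          "camp_b": {"impressions": 8000,  "cpc": 0.42},
          "camp_c": {"impressions": 15000, "cpc": 0.61}}
after  = {"camp_a": {"impressions": 11600, "cpc": 0.52},
          "camp_b": {"impressions": 7800,  "cpc": 0.43},
          "camp_c": {"impressions": 14700, "cpc": 0.63}}

def pct_change(pre, post):
    """Percentage change from the pre-launch to the post-launch period."""
    return 100.0 * (post - pre) / pre

def median(xs):
    """Median of a list of numbers (the black line in each boxplot)."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

imp_changes = [pct_change(before[c]["impressions"], after[c]["impressions"]) for c in before]
cpc_changes = [pct_change(before[c]["cpc"], after[c]["cpc"]) for c in before]
print(round(median(imp_changes), 1), round(median(cpc_changes), 1))
```

With the real campaign set, the distribution of these per-campaign changes is what the boxplots in figure 1 summarize.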
THE EFFECT ON THE KEYWORD TAIL
Many pundits argued that the arrival of Google Instant would spell the end for the long tail. A searcher may start a search with the intention of using a long-tail search term but may end up getting good enough results after typing only part of the intended query. Others argue that most people would finish typing their original query anyway. We analyzed a large set of over 20,000 paid search keywords across several US advertisers to look for any drop in the revenue contribution from tail terms. We compare the 3 weeks before Google Instant to the 3 weeks thereafter. There were 22,031 keywords with an impression in the 3 weeks before Google Instant. This dropped to 20,911 keywords after Google Instant. This is not significant, as this level of variation was not unexpected for such a large keyword set even before Google Instant. What we are more interested in is whether there has been a shift in the contribution to total revenue between head and tail terms. In figure 2 we rank all keywords into percentiles according to their traffic contribution, starting with the highest-traffic keywords on the left. As an example, we see that the top 10% of keywords in terms of traffic account for about 80% of total revenue. We can consider these the head terms. Total revenue refers to click-date revenue only, as there will be very little data with full cookie revenue. If there had been a substantial shift in traffic towards the head keywords after Google Instant, we would have expected the red line in figure 2 to shift above the black curve on the left of the plot; however, it is clear that there has been very little change in the relative contribution of head and tail terms. This suggests that we are not yet seeing a dramatic decrease in the contribution of tail terms to total revenue.
Figure 2: Revenue contribution by keyword click percentiles.
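The curve in figure 2 is a cumulative revenue share ranked by traffic. A rough sketch of the computation, on invented keyword data:

```python
# Hypothetical keyword-level data: (clicks, revenue). Illustrative numbers only.
keywords = [(1000, 5000.0), (800, 4200.0), (500, 2500.0), (120, 600.0),
            (60, 250.0), (30, 90.0), (10, 40.0), (5, 15.0), (2, 5.0), (1, 0.0)]

# Rank by traffic (clicks), highest first, then accumulate revenue share.
ranked = sorted(keywords, key=lambda kw: kw[0], reverse=True)
total_rev = sum(rev for _, rev in ranked)
cum_share = []
running = 0.0
for _, rev in ranked:
    running += rev
    cum_share.append(running / total_rev)

# Revenue share captured by the top 10% of keywords by traffic (the "head").
n = len(ranked)
top_decile_share = cum_share[max(1, n // 10) - 1]
print(round(top_decile_share, 3))
```

Plotting `cum_share` against keyword percentile for the pre- and post-Instant periods gives the black and red curves compared in figure 2.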
CHANGING CLICK-THROUGH DYNAMICS
An interesting potential impact of Google Instant is how it will affect the click-through dynamics on the first and subsequent pages of search results. Some theories suggest that relatively more clicks may occur at higher positions than before. The data for a group of our US clients in figure 3 suggest that there may have been a subtle shift in click-through rates before and after Google Instant. We plot the square root of the click-through rate in figure 3, as this gives a better visualization of the heavily skewed click-through rate distributions. We focus on a subset of keywords with at least 1,000 impressions in each of the 4-week periods before and after the launch of Google Instant. The median click-through rate (represented by the black dots) seems to have shifted slightly higher at the top positions and slightly lower at the lower positions.
Subsequently, we fit a logistic regression model that enables us to model the click-through rate as a function of position and match type before and after Google Instant. The regression fits are shown in figure 4 for 2 match types. They also suggest that there has been a subtle increase in click-through rates after Google Instant at the higher positions and a subtle decrease at lower positions. To evaluate the statistical significance of these changes in the light of the inherent variability in the data, we fitted another regression model for three position ranges instead of individual positions: top (positions 1-2), middle (positions 3-6) and bottom (positions 7-12). We then compute click-through odds ratios (and their 95% confidence intervals) comparing the odds of a click before and after Google Instant for each position range. For a refresher on click-through odds refer to this earlier post. The results are summarized in table 1 below. If there has not been a significant change in the odds of a click we would expect the confidence interval to include 1. Table 1 suggests that the odds of a click on the top positions have increased by about 6.6% after Google Instant, while the odds of a click at the lower positions have decreased by about 13.1%. The change for the middle positions is much smaller and barely significant, reflected by the fact that the corresponding confidence interval almost includes 1. The lower positions here also include some page 2 positions, which may suggest that Google Instant is reducing the importance of page 2 results as well. The relatively thin data on second-page positions prevents confident inference about the effect on page 2 at this stage.
Figure 3: A comparison of click-through rates for top and bottom positions before and after Google Instant.
Figure 4: Modeled click-through rates by position pre- and post-Google Instant
Table 1: Click-through odds ratios pre- and post-Google Instant from logistic regression fits
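For a single position range, the odds ratio and its confidence interval can also be computed directly from aggregate click and impression counts, using the usual normal approximation to the log odds ratio. This is a simplified sketch of the idea behind table 1 (our actual figures came from the regression fits, and the counts below are invented):

```python
import math

def click_odds_ratio(clicks_pre, imps_pre, clicks_post, imps_post):
    """Odds ratio (post vs pre) for a click, with a 95% confidence interval
    from the normal approximation to the log odds ratio."""
    a, b = clicks_post, imps_post - clicks_post   # post: clicks / non-clicks
    c, d = clicks_pre, imps_pre - clicks_pre      # pre:  clicks / non-clicks
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical aggregate counts for a top position range (illustrative only).
or_, lo, hi = click_odds_ratio(clicks_pre=5000, imps_pre=100000,
                               clicks_post=5300, imps_post=100000)
print(round(or_, 3), round(lo, 3), round(hi, 3))
```

If the interval `[lo, hi]` excludes 1, the change in click odds is significant at the 5% level, which is the criterion applied to table 1.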
Below we summarize our findings:
- There does not seem to be a significant shift in overall impressions and CPCs at this stage.
- Our data provides no evidence that we should start preparing for the funeral of the long tail of paid search just yet.
- A subtle shift seems to be happening in click-through dynamics on the first page (and possibly the second page). There seems to be a slight increase in click-through for higher positions and a slight decrease for lower positions, resulting in relatively more traffic from higher positions than before. If this leads to greater competition amongst advertisers for higher positions in order to maintain volume, it is certainly not something Google will be too unhappy about. The best strategy remains to bid traffic to its estimated value rather than focusing too much on position.
- More time is needed to see how Google Instant affects longer-term search behaviour as people become more familiar with it.
- Google Instant could also affect conversion rates, which was not investigated here. We will investigate this once we have gathered some more conversion data.
We would like to add our voice to those who have been pleading with Google for some time to give us the ability to differentiate bids by search partner on Google’s syndication network. Industry leaders such as George Michie have in the past presented data illustrating that the quality of traffic coming from the Google search engine itself is significantly higher than the traffic coming from search partners. Here we present some of our own data to reinforce the point that the traffic we get via search partners is (with some exceptions) of lower quality than the traffic we get directly from Google’s search engine. Additionally, we will show that there are geographical differences in the value of traffic and that the value of brand and non-brand traffic can differ considerably for search partners. We do not dispute the additional value of traffic from the syndication partners; we just want the ability to pay the right price for that traffic based on its inherent value.
The proportion of Google search traffic that comes from syndication partners varies quite a bit from client to client. On average, we see about 20-25% of traffic coming from search partners for US clients, while it tends to be closer to 10% for Australasian clients. Data for collections of US and Australasian clients show that search partner traffic is generally of considerably lower quality. In figures 1 and 2 we sort referring domains by click volume across a set of US and Australasian clients, respectively. We then calculate the ratio of each domain’s average conversion rate to the overall average conversion rate for the client. This comparative conversion rate data reveals the differences between Google and other referring domains. The color of the bar labels distinguishes 3 groups of domains: red (conversion rate at least 10% below the overall conversion rate), green (conversion rate at least 10% above the overall conversion rate) and black (conversion rate within 10% of the overall conversion rate).
Figure 1: Relative traffic value by search partner for a selection of US clients
Figure 2: Relative traffic value by search partner for a selection of AUS clients
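The relative-value metric behind figures 1 and 2 is straightforward to compute from per-domain click and conversion counts. A sketch with invented numbers (not client data):

```python
# Hypothetical per-domain (clicks, conversions) counts. Illustrative only.
domains = {"google.com": (50000, 1500),
           "partner_a":  (8000, 160),
           "partner_b":  (5000, 80),
           "partner_c":  (2000, 70)}

total_clicks = sum(clicks for clicks, _ in domains.values())
total_convs = sum(convs for _, convs in domains.values())
overall_cr = total_convs / total_clicks

def flag(ratio):
    """Color grouping used in the figures: at least 10% below/above overall."""
    if ratio < 0.9:
        return "red"
    if ratio > 1.1:
        return "green"
    return "black"

# Ratio of each domain's conversion rate to the overall conversion rate.
relative = {}
for dom, (clicks, convs) in domains.items():
    ratio = (convs / clicks) / overall_cr
    relative[dom] = (round(ratio, 2), flag(ratio))
print(relative)
```

A ratio well below 1 (flagged red) indicates a partner whose traffic converts markedly worse than the account as a whole, and whose clicks are therefore overpriced at a single network-wide bid.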
In the US and Australasia most search partners bring in traffic that is well below Google.com traffic in quality. There are some differences in relative traffic value between the two regions, notably for Amazon traffic. Out of interest, we repeated the above analysis after first excluding all brand traffic. The results show that the relative traffic value between search partners changes considerably when we do this. In figure 3 we show data for the set of Australasian clients above after excluding all brand traffic. eBay is an example where the exclusion of brand traffic has increased the relative value of its traffic considerably. Volumes do become quite thin, so the inherent statistical variability should be kept in mind in our inference. It is clear that the traffic from most search partners is of quite a bit lower quality than direct search traffic from Google. The ability to bid differently by search partner would therefore add quite a bit of value. AdWords advertisers can approximate this today by splitting each campaign into a Google-only version and an exact copy targeting Google + search partners. The bids for the Google-only campaigns are higher because the conversion rates are higher, hence on Google.com only the Google-only campaigns are in play. That means the Google + search partner campaigns effectively only serve ads on the syndication network, and the bids can be depressed for that traffic. This is a rough workaround and not ideal; the ability to bid differently by domain would be much better.
Figure 3: Relative non-brand traffic value by search partner for a selection of AUS clients
In Bill Tancer’s book Click, he gives some examples of how near real-time Internet data provides a time advantage over traditional leading economic indicators. These indicators are typically only available with a time lag: the data for a particular month is generally released about halfway through the next month. I found this concept quite interesting when I read the book a year ago, but never really pursued it analytically myself until I recently discovered a nice interface for querying Google Trends data from within R, the leading freely available open-source statistical software package. The R package RGoogleTrends (developed under the Omegahat Project) provides a very useful tool to extract and analyze Google query data efficiently. Its documentation states that its development was inspired by a post by Google’s chief economist, Hal Varian, published on the Google Research Blog. The authors illustrate some simple forecasting methods and encourage readers to undertake their own analyses; by their own admission it is possible to build more sophisticated forecasting methods. At Clicks2Customers we are always keen for an analytical challenge, especially one that comes from the mighty Google, so we decided to take it up.
R has a wide range of sophisticated time series packages, which we decided to put to the test to see if the incorporation of query data can indeed improve the estimation and forecasting of leading economic indicators. In this post we focus on the monthly home sales data released by the US Census Bureau and the US Department of Housing and Urban Development at the end of each month, which was also used in Google’s study. To make our results comparable to Google’s, we use the same January 2004 to July 2008 time window.
Our aims are two-fold:
- Verify that a more sophisticated time series modeling approach improves accuracy compared to Google’s relatively simple models
- Verify that the inclusion of query data in models improves the accuracy of estimates
In figures 1 and 2 we show the raw and seasonally adjusted home sales data downloaded from the US Census Bureau. Similar to the Google study, we start our modeling process on the seasonally adjusted sales figures. This is to aid comparison of our results with those of Google, although the seasonal component could easily be modeled directly. Google Trends provides an index of the volume of Google queries by geographic location and category. The query index of a search term reflects the query volume for that term in a given geographical region divided by the total number of queries in that region at a point in time. This index is then normalized relative to January 1, 2004, so the index at a later date reflects a percentage deviation from January 1, 2004. Google Trends data is also available at a category and sub-category level. Figure 3 shows the search index data for the ‘Real Estate’ category and 5 of its sub-categories: Real Estate Agencies, Home Financing, Home Inspections & Appraisal, Property Management, and Rental Listings & Referrals.
Figure 1: Raw Home Sales Data
Figure 2: Seasonally adjusted home sales data
Figure 3: Google query volumes for the Real Estate category and 5 of its sub-categories
The Google study fits simple auto-regressive models using standard linear model fitting functions. A closer investigation of these models shows that they do not adequately model the correlation structure in the data. We follow a more classical time series approach based on autoregressive integrated moving average (ARIMA) models. We first model the house sales data on its own to establish a performance benchmark, and thereafter incorporate query data to test whether its inclusion improves the prediction of house sales. We evaluate the different models by making a series of one-month-ahead predictions and computing the prediction error, the mean absolute error (MAE), as defined in the Google study. Each forecast uses only the information available at the time the forecast is made, which is one week into the month in question.
The simplest time series model, closest to the null model (Model 0) presented by Google, is an ARIMA(1,1,0) model. The difference is that our model takes a lag-1 difference of the log-transformed data to reduce it to a stationary series, which is a necessary prerequisite. This model provides a reasonable fit to the data and gives a prediction error of 6.03%, lower than the 6.91% of Google’s null model. There is some suggestion in the data that a higher-order auto-regressive model may provide a better fit. We found that an ARIMA(7,2,0) model does result in an improved fit and a significantly reduced prediction error of 4.04%. This model already outperforms Google’s more advanced model (Model 1), which incorporates query data and house prices and has a prediction error of 6.08%. Next we take it up a notch by incorporating the Google query data above and fitting a multivariate time series model. We use the query data from the first week of each month. We experimented with different combinations of the query indices and found that the Property Management query index gives the lowest prediction error of 3.7%. The model we fit is a vector auto-regressive model with a lag of 3, using the R package dse. The monthly 1-step-ahead prediction errors for the above models are plotted in figure 4.
Figure 4: US home sales data 1 step ahead prediction errors
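The error metric used throughout is the mean absolute error of the one-step-ahead forecasts, expressed relative to the actual values, as in the Google study. A minimal sketch of its computation (the series below are illustrative numbers, not the actual home sales data; our models themselves were fitted in R):

```python
# Illustrative monthly actuals and their one-step-ahead forecasts, where each
# forecast used only data available one week into the month in question.
actual    = [72.0, 75.0, 71.0, 68.0, 70.0, 66.0]
predicted = [70.5, 73.0, 73.5, 69.0, 68.0, 67.5]

def mae(actual, predicted):
    """Mean absolute error as a fraction of the actual value."""
    errs = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return sum(errs) / len(errs)

print(round(100 * mae(actual, predicted), 2))  # prediction error in percent
```

The 6.03%, 4.04% and 3.7% figures quoted above are exactly this quantity computed over the series of monthly one-step-ahead forecasts.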
Let us return to our stated aims. We seem to have verified both: a more formal time series approach improves considerably on the models presented by Google, and the inclusion of query data has the potential to further improve the 1-step-ahead prediction in the case of the house sales data. Our best-performing model improves about 39% on the best model presented in the Google study in terms of 1-step-ahead prediction accuracy (without yet incorporating the house price data used by Google). There seems to be real potential in using Google query data to forecast economic data.
This is a single example, and a proper study would have to apply a more sophisticated modeling approach to a much wider range of data sets. The Google study also illustrates the use of Google Trends data in predicting travel visits, using data from the Hong Kong Tourism Board. We intend to perform a similar study using monthly tourism data released by Statistics South Africa in conjunction with Google Trends data for the period building up to the 2010 FIFA World Cup. This should make for an interesting case study in the use of Google Trends data. Keep an eye on our blog for the results!
Data is becoming more and more important in every sphere of society. This is underlined by companies like Google, whose mission is to organize the world’s information and make it universally accessible and useful. Major consulting firms acknowledge data-driven decision making as an emerging global trend, and it is not limited to the business world: we are increasingly being exposed to statistics and data in our everyday lives.
The Royal Statistical Society is launching its 10-year campaign for statistical literacy on World Statistics Day: 20/10/2010. The vision for the campaign, known to its friends as getstats, is “a society in which our lives and choices are enriched by an understanding of statistics”. Please visit http://www.getstats.org.uk for more information and to show your support. As a company operating in a data-driven industry, we are proud to support this global initiative.
In our second Google Analytics post focusing on paid search, we illustrate the use of some very useful reporting tools in Google Analytics for gaining quick and actionable insights into your paid search campaigns. A useful three-tier strategy is to
- Focus on the major performance shifts at a campaign-level
- Identify the keywords in the campaigns that are driving those shifts in performance
- Zero in on the actual search queries to look for opportunities
Let’s consider the above strategy with an example from a multi-product retail site.
Step 1 – Identify major shifts in Campaign-level performance
Below we show quarter-on-quarter shifts in campaign revenue for all paid search campaigns for the first 2 quarters of 2010, from the Campaigns report under Traffic Sources. We include filters to focus on the higher-traffic non-brand campaigns. There has been strong growth in revenue across most campaigns. We can now drill down to keyword level to understand which keywords are driving the strong growth in some of the categories.
Step 1: Identify major shifts in campaign performance
If you are not using web analytics to provide extra insights into your PPC campaigns, I suspect Louis Gossett Jr is lurking somewhere in the background, like in the Namibian brewer’s beer commercial, ready with his catchphrase: “What are you doing Quentin?”
The availability of a free and robust web analytics tool such as Google Analytics removes any excuse for viewing your paid search campaigns in isolation. Paid search marketers often treat search campaigns as though they exist in a marketing vacuum. A tool like Google Analytics will quickly help you understand how search campaigns perform as an integrated part of the greater marketing effort, which may include SEO, e-mail and above-the-line campaigns. Furthermore, Google Analytics will provide you with valuable behavioral data on the paid search traffic you are driving to your landing pages.
Over the next while we will look at some simple but effective ideas for using Google Analytics to improve your paid search performance. Today we will start by looking at site search behavior.
Leverage the power of site search to improve the quality of your site links
In previous posts we showed how the inclusion of site links in paid search ad copy can improve the click-through rate on your high-traffic terms. It is typically brand search terms that display site links, although Google has relaxed this recently. You can use behavioral data from Google Analytics to suggest effective site links.
Many customers will know your brand, and also that they can find their desired product on your site. Many of them will navigate to your site using a search engine. If they click on a PPC advertisement you will typically land them on the home page, and they will navigate their way to their desired product page, with the amount of effort depending on the quality of your site. If you know what these visitors are typically looking for, why not give them the option of a shorter journey to that conversion?
With very little effort you can enable site search tracking in Google Analytics and create a custom segment to focus on brand search traffic. The Search Terms report for this segment then gives you a quick overview of what people who land on your site through a brand search typically search for on your site. In the example report below it is clear that many of them are looking for iPod products. This advertiser is also based in a geographical area that is currently experiencing winter, so heaters and dehumidifiers also seem to be in seasonal demand. The e-commerce tab for this report (not shown here) makes it clear that these products are converting quite well. By including site links to the relevant product pages you preempt visitor intent and shorten the conversion funnel, which is likely to improve conversion.
If Louis Gossett Jr were selling paid search he would have said: “Always keep it real, use Google Analytics to enhance your paid search campaigns.”
A key objective in PPC is to grow revenue within efficiency targets. This can be achieved through good bid management of the existing keyword set, as well as through the addition of new keywords. It is easy to add thousands of keywords using product feeds or machine generation, but these keyword sets often add very little long-term value to a PPC campaign. Quantity is no substitute for quality. It is important to track the quality of keywords you add to a PPC campaign over time, and possibly by the source of the keyword, e.g. product feed, keyword research, search query mining etc.
Some questions you may want to answer include:
- What is the average additional revenue we generate per keyword added (and is it within efficiency targets)?
- Is our initial evaluation period for new keywords sufficient, or are we potentially missing opportunities?
- What is the best source for new keywords?
- Is the quality of new keywords improving over time?
The answers to the above questions are not always apparent from traditional keyword performance reports. Keyword portfolios have already been compared to investment portfolios when it comes to keyword bidding. We use a similar analogy that compares keyword portfolios to loan portfolios. This allows us to apply the idea of a loan portfolio’s vintage curve to keyword portfolios. A loan vintage is a group of accounts that originated in a given time period. For example, all new customers from 2009 make up the 2009 vintage. Vintage groupings can be monthly, quarterly, or annual, depending on the application or data available. Different groups of customers from the same vintage can also be compared to assess their relative quality (e.g. walk-in customers versus directly solicited pre-selected customers). A performance metric such as the proportion of accounts more than 90 days in arrears is often chosen to compare different loan vintages by time on book. If we make the analogy that new keywords represent new customers, we can extend the idea of loan vintages to keyword vintages. Similar to loans, keywords are added at different times and can originate from different sources (e.g. product feeds or a break-down of search query reports). We can track performance metrics by “keyword age”, which may include the percentage of new keywords with a click per impression, or revenue per new keyword after a certain time period.
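A keyword vintage curve of the kind described above reduces to a small calculation once the data is grouped by origination month and keyword age. A minimal sketch with invented cohort numbers:

```python
# Hypothetical monthly cohort revenue: cohort_revenue[vintage][age] is the
# total revenue earned by that vintage's keywords in their age-th month on book.
cohort_revenue = {
    "2009-04": {1: 620.0, 2: 540.0, 3: 410.0},
    "2009-05": {1: 700.0, 2: 580.0},
    "2009-06": {1: 450.0},
}
# Number of keywords originally added in each vintage (cohort size).
cohort_size = {"2009-04": 150, "2009-05": 180, "2009-06": 120}

# Vintage curve: revenue per keyword added, tracked by keyword age.
vintage_curve = {
    vintage: {age: rev / cohort_size[vintage] for age, rev in by_age.items()}
    for vintage, by_age in cohort_revenue.items()
}
print(vintage_curve["2009-04"])
```

Plotting each vintage's curve against keyword age, as with loan vintages, lets you compare the quality of cohorts added in different months or from different sources.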
Below we illustrate one application of a vintage analysis on real campaign data to gain some insight into new keywords; many more applications are possible. In figure 1 we consider 8,467 new keywords added to a PPC campaign from April to August 2009. Suppose we start a large set of new keywords on a fairly aggressive initial bid in order to gather performance data with which to optimize keyword bids. We would expect a large proportion of them to be bid down over time in order to achieve an acceptable or agreed efficiency target, until we reach a stable overall bid level for these keywords as a group. When we study a larger set of campaign data across various advertisers, this seems to happen on average after about 3 months. A similar trend is apparent in figure 1 for our example campaign. It suggests that we achieved an average monthly long-term revenue of about $4 per keyword added, and it is clear how new keywords were bid down over time as performance was optimized.
Figure 1: Revenue per new keyword added versus Average Bid
There is an important variable we did not consider in the above view: the efficiency metric for the new keywords over time. For this campaign the efficiency metric is cost of sales (i.e. PPC cost/sales). A target of around 8% (allowing for statistical noise) on non-brand terms is an acceptable benchmark for this campaign. In figure 2 below we show the average revenue per keyword (as in figure 1) against the cost of sales by keyword age. Despite consistently decreasing bids on new keywords over time, the overall efficiency varied around the 8% cost of sales benchmark throughout. This suggests that we may not have given new keywords enough time to prove themselves before decreasing bids on them. If we had kept the average bid higher for a longer period we could potentially have achieved a higher average long-term revenue than $4 per keyword added, within the efficiency target. The keyword bidding in month 8 seems to try to address this by raising the average bid; hopefully we have not priced too many of the original keywords out of contention by this stage.
Figure 2: Revenue per new keyword versus Efficiency
Similar to the above curves, you can track the average revenue gained from each new keyword added for different cohorts of keywords. This often decreases over time if your inventory remains fairly stable, as the most relevant keywords are usually selected upfront when your initial campaign setup and research processes are well developed. Treating your keywords as ‘vintages’ can provide useful insight into the quality and performance of your new keywords over time. It also gives you an approximation of the average revenue you can expect per keyword added, and at what efficiency. I hope 2010 will be a good vintage for your PPC keywords!
There is no doubt that the keyword tail is important, provided it is managed effectively and in a resource-efficient manner, for example by using a bid management system. The difficulty in assessing just how much the tail contributes lies in the exact definition of the tail itself. The contribution of the tail to total campaign revenue is quite sensitive to this definition. There are numerous definitions of the keyword head and tail out there. A good practical view of the keyword head is to consider it as the number of keywords an analyst can manage manually with reasonable ease; 250 to 500 seems to be a popular range. Other definitions specify fairly arbitrary click thresholds. In this post we consider a visual way of assessing the size of your campaign’s tail. It is also important to assess whether your keyword tail meets your campaign’s overall efficiency targets. The tail can add great value to your campaign, but it needs to be managed effectively, just like the rest of your campaign.
A useful way to view the tail is to sort all keywords by their traffic contribution over a specified period. This enables us to divide keywords into segments based on their contribution to total traffic, and then to compute the contribution of each segment to total revenue and its relative efficiency. The choice of segments needs some care due to the discrete nature of keyword traffic for lower-volume keywords. Figure 1 below shows how this data can be visually represented to give an overview of an entire campaign. The data represents the performance of all non-brand keywords for an online retailer over the last 3 months of 2009. The blue bar represents about 3,818 keywords (23% of the 16,601 keywords with an impression over the period). If we consider this group of keywords as the head, it provides a fairly conservative estimate of the tail. Under this definition the tail keywords still contribute about 24% of total revenue, at an acceptable efficiency given the campaign’s targets.
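The head/tail segmentation described above can be sketched as follows, with invented keyword rows rather than the retailer's data:

```python
# Hypothetical keyword-level rows: (clicks, revenue, cost). Illustrative only.
rows = [(5000, 40000.0, 3000.0), (2000, 15000.0, 1300.0), (900, 6000.0, 520.0),
        (300, 2200.0, 200.0), (80, 600.0, 60.0), (20, 150.0, 18.0),
        (5, 40.0, 6.0), (1, 10.0, 2.0)]

rows.sort(key=lambda r: r[0], reverse=True)  # sort by traffic, head first

def segment_stats(segment):
    """Total revenue and efficiency (cost of sales = cost / revenue)."""
    rev = sum(r for _, r, _ in segment)
    cost = sum(c for _, _, c in segment)
    return rev, cost / rev

# Split at a chosen traffic cut-off; here the top 3 keywords form the head.
head, tail = rows[:3], rows[3:]
total_rev = sum(r for _, r, _ in rows)
head_rev, head_cos = segment_stats(head)
tail_rev, tail_cos = segment_stats(tail)
print(round(tail_rev / total_rev, 3), round(head_cos, 3), round(tail_cos, 3))
```

Comparing the tail segment's revenue share and its cost of sales against the campaign's efficiency target is exactly the assessment figure 1 supports visually.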
One of the recent new features in Google AdWords is the ability to add site links to certain campaigns: those that Google deems eligible to display additional links in the AdWords advert, similar to what it has been doing for quite a while in the organic search results (see fig. 1). On the SERPs, these additional organic links provide the user with a quick way to navigate to subsections of a website without having to go to the homepage first, effectively saving the user an extra click.
Fig. 1: Site links on organic results
So at first sight it seems like a good idea for Google to implement the same feature on sponsored links, because it adds value and relevance for the user. But could there be other motives behind Google’s move?
The first thing to point out is that in reality Google (currently) only seems to display site links on ads with a very high Ad Rank, which basically means ads linked to brand (or trademark) keywords. There has been a lively debate in the search marketing world over whether or not one should bid on one’s own trademark. After all, why would you spend money on clicks that would find you in the (free) organic results anyway? That is a valid argument, but unfortunately it is also a bit naïve. As fig. 2 shows, if you don’t bid on your brand, someone else eventually will, and no matter how good your organic rankings, you will lose clients to your competitors. The truth is, Google does not make any money from organic results, and if possible it will try to put a sponsored link above them. Your competitor doesn’t even have to bid on your brand: AdWords’ expanded broad matching might automatically match your competitor’s brand to yours. (For example, Google may match the broad-match keyword ‘Avis’ to a user searching for ‘Hertz’, without Avis actively bidding on the Hertz keyword.) Have a proper look at your search query reports and you may be surprised at how frequently this occurs.
Fig. 2: Competitors reducing the effect of organic site links
With the site links feature, Google gives a trademark owner the ability to appear more prominently on the SERPs, but only if they choose to bid on their trademarked keywords. If we look at fig. 3, it is obvious that site links make a sponsored link appear more prominent for a trademark owner, even as the organic results are pushed lower down the page.
This post is based on qualitative observations of a popular brand that does not belong to (and is not related to) our client portfolio, but we have quantitative data to back up our theories (more on that in a future post). Unlike in the organic results, site links are not only intended to increase user relevance, but also to increase Google’s revenue by making the competition for prime SERP advertising space fiercer. Organic (free) listings are pushed lower down the page, forcing lower-ranking websites to start paying for more prominent sponsored listings. For trademark owners site links provide a competitive advantage, but at the bottom of the ladder we suspect that competition and click prices will increase.
In a future post, we’ll take a more detailed look at the impact of site links on a campaign’s performance, based on our experience over the last few months.