Cogblog

The Official Blog of Cogmap, the Org Chart Wiki

 

Archive for May, 2010

 

Size Matters and Other Obfuscations by Ad Networks and Google

Friday, May 28th, 2010

“Today, in response to feedback from many of you who run branding campaigns, we’re announcing a new filter that allows you to show your ads only on AdSense sites among the 1000 largest on the web, as defined by DoubleClick Ad Planner. This new feature will ensure that your ads reach a large number of users, but only on well-known sites best suited for branding goals.”

- Google Inside AdWords blog post this week

Yeah, ad networks sell this way all the time: comScore top 100, top 500, top 1000.  It is kind of amazing that Google has not released this feature previously, but I am sure what held it up was that the product manager responsible kept de-prioritizing it because it is, if not “evil”, a kind of silly feature that mostly serves to mislead people who are not paying particularly close attention.

The sales force perpetually positions this as “branding inventory”, implying that the largest web sites in the world generate inventory ideally suited for large brands to advertise on.  That is just plain silly.

The number of page views a web site generates has absolutely nothing to do with the relative brand safety or value of brand association that a web site has.  Let’s spend 30 seconds looking at this list of top 1000 web sites:

  • Facebook
  • Blogspot
  • MySpace
  • Photobucket
  • Orkut
  • WordPress
  • Blogger
  • Partypoker
  • Megaupload
  • Imageshack

Those are just some of the non-Chinese sites in the top 60 (so maybe half of the English-language content available) and let me tell you:

  • These sites are not just predominantly UGC; they feature tons of NSFW content.  (Unlike, say, LinkedIn, which, while on the list and primarily UGC, is mostly brand-safe.)
  • Further, the inventory Google is getting from some of these guys (e.g. MySpace) will definitely tend toward the less brand-safe and the NSFW.  MySpace is probably not sending its best impressions to Google.

Popularity of a web site is not even loosely correlated with brand safety.  The strategy for this stuff is simple.  For most ad networks (and this is probably still true for Google, although maybe slightly less so), most of their ad inventory came from these sites anyway.  After all, these sites account for the majority of impressions on the Internet.

This “Top X” assurance comforts media planners in some way while only cutting out 10%–30% of the inventory the network had for delivery.  The result is plenty of room for the network to optimize, and the deal gets closed, so it is a victory all the way around.  Buying “Top X” inventory is a way to pay a premium for the inventory you were probably going to get anyway, without significant brand protection for advertisers.
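To make the arithmetic concrete, here is a minimal sketch in PHP with completely made-up Zipf-style numbers (nothing here is Google’s or comScore’s actual distribution): when impressions are concentrated in the head of the curve, a top-1000 whitelist leaves most of the pool untouched.

<?php
// Toy model: impressions per site follow a Zipf-like curve (made-up numbers).
// A "Top 1000" whitelist barely dents the pool the network was selling anyway.
$totalSites = 10000;
$whitelist  = 1000;

$total = 0.0;
$top   = 0.0;
for ($rank = 1; $rank <= $totalSites; $rank++) {
    $impressions = 1e9 / $rank; // the site at rank r gets ~1/r of the top site's volume
    $total += $impressions;
    if ($rank <= $whitelist) {
        $top += $impressions;
    }
}

printf("Top %d sites: %.0f%% of impressions; the filter cuts only %.0f%%\n",
    $whitelist, 100 * $top / $total, 100 * ($total - $top) / $total);

With those toy numbers, the whitelist trims about a quarter of the pool, right in that 10%–30% band.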

Yay, I am on AdExchanger Today

Tuesday, May 25th, 2010

John’s little center of the quant advertising universe, AdExchanger, saw fit to run an article about an infographic I invented for no reason. Wooooot!

Jerry Neumann, the best angel investor in online advertising start-ups in NYC, did a great post where he created a third-party data set around companies discussed in AdExchanger. He mentioned in the comments that he would make the data set available to anyone who wanted it, so I leapt at the chance.

When I started fooling around with the data set, my first couple of hypotheses ended up being uninteresting, but then I started to think about Jerry’s comments on the relationships between VCs and I thought, “One could use a data set like this to model some of the relationships in the space. VCs that share an investment are probably tighter than those that aren’t. This allows you to take the space apart a little bit.”
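For the curious, here is roughly the sort of thing I mean, sketched in PHP with hypothetical (investor, company) pairs standing in for Jerry’s actual data:

<?php
// Hypothetical (investor, company) pairs; Jerry's real set is much bigger.
$investments = array(
    array('IA Ventures',  'CompanyA'),
    array('First Round',  'CompanyA'),
    array('IA Ventures',  'CompanyB'),
    array('First Round',  'CompanyB'),
    array('Union Square', 'CompanyB'),
);

// Group investors by the company they backed.
$byCompany = array();
foreach ($investments as $row) {
    list($vc, $company) = $row;
    $byCompany[$company][] = $vc;
}

// Count shared deals per investor pair; more shared deals = a tighter tie.
$ties = array();
foreach ($byCompany as $vcs) {
    sort($vcs);
    $n = count($vcs);
    for ($i = 0; $i < $n; $i++) {
        for ($j = $i + 1; $j < $n; $j++) {
            $pair = $vcs[$i] . ' / ' . $vcs[$j];
            $ties[$pair] = isset($ties[$pair]) ? $ties[$pair] + 1 : 1;
        }
    }
}

arsort($ties); // strongest relationships first
print_r($ties);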

Kind of interesting. Kind of not. Don’t think that I don’t know that this infographic is not earth-shattering; it was simply easy to do, so I did it.

Also, Jerry has warned us that he has since gotten tons of emails from investors complaining that they weren’t represented. Clearly their PR guy needs to work on AdExchanger a little more! The data set is what it is. Is it flawed? Sure. Most third-party data is. I don’t think anyone has the time or inclination to build a better set. If this interests you and you have time or money, let me know.

Huge props to Smart Logic Solutions, which I shanghai’d into actually doing the design work.

There will be 4,000 DSPs

Tuesday, May 4th, 2010

People used to say that there were 400 ad networks out there in their heyday.  I think there will be an order of magnitude more DSPs.  Here is why:

Building a DSP is easy.

Ad networks were vertically integrated examples of our marketplace.  To build an ad network, you had to build an exchange, sign up publishers, and build a DSP.  The market has evolved to the point now where getting dramatic reach and scale at a level unimaginable 5 years ago just takes 10 lines of PHP (so that is like one line of Python?) with Right Media’s PHP library:

<?php
// get params from the command line
list(, $SOAP_BASE, $ADV_USERNAME, $ADV_PASSWORD,
       $ADV_ID, $LI_ID) = $_SERVER['argv'];

// get handles on the Contact, Campaign, and TargetProfile services
$contact_client  = new SoapClient($SOAP_BASE . 'contact.php?wsdl');
$campaign_client = new SoapClient($SOAP_BASE . 'campaign.php?wsdl');
$target_client   = new SoapClient($SOAP_BASE . 'target_profile.php?wsdl');

// log in and get an auth token to be used for the other API calls
$token = $contact_client->login($ADV_USERNAME, $ADV_PASSWORD);

// create the campaign data holder
$campaign = new stdClass();

// set the advertiser entity id and description fields
$campaign->advertiser_entity_id = $ADV_ID;
$campaign->description = 'example campaign';

// create a new campaign based on the campaign object defined above
$campaign_id = $campaign_client->add($token, $campaign);

// link the campaign to the line item
$campaign_client->addLineItem($token, $campaign_id, $LI_ID);

Fearsome.

I used to tell people all the time that starting an ad network is the easiest thing someone can do: Get some inventory, call 20 agencies and tell them you have a new algorithm to drive performance, and they each give you a $20k test budget!  Voila, you did $400k in your first quarter.  Agencies felt pressure to find test budgets for everybody because, if a client were to ask them “What do you think of X?”, they can’t say, “Well, we never tried X.”  Agencies felt an almost fiduciary responsibility to try new stuff.

Now, if you didn’t perform, the next chunk of dollars was tougher, but you had runway instantly.

We are seeing the exact same dynamic in DSPs today.  If you mix the data and inventory a little differently (and it would be hard not to), voila, you are worthy of a test.  Agencies are playing the field today, the great rollup of DSPs that everyone is so looking forward to has not yet happened, and agencies expect to and are prepared to try lots of different things.  All you have to do is perform after that and you have a business.  If you eat your margins early on, layer in retargeting, etc., the odds that you can artificially inflate performance in a way that makes your business look interesting are high, and this gives you more runway to work with.  Convert 25% of your test budgets to $200k renewals and you have a Q2 business doing $1m in revenue.
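Here is that back-of-the-envelope funnel spelled out, using only the illustrative numbers from the paragraphs above:

<?php
// The test-budget funnel described above, with the numbers from the post.
$agencies    = 20;     // agencies pitched
$testBudget  = 20000;  // $20k test budget apiece
$renewalRate = 0.25;   // 25% of tests convert to renewals
$renewalSize = 200000; // $200k renewal

$q1Revenue = $agencies * $testBudget;                 // 20 * $20k  = $400k
$q2Revenue = $agencies * $renewalRate * $renewalSize; // 5  * $200k = $1m

printf("Q1: $%s, Q2: $%s\n", number_format($q1Revenue), number_format($q2Revenue));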

Building a DSP is cake.  Locking in data or inventory, or building an algorithm that creates great performance for advertisers over time by arbitraging data and inventory, is what will separate the winners from the losers, but in the first 6 months of working together it will be non-obvious to agencies who those guys are.  Remember when Glam spent millions of dollars of VC money buying inventory at a loss to lock in exclusive access, then, once they had the advertiser base, crushed payouts to pubs?  There are a lot of the same kinds of problems that get slathered over early on in this market.  Building a DSP that can look at 10 billion impressions a day vs. 1 billion cost-efficiently is interesting, but no one will need that scale for a year, so no one will know who can do it better/faster/cheaper.

I worked at Ad.com, and I am not gonna lie, I took away from that place a sense that algorithms are hard, you need tons of data to figure out how to improve them, and there are few shortcuts other than “been there, done that”.  It is unlikely that any small company has a better algorithm today.  You need that kind of algorithmic skillz.  (And of course, Google > Ad.com > other networks.)  And even Ad.com would say that their algorithm has tons of room for improvement.  Ask me; I know ten ways to improve it.  But even algorithmic improvement has trade-offs: you can factor in more data to improve results, but that requires more data points to test, which requires larger test budgets.  Bummer.  To limit testing, you need to limit the factors you evaluate.  The result is plain vanilla.
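A crude sketch of that trade-off, with every value invented for illustration: each factor you add multiplies the number of segment cells you have to test, and the impressions (read: budget) needed for a reliable read grow right along with them.

<?php
// Made-up illustration: each targeting factor multiplies the number of
// segment cells to test, so the impressions needed for a statistically
// meaningful read grow multiplicatively.
$factors = array(
    'geo'       => 10, // distinct values per factor (invented)
    'daypart'   => 4,
    'site_tier' => 3,
    'audience'  => 20,
);
$impressionsPerCell = 10000; // assumed minimum for a reliable read per cell

$cells = 1;
foreach ($factors as $name => $values) {
    $cells *= $values;
    printf("adding %-10s %6d cells -> %s test impressions\n",
        $name . ':', $cells, number_format($cells * $impressionsPerCell));
}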

The market for DSPs is white-hot, so expect 4,000 of them, but most of them will be frauds.  Real differentiation and competitive advantage are hard in a space where virtually everyone has access to the same inputs.  Remember the Netflix Prize: given a data set of 100 million data points, improving predictive results is incredibly hard.  With a million dollars on the line, it took 3 years and dozens of people to generate a 10% improvement.

You heard it here first.  Breaking news as it happens.