Building a DSP is easy.
Ad networks were vertically integrated examples of our marketplace. To build an ad network, you had to build an exchange, sign up publishers, and build a DSP. The market has now evolved to the point where getting dramatic reach and scale, at a level unimaginable 5 years ago, takes just 10 lines of PHP (so that is like one line of Python?) with Right Media’s PHP library:
// get params from command line
list(, $SOAP_BASE, $ADV_USERNAME, $ADV_PASSWORD, $ADV_ID, $LI_ID) = $_SERVER['argv'];

// get handles on the Contact, Campaign, and TargetProfile services
$contact_client  = new SoapClient($SOAP_BASE . 'contact.php?wsdl');
$campaign_client = new SoapClient($SOAP_BASE . 'campaign.php?wsdl');
$target_client   = new SoapClient($SOAP_BASE . 'target_profile.php?wsdl');

// log in and get an auth token to be used later for other API calls
$token = $contact_client->login($ADV_USERNAME, $ADV_PASSWORD);

// create campaign data holder
$campaign = new stdClass();

// set advertiser entity id and description fields
$campaign->advertiser_entity_id = $ADV_ID;
$campaign->description = 'example campaign';

// create a new campaign based on the campaign object defined above
$campaign_id = $campaign_client->add($token, $campaign);

// link the campaign to the line item
$campaign_client->addLineItem($token, $campaign_id, $LI_ID);
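And since the parenthetical above asks: here is roughly what those same five steps (log in, build a campaign object, add it, link a line item) look like in Python. This is a sketch, not real Right Media client code; the Exchange class below is an in-memory stand-in for the SOAP services (in real code you would point a SOAP client at contact.php?wsdl and friends), and every name in it is mine, not theirs.

```python
# Sketch of the flow from the PHP above: login -> build campaign -> add -> link line item.
# The Exchange class is a stub standing in for the contact/campaign SOAP services.

class Exchange:
    """In-memory stand-in for the exchange's SOAP API."""
    def __init__(self):
        self.campaigns = {}
        self.links = []
        self._next_id = 1

    def login(self, username, password):
        # The real service validates credentials and returns a session token.
        return "auth-token"

    def add_campaign(self, token, campaign):
        campaign_id = self._next_id
        self._next_id += 1
        self.campaigns[campaign_id] = campaign
        return campaign_id

    def add_line_item(self, token, campaign_id, line_item_id):
        self.links.append((campaign_id, line_item_id))


def launch_campaign(exchange, username, password, advertiser_id, line_item_id):
    """Run the same steps as the PHP sample: login, create, link."""
    token = exchange.login(username, password)
    campaign = {"advertiser_entity_id": advertiser_id,
                "description": "example campaign"}
    campaign_id = exchange.add_campaign(token, campaign)
    exchange.add_line_item(token, campaign_id, line_item_id)
    return campaign_id
```

So no, not quite one line, but the point stands: the mechanics are trivial.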
I used to tell people all the time that starting an ad network is the easiest thing someone can do: Get some inventory, call 20 agencies and tell them you have a new algorithm to drive performance, and they each give you a $20k test budget! Voila, you did $400k in your first quarter. Agencies felt pressure to find test budgets for everybody because, if a client were to ask them “What do you think of X”, they couldn’t say, “Well, we never tried X”. Agencies felt an almost fiduciary responsibility to try new stuff.
Now, if you didn’t perform, the next chunk of dollars was tougher, but you had runway instantly.
We are seeing the exact same dynamic in DSPs today. If you mix the data and inventory a little differently (and it would be hard not to), voila, you are worthy of a test. Agencies are playing the field today: the great rollup of DSPs that everyone is so looking forward to has not yet happened, and agencies expect to, and are prepared to, try lots of different things. All you have to do after that is perform and you have a business. If you eat your margins early on, layer in retargeting, etc., the odds are high that you can artificially inflate performance in a way that makes your business look interesting, and that buys you more runway to work with. Convert 25% of your test budgets to $200k renewals and you have a Q2 business doing $1m in revenue.
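The back-of-envelope funnel behind those numbers, using only the figures from the paragraphs above:

```python
# Q1: 20 agencies each hand you a $20k test budget.
agencies = 20
test_budget = 20_000
q1_revenue = agencies * test_budget          # $400k in test budgets

# Q2: a quarter of them renew at $200k each.
renewal_rate = 0.25
renewal_size = 200_000
q2_revenue = int(agencies * renewal_rate) * renewal_size

print(q1_revenue, q2_revenue)  # 400000 1000000
```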
Building a DSP is cake. Locking in data or inventory, or building an algorithm that creates great performance for advertisers over time by arbitraging data and inventory, is what will separate the winners from the losers; but in the first 6 months of working together, it will be non-obvious to agencies who those guys are. Remember when Glam spent millions of dollars of VC money buying inventory at a loss to lock in exclusive access to inventory, then, once they had the advertiser base, crushed payouts to pubs? A lot of the same kinds of problems get slathered over early on in this market. Building a DSP that can look at 10 billion impressions a day vs. 1 billion cost-efficiently is interesting, but no one will need that scale for a year, so no one will know who can do it better/faster/cheaper.
I worked at Ad.com, and I am not gonna lie, I took away from that place a sense that algorithms are hard, you need tons of data to figure out how to improve them, and there are few shortcuts other than “been there, done that”. It is unlikely that any small company has a better algorithm today; you need that kind of algorithmic skillz. (And of course, Google > Ad.com > other networks.) And even Ad.com would say that their algorithm has tons of room for improvement; ask me, I can name ten ways. But even algorithmic improvement has trade-offs: you can factor in more data to improve results, but that requires more data points to test, which requires larger test budgets. Bummer. To limit testing, you need to limit the factors you evaluate. The result is plain vanilla.
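A toy illustration of that trade-off: with k binary targeting factors you have 2^k cells to measure, so at any fixed number of impressions per cell, the required test volume doubles with every factor you add. All the numbers here (impressions per cell, the $2 CPM) are mine and purely illustrative, not Ad.com’s.

```python
# Illustrative only: each binary targeting factor doubles the number of
# cells you need a statistically usable read on.

IMPRESSIONS_PER_CELL = 10_000   # assumed minimum for a usable read per cell
TEST_CPM = 2.00                 # assumed $2 CPM on test inventory

def required_test_budget(num_factors):
    """Dollars of test spend to cover every combination of binary factors."""
    cells = 2 ** num_factors
    impressions = cells * IMPRESSIONS_PER_CELL
    return impressions / 1000 * TEST_CPM

for k in (3, 6, 10):
    print(k, required_test_budget(k))
```

Three factors cost a few hundred dollars to test; ten factors cost a hundred times that. Hence plain vanilla.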
The market for DSPs is white-hot; expect 4,000 of them, but most of them will be frauds. Real differentiation and competitive advantage are hard in a space where virtually everyone has access to the same inputs. Remember the Netflix Prize: given a fixed data set (100 million data points), improving predictive results is incredibly hard. With a million dollars on the line, it took 3 years and dozens of people to generate a 10% improvement.
You heard it here first. Breaking news as it happens.