

Redefining Premium Content Towards CPM Zero

Say Goodbye to Hollywood

When Ari Emanuel, co-CEO of talent agency William Morris Endeavor, said that Northern California is just pipes and needs premium content, it was clear that he just doesn't get it. There is no such thing as premium content. Only two things are premium on a mass scale anymore – distribution and devices.

Massive media fragmentation fueled by the Internet has forever redefined what 'premium' content is. The democratization of media – the ability for a critical mass of people (now virtually the entire world) to create, distribute and find content – killed the old model of premium. Modern Family is a good TV show, but when I can more easily stream a concert like this through my HDTV at any moment I want, I'm pretty sure "premium content" has been redefined.

Since the web is the root cause of death for premium content, it makes sense that the effect is nowhere better exemplified than in web publishing. Since the advent of display advertising, publishers have sought to categorize and value their content in ways that were familiar to traditional media buyers. No media channel has promoted the idea of premium content, or the value placed on it, more than digital. Thus, print media's inside front and back covers became the homepages and category pages on portals. Like print, these were the areas where the most eyeballs could be reached.

But a funny thing happened in digital behavior. People skipped over the inside front cover and went right to content that was relevant to them. Search's ability to fracture content hierarchies and deliver relevance not only became the most loved and valuable application of the web, it destroyed the idea of premium content altogether. In reality, premium never really existed in a user-controlled medium because it was never based on anything that had to do with what the user wanted. It was based on the traditional ad metric of "reach," when in this medium decisions about what is premium are determined by on-demand availability and relevance.

Sinking of the Britannica


The beauty of this medium is in the measurement of it. Validation that premium is drowning – beyond the fact that Wikipedia destroyed Encyclopedia Britannica – rests in the performance of digital media. A funny thing happened as advertising performance became more measured: advertisers discovered premium didn't matter nearly as much as they thought. There were better ways to drive performance that yielded better and more measurable results. The ability to match messaging to people on-request and in a relevant way was more valuable in this medium than some content provider's idea of what was "premium." In this medium the public, not the publisher, determines what is premium.

As realtime, rules-based matching technology continues to improve, performance advertising and marketing continues to grow at the expense of premium advertising. Today, despite those trying to hold on to the past, premium is little more than an exercise in brand borrowing. Despite the best efforts of the IAB to bring brand advertising to digital, it has fallen as a percentage of ad spend for five straight years. In the world we live in today, Mr. Emanuel's $9 billion upfront for network TV primetime advertising is $1.5 billion less in ad revenue than Google made last quarter.

What this all means for the future of digital media (and thus, eventually, all media) is that it's headed to "CPM Zero." Look around – all the digital advertising powers – Google, Facebook, Twitter, Amazon – are selling based on one thing: performance. They are not selling on the premium sales mechanism of CPM. When "CPM Zero" happens, and it will, these forces pushing the digital ad industry forward win. They own the customer funnel and they will own the future of marketing and advertising. That raises one big question: where does this leave content creators and publishers?

Don’t Fear the Reaper

Publishers will never be able to put the CPM sales genie back in the bottle. CMOs and advertisers are already finding out that they are paying too much for premium. Go ask GM what they think. What publishers are finding out is that they are no longer selling their media; it's being bought – purchased from a marketplace with infinite inventory in a wild west of data. Therein lies the publisher's ace in the hole, and the strategies and tactics digital publishers (and eventually broadcasters) can use to combat the death of premium.

Like Search, publishers need two crucial components in their marketplaces. First, they need the tension of scarcity. That will drive up demand and force advertisers to spend time improving their performance. This was the cherry on the sundae for Google: a $1 billion industry – conversion testing and content targeting – grew out of nowhere to support spends in Search, and most every dollar saved through optimization went to drive more volume, or back to Google. Second, they need a unique currency for the marketplace. Keywords were a completely new way to buy media; nothing has ever worked better. Facebook is selling Actions with Open Graph. Ultimately advertisers are buying customers, not keywords or actions, but there is a unique window of opportunity for publishers at this moment to create something new that is focused on people, not pages.

The tactics used to fuel these strategies all rely on one natural resource: data. Publishers have diamonds and gold beneath the surface of their properties. Mining these data nuggets and using them to improve the performance of their media is the sole hope publishers have of competing in the world of "CPM Zero." Only publishers can uniquely wrap their data with their media and drive performance in a manner unique to the marketplace. That's what Google does. That's what Facebook does. That's what Twitter does. The scarcity mentioned above is created because a realtime understanding of site visitor interest and intent can only be derived by using first-party data as rules and integrating with the publisher's ad server for delivery. So publishers are really left with one choice: take control of their data and use it for their benefit, building an understanding of WHY people are buying their media and how it performs – or let Google, Facebook, third parties et al. come in and grab their data, knowing nothing about why it's being bought and for how much it's being sold.

The ability to match messaging to people on-request and in a relevant way is within the publisher’s domain. It is the most premium form of advertising currency ever created and will deliver an order of magnitude more value. It will fuel the 20% YoY growth of digital advertising and marketing for the next 15 years. Who captures the majority of that value, the advertiser or the publisher, is the only question remaining.

 

Measure Twice, Cut (over) Once


This past weekend we did a deploy at Yieldbot unlike any other we've done before.

At its completion we had:

  • upgraded from Python 2.6 to 2.7.3;
  • reorganized how our realtime matching index is distributed across systems;
  • split off monitoring resources to separate servers;
  • moved git repos that were submodules out to be sibling repos;
  • changed servers to deploy code from the Chef server instead of directly from GitHub;
  • completely transitioned to a new set of servers;
  • suffered no service interruption to our customers.

The last two points deserve some emphasis. At the end of the deploy, every single instance in production was new - including the Chef server itself. Everything in production was replaced, across our multiple geographic regions.

Like many outfits, we do several deploys a week, sometimes several a day. Having no service disruption is always critical, but for most deploys it's also fairly straightforward. This one was big.

The procedures we had in place for carrying it out were robust enough, though, that we didn't even notify anyone on the business side of the company when the transition was happening. The only notification was getting sign-off from Jonathan (CEO) on Friday morning that the cut-over would probably take place soon. In fact, we didn't notify anyone *after* the transition took place either, unless you count this tweet:

[screenshot of the tweet]

I suppose we cheated a little by doing it late on a Saturday night though. 🙂

Data

We had a few kinds of data to consider: realtime streaming, analytics results, and configuration data.

Realtime Streaming and Archiving

For archiving of realtime stats, the challenge was going to be the window of time during which old systems were still receiving requests while new servers were starting to take their place. In addition to zero customer impact, we demanded zero data loss.

This was solved mostly by preparation. By having the archive paths include the name of the source doing the archiving, the old and new servers could both archive data to the same place without overwriting each other.
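
For illustration, here's a minimal sketch of what source-named archive keys can look like, assuming the stats land in an S3-style object store written with boto3 – the bucket name and key layout here are made up, not our actual pipeline.

    import gzip
    import io
    import socket
    import time

    import boto3  # assumed client library; the real pipeline may differ

    def archive_batch(records, bucket="yieldbot-archive-example"):
        """Write a batch of realtime stats under a key that embeds the
        archiving host, so old and new servers never overwrite each other."""
        host = socket.gethostname()
        key = "realtime/%s/%s/batch.gz" % (host, time.strftime("%Y/%m/%d/%H%M%S"))

        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
            gz.write(("\n".join(records)).encode("utf-8"))

        boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buf.getvalue())
        return key

Because the hostname is part of the key, both generations of servers can keep archiving to the same bucket for the entire overlap window without any coordination.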

Analytics Results

We currently have a number of MongoDB servers that hold the results of our analytics processes, and store the massive amounts of data backing the UI and the calculation of our realtime matching index.

Transitioning this fell mostly on MongoDB's master-slave capabilities. We brought up the new instances as slaves pointing to the old instances as their master. When it was time to go live on the new servers, a re-sync with Chef reverted them to acting as masters.
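
As a rough illustration of the kind of pre-cutover check involved, here's a sketch using pymongo that confirms a new instance is still acting as a slave and that key collections have caught up before promotion – the hostnames, database, and collection names are hypothetical.

    from pymongo import MongoClient

    OLD = MongoClient("old-analytics-1.example.com", 27017)
    # readPreference lets us read from a node that is still acting as a slave
    NEW = MongoClient("new-analytics-1.example.com", 27017,
                      readPreference="secondaryPreferred")

    def ready_to_promote(db_name="analytics", collections=("results", "index")):
        # The new node should not think it's a master yet.
        if NEW.admin.command("isMaster")["ismaster"]:
            print("new node already promoted?")
            return False
        # Key collections should have caught up to the old master.
        for coll in collections:
            old_n = OLD[db_name][coll].count_documents({})
            new_n = NEW[db_name][coll].count_documents({})
            if new_n < old_n:
                print("%s still behind: %d < %d" % (coll, new_n, old_n))
                return False
        return True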

There was a little bump here where an old collection ran into a problem during replication and was growing to be much larger on the new instance than on the old one. Luckily it was an older collection that was no longer needed, and dropping it altogether on the old instance got us past that.

Configuration Data

Transitioning the config data was made easy by the fact that it uses a database technology we created here at Yieldbot called HeroDB (which we'll have much more to say about in the future).

The beneficial properties of this database in this case are that it is portable and can be easily reconciled against a secondary active version. So we copied these databases over and had peace of mind that we could easily reconcile later as necessary.
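
We're not showing HeroDB's API here, but the "copy now, reconcile later" idea looks roughly like this generic sketch, assuming each side can be exported to a plain key/value JSON snapshot (purely illustrative).

    import json

    def diff_configs(copied_path, active_path):
        """Report keys that changed on the active side after the copy was
        taken, so they can be replayed onto the new servers."""
        with open(copied_path) as f:
            copied = json.load(f)
        with open(active_path) as f:
            active = json.load(f)

        changed = {k: (copied.get(k), v)
                   for k, v in active.items() if copied.get(k) != v}
        removed = [k for k in copied if k not in active]
        return changed, removed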

Testing

We tested the transition in a couple different ways.

As we talked about in an earlier blog post, we use individual AWS accounts for developers, with Chef config analogous to the production environment. In this case we were able to bring up clusters in test environments along the way, before even trying to bring up the replacement clusters in production.

We also have test mechanisms in place to verify proper functioning of data collection, ad serving, realtime event processing, and data archiving. These mechanisms can be used in individual developer environments, test environments, and production. They proved invaluable in validating proper functioning of the new production clusters as we made the transition.
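
A stripped-down version of that kind of smoke check might look like the following – the endpoints, expected status codes, and use of the requests library are hypothetical stand-ins for our real test mechanisms.

    import requests

    # Endpoint URLs and expected status codes are placeholders.
    CHECKS = [
        ("data collection", "https://edge.example.com/collect?e=ping", 204),
        ("ad serving", "https://edge.example.com/ad?site=test", 200),
    ]

    def smoke(checks=CHECKS):
        ok = True
        for name, url, expected in checks:
            status = requests.get(url, timeout=5).status_code
            print("%-16s %s -> %s (want %s)" % (name, url, status, expected))
            ok = ok and (status == expected)
        return ok

Running the same checks in developer environments, test environments, and production is what let us trust each new cluster before and after it started taking traffic.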

The Big Switch - DNS

DNS was the big switch to flip to take the servers from "ready" to "live." To be conservative, we placed one of our new edge servers (which would serve a fraction of the real production traffic in a single geographic region) into the DNS pool and verified everything worked as expected.

Once verified, we put the rest of the new edge servers across all geographic regions into the DNS pools and removed all of the old edge servers from the DNS pools.

The switch had been flipped.
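
For those curious what flipping it can look like, here's a sketch assuming a Route 53-style DNS API driven with boto3; our actual DNS provider and tooling may differ.

    import boto3

    def swap_edge_pool(zone_id, record_name, ips, ttl=60):
        """Point the edge A record at a given set of server IPs."""
        boto3.client("route53").change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": ip} for ip in ips],
                },
            }]},
        )

    # Step one: add a single new edge server alongside the old pool and verify.
    # swap_edge_pool("ZEXAMPLE123", "edge.example.com.", OLD_IPS + [NEW_IPS[0]])
    # Step two: replace the pool with only the new servers.
    # swap_edge_pool("ZEXAMPLE123", "edge.example.com.", NEW_IPS)

A short TTL on the edge records is what keeps the window small: traffic drains from the old pool within minutes of the final change.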

There Were Bumps (but no bruises)

There were bumps along the way. Just none that got in our way. Testing as we went, we were confident that functionality was working properly and could quickly debug anything unexpected. As any fine craftsman knows, you cut to spec as precisely as possible, and there's always finish work to get the fit perfect.

Chef, FTW!

The star of the show, other than the team at Yieldbot that planned, coded, and executed the transition, was Chef.

We continue to be extremely pleased with the capabilities of Chef and the way we are making use of it. No doubt there are places where it is tricky to get what we want, and of course there's a learning curve in understanding how the roles, cookbooks, and recipes all work together, but when it all snaps into place, it's like devops magic.