The five qualities of successful experimental marketing programs

In the high-technology space, marketers often find themselves attempting to develop new marketing and sales models.  Maybe you’re going after a new segment or offering a new product.  Oftentimes the mix of marketing and sales activities you use for your established products and customer bases is not successful with these new initiatives, even if the product itself is solid and the customer need is real.

Under these circumstances, marketing and sales leadership find themselves in the mode of discovering a business model rather than executing one.  In other words, the point of marketing and sales activities is not actually to make money but rather to determine the methods that will enable the company to build up to a successful revenue stream.

This distinction is important and, in my experience, usually lost on those responsible for marketing and sales strategy.  Therefore you see behaviors like forcibly growing revenue at negative profit by scaling up programs that are too inefficient to be sustainable (“We’re losing money on every unit sold, but we’ll make it up in volume”) or plugging away at some minor, one-off program that at best can hope to yield a few hundred thousand dollars (“There are only fifteen target customers in the world, but damn it, we’ll get ‘em all”).  These are bad decisions in all but the most extreme circumstances, but they happen routinely when an organization becomes completely focused on hitting a revenue target (executing a business model) instead of building the knowledge base that will later allow it to grow and profit at the same time (discovering a business model).

When I’m in charge of the marketing for a new product initiative, I always insist on understanding whether we’re executing a proven business model or seeking a new one.  And even in the case of existing, successful businesses, companies are often trying to expand into parallel markets, improve efficiency, or simply eat their own children before competitors do.  In these cases a company may actually run both activities simultaneously, sometimes even on the same customer base.

When marketing programs are in place to seek a business model rather than execute one, I have a handy list of the five key qualities to which every activity or campaign must adhere.  Violate any of these rules and you’re getting confused about the distinction between seeking and executing, to your own detriment.

Successful experimental marketing and sales campaigns and activities must be:

  1. Testable
  2. Repeatable at scale
  3. Affordable
  4. Offering speed to results
  5. Not absurd

Let me explain each of these qualities in turn:

Testable.  When we’re seeking a business model, that means we’re in a scientific mode.  We have hypotheses about how our activities may affect customer behavior, ultimately resulting in sales.  We spend our budget, skill, and manpower to test these hypotheses, and based on the measured results of these tests, we can adopt and implement ideas (a home run), further explore or refine where potential appears to exist (a base hit), or abandon them completely (a strikeout).  Without the testing component, there is no progress.  We don’t know what we can scale up, what we need to modify, and what we simply throw away.

Note that not all tests result in metrics or numbers.  Often that’s what we’re looking for (e.g. leads generated or incremental sales booked), but sometimes the test is softer.  Was there a lot of interest at the trade show booth?  How much PR pickup did we get?  Does the partner community approve of the idea?  These test results are also very important, even though they aren’t conducive to an ROI calculation.

Repeatable at scale.  The point behind testing is that when you find results that work, you can go out and repeat them, and ideally you can run them much bigger than you did the first time.  The classic example is a direct marketing program (let’s say direct mail to a specific target list) in which we measure the ROI of the program.  If it’s positive, we can then run the same program at ten or 100 times the scale of the original test.  The upside is large and the downside is small.
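The test-then-scale arithmetic behind that example can be sketched as follows (all the dollar figures here are hypothetical, purely for illustration):

```python
# Hypothetical direct-mail test. All figures are illustrative assumptions,
# not real campaign data.
def roi(revenue, cost):
    """Simple ROI: net return per dollar spent."""
    return (revenue - cost) / cost

test_cost = 5_000        # cost to mail a small target list
test_revenue = 12_000    # incremental sales attributed to the test
test_roi = roi(test_revenue, test_cost)
print(f"Test ROI: {test_roi:.0%}")  # 140%

# If the measured ROI is positive, rerun the same program much bigger.
if test_roi > 0:
    for scale in (10, 100):
        print(f"{scale}x scale: cost ${test_cost * scale:,}, "
              f"projected revenue ${test_revenue * scale:,}")
```

The projection assumes response rates hold at larger list sizes, which is exactly the kind of assumption a follow-up test should confirm.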

Activities that aren’t repeatable at scale are not model-seeking activities.  A one-off, unique opportunity that will never come up again in your lifetime is not repeatable.  Even if it’s gloriously successful, there’s nothing you can do with that.  Now, a marketer may still decide to take a flyer on such an event, if the price is right and it seems likely to help the business.  But under those circumstances that is a business model execution decision, not a business model seeking decision.  You’re running the program for the direct benefit in itself, not for knowledge that you will apply in a scaled-up fashion.

Affordable.  One fact of life is that we have limited resources.  Budget, time, attention, development roadmap, number of times you can send offers to your installed base without burning them out – all these things are examples of finite resources that can run out.

Experimental programs consume these resources, just as model-driven programs do.  The more resources you spend on your experimental programs, the less you have left to hit your critical business number.  Therefore when running experimental programs, it’s important to keep a tight grip on costs.

If we use the above direct mail example, test cells can often be very small compared to the dialed-in programs that you run at scale.  If your average test cell is just a few percent of the size of a scaled-up program, then you can afford to try out several reasonable hypotheses.  If they don’t turn out, you haven’t lost much, but if they do work, then you can suddenly multiply them thirtyfold and enjoy a big win.
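As a rough sketch of that sizing logic (the budget figures and the 3% fraction are assumptions for illustration, not recommendations):

```python
# Illustrative numbers only: sizing an experimental test cell as a small
# fraction of a scaled-up program, per the "few percent" rule of thumb.
scaled_program_cost = 200_000   # hypothetical cost of a full rollout
test_fraction = 0.03            # test cell ~3% the size of the full program

test_cell_cost = scaled_program_cost * test_fraction
print(f"One test cell: ${test_cell_cost:,.0f}")  # $6,000

# Even funding several hypotheses stays cheap next to one full rollout.
n_hypotheses = 5
print(f"{n_hypotheses} test cells: ${n_hypotheses * test_cell_cost:,.0f}")
```

At these assumed numbers, five failed experiments together cost less than a fifth of one scaled program, which is what makes the downside tolerable.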

Offering speed to results.  If we’re here to learn, then speed to results is critical.  I want a program where I can get a sense of how it turned out in a few days and a fully valid result in a month.  That’s important because the experimental programs themselves (being small in scale, likely not to work out, and never fully optimized) even under the best of circumstances do not constitute business success.  We have to take those results and implement them before we get the financial and market rewards we require to call our businesses successful.  Add in the fact that discovering the right sales and marketing model often requires multiple – and sometimes many – iterations, and you have an environment where speed is key.

Not absurd.  Note that I didn’t say proven or certain or unable to fail.  Note that I didn’t even say likely.  Often these are the standards businesses hold all marketing and sales activities to before giving them the green light.  If you do that, then you’ll create an environment where you can never truly experiment.  Instead, we need a lower standard than that.

Now, I said not absurd because we’re not here to do dumb stuff either.  Remember that earlier discussion of limited resources?  We certainly can’t be wasting them, not even on our experimental programs.  Therefore each experimental program has to be based on a reasonable hypothesis, one that seems to have a very real chance of success.  There’s nothing wrong with some of your ideas not panning out; in fact, if every idea pans out, you’re probably not being aggressive enough in your experimentation and therefore missing out on some of your potential.  But at the same time, make sure that everything you try offers a genuine, legitimate contribution to the base of knowledge you have for your market and its responses to your products.  Make sure you only try things that might work.

Apply all five of these criteria rigorously to every experimental marketing and sales activity your organization undertakes, and you’ll be further ahead in discovering new opportunities, markets, and offerings than you would be any other way.
