Consider two scenarios:


  • You have completed two sales.  One took 1 year (52 weeks), the other took 1 week.  Average the two, (52 + 1) / 2, and you could conclude that the sales cycle is 26.5 weeks.
  • You have a set of data for 10,000 completed sales.  They closed in a wide variety of timeframes, but the average sales cycle comes out to 26.5 weeks.


At these extremes, common sense should tell you that something is rotten in the first example.  Who really knows what the statistic would be with a full set of data?  We have an average, but our confidence in it is extremely low.  Statistical analysis would assign a Confidence Interval, a range within which the true value very likely lies.  Graphically, these are shown as “error bars.”
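To make the contrast concrete, here is a minimal sketch of the calculation (in Python with NumPy and SciPy; the libraries and the simulated 10,000-sale dataset are illustrative assumptions, not part of the original):

```python
import numpy as np
from scipy import stats

def mean_ci(samples, confidence=0.95):
    """Return the sample mean and a t-based confidence interval around it."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    mean = samples.mean()
    # Standard error of the mean; ddof=1 gives the sample standard deviation
    sem = samples.std(ddof=1) / np.sqrt(n)
    # The t multiplier widens the interval when n is small
    margin = stats.t.ppf((1 + confidence) / 2, df=n - 1) * sem
    return mean, (mean - margin, mean + margin)

# Scenario 1: two sales, 52 weeks and 1 week -- the average is 26.5 weeks
print(mean_ci([52.0, 1.0]))
# -> roughly (26.5, (-297.5, 350.5)): an interval so wide it is useless

# Scenario 2: 10,000 simulated sales averaging ~26.5 weeks (illustrative only)
rng = np.random.default_rng(0)
sales = rng.gamma(shape=4.0, scale=26.5 / 4.0, size=10_000)
print(mean_ci(sales))
# -> the interval now hugs 26.5 within a fraction of a week
```

With two data points the interval spans hundreds of weeks, so the 26.5-week average tells us almost nothing.  With 10,000 points the interval shrinks to a fraction of a week around the same average, which is exactly what tight error bars would show.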


Is it practical to attach a confidence interval to every statistic you look at?  Certainly not.  The biggest idea to take from this concept is that every number needs to be examined with a critical eye.  Using the available data remains the best way to operate, but accepting every number without some wiggle room around it would be to ignore the uncertain nature of business.


The underlying condition this addresses is the false positive.  Upon seeing a metric that lies outside the expected range, our first impulse might be to draw a hasty conclusion.  Even good data can lead to a misguided conclusion if we trust it blindly.  Sales Performance Management may enable a more data-driven approach, but it still exists in the real world, where nothing is ever quite perfect.

