Posts Tagged ‘measurement’

The problems of measuring the effectiveness of business activity have long been apparent. Since the 19th century, with mass distribution and the beginnings of mass communications and advertising, measurement has only become more difficult, something that was not lost on John Wanamaker in the US or William Lever in the UK, to both of whom the maxim has been attributed:

I know that half the money I spend on advertising is wasted, but the trouble is I don’t know which half…

In the 1960s William Bruce Cameron, an American sociologist, coined another seminal phrase:

It would be nice if all of the data which sociologists require could be enumerated because then we could run them through IBM machines and draw charts as the economists do.
However, not everything that can be counted counts, and not everything that counts can be counted.

But today it seems we live in a world where the media like to reduce complex issues to their most simplistic, where league tables are all-important, and politicians live (and die) on policy-by-sound-bite. A pithy, punchy statistic is worth more than a nuanced argument that acknowledges the interdependence between issues or the subtleties within an analysis.

My daughters’ primary school recently experienced an inspection by OFSTED. The previous inspection, a couple of years ago, had rated the school ‘satisfactory’. That report highlighted lots of positive aspects of the school, including comments from parents and children about how much they liked going there, the friendly and supportive atmosphere, and how the school was well on the way to improving its rating to ‘good’.

Since then, there has been a change within the Department for Education. Just weeks before taking up his appointment as the new Chief Inspector of Schools in January 2012, Sir Michael Wilshaw said:

If anyone says to you that ‘staff morale is at an all-time low’ you will know you are doing something right.

Really… that’s not a style of effective management I’ve ever had recommended to me.

Our primary school’s most recent OFSTED report bears the full force of the changing framework and moving goalposts for school inspections. Before I get started on this, let me say that I am in many ways delighted with some of the tough lessons the school management has had to learn in recent weeks: there are clear areas for improvement and I’m already confident that they are addressing them.

But the report has changed in tone, style and format. The grading ‘satisfactory’ has been renamed ‘requires improvement’. Its opening lines do not accentuate the positive; quite the reverse. Lest anyone be in the slightest doubt, its opening gambit is:

This is not a good school…

…by which it means the official OFSTED rating of ‘good’. But don’t try telling me they don’t know exactly what they’re doing…

The report focuses in the starkest terms on what OFSTED regards as the failings and shortcomings in the school. Positive comments and recent improvements are noted, but almost in passing. Many positive aspects of the school that seemed valuable 2-3 years ago don’t even feature.

There seems to be an enormous focus on management systems, data and measurement, as though they’re only interested in the things that can be measured and quantified, such as “What’s the absentee rate among children with SEN? How has that changed since last year?” I’m not saying this isn’t important; in fact it’s probably a hygiene factor, and the school’s weakness in this respect is to me an annoyance, an unnecessary distraction. It needs fixing, but it should never have been a problem.

Because now the OFSTED report about my daughters’ school is online and official, and it doesn’t read well. I know that it doesn’t reflect our full experience of the school and omits all sorts of positive elements. But new or prospective parents don’t know that: the OFSTED report is a major influence on what they think. All the positive word of mouth and community goodwill can only go so far.

In the same way as children are tutored to pass the grammar school entrance tests, schools are now focusing, at least in the short-term, on data-capturing and reporting. I hope it does improve the outcomes, I genuinely do. But I’m not especially confident.

During the last General Election campaign, David Cameron pledged to drive the education system to do more teaching and less testing. But I Reckon he’s achieved the opposite. SAT tests appear ever more important, and league tables are still published in most major newspapers as the be-all-and-end-all for parents to judge their schools by. My younger daughter was tested on her phonics aged just 6, despite all sorts of evidence against that approach.

Phonics testing cartoon

There is more testing and measurement now, against pre-defined criteria that are based not always on the weight of evidence but on a political agenda. Moreover, this testing starts sooner, such that we could soon be testing and grading our children from a very early age, when they develop differently, with different types of intelligence and skills. We could marginalise those who do not match the profile of what Michael Gove regards as a model pupil: rigidly academic, following a prescriptive curriculum based on facts and memory.

I Reckon a one-size-fits-all set of criteria for measuring children is flawed, and the current obsession with quantifying and counting everything is at best imperfect and at worst could suppress children’s personalities and creativity. I’m no expert, but Sir Ken Robinson is, so if I haven’t convinced you, maybe he can.


Read Full Post »

It is both the blessing and the curse of direct and digital marketing that it is inherently measurable and accountable, through response rates, online metrics, tagging and tracking. William Lever should be able to rest easy, as his exasperated claim that “half my advertising money is wasted; the problem is that I don’t know which half” becomes less and less relevant.

Or at least it should be. However, a good deal of marketing on major brands today is still built on sand, or to be more precise, massive generalisations and assumptions; just like the one I’ve made there. I can’t begin to measure that statement accurately, so instead (like the decent marketer I am) I have made a confident assertion that appeals to both your common sense and your cleverness, because it’s probably true, and anyway, I seem to know what I’m talking about.

Marketers (and politicians too) use indices to create a story that serves their needs. An index of 150 against some criterion or other certainly means some kind of stronger propensity to watch a type of TV show, or to agree with some statement, but it’s often then taken one step further. It suddenly becomes an entirely different phenomenon: from being ‘more likely to’ (a matter of relativity) it morphs into ‘most…’ (an issue of absolutes).
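To illustrate that slip from relative to absolute, here’s a minimal sketch with made-up numbers (the 10%, 15% and 5% figures are assumptions for illustration only, not real audience data):

```python
# Hypothetical figures: 10% of all adults watch a show,
# 15% of "Segment A" watch it, and Segment A is 5% of the population.
base_rate = 0.10      # share of all adults who watch the show
segment_rate = 0.15   # share of Segment A who watch it
segment_size = 0.05   # Segment A's share of the whole population

# The index marketers quote: segment propensity relative to the base.
index = round(segment_rate / base_rate * 100)
print(index)  # 150 -> Segment A is 'more likely to' watch

# But 'more likely to' is not 'most of the audience':
share_of_viewers = (segment_rate * segment_size) / base_rate
print(round(share_of_viewers * 100, 1))  # 7.5 -> only 7.5% of viewers are in Segment A
```

So an index of 150 for a small segment can coexist with that segment making up a sliver of the actual audience: exactly the gap between ‘more likely to’ and ‘most’.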

This can be dangerous stuff, as that sort of misrepresentation can quickly seep into the received wisdom, and become the unquestionable corporate truth; easy to understand but simplistic, clear but at the same time misleading. Ben Goldacre is the author of the very excellent Bad Science book, columns and blog, from which he also sells t-shirts. I love this one…

Bad planning and assumptions make for bad objective-setting, which in turn makes for bad measurement. If the goals aren’t clear, or aren’t based in some kind of truth, measurement is nigh on impossible. If you haven’t worked out upfront how to measure what you want to measure, and if you don’t establish the means to capture the information you need, how will you know what success looks like?

But that’s when marketers really come into their own, skimming over the quantitative technicalities (marketers often have short attention spans and glaze over at details), focusing instead on the qualitative positives, and looking to the future. To be honest, this is often the best policy. While simplistic goal-setting and measurement can be a weakness, so too can the fabled analysis-paralysis, in which every last figure and metric is scrutinised well beyond its natural meaning. Every blip or shift in the scores is pored over, and further ‘digging’ is done to uncover the root causes. Weeks are spent trying to reveal insights when, in fact, the judgement of clever and experienced people using their common sense might serve the company better. It helps make decisions, focuses further work, and makes the story easier to believe; like you believing my assertion earlier.

So what’s my point? I hear you ask… good question. We should measure marketing more carefully, but that risks running into navel-gazing and slow, poor decision-making. On the other hand, a lack of measurement leads to overly simplistic objective-setting and decision-making. Yikes. That sounds tricky.

Remember the title? It might well be true, but no one likes a smartass.

Read Full Post »