There are so many discussions swirling around social media measurement these days, and the discussions I’ve had recently at conferences have reinforced the fact that as a whole, measurement of communication is incomplete at best. We’re not satisfied with what’s available to us in terms of proving the value of what we’re doing.
CAN we measure elements of social media impact, reach, and yield? You bet. There are lots of metrics, new and old, that contribute to building cases for attribution of purchase influence, customer and transaction values, and the like.
But the challenge of doing a good job measuring our communication, customer outreach, and marketing initiatives has always been a sticky one, for one specific set of reasons.
Influence Isn’t Cause.
Communication and relationship development have always resided – and will always reside – in the gray area of actions that influence and impact purchase and buying behaviors, but are not always their direct and only cause. We desperately want to be able to say that our advertising or our press release or that game of golf was solely responsible for a customer’s decision to buy from us.
But more than likely, it’s the combination of several experience touchpoints directly with the company, external influences (the opinions of friends and family, for instance), and things like context and timing (my purchase of a new dishwasher is driven by need, but my impression of a company based on other experiences might steer me their way) that makes up the bigger, complete picture of a sale.
Even in the world of direct marketing, where you can track the path from touchpoint to purchase with codes or links or whatever, you cannot say with absolute certainty that that marketing effort was the singular cause for the purchase. It might have been the catalyst or the impetus for the purchase at that time, but it’s likely not the only thing that guided someone’s decision to buy, yet we measure it as such.
Measurement has always been imperfect. It’s not just social media measurement.
Frankly, we as communicators and marketers and PR people have relied on a flawed set of measurements for a long time, and we’ve always been lousy at demonstrating the impact of our work. There’s a reason why CMO tenure is ridiculously short. And I’m not so much alarmed at the figures that say we’re bad at measuring social media because, honestly, we’re bad at measuring lots of things. We’ve just told ourselves otherwise, content to settle into the metrics we do have, even if they’re not really telling us anything of substance.
The gray area in measurement is in the combination of:
- Correlation: Sales go up while our marketing reach does, so the two must be related
- Attribution: Our press release was part of the overall promotion strategy, so must have had some impact on the whole
- Influence: Awareness of our company is reinforced by a recommendation or endorsement from a Trust Agent
All of these things matter. They all impact the likelihood of sales. But none of them alone is the cause, and that’s why folks flip out over ROI equations. Just how much of that revenue can you attribute directly to social media, or traditional marketing, or public relations, or the skills of your sales guy? Unless you’re employing only a single strategy, it’s more likely that your ROI equation is related to the whole.
How to improve it?
What we need to keep exploring in social media is conversation pathing. Online gives us the best shot at refining measurement that we’ve ever had. The idea is that we can trace all of the digital breadcrumbs – conversation points, recommendations and commentary, discussions that include a brand within a larger conversation, content marketing, reviews, captured offline experiences – and create a weaving, meandering path through the social space, moving the needle from separate influence points to an overall sense of how the profile of the aggregate conversation drove the customer to the finish line.
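To make the idea of crediting an entire conversation path more concrete, here is a minimal sketch of one way it could work: a time-decay rule that spreads credit for a sale across an ordered sequence of touchpoints. The touchpoint names and the decay factor are purely illustrative assumptions, not any particular vendor’s model.

```python
# Hypothetical sketch: distribute credit for one sale across an ordered
# conversation path using a time-decay rule (touches closer to the
# purchase weigh more). Names and decay factor are illustrative only.

def time_decay_attribution(path, decay=0.5):
    """Return {touchpoint: share of credit}, with shares summing to 1.0.

    Each step back from the sale multiplies the weight by `decay`,
    so the touch closest to purchase gets the largest share.
    """
    n = len(path)
    raw = [decay ** (n - 1 - i) for i in range(n)]  # oldest touch gets the smallest weight
    total = sum(raw)
    credit = {}
    for touch, weight in zip(path, raw):
        credit[touch] = credit.get(touch, 0.0) + weight / total
    return credit

path = ["blog review", "friend's tweet", "brand reply", "product page"]
shares = time_decay_attribution(path)
# The last touch earns the largest single share, but every point in
# the conversation keeps some credit rather than zero.
```

The point of the sketch is the shape of the answer, not the numbers: no single touchpoint gets 100% of the credit, which matches the argument above that influence is spread across the whole path.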
It’s a headful, alright. And I love that I’m working with a company that’s pushing boundaries on this to try and connect the dots so much better. But we have got to get out of the mindset that all of the old metrics still apply, that new metrics don’t have a place because they don’t have precedent.
Our communication is evolving. Our business foundations are changing. Our measurement practices, and the work we put into quantifying the value of what we do, need to change, too. It’s going to take work and elbow grease and lots of methodical, meticulous trial, tracking, and refinement.
But if justification and proof is what we want, we’d better be willing to do the work it takes to get there.
So, I want to hear from you. Are correlation and impact enough? Can we ever really prove and demonstrate cause – and do we need to? And above all, where is the balance between granular measurement that distorts focus and measurement that highlights the business insights we so desperately need?
The comments belong to you.
Great post, Amber, as it puts out there in a very subtle yet effective way the same thing we marketers know is true: we suck at measuring! But hey, we’re pretty good at venturing a guess and at persuasion. That’s what allows us to sleep better at night, and as long as you keep challenging yourself to do better at both, you’ll show your value to others. –Paul
You really put a lot of thought into this Amber – thanks for the discussion. This is obviously a hot topic right now, and doesn’t show any signs of settling down anytime soon.
I have to wonder if we (the Interactive Marketing industry) screwed ourselves 10 years ago by harping on measurement too much. I’m guilty of it – we used to chide traditional advertising and marketing as difficult (or even impossible) to measure, while we provided wonderful statistical evidence of purchase intent, brand awareness, event attendance, viral growth, etc. through click through rates, web visits, RSVP lists, referral links, and the like. The point is that while those metrics were obviously quite valuable when analyzed in the proper context, they didn’t tell the whole story, just as the metrics we have available to us today don’t tell the whole story. Short of embedding a chip in everyone’s brain that tracks all of the impressions, recommendations, brand experiences, etc. that an individual parses before an actual purchase, we will never be able to confirm hard and absolutely 100% reliable ROI.
Even with the conversation pathing you described above (which is excellent thinking btw), there is still a whole other world out there of non-digital elements to consider. Traditional media, person-to-person interactions, physical experiences with the brand, and other factors will always play a part in a person’s decision to purchase. How do we measure that – we can’t always. That doesn’t mean these things aren’t valuable and potentially even more causal than anything we are measuring digitally.
We should seek to measure everything we possibly can, but I would argue that we cannot get so wrapped up in the metrics that we don’t leave room for the human factor – i.e. analysis of the numbers by humans to provide an educated opinion on what they mean.
Thanks for the thought you put into this. It’s obvious you are right at home at Radian6. 🙂
Great post, Amber. I agree with Brandon that we’ll never get a perfect metric unless we put a chip into our brains. The Internet has given us a great opportunity to measure results and ROI, especially with the new innovation in conversation metrics.
I think the most important thing we need to understand is that measuring a response requires as much numerical information as it does information gathered through social interactions with the customer. I know lots of marketers and business executives ask for hard numbers as measurements, but we forget our brains are made to process both numbers and words, because we need both to get a proper understanding of our surroundings. For example, we need to get out there and learn that maybe it’s not our great campaign (which may not be so good) but our great product that’s creating buzz in some market.
On the correlation matter: the Internet now provides an environment in which we can – sort of – control some variables to measure correlation. For example, we can put out three versions of a website to test whether a change has an impact on sales, but it’s still very difficult to be sure that a correlation exists, mainly because it’s impossible to isolate a single variable of our marketing efforts. We can get better at guessing, though, by getting direct information from users: learning how they heard about us, whether they talked to someone, and what the overall impact of that was.
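As a rough illustration of the kind of split test this comment describes, here is a minimal two-proportion z-test comparing the conversion rates of two page variants. The visitor and conversion counts are made-up numbers for demonstration, and even a significant result shows association under this test, not the full cause of the sale.

```python
# Sketch of a simple A/B comparison: did variant A convert at a
# different rate than variant B? Counts below are invented examples.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 120 conversions out of 2,000 visitors vs. 90 out of 2,000
z = two_proportion_z(120, 2000, 90, 2000)
# |z| > 1.96 suggests the difference is unlikely to be chance at the
# 5% level -- evidence of an effect, not proof of sole cause.
```

This is exactly the "sort of controlled variable" the comment mentions: the test can tell you the variants performed differently, but not which of the many other influences got the visitor to the page in the first place.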
Thanks for starting this discussion. I feel the new measurement tools are headed in the right direction, giving us a more complete set of data to pair with the qualitative data we get from our customers.
Hi Amber,
Excellent analysis! Couldn’t agree more that measurement in general (all types) is imperfect, and what’s needed most is a lot of work and a dedication to tease out whatever insights we can find. There are some fundamentals at the heart of it. Whenever we talk about the process individual people use to make buying decisions, it is always 1) part emotional 2) part logical/rational and 3) likely a unique, circuitous path that could not have been accurately predicted. It’s also probably a combination of factors, both offline and on, over a period of time. The direct marketing and contact center people call these touch points, and even in social media I think the term has some validity.
The thing is neither social media nor any kind of online marketing today is capable of “seeing” ALL of the touch points–only a subset. Trying to predict outcomes based on partial data is like trying to win a chess game by knowing all the rules except two. Pretty hard.
Good measurement requires standard units of measure that are also (obviously) valid. Too often the assumptions imposed by any effort at standardization render metrics invalid, or at least introduce errors. I think the lead nurturing people are on the right track when they try to split the process into small steps. The smaller steps are easier to define, and it’s easier to infer a more limited causality. (Besides, in general, busting things up into component parts is how we learn things.) It’s not hard to see that while nirvana might be the ability to know that the golf game/social convo/press release resulted in the closed sale, it would be much easier to just find out if (whichever one) got you on the short list and in consideration.
If we do approach it from a component parts analysis, then it’s quite likely that we need to invent a whole bunch of new metrics that don’t exist yet and may have no prior equivalents.
I love your concept of conversation pathing. It reminds me of a new model of online advertising tracking that Microsoft/aQuantive/Avenue A has been working on. Instead of giving all the conversion attribution to the last ad a user saw, it tracks all of the ads across the network that the user saw and ‘gives back’ attribution in different percentages to each of them, based on whether it was simply an impression or an interaction.
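A tiny sketch of the general idea described here: split a conversion’s credit across every exposure, weighting an interaction (say, a click) more heavily than a mere impression. The 3:1 weighting and the ad names are illustrative assumptions, not the actual model that Microsoft/aQuantive/Avenue A uses.

```python
# Hypothetical multi-touch attribution: weight interactions above
# impressions when splitting credit for a conversion. The 3:1 ratio
# is an assumption chosen for illustration.

WEIGHTS = {"impression": 1.0, "interaction": 3.0}

def split_credit(exposures):
    """exposures: list of (ad_id, kind) seen before the conversion.

    Returns {ad_id: fractional credit}, with credits summing to 1.0.
    """
    total = sum(WEIGHTS[kind] for _, kind in exposures)
    credit = {}
    for ad_id, kind in exposures:
        credit[ad_id] = credit.get(ad_id, 0.0) + WEIGHTS[kind] / total
    return credit

credit = split_credit([("banner_1", "impression"),
                       ("video_2", "interaction"),
                       ("banner_1", "impression")])
# The interaction earns the largest share, but the two impressions
# still keep part of the credit instead of being zeroed out.
```

Contrast this with last-touch attribution, where `banner_1` and `video_2` would get either all of the credit or none; the weighted split is what lets the model ‘give back’ percentages to earlier exposures.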
We can never know what really combined to lead to a conversion – mostly because, the majority of the time, customers probably can’t articulate it themselves. This changes with the complexity and cost of the product, of course – the more expensive it is, the more deliberate the choice – but it’s often a complex choice made up of many factors, and every customer makes decisions in slightly different ways.
Today with our ability to track almost every action online, it’s not a matter of having the ability to track – it’s defining what selection of things to track that won’t cost more than the value received from tracking/reporting it (which can be high since data is often not consolidated).
We have a Community Roundtable call on measurement with KD Paine today, and this has given me plenty to chew on… thank you!
It is interesting to me that we find it not only difficult to use measurement to pinpoint the exact cause(s) of success but also challenging to determine if we are measuring the right things in the right ways to begin with. It often seems exceedingly easy to measure and find the cause of failure. Is this an illusion, or is it really easier to track the breakdown?
This will be a curious challenge. I was talking with Stacey Monk, founder of TweetsGiving, at Carnegie Mellon University last week. We both served on a panel regarding social media. I was talking about quantitative and qualitative measurements and she said that “when it works, you’ll know.” I explained that for smaller organizations, that may be fine. But for those of us in government, universities, corporations, etc. we have to justify every change in our budget line items. Dollars allocated to these initiatives mean dollars not going somewhere else. Staff time devoted here means other projects taking a lower priority.
Who has the best ideas about measurement and evaluation to date? What do you think of Olivier Blanchard’s model of the timeline and measurement exhibited in his presentation at http://smroi.net?
Cara,
It is interesting that companies, governments, and universities haven’t really learned what Amber is trying to illustrate in this post – there is no one single factor that you can point at that shows why someone does anything. Humans are complex creatures, and while we can gather data, show trends, and make educated guesses, in the end, that’s what they are – guesses.
Any executive who is asking for hard ROI numbers on social media is, quite frankly, asking the wrong question. A better question to ask is, ‘what happens if we *don’t* attempt to engage with our constituents?’ I do think that there is a dearth of measurements of the ‘negative path’ question – what happens if we choose to ignore a trend that clearly shows promise based on past successes.
Anybody who claims it’s harder to measure the effects of communication through social media than other channels either doesn’t know much about communication research or doesn’t understand social media. A consumer’s digital footprint is many times more visible and more easily defined – qualitatively and quantitatively – than the impression of her offline activities. Social media activity provides more data points and more context to consider when we measure communication effects. Social activity tracking software is continually evolving, providing even more refined data. I fail to understand how anyone can argue with a straight face that social media measurement is somehow less precise or more difficult than conventional communication metrics, digital or otherwise.
(Trying to hold my best straight face … ;-)) I doubt any of us would disagree with you, D_, about digital “footprints.” They’re there, all right. The trouble is, so often they do NOT correspond to an individual’s evaluation and purchasing process or behavior. A lot of the easily obtained metrics are not worth the paper they’re not written on. If they don’t correspond, they’re simply irrelevant and therefore not useful.
Granted I come from the B2B complex sale space, where the problem may be more pronounced, but I suspect it’s true in consumer also, with the difference being only a matter of degree.
I interpret a main thrust (but not the only) of Amber’s post as a call to solve the calibration problem. In other words, one of the biggest challenges with measurement, in social media as well as elsewhere in marketing, is doing QA on the metrics … figuring out if they actually measure what they’re supposed to and tinkering and adjusting as needed and then defining them well enough to ensure consistent and valid results. I agree with this observation.
If you think that’s been solved, then I guess we don’t agree.
Re: Keli’s point, I think it’s very insightful. It makes sense that finding points of failure is easier, for 2 reasons. One, to find a failure you simply have to find a single broken link in the chain. But to ascribe causality to success is a much more complicated and multi-step process involving many variables. Hard stuff. Second, while the same broken link might apply to many instances, the path to success can easily vary and lack consistency. If so then you have to make many individual calculations to determine success vs. a single, same point of failure applied to many instances.
Indeed, the social media metrics tools are getting dramatically better. I’ve had the chance to try some that a friend was using, and I still want to check out all of the others available. The problem is not that we can’t measure; it’s that in some cases we tend to attribute results to a ‘campaign’ (to call it something) just because they occurred in sequence. And sometimes we fail to attribute results to something that may have produced them, simply because the effect took longer or we weren’t measuring it.
I doubt our measuring techniques will become perfect anytime soon, but they are getting much better, and I hope that with the semantic web the qualitative part can become easier and more scalable (just a huge hope of mine). What we need to do is study our data a lot and learn how our customers think through the long-term relationships we build with them.
No, Steve, I don’t think we disagree. My point wasn’t that SM measurement is bulletproof, but rather an echo of one of Amber’s main points: “Measurement has always been imperfect. It’s not just social media measurement.” The “calibration problem” isn’t unique to SM metrics, as you point out. From a communication effects research perspective, the visibility of interactions in online social environments provides a much richer, more contextualized data set in comparison to those typically analysed for traditional media. As Amber notes, following the digital footprints down the conversational path may prove the best theoretical approach to understanding media consumption and message impact. I certainly think so. Regardless, my point was simply that SM measurement is certainly no worse off than any of the alternative channels businesses currently invest in. The argument that it’s too hard to measure success or quantify ROI is just wrong, at least in comparison to the other available options.
Dealing with multiple points of influence has always made this issue difficult to resolve, and the problem is far greater in the digital age. Consider someone buying a BMW because it’s:
1) Good looking
2) High performance
3) Reliable
While you can obtain this preference data directly from the buyer, knowing how much of a factor each attribute played in the purchase decision is difficult enough; then you have to untangle the influence of each source:
1) Friend recommendation
2) Television advertising
3) Online advertising
4) Blog posts from owners
5) Test drive
Does digital make a difference? Absolutely. But how much, and how did it interact with (and hopefully reinforce) the messaging from other media? While you may be able to follow someone’s digital bread crumbs, the difficulty is merging that path with the buyer’s offline path.
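To see why this is so slippery, it helps to notice that the standard attribution heuristics give different answers for the very same purchase path. Here is a minimal sketch, using the touchpoints from the list above in a purely hypothetical ordering; the credit-splitting rules shown (last-touch, linear, position-based) are generic industry heuristics, not anything claimed in this thread.

```python
# Three common attribution heuristics applied to one hypothetical
# purchase path. Touchpoint names come from the list above; the
# ordering and the 40/40/20 position weights are assumptions.

path = [
    "television_advertising",
    "online_advertising",
    "blog_posts_from_owners",
    "friend_recommendation",
    "test_drive",
]

def last_touch(path):
    """All credit goes to the final touchpoint before purchase."""
    return {t: (1.0 if i == len(path) - 1 else 0.0)
            for i, t in enumerate(path)}

def linear(path):
    """Equal credit to every touchpoint on the path."""
    share = 1.0 / len(path)
    return {t: share for t in path}

def position_based(path, first=0.4, last=0.4):
    """U-shaped: heavy credit to first and last touch,
    remainder split evenly among the middle touchpoints."""
    credit = {t: 0.0 for t in path}
    credit[path[0]] += first
    credit[path[-1]] += last
    middle = path[1:-1]
    for t in middle:
        credit[t] += (1.0 - first - last) / len(middle)
    return credit

for model in (last_touch, linear, position_based):
    print(model.__name__, model(path))
```

Each model conserves total credit (the shares sum to 1.0), yet they disagree sharply about, say, how much the test drive mattered versus the TV spot, which is exactly the ambiguity described above. And none of them can see the offline half of the path at all.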
The digital age does complicate the influence equation, but you can’t put the genie back in the bottle. No doubt that influence analysis was easier when advertising messages were restricted to print and radio, but those days are long gone (and to be fair, even back then you’d have a hard time demonstrating with any degree of certainty the relative importance of a radio spot versus the word-of-mouth recommendation from the neighbor down the street). I think the point of Amber’s post, and certainly the one I’m trying to make, is that social data isn’t any more problematic in this regard than data from other digital channels. More importantly, socially disseminated messages are impacting consumer behavior regardless of corporate involvement. People hear about products from friends, form opinions based upon the experiences of their social peers, and make purchasing decisions that are influenced by their online interactions in social media contexts (and yes, the data supporting these assertions comes directly from consumers themselves). You’re certainly right about the difficulty of determining influence in the absence of direct data from consumers, but that’s not a reason to hold back on social media engagement; it’s an ineluctable fact of the digital age. My suggestion isn’t that we abandon measurement altogether, but that we reconsider the questions we’re trying to answer.
I prefer to look at social media ROI in an even simpler way: it’s measured one customer at a time.
Done properly, social media is about creating individual, personal, and meaningful connections with listeners, turning them into brand advocates.
http://www.cyberbuzz.com/2009/11/05/times-square-or-twitter/