The first-born, ugly duckling of social media is back.  Grown up, chic and mysterious.

New Myspace: Can It Make A Comeback?

If you visit myspace.com/pressroom, there is no official announcement of an update. However, the ‘New Myspace’ video uploaded on Monday, September 24, 2012 highlights a new design and user experience. The video sparked a lot of conversation around whether or not MySpace will make a successful comeback; we haven’t formed a final opinion yet, but we’re still interested to see how successful the relaunch is going to be. Here’s some context:

Facts:

  • MySpace has 42 million songs in its music library (compared to Spotify’s 15 million).
  • Though MySpace has a comeback to make, it already has a user base. According to Compete.com, MySpace.com saw around 20 million unique visitors in August 2012.
  • Specific Media, a company that specializes in online video advertising, bought MySpace about a year ago.

Things we like:

  • The new format looks clean and easy to navigate. The layout echoes a combination of Pinterest, Facebook and Google+.
  • MySpace differentiates itself from other social networks by continuing to focus specifically on music.
  • Echoing the invite-only launches of Pinterest and Google+, MySpace’s mysterious video ends with a call-to-action asking users to “Request An Invite.”

Challenges:

  • It’s not easy building a successful social network or indeed making a comeback as one.
  • Facebook, Twitter, Pinterest, Google+…is there room for another social network?
  • Music services such as Spotify and Grooveshark now represent further competition for MySpace.

We monitored the online conversation from the time the new Myspace video was posted (September 24) through September 27. Across blogs, Facebook, Flickr, YouTube, Reddit and more, Twitter was the venue of choice for most people, representing 83% of the conversation. Bear in mind that since the new MySpace format is not available yet, people can only make judgments from the two-minute video. Perhaps as a result, most of the conversation was neutral, as people simply spread the news about the soon-to-be revamped site without stating an opinion. In terms of the polarized conversation, however, positivity outweighed negativity (25% vs. 8%).

Despite not winning everyone over, the video generated a large volume of conversation and other types of social activity in a short period of time: from September 24 through September 27, 2012, there were 14,549 tweets and 3,890 bit.ly clicks to the MySpace Vimeo video.  We’ll see in time whether this activity turns into a successful relaunch.

SGS Speakers

Last week, Mashable hosted their annual Social Good Summit in NYC. The summit live-streamed discussions between NYC, Beijing and Nairobi in seven languages, and Converseon was there to see the speakers live at the 92nd Street Y.

Leaders from around the globe discussed how social media is being used to generate support for causes and fundraise, with audiences participating via the summit’s hashtag, #SGSGlobal. Cyber-security, women’s empowerment, child mortality and spirituality were among the many topics discussed during the summit. One of Converseon’s alums, Paull Young, spoke on behalf of his current organization, charity: water – kudos for participating in an engaging panel, mate!

Converseon took a look at tweets containing the #SGSGlobal hashtag published during the summit to see which panels received the most attention on Twitter. #Promise4Children – a hashtag dedicated to the UNICEF initiative ‘A Promise Renewed,’ which aims to reduce child mortality – emerged as a main discussion point for this year’s summit. The initiative was launched in June of 2012 to end preventable child deaths by giving key aid to pregnant women and newborns in developing nations. Authors tweeted about the panel and encouraged others to sign the pledge for child survival, leading hashtags like #UNICEF, #children and #childmortality to frequently appear in discussions about SGS 2012 as well.

For more information on how our SocialTrack offering can help you understand the impact of your events, please contact Jasper Snyder (jsnyder@converseon.com).

More than two weeks ago, London held the Opening Ceremony to celebrate the start of the 2012 Olympic Games. The event certainly made a splash in the online world, though it was clear that U.S. audiences were not quite sure what to make of it all. The New York Times, for example, headlined the ceremony as “A Five-Ring Opening Circus, Weirdly and Unabashedly British.”  Now that we’ve all seen the Closing Ceremony, Converseon took a look at some of the online conversation around the two events, identifying which aspects of the ceremonies people were talking about and using our Convey technology to understand what people really thought about them.

Opening Ceremony, July 27, 2012

[Treemap: Opening Ceremony conversation topics – 2012 Olympics: A Tale of Two Ceremonies]

Musical Performances were a top hit (pun intended) during the ceremonies. With well-known musicians including Paul McCartney, the Spice Girls and The Who taking the stage, audiences debated Great Britain’s choice of stars and their performances, as well as who they thought was missing from the lineup. Beijing Comparisons and Opening Ceremony Comparisons also appeared frequently, as U.S. audiences compared the ceremonies both to each other and to the 2008 Beijing Olympic Ceremonies.

Closing Ceremony, August 12, 2012


[Treemap: Closing Ceremony conversation topics – 2012 Olympics: A Tale of Two Ceremonies]

People were much more positive about the Closing Ceremony than they were about the Opening Ceremony.  This is largely attributable to a shift in the poles of people’s sentiment; strongly negative opinion decreased, while strongly positive opinion increased.  So maybe we’re not all quite ready for flying nurses, a skydiving Queen, and an extremely large Lord Voldemort!

[Chart: Sentiment poles, Opening vs. Closing Ceremony – 2012 Olympics: A Tale of Two Ceremonies]

For more information on our Convey technology or services that can help you measure the social impact of a particular event, please contact Vidar Brekke (vbrekke@converseon.com) or Jasper Snyder (jsnyder@converseon.com).

WOMMA Listening is Good

We’re proud to have been involved in the creation of the latest WOMMA publication, ‘Listening is Good.  Participating is Better.  The Practitioner’s Guide to Listening and Monitoring in Customer Conversations.’

The playbook represents an important step in the maturity of the social media listening market, recognizing as it does the difference between the two basic types of use of social media data: on the one hand, the use of data on an individual message basis – for example, for customer service – and, on the other, the use of aggregated data for insights-focused purposes.

This bifurcation represents an opportunity for market research professionals to drive sophistication in listening providers, and will have a paradoxical effect on the industry: first, it will encourage fragmentation of the market, in the sense that providers will emerge or refocus to address very specific parts of the value chain; at the same time, as social data further cements its position as a core part of the market research toolkit, larger market research vendors will continue to acquire social listening providers to round out their offerings.

What will be interesting is to see if and when the market paradigm shifts from a tool-centric one to one where the tool is merely something that a service provider uses to produce their deliverables.  To draw an analogy, client-side market researchers don’t for the most part concern themselves with the statistical package their research vendor is using, nor what the technologies are that the vendor uses to manage their sample; they simply choose vendors with insightful analysis based on a representative sample.

For non-members of WOMMA, there’s a preview of the Playbook at the WOMMA website here.  WOMMA members can download the entire guidebook free of charge in the member center.

Please contact me at jsnyder@converseon.com if you’d like to discuss how to use social data in your market research mix.

Three years can often feel like a long time, but in the world of social media, it goes by like a flash.

For three years, behind the scenes, Converseon’s team of data scientists and machine learning experts has been toiling away at a particularly difficult challenge: how to provide a next generation of text analytics for the social age that comes close to the gold standard of human coding – and does so at scale. Language is hard – sarcasm and slang make up vast parts of social conversations, and syntax is still not fully understood.

But plunge into this we did, and for good reason: we have long recognized – as many of you probably have too – that text analytics and sentiment analysis for social conversation data was, well…just not very good. In fact, it was quite poor. And that limits its value and uses.

We also knew that to truly do it right would take a massive effort and time. Three years, in fact. And millions of meticulously human-coded records across industries and brands. We recognized the need to build an end-to-end semi-supervised system that would continue to evolve and learn as human language evolves and transforms; one that could be trained to specific industries and companies, because language means different things in different contexts. Off-the-shelf, generic solutions simply couldn’t get us where we wanted to go.

But we made this effort because we believe that solving this challenge would open up a world of amazing insight and value.

And today, almost a thousand days later, we have achieved what we believe to be the most accurate social intelligence data in the industry — which now enables us to fuel many other applications, including advanced uses like predictive modeling.

Today we introduce to the world ConveyAPI.

Yes, the performance numbers cited below are impressive…and they’re real. In fact, in the interest of transparency, we have set forth not only how we tested the system, but also a process we think the rest of the industry should follow, so that everyone can have standards they can believe in and work with.

ConveyAPI is designed to truly convey the meaning of social conversation.  We look forward to showing it to you as it rolls out.

Visit ConveyAPI.com or read our press release here.

For a buyer of social media analytics, comparing the performance of various technologies is nothing short of baffling. This is especially true with respect to sentiment analysis — indeed text analytics in general — where scientific jargon, marketing puffery, and a laundry list of features can often obscure what really matters: using a technology meant to measure human expression, are we obtaining the value of a human analysis?

This notion of human performance as the ultimate goal is based on an important observation: when people analyze social media, we get valuable results.

When we built our social text analytics solutions, we recognized that, if only we could somehow take a few thousand people, shrink them and put them into a little box, and then get them to work thousands of times faster (to deal with seriously big data), we would have an incredible solution to our clients’ problems. Yes, people do make mistakes, and they disagree with each other about things. (Consider: “At this price point, I guess the smartphone meets the minimum requirements”. Three different people might fairly call this either positive or negative or neutral.) But even though human performance is imperfect, we know from our long-tested experience that human analysis provides all kinds of value that clients need.

So, when building and benchmarking our social media analysis technology, we set our sights on how close our system could get to human performance. One doesn’t need the technology to be 100% perfect, because people aren’t perfect, and we know people can get the job done just fine. (See the second paragraph again.) The right goal is for the technology to be as good as people.1

With that in mind, here’s how we’re approaching the measurement challenge. The first step is to figure out how well people can do at the analysis we care about, so we know what we’re aiming for. How can you do that? Well, take someone’s analysis and have a second person judge it. Hmm. Wait a second. How do we judge whether the second person is a good judge? Add a third person to judge the second person. How do you now judge whether the third person is a good — Uh oh. You see the problem.

The problem is that there’s no ultimate, ideal judge at the end of the line. Nobody’s perfect. (But that’s ok, because we know that when people do the job, it delivers great value despite those imperfections. See that second paragraph yet again.) As it turns out, there’s a different solution: let your three people take turns judging each other. Here’s how it works. Treat Person 1’s analysis as “truth”, and see how Persons 2 and 3 do. Then treat Person 2’s analysis as truth, and see how Persons 1 and 3 do. Then treat Person 3’s analysis as truth, and see how Persons 1 and 2 do. It turns out that if we take turns allowing each person to define the “true” analysis for the others, and then average out the results, we’ll get a statistically reliable number for human performance — without ever having to pick any one of them as the person who holds the ultimate “truth”. This will give us a number that we can call the average human performance. 2
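To make the turn-taking concrete, here is a minimal sketch of the calculation in Python. The sentiment labels and the simple percent-agreement metric are illustrative assumptions; any scoring metric appropriate to your coding scheme could be dropped in instead.

```python
# Illustrative sketch: average human performance via turn-taking.
# Labels are hypothetical; percent agreement stands in for whatever
# scoring metric suits the coding scheme.
from itertools import permutations

def agreement(truth, judged):
    """Share of messages on which 'judged' matches 'truth'."""
    return sum(t == j for t, j in zip(truth, judged)) / len(truth)

# Sentiment codes from three human analysts for the same ten messages.
person_1 = ["pos", "neg", "neu", "pos", "neu", "neg", "pos", "neu", "neu", "pos"]
person_2 = ["pos", "neg", "neu", "neu", "neu", "neg", "pos", "pos", "neu", "pos"]
person_3 = ["pos", "neu", "neu", "pos", "neu", "neg", "neg", "neu", "neu", "pos"]

# Take turns treating each analyst as "truth" and score the other two.
scores = [agreement(truth, judged)
          for truth, judged in permutations([person_1, person_2, person_3], 2)]
average_human_performance = sum(scores) / len(scores)
print(f"Average human performance: {average_human_performance:.0%}")
```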

If we want to know if our system is good, we’ll compare how it does to average human performance. It’s the same turn-taking idea all over again, this time comparing system to humans rather than comparing humans to humans. That is: Treat Person 1’s analysis as “truth” and see how the system does. Do it again with Person 2 as “truth”. And Person 3. Average those three numbers, and we’ve got raw system performance.

The final step: what we really want to know is, how close is the raw system performance to average human performance? To get this you divide the former by the latter to get percentage of human performance. For example, let’s suppose that the average human performance is 74%. That is, on average, humans agree with each other 74% of the time. (If that number seems low, yes, you guessed it; second paragraph.) Suppose Systems A and B turn in raw system performances of 69% and 59%, respectively. Is one system really better than the other? How can you tell? System A is achieving 69/74 = 93% of human performance. System B achieves 59/74 = 80% of human performance. Out of all this numbers soup comes something that you can translate into understandable terms: System A is within spitting distance of human performance, but System B isn’t even within shouting distance. System A is better. 3
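If it helps to see that last step written out, here is the same back-of-the-envelope arithmetic in code, using the hypothetical 74%/69%/59% figures from the example above.

```python
# Back-of-the-envelope: "percentage of human performance",
# using the hypothetical figures from the example above.
average_human_performance = 0.74  # humans agree with each other 74% of the time

raw_system_performance = {"System A": 0.69, "System B": 0.59}

for name, raw in raw_system_performance.items():
    pct_of_human = raw / average_human_performance
    print(f"{name}: {pct_of_human:.0%} of human performance")
# System A: 93% of human performance
# System B: 80% of human performance
```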

What we’ve just described is a rigorous and transparent method for evaluating the performance of social analytics methods. When you’re evaluating technologies on your short list, we suggest you use this approach, too.

If you don’t have the resources for such a rigorous comparison, let us know, and we’ll lend you a hand.


1 In a seminal  paper about evaluation of language technology, Gale, Church, and Yarowsky established the idea of benchmarking systems against an upper bound defined by “the ability for human judges to agree with one another.” That’s been the standard in the field ever since. (William Gale, Kenneth Ward Church, and David Yarowsky. 1992. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proceedings of the 30th annual meeting on Association for Computational Linguistics (ACL ’92). Association for Computational Linguistics, Stroudsburg, PA, USA, 249-256. DOI=10.3115/981967.981999 http://dx.doi.org/10.3115/981967.981999).

2 This is an instance of a general statistical technique called cross validation.
3 You’re about to ask how we decide that 93% is “spitting distance” and 80% isn’t, aren’t you?  Fair enough.   But we never said that the buyer’s judgment wasn’t going to be important.   Our point is that you should be asking 93% of what and 80% of what, and the what should be defined in terms of the goal that matters to you.  If what you’re after is human-quality analysis, then percentage of  human performance is the right measure.  Subjectively we’ve found that if a system isn’t comfortably over 90% on this measure, it might be faster and more scalable, but it’s not providing the kind of quality that yields genuine insights for buyers.

I presented last week at the 2012 CASRO Technology Conference.  Having come from a ‘traditional’ research background it’s always great to catch up with old colleagues and find out what’s top of mind for them.  Additionally, from a presentation perspective, there are so many parallels between survey methodology and social research that it’s relatively straightforward to address some of the methodological issues that surround the latter by borrowing concepts from the former.

The focus of my presentation was how researchers can use social data.  I wasn’t coming at this from a business function perspective – i.e. discussing how to use social for product development, or competitive insights, etc. – but rather from the perspective of thinking about some of the questions researchers are now having to address in terms of the enabling technologies used for analysis of social data.

First, researchers need to ensure they’re looking at both the text of a social media message and its metadata.  The metadata often include information that is crucial to deriving insight – for example, the consumer segment of the author, which, when aggregated (and anonymized), is key to understanding whether you’re analyzing the right conversations.  It’s just like a survey – you need to know who’s answering your questionnaire.

Second, social data need to be sorted before you can work with them as a researcher.  Messages need to be sorted by relevancy, by the topic discussed, by the sentiment expressed, by the emotion shown and so on; with the exception of relevancy, what you’re sorting for depends on the type of research question you’re going to use the data to answer.  So researchers need to define the pieces of information they’re sorting for – and make sure that the data are classified in such a way that this sorting is possible.
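As a rough sketch of what that classification-then-sorting might look like in practice (all field names and example messages below are hypothetical):

```python
# Purely illustrative: messages classified along several dimensions,
# then filtered down to the slice a particular research question needs.
messages = [
    {"text": "Love the new battery life!", "relevant": True,
     "topic": "battery", "sentiment": "positive", "emotion": "joy"},
    {"text": "Win a free phone, click here", "relevant": False,
     "topic": None, "sentiment": None, "emotion": None},
    {"text": "Screen scratches way too easily", "relevant": True,
     "topic": "durability", "sentiment": "negative", "emotion": "frustration"},
]

# Relevancy is always the first cut; the rest depends on the research question.
relevant = [m for m in messages if m["relevant"]]
negative_durability = [m for m in relevant
                       if m["topic"] == "durability" and m["sentiment"] == "negative"]
print(len(relevant), len(negative_durability))  # -> 2 1
```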

Third, there are a number of ways you can do this sorting.  Machines are great at doing a lot of tasks in a short space of time, and humans are great at doing tasks to a high degree of quality.  If you can combine those approaches, you get the best of both worlds.  That’s what machine learning does, and we’ve spent a lot of time here at Converseon developing ways to measure – and optimize – the performance of our machine learning technology, to the extent that in many scenarios we cannot tell the difference between a human and a machine.  One crucial point that we’ve embedded in our measurement efforts is that not all mistakes are created equal; in most cases, it’s worse to classify a positive message as a negative message than it is to classify it as a neutral one.  So if you’re just looking at a machine’s ‘accuracy’ count, you might in fact be getting a distorted picture of how well or badly it’s doing.
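Here is a toy illustration of that last point – two hypothetical systems with identical accuracy but very different error profiles, scored against a made-up cost matrix in which a polarity flip is penalized more heavily than a slide into neutral:

```python
# Toy illustration: equal accuracy, unequal mistakes.
# The cost matrix is hypothetical; the point is that not all errors are equal.
COST = {
    ("positive", "negative"): 2.0,  # polarity flip: the worst kind of error
    ("negative", "positive"): 2.0,
    ("positive", "neutral"): 1.0,   # milder error
    ("negative", "neutral"): 1.0,
    ("neutral", "positive"): 1.0,
    ("neutral", "negative"): 1.0,
}

def evaluate(truth, predicted):
    accuracy = sum(t == p for t, p in zip(truth, predicted)) / len(truth)
    cost = sum(COST.get((t, p), 0.0) for t, p in zip(truth, predicted))
    return accuracy, cost

truth    = ["positive", "positive", "negative", "neutral"]
system_a = ["positive", "neutral",  "negative", "positive"]  # mild mistakes
system_b = ["positive", "negative", "positive", "neutral"]   # polarity flips

for name, preds in [("System A", system_a), ("System B", system_b)]:
    acc, cost = evaluate(truth, preds)
    print(f"{name}: accuracy {acc:.0%}, error cost {cost}")
# Both systems are 50% 'accurate', but System B's mistakes cost twice as much.
```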

You can download the presentation here or email me at jsnyder@converseon.com if you’d like to talk more about accurate social data and how to use it for market research purposes.

The explosion of social media data is having a transformative effect on market intelligence and research.

An Economist article from late last year states the context well: “Big companies now obsessively monitor social media to find out what their customers really think about them…As communication grows ever easier, the important thing is detecting whispers of useful information in a howling hurricane of noise…the new world will be expensive.  Companies will have to invest in ever more channels to capture the same number of ears.  For listeners, it will be baffling.  Everyone will need better filters—editors, analysts, middle managers and so on—to help them extract meaning from the blizzard of buzz.”

Being able to extract this meaning is a challenge – it’s not easy to do – but it represents a significant opportunity for market researchers to gain competitive advantage.  In a series of posts, we’ll be addressing some of the questions that a researcher has to answer before they can drive that advantage for their employer.  These questions include:

1) How do I make sure my research is based on relevant data?

2) Which social data are most useful to a researcher?

3) Is ‘automated’ analysis – for example, sentiment analysis software – usable by market research professionals?

Before we address the first question, let’s take a moment to consider the context.  The fundamental thing that anyone trying to make sense of how social media fits into a researcher’s toolkit has to understand is that social media ‘conversation’ essentially has two different types of use.  First, it can be used for ‘monitoring’ purposes (e.g., crisis response or customer service); second, it can be used for ‘insights’ purposes (e.g., analyzing online conversations that might inform product development, or as a way to measure brand perception).  These types of purposes have different requirements in terms of data, but the way in which social media monitoring tools are being used today often obscures this distinction.  Buyers end up looking for a silver bullet to hit both targets.  The difficulty with that approach is that when you’re using a monitoring tool for customer service, for example, you need to see every message that might be relevant; you have to err on the side of making sure you don’t miss any content, so you undoubtedly set your keywords or searches up with that in mind.  On the other hand, someone trying to analyze social media conversation to understand whether their company’s key brand values are resonating online, for example, needs to make sure that they’re only analyzing relevant content; irrelevant content here only serves to muddy the analytical waters.

The competition between these types of purpose – an analog of the trade-off between recall and precision in text analytics, in fact – should be clearly understood by any researcher looking to use social media for market research purposes.
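For readers who want those text-analytics terms pinned down, here is a minimal sketch with made-up retrieval numbers; the broad, monitoring-style setup maximizes recall at the expense of precision, while the narrow, insights-style setup does the opposite:

```python
# Minimal sketch of the precision/recall trade-off with made-up counts.
def precision(relevant_retrieved, total_retrieved):
    return relevant_retrieved / total_retrieved

def recall(relevant_retrieved, total_relevant):
    return relevant_retrieved / total_relevant

TOTAL_RELEVANT = 1000  # hypothetical: relevant brand mentions actually out there

# Broad keyword setup (monitoring): misses almost nothing, pulls in lots of noise.
broad = {"relevant_retrieved": 980, "total_retrieved": 4000}
# Narrow setup (insights): almost everything returned is on-topic, but much is missed.
narrow = {"relevant_retrieved": 600, "total_retrieved": 650}

for name, r in [("broad", broad), ("narrow", narrow)]:
    print(f"{name}: precision {precision(r['relevant_retrieved'], r['total_retrieved']):.0%}, "
          f"recall {recall(r['relevant_retrieved'], TOTAL_RELEVANT):.0%}")
```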

So how do we identify which data are relevant and represent the opinions that we want to analyze?  Last year, my colleague Chris Boudreaux co-authored a research paper looking at the correlation between online sentiment and an offline brand-tracking study.  The research showed that there is a correlation between the two measures, but only after controlling for one or more factors.  One of the controls identified as being key to any correlation was making sure that the person commenting online had experience with the brand in question.

This makes total sense: make sure you’re listening to the right people.  Analyzing social media data without controlling for whose comments you’re looking at would be like sending out an online survey to everyone in your sample database; you just wouldn’t do it.  Do you want to listen to what your customers think about your latest product?  If so, don’t listen to your own employees, and don’t listen to your competitors.  The opinions of both of these groups have their place, but not in answering that specific question.  So how do you configure your social media research with that in mind?

First, at the author level, you can choose to only include messages in your analysis that are posted by the people whose opinions you’re interested in.  The way you define groups of people here may in fact map to your existing customer segmentation taxonomy.

Second, you could choose to ‘listen’ only in those venues where the audience whose opinion you’re interested in is likely to be engaging.

Third, you can make sure that you’re only including in your analysis messages where your product is being talked about in a relevant context.

Using these three approaches will help you make sure you’re analyzing data from the relevant people, discussing the relevant issues – giving you a solid foundation from which to start your analysis.
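Put together, the three approaches amount to a pipeline of filters applied before any analysis begins. A rough sketch, with entirely hypothetical field names and values:

```python
# Rough sketch: author, venue, and context filters applied before analysis.
# All field names and values are hypothetical.
messages = [
    {"text": "Our new phone ships Friday!", "author_segment": "employee",
     "venue": "corporate_blog", "context": "product"},
    {"text": "Battery on this phone lasts two days, impressed",
     "author_segment": "customer", "venue": "consumer_forum", "context": "product"},
    {"text": "Phone case giveaway, retweet to enter",
     "author_segment": "customer", "venue": "consumer_forum", "context": "promotion"},
]

TARGET_SEGMENTS = {"customer"}                      # 1) whose opinions we want
TARGET_VENUES = {"consumer_forum", "review_site"}   # 2) where they engage
TARGET_CONTEXTS = {"product"}                       # 3) relevant discussion context

research_set = [m for m in messages
                if m["author_segment"] in TARGET_SEGMENTS
                and m["venue"] in TARGET_VENUES
                and m["context"] in TARGET_CONTEXTS]
print(len(research_set))  # -> 1
```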

For information on how Converseon can help you get to the right data, contact jsnyder@converseon.com.

With South by Southwest in Austin heating up, Converseon will be in attendance and speaking.   If you’re in town, you may want to check out these sessions.
Human Language Technology and Where It’s Headed
#sxsw #nlproc

Language is the holy grail of artificial intelligence. When we imagine sharing a world with smart machines, we don’t think about logic, or problem solving, or winning at chess. We hear HAL-9000 declining to open the pod bay doors, and the Terminator saying he’ll be baaack. Researchers have been working on building computers we can talk to for 60 years; in the 1990s, Bill Gates predicted that speech would soon be “a primary way of interacting with the machine”. So why aren’t we talking to our computers yet… Or are we? Thanks to new developments in human language technology (also known as “natural language processing”) and text analytics, computers are analyzing everything from e-mail and tweets to clinical records and speed-date conversations. How does the technology work, when does it work well (and when not), what’s it doing for us, and where is it headed?

Our senior data scientist, Jason Baldridge, will be presenting.

Dr. Philip Resnik, our lead data scientist, will also be speaking on Tuesday on Language Technology and the Clinical Narrative.

Please do say hello if you attend.   If you’d like to connect with us there, please just send us an email and we’ll try to work out schedules.  Enjoy.