Mrs Trellis, red-shirts and savings – the limits of text analysis

Published by Tony Quinlan

I’ve just finished Tim Harford’s excellent Messy (paperback just out) alongside Giuseppe Tomasi di Lampedusa’s The Leopard. Both threw up phrases that brought a smile to my face and, along with an example from a conference earlier this year, highlight one of my concerns about automated forms of text analysis. To be clear, I’m not anti-text analysis as part of a broader piece of work, but I am concerned about the limits of automated analysis and its place in research practices.

Language clearly carries meaning, so when we’re looking to understand beliefs, narratives, cultural norms, it would be strange to ignore a fundamental element of communication. However, from experience both in my own practice and in working with clients, I’ve seen that there is a strong temptation to focus on language/text analysis* as the primary source of meaning – and that is something I’m against.

Example 1.  To return to Tim Harford, in a chapter on Life, he talks about the limitations of categorisation when filing things:

Making three copies of correspondence and filing once by date, once by topic and once by correspondent is a logical solution for a world in which we cannot predict whether we might need to look up all the letters sent and received late in October 2015, or all the letters about the faulty rumbleflange, or all the letters from a Mrs Trellis.

For some of us, those last two words are a clear signpost that Tim is a well-educated connoisseur of radio comedy programmes.


Example 2.  In Lampedusa’s The Leopard, much of the early part of the novel takes place against the backdrop of Garibaldi’s campaign to unify Italy in 1860 – specifically in Sicily.  Young men in Garibaldi’s army were clearly identifiable by their clothing, as shown by this passage later in the novel:

Don Fabrizio did not quite understand; he remembered both the young men in lobster red and very carelessly turned out.  “Shouldn’t you Garibaldini be wearing a red shirt, though?”

For any Star Trek fans or readers of this old post, there are two important words in there…


Example 3. At the UN Data Innovation Lab event earlier this year, one piece of research into Google searches highlighted two interesting insights for one country (from memory it was Colombia – apologies if I’m misremembering): when the economy is dipping, people search more often for “savings”, and when the economy is rising, they search more often for “shoes”.  Interesting as a possible indicator (presuming that the searches precede the economic move), but in terms of understanding what people need, “savings” isn’t as helpful as it might be.


In each of these examples, there are deeper layers of meaning that require contextual understanding:

  1. “Mrs Trellis” is a regular correspondent to “I’m Sorry I Haven’t A Clue” – a long-running BBC Radio 4 show.  The mere mention of her name to regular listeners evokes laughter.
  2. “Red shirts” has a very different meaning depending on whether you’re a Star Trek fan (see here) or reflecting on a significant moment in Italian history.
  3. “Savings” has multiple possible interpretations – is that the savings you make when buying something at a discount, or the savings you put into a separate account for unexpected problems or purchases?

Human interpretation by culturally and contextually appropriate people will help elicit those multiple layers of meaning. And algorithmic interpretation may help to group and theme material if it repeats in text-based data.


When we are looking to understand the meaning of large volumes of qualitative material and micro-narratives, we are better off relying first on the meaning signified by the contributors themselves – the meta-data added by respondents in SenseMaker® work. That meta-data is the better initial indication of meaning; from it we can identify clusters of stories and text that throw further illumination. At that secondary level of analysis, I think text analysis can significantly help – but not before.
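To make that sequencing concrete, here is a minimal sketch in Python of the two-step approach. It is not SenseMaker itself – the field names and example narratives are purely illustrative – but it shows the principle: group material first by the signifier the respondent chose, and only then run simple text analysis (here, bare word counts) within each group.

```python
from collections import Counter
import re

# Illustrative micro-narratives: "signifier" stands in for the meaning
# the respondent themselves attached to their story.
records = [
    {"text": "I moved my savings into a separate account for emergencies", "signifier": "security"},
    {"text": "Big savings on shoes in the January sales", "signifier": "opportunity"},
    {"text": "We put money aside each month for unexpected repairs", "signifier": "security"},
]

# Step 1: cluster on respondent-signified meaning, not on the words.
clusters = {}
for rec in records:
    clusters.setdefault(rec["signifier"], []).append(rec["text"])

# Step 2: only now run simple text analysis within each cluster.
for label, texts in clusters.items():
    words = Counter(w for t in texts for w in re.findall(r"[a-z']+", t.lower()))
    print(label, words.most_common(5))
```

Note that the word “savings” turns up in both clusters – it is the signifier, not the word, that separates the two meanings.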


There are, however, three other significant issues when using text analysis (or even over-simplistic tools like Wordle, which I used in the early days of working with narrative a decade ago).

  • Complex systems work needs information about modulators, decision-mechanisms, rituals, boundaries, identities and the like – elements that rarely come out of text analysis tools.  I’ve yet to see a text- or language-based tool that does anything other than show themes or connections between words, with the increasingly frequent addition of thesaurus-like elements to gather similar concepts under a single group or phrase.  If we are looking to understand what is going on in a human system and to identify potential interventions or nudges, then we need to build a framework of questions and meta-data with that in mind – it is unlikely to come from text analysis.
  • The underlying algorithms need careful consideration – particularly in government and NGO use. We’re seeing the unintended and damaging effects of automated algorithms in many cases – as Cathy O’Neil, author of Weapons of Math Destruction, blogs about here.  Facial recognition systems have been questioned, and the consequences of flawed algorithms could be significant. The same questions therefore need to be asked of text analysis algorithms: how do they cluster, what do they dismiss, and what inherent biases do we need to be aware of?
  • The final point is a personal hang-up and may be arrogance on my part. From experience, I have seen clients respond immediately to wordclouds and clusters; I’ve seen others start searching immediately for particular words and assume a hypothesis is proved by their presence. Text-based data visualisations are attractive and communicate intuitively – but I believe we have a responsibility as consultants to help clients understand the deeper issues. If, as is often the case, they leap to conclusions from a wordcloud and then pay less attention to the more rigorous meta-data-based material, I think we need to focus on the better-informed but less attractive element.

Ten years ago, before I used SenseMaker®, I would happily generate wordclouds from material I’d gathered with clients. Once I’d realised they were appealing but misleading, I stopped – and we haven’t used wordclouds since. These days I’m prepared to use them, but only as a secondary data visualisation, to cast light on what has emerged from clusters of meta-data.

But I’m operating on less-than-perfect information – does anyone have any experience or deep knowledge that might help me put some of my concerns above to rest?


*I’m using language/text analysis generically here – I haven’t yet done the research into the various analysis tools available.  I have no doubt that, like any tool, there will be some that are better than others. My concerns apply to any automated tool that claims to make meaning from fragmented, natural language.


2 Comments

Rick Davies · 2 November 2017 at 9:49 am

Have you tried using network visualisation software to show relationships between self-signifiers? Self-signifiers A and B are connected if they co-occur in the same text; more such co-occurrences = a thicker link in the network visualisation. When viewed with the aid of node and relationship filters, clusters can be identified – of both self-signifiers and of texts.
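A minimal sketch of the kind of network Rick describes, in Python with networkx – the signifier names and records below are illustrative, not real project data: each self-signifier becomes a node, and an edge’s weight counts how many texts the two signifiers co-occur in.

```python
from itertools import combinations
import networkx as nx

# Each record: the set of self-signifiers attached to one text (illustrative).
records = [
    {"security", "family", "money"},
    {"security", "money"},
    {"opportunity", "money"},
]

G = nx.Graph()
for signifiers in records:
    for a, b in combinations(sorted(signifiers), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1   # more co-occurrences = thicker link when drawn
        else:
            G.add_edge(a, b, weight=1)

# Filtering out weak links (Rick's node/relationship filters) before drawing
# makes the clusters of signifiers easier to see.
strong = [(a, b, d["weight"]) for a, b, d in G.edges(data=True) if d["weight"] >= 2]
print(strong)
```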

    Tony Quinlan · 21 November 2017 at 1:26 pm

    Thanks for the comment, Rick – apologies for the delayed reply; I’ve had little time while travelling.

    Interesting thought – I know that the UCL team of post-grads tried various angles on the text analysis last year, but I’d have to defer to Anna on what they saw.
    And to make sure I understand – when you say self-signifiers, are you referring to the same signification process of triads, etc that I do? (I’ve recently come across people using the same language but to refer to something completely different, so I’m keen to avoid misunderstandings.) We do do correlations between triad signifiers – to see what elements/modulators are frequently seen together (or frequently absent together – a correlation based on low values rather than high values for particular signifiers). That can be highly illuminating.
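On the “frequently absent together” point, a rough sketch of one way to see both patterns with pandas – the signifier names and values here are made up for illustration: correlate the raw signifier scores for “seen together”, and correlate low-value indicators for “absent together”.

```python
import pandas as pd

# Illustrative triad signifier values per story, scaled 0-1.
df = pd.DataFrame({
    "belonging":  [0.1, 0.2, 0.8, 0.9, 0.1],
    "security":   [0.2, 0.1, 0.7, 0.8, 0.2],
    "aspiration": [0.9, 0.8, 0.1, 0.2, 0.9],
})

# Signifiers frequently seen together: correlation of the raw scores.
print(df.corr())

# Signifiers frequently absent together: correlate indicators of low values.
low = (df < 0.3).astype(int)
print(low.corr())
```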
