Why brand will always trump process

In conversation with Audrey Watters and Kin Lane yesterday, I managed to articulate something that’s been bothering me for a while. This is my attempt to record that so I can refer to it later.

Continuum of ambiguity

I refer to the above diagram a lot on this blog, as it’s my way of thinking about this space. I’m not going to explain it in depth here so for those new to it, check out the slides from this seminar I did at MMU last year, or check out the chapter on ambiguities in my ebook.

Ambiguity is a relative term. For those who have shared touchstones (e.g. people who work in a given sector) terms can contain more of a connotative aspect. They get what’s going on without having it all spelled out for them. However, there are those new to a sector, and there are also those who, quite rightly, want to take ideas, processes, and terminology and use them in other sectors.

Collectively, we come up with processes to do things, or names for particular approaches, or perhaps even a term for a nebulous collection of practices. For example, ‘agile development’, or ‘human-centered design’, or ‘digital literacy’. These work for a while but then start to lose their explanatory power as they move along the continuum of ambiguity. Eventually, they become ‘dead metaphors’, or clichés.

In that conversation with Audrey and Kin yesterday, I described this in terms of the way that, eventually, we’re forced to rely on brand rather than processes.

Let me explain.

Take the term ‘agile development’ — which is often shortened to just ‘agile’. This is an approach to developing software which is demarcated more by a particular mindset than a set of rules about how that development should happen. It’s more about ‘agility’ than ‘agile’.

That, of course, is lost when you try and take that out of its original setting (amongst the people who are using it to be creatively ambiguous). All sorts of things are said to be ‘agile’. I even heard of one person who was using ‘agile’ to mean simply ‘hotdesking’!

The problem is that people will, either purposely or naïvely, use human-invented terms in ‘incorrect’ ways. This can lead to exciting new avenues, but it also spells the eventual death of the original term as it loses all explanatory power. A dead metaphor, as Richard Rorty says, is only good as the ‘coral reef’ on which to build other terms.

It’s therefore impossible, as individuals and organisations, to rely on a particular process over the long term. The meaning of the term you use to describe that process will be appropriated and change over time. This means that the only defensive manoeuvre is to rely on brand. That’s why, for example, people call in McKinsey to do their consulting for them, rather than subscribe to a particular process.

As ever, this isn’t something to bemoan, but something to notice and to take advantage of. If you come up with a name for a process or way of doing things that is successful, then you no longer have control over it. Yes, you could attempt to trademark it, but even this wouldn’t stop it from having a different meaning ‘out in the wild’.

Instead, be ready to define the essence of what is important in your approach. This can then be codified in many different ways. You’re still going to have to rely on your brand (personal/organisational) to push things forwards, but at least you can do so without trying to hang on to an initial way of framing and naming the thing.

Why ontologies are best left implicit (especially for credentials)

Ontology is the philosophical study of the nature of being, becoming, existence or reality as well as the basic categories of being and their relations. Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology often deals with questions concerning what entities exist or may be said to exist and how such entities may be grouped, related within a hierarchy, and subdivided according to similarities and differences. (Wikipedia)

I’d argue that the attempt to define what ‘exists’ within a given system is usually a conservative, essentialist move. It’s often concerned with retro-fitting new things into the current status quo, a kind of Kuhnian attempt to save what might be termed ‘normal science’.

Perhaps my favourite example of this kind of post-hoc ontology is from a reference Jorge Luis Borges makes to a fictional taxonomy in a book he calls Celestial Emporium of Benevolent Knowledge:

On those remote pages it is written that animals are divided into (a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel’s hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance.

This, of course, is meant to be humorous. Nevertheless, we’re in danger when a dominant group sees the current state of play as the ‘natural order of things’. It’s bad enough when this is latent, but even worse when essentialist worldviews are codified into laws — and by ‘law’ I’d include ‘code’.

The thing that disturbs me most is when people accept the artefacts that have been left for them as the given circumstances of nature… It’s this automatic acceptance of how things are that leads to a sense of helplessness about changing any of them. (Douglas Rushkoff)

This week, a survey was sent out to the Open Badges community on behalf of the Credential Transparency Initiative. This initiative is funded by the Lumina Foundation, an organisation that describes itself as an “independent, private foundation committed to increasing the proportion of Americans with degrees, certificates and other high-quality credentials to 60 percent by 2025.” The Lumina Foundation therefore has a vested interest in deciding what counts as a ‘high-quality credential’.

The problem is, of course, that what one well-funded, high-profile group decides after ‘consulting the community’ is likely to be adopted more widely. This is how de facto standards emerge. They may decide to play the numbers game and equate certain types of badges with degrees. Or, they may choose to go to the other end of the spectrum and ensure that badges do not equate with ‘high-quality’ credentials. Either way, it’s not really up to them to decide.

The survey featured this highly problematic question:

CTI survey

There are all kinds of assumptions baked into this question that need to be unpacked. For example, perhaps the biggest is that all of these have an ‘essence’ independent of one another, rather than in relation to each other. I see this as an attempt, either consciously or unconsciously, to turn the notion of a ‘badge’ into what Richard Rorty termed a ‘dead metaphor’:

Old metaphors are constantly dying off into literalness, and then serving as a platform and a foil for new metaphors. (Contingency, Irony, and Solidarity, 16)

In my doctoral thesis (better consumed as this ebook), I used Rorty’s work along with that of William Empson to come up with a ‘continuum of ambiguity’:

Continuum of ambiguity

The idea behind this continuum is that almost every term we use to describe ‘reality’ is metaphorical in some way. Terms we use to refer to things (e.g. ‘badge’) contain both denotative and connotative aspects, meaning that the person using the term cannot be absolutely certain that the person they are communicating with will understand what they mean in the same way.

Articulation of an idea

Image CC BY-ND Bryan Mathers

The more we try and create a one-to-one relationship between the utterance and the understanding of it, the more we are in danger of terms ‘falling off’ the continuum of ambiguity and becoming dead metaphors. They “lose vitality” and are “treated as counters within a social practice, employed correctly or incorrectly.” (Objectivity, Relativism, and Truth, 171). Such terms have the status of cliché.

The attempt to create a one-to-one relationship between a term as written or spoken, and the term as it is understood by an interlocutor or reader, is an understandable one. It would do away with the real, everyday problems we’re faced with when trying to understand the world from someone else’s point of view. As Rorty puts it, “the world does not provide us with any criterion to choose between alternative metaphors” (The Contingency of Language, 6). The problem is that if we have a single ontology, then we have a single worldview.

Returning to Open Badges, it would be difficult to do any interesting and useful work with the term if it becomes a dead metaphor. For example, I’m quite sure that there’s nothing many of those in Higher Education would like better than to demarcate what a badge ‘counts for’ and the situations in which it can be used. After all, organisations that have histories going back hundreds of years, and which are in the game of having a monopoly on ‘high-quality’ credentials, need to protect their backs. If they can create a dead metaphor-based ontology in which badges count as something much lower ‘quality’ (whatever that means) than the degrees they offer, then they can carry on as normal.

The fading conviction originating with Plato that language can adequately represent what there is in words opens the way for a pragmatic utilization of language as a means to address current needs through practical deliberations among thoughtful people. (Internet Encyclopedia of Philosophy)

At this point, I’m tempted to dive into differential ontology and the work of Derrida and Deleuze. Instead I’ll simply point out that the reductive attempt to define an essentialist ontology of credentials is doomed from the outset. What we need instead is to ensure that our use of terms such as ‘Open Badges’ is what I would call ‘productively ambiguous’ — that is to say, in the Pragmatist tradition, ‘good in the way of belief’.

Or, if you like your takeaways more pithy: Keep Badges Weird!

The professional use of metaphor

Seth Godin on metaphor:

The difference between the successful professional and the struggling amateur can often be seen in their respective facility with metaphor. The amateur struggles to accept that metaphor is even acceptable (“are atoms actually building blocks?”) or can’t find the powerful analogy needed to bring home the concept. Because all metaphors aren’t actually true, it takes confidence to use them well.

The name of the thing probably doesn’t matter

England and America are two countries separated by the same language. (George Bernard Shaw)

My family first got a television with a remote control when I was about the same age as my now nine-year-old son. Like many British families, we called the remote control anything other than its proper name: the ‘doofer’, the ‘thing’, the ‘widget’. It didn’t matter because, 100% of the time, the person being asked to pass the remote control (or do something with it) knew what was being referred to.

One big thing I’ve noticed over the last few years is that American English and British English differ greatly around precision. Americans seem to have a word for everything. I’m not an etymologist, but I should imagine that the reason British English lacks precision in some cases is because it’s had so many foreign influences. We tend to import words from other languages – in particular French (e.g. cliché) or German (e.g. schadenfreude) and, like our legal system, we base things on precedent. American English, on the other hand, was explicitly defined by Noah Webster.

As I noted in my thesis, follow-up ebook, and even the only (unpublished) academic paper I’ve ever written, ambiguity can be a good thing. It can be productive. It can lead to useful outcomes and provide breathing space for ideas to morph and evolve. Ambiguity can help avoid the situation where terms become (what Richard Rorty called) ‘dead metaphors’.

I was reminded of the difference between American English and British English this week in a post written, ironically enough, by a Frenchman, Serge Ravet. He took issue with something I’ve written about recently – namely Open Badges and credentialing. Serge’s point rests on the difference between ‘credentialing’ and ‘recognition’:

I used to say credentialing is ancillary to recognition. Credentialing is a servant to recognition and it should stay in that subordinate position. Problems arise when the servant becomes the master — think of Dirk Bogarde in Joseph Losey’s The Servant. I am afraid that it is the situation we are fostering when equating Open Badges to credentials.

I’m not going to rehash the arguments I’ve already made, but instead I want to make another point, identical to the title of this post, i.e. the name of the thing probably doesn’t matter. You don’t need to know the name of all the parts in the engine to fix the engine. We don’t need to be able to reel off the names of every different form of government to spot a tyranny. Or, to quote the Bard:

‘Tis but thy name that is my enemy;
Thou art thyself, though not a Montague.
What’s Montague? it is nor hand, nor foot,
Nor arm, nor face, nor any other part
Belonging to a man. O, be some other name!
What’s in a name? that which we call a rose
By any other name would smell as sweet;

I don’t particularly care how you define digital literacies so long as what you do with that definition is worthwhile. The same goes for Open Badges: so long as you’re using the interoperable metadata standard, you’re free to call what you’re doing anything you like.

As for what badges stand for, whether they’re ‘credentials’ or ‘recognition’ or whatever, let’s have a philosophical discussion! I’ve got a degree in Philosophy, bring it on. I love this stuff. But let’s not pretend what we’re doing is anything that’s likely to make any practical difference anytime soon. There’s a difference between an ontological position that says, “this thing is X, not Y” and “this thing can be whatever you want it to be!”

For the avoidance of doubt, I’m in the latter camp. Do what you want. Use Open Badges in tired, conventional, boring ways. Alternatively, use badges in pedagogically exciting ways that liberate young people from the shackles placed upon them. Respect your context. Or don’t. Just use the metadata standard. That’s the revolutionary thing.
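To make that last point concrete: the interoperability lives in the metadata, not in what you call the thing. Here’s a rough sketch, as plain data, of what a minimal Open Badges 2.0 assertion looks like — the URLs and email address are placeholders, and the field names follow the specification as I understand it (an assertion pointing at a badge class, with a recipient and a verification method):

```python
import json

# A minimal Open Badges 2.0 assertion, sketched as plain data.
# All URLs and the recipient email are placeholders, not real resources.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.org/assertions/123",
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "learner@example.org",
    },
    # The 'badge' field points at a BadgeClass, which describes the
    # achievement itself (name, criteria, issuer).
    "badge": "https://example.org/badges/keep-badges-weird",
    "issuedOn": "2016-05-01T00:00:00+00:00",
    "verification": {"type": "hosted"},
}

print(json.dumps(assertion, indent=2))
```

Whether you call this a ‘credential’, ‘recognition’, or a ‘doofer’, any badge-aware system can consume it, because the structure is shared even when the name isn’t.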

Give people choice. See what happens.


Introduction

As part of my doctoral studies into digital literacy I stumbled into something that saved my thesis: ambiguity. This blog will serve to document occasional updates to my sporadic research into this area.

Just before submitting my thesis to Durham University in 2011 I wrote an article with my thesis supervisor entitled Digital literacy, digital natives, and the continuum of ambiguity. This takes the work of William Empson, as well as a couple of thinkers after him, and applies it to the concepts of ‘digital natives’ and ‘digital literacy’.

Four years later, in December 2015, I ran a seminar at Manchester Metropolitan University with a similar title to that 2011 article: