Business bullshit and ambiguity

In this week’s BBC Radio 4 programme Thinking Allowed, there’s an important part about ambiguity:

Laurie Taylor explores the origins and purpose of ‘Business Bullshit’, a term coined by Andre Spicer, Professor of Organizational Behaviour at Cass Business School, City University of London and the author of a new book looking at corporate jargon. Why are our organisations flooded with empty talk, injuncting us to “go forward” to lands of “deliverables,” stopping off on the “journey” to “drill down” into “best practice”? How did this speech spread across the working landscape and what are its harmful consequences? They’re joined by Margaret Heffernan, an entrepreneur, writer and keynote speaker and by Jonathan Hopkin, Associate Professor of Comparative Politics at the LSE.

The particular part is the second section of the programme, in which Margaret Heffernan explains that organisations attempt (in vain) to eliminate ambiguity. In the attempt, they play a constant game of inventing new terms and initiatives, which not only work no better than the previous ones, but also serve to justify inflated salaries.

The episode is available online here.

What we know about ‘knowledge’

There’s an ongoing flamewar between traditionalists and progressives, who believe that education should either be about ‘knowledge’ or about ‘skills’. This has been going on, in various forms, at least since Thomas Henry Huxley and Matthew Arnold squared off in the 19th century about what kind of education is required to foster ‘true culture’.

As Bruce Chatwin demonstrates in his modern-day classic The Songlines, there are ways of knowing that are based on action rather than ‘head knowledge’. He details how Australian Aboriginal ‘knowledge’ is interwoven with their physical environment, is passed on primarily in an oral way, and comes with certain prohibitions as to who is allowed to ‘have’ such knowledge.

The Internet Encyclopedia of Philosophy’s entry on knowledge lists four main types:

  1. Knowing by acquaintance
  2. Knowledge ‘that’
  3. Knowledge ‘wh’ (i.e. whether, who, what, why)
  4. Knowing ‘how’

I’ve always been of the opinion that the second type of knowledge listed here, knowledge ‘that’, is of limited value. If I were coming up with my own personal hierarchy of the relative importance of these kinds of knowledge, I’d put this one at the bottom. It’s the kind of knowledge that may be foundational, but taken to absurd lengths, it just means you’re good at pub quizzes.

For me, it’s knowing ‘how’ that’s of central importance, and what we should focus on in education. From the IEP’s entry on knowledge, citing the celebrated ‘ordinary language’ philosopher Gilbert Ryle:

What Ryle meant by ‘knowing how’ was one’s knowing how to do something: knowing how to read the time on a clock, knowing how to call a friend, knowing how to cook a particular meal, and so forth. These seem to be skills or at least abilities.

This is why I think that ‘knowledge’ vs. ‘skills’ is a false dichotomy. The article continues:

Are they not simply another form of knowledge-that? Ryle argued for their distinctness from knowledge-that; and often knowledge-how is termed ‘practical knowledge’. Is one’s knowing how to cook a particular meal really only one’s knowing a lot of truths — having much knowledge-that — bearing upon ingredients, combinations, timing, and the like?

Going back to the Aboriginal example, this is where ‘knowledge’ that can’t be tested using a pencil-and-paper examination comes in. Knowing ‘how’ is usually described as a set of ‘skills’ in our culture, labelled as ‘vocational’, and given a back seat to the ‘more important’, ‘academic’ forms of knowledge. I think this is incorrect and should be remedied as soon as possible.

If Ryle was right, knowing-how is somehow distinct: even if it involves having relevant knowledge-that, it is also something more — so that what makes it knowledge-how need not be knowledge-that… Might knowledge-that even be a kind of knowledge-how itself, so that all instances of knowledge-that themselves are skills or abilities?

While I was reading to my six-year-old daughter last night, we came across the word ‘instinctively’. We had a brief conversation about it, which revealed that, even at her young age, she understands the difference between the knowledge ‘that’ which is acceptable at school and the knowing ‘how’ which is valuable currency at home. In other words, she’s playing the game.

Vagueness, ambiguity, and pragmatism

One of my favourite things about the Web is the ease with which serendipity occurs. We take it for granted these days, but occasionally wonderful things happen that make us rediscover the joy of connection.

I was browsing The Setup, a wonderful site that interviews people about the hardware and software they use. As I use Linux these days, I’m particularly interested in interviewees who do too, and I was delighted to come across John MacFarlane’s interview.

MacFarlane is a Professor of Philosophy at UC Berkeley and his website details both his interests and academic papers. I was delighted to come across a paper entitled Vagueness as Indecision, which is available as a preprint download.

It’s been a while since I studied formal logic at university, but I managed to get by while reading this paper, especially as MacFarlane gives homely examples. He takes an expressivist position, arguing that “vagueness… is literally indecision about where to draw lines”.

On page seven, quoting a character from Spider-Man in passing, MacFarlane states:

In principle, I can use ‘that’ to refer to any object. But with this freedom comes great responsibility. I must provide my hearers with enough cues to enable them to associate my use of ‘this’ with the same object I do, or communication will fail. Sometimes this doesn’t require anything extra, because it is mutually known that one object is salient, so that it will be assumed to be the referent in the absence of cues to the contrary. Other times it requires pointing. And in some cases simply pointing isn’t enough. But in every case, we’re obliged to do whatever is required to get our hearers to associate the same object with the demonstrative that we do. If we fail to do this, it will be sheer luck if they understand us.

In my book, The Essential Elements of Digital Literacies, I pointed out that productive discourse involves interaction at the overlap of the denotative and connotative aspects of a term or phrase:

Connotative-denotative

What MacFarlane is pointing out is something similar, but outside the realm of ambiguity. For something to be ambiguous, it cannot be merely vague — although right on the left-hand boundary of that overlap is where the most vague ambiguous terms and phrases reside.

Within that overlap resides the continuum of ambiguity:

Continuum of ambiguity

In other words, just as ‘dead metaphors’ are to the right of Productive Ambiguity and occur when there’s denotation but little connotation, so ‘vagueness’ lies to the left of Generative Ambiguity and, as MacFarlane would put it, happens due to semantic indecision.

The crux of MacFarlane’s position is that to have a meaningful interaction, two people have to agree where the ‘boundaries’ are to what they’re discussing:

Here is the upshot. While in using a bare demonstrative like ‘this,’ one must have a definite object in mind, and successful uptake requires recognizing what object that is, there are no analogous requirements for the use of ‘large.’ The speaker need not have in mind a particular delineation (even a ‘fuzzy’ one), and the hearer need not associate the speaker’s use with a particular delineation. What we get instead are constraints on delineations. (p.11)

He continues, developing his example of a trainee at an apple-sorting factory learning what counts as a ‘large’ apple:

Indecision is a practical state; it concerns plans and intentions, not belief. Just as one might plan to buy toothpaste, but not yet have settled on which toothpaste one will choose when confronted with a rack of them at the store, so one might have settled on counting apples greater than 84mm in diameter as large, without having settled on whether one would count a 78mm apple as large.

So for an interlocutor or reader to consider something ‘ambiguous’ they must ‘tolerate’ the semantic decisions made by the person with whom they’re interacting. If they don’t, then the lack of delineation leads to vagueness.

I suggest, then, the ‘tolerance’ intuition can be explained as an awareness that a proposal to draw a sharp line in any particular place would be rejected for pragmatic reasons. Nothing about the meanings of vague words is inconsistent with drawing a sharp boundary; it’s just that the cases in which a proposal to draw a sharp boundary would be sensible are few and unusual.

In other words, we humans are pretty good at getting by using heuristics. We agree to suspend disbelief (and therefore enter the continuum of ambiguity) to see whether doing so is, to use the words of William James, ‘good in the way of belief’.
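To make this concrete, here’s a minimal sketch (my own illustration, not from the paper) of what a partial delineation might look like in code. The 84mm and 78mm figures come from MacFarlane’s apple example; the 72mm lower bound is an assumption I’ve added for the sake of the sketch:

```python
# Vagueness as indecision: the apple-sorter has settled some cases but not
# others. Apples over 84mm definitely count as 'large' (MacFarlane's example);
# I'm assuming a 72mm lower bound below which they definitely don't. The gap
# between the two is the region where no line has yet been drawn.

def is_large(diameter_mm: float) -> str:
    if diameter_mm > 84:
        return "large"       # settled: counts as large
    if diameter_mm < 72:
        return "not large"   # settled: doesn't count as large
    return "undecided"       # no delineation has been drawn here yet

for d in (90, 78, 60):
    print(d, is_large(d))
# 90 large
# 78 undecided   <- MacFarlane's unsettled 78mm apple
# 60 not large
```

The point of the sketch is that ‘undecided’ isn’t a third category of apple; it marks a decision the sorter hasn’t (yet) had any practical need to make.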


On vagueness, or, when is a heap of sand not a heap of sand?

Nothing new here for anyone who’s studied Philosophy, but still worth sharing for a general audience:

A vague word such as ‘heap’ is used so loosely that any attempt to locate its exact boundaries has nothing solid and reliable to go on. Although language is a human construct, that does not make it transparent to us. Like the children we make, the meanings we make can have secrets from us. Fortunately, not everything is secret from us. Often, we know there’s a heap; often, we know there isn’t one. Sometimes, we don’t know whether there is one or not. Nobody ever gave us the right to know everything!

Vagueness is an annoying, elusive concept — unlike ambiguity, which can be a much more productive one.

Reambiguation

Team Human

The Team Human podcast has become a must-listen for me. A recent episode features Mushon Zer-Aviv on the concept of ‘reambiguation’. His starting point is that we should resist attempts to treat only what can be represented by digital data as ‘real’, as well as attempts to deprecate anything too messy (i.e. human).

To me, what Mushon discussed with Douglas Rushkoff, the host of Team Human, dovetails nicely with the continuum of ambiguity I’ve come up with. The idea is to maintain ‘creative ambiguity’, not reduce everything to ‘dead metaphors’.

Continuum of ambiguity

Robert Greene on the importance of ambiguity in creative endeavours

I’m re-reading Robert Greene’s The Concise Mastery at the moment. Just now, I was struck by this passage:

Perhaps the greatest impediment to human creativity is the natural decay that sets in over time in any kind of medium or profession. In the sciences or in business, a certain way of thinking or acting that once had success quickly becomes a paradigm, an established procedure. As the years go by, people forget the initial reason for this paradigm and simply follow a lifeless set of techniques. In the arts, someone establishes a style that is new and vibrant, speaking to the particular spirit of the times. It has an edge because it is so different. Soon imitators pop up everywhere. It becomes a fashion, something to conform to, even if the conformity appears to be rebellious and edgy. This can drag on for ten, twenty years; it eventually becomes a cliché, pure style without any real emotion or need. Nothing in culture escapes this deadening dynamic.

This is exactly what I’m trying to get at with the continuum of ambiguity:

Continuum of ambiguity

What Greene refers to as ‘cliché’ is represented in this continuum by what Richard Rorty calls ‘dead metaphors’. We should always be looking for new ways to represent our ideas, rather than be wedded to terms and styles, which always end up out of date.

What do we mean by ‘open education’?

Socrates must have been one of the most annoying individuals ever to walk the earth. I still don’t get why he didn’t just leave the city instead of drinking the hemlock at the end of his life. Also, his incessant questioning may well have led to a widely-celebrated ‘method’, but the dogmatism he displayed over definitions beggars belief. In his view, things had fixed definitions, and people should act in accordance with objective but abstract concepts such as ‘justice’ and ‘virtue’.

I say this by way of introduction, because this is certainly not a post intended to give a single ‘definition’ of open education, but rather to tease apart its meaning and explore how people use the term. As I mentioned in my doctoral thesis (and the related ebook), terms such as ‘digital literacy’ and ‘open education’ are examples of zeugmas. In other words, we’re never quite sure on which part of the phrase to place the emphasis: is it ‘open’ education or open ‘education’?

Audrey Watters has already written on this topic and summarises well the problems with considering open education as a prozeugma (i.e. with the emphasis on ‘open’):

And it’s complicated, of course, by the multiple meanings of that adjective “open.” What do we mean when we use the word? Free? Open access? Open enrollment? Open data? Openly-licensed materials, as in open educational resources or open source software? Open for discussion? Open for debate? Open to competition? Open for business? Open-ended intellectual exploration?

The trouble is that it’s not just ‘open’ that’s a contested term, but ‘education’ as well. We tend to conflate ‘learning’ with ‘education’ — confusing something that happens inside us with something that happens to us.

A few months ago, as part of the work we were doing at the start of We Are Open Co-op, I asked people within my community what different kinds of ‘open’ there are in common parlance. I attempted to draw(!) both the examples I’d come up with by way of a stimulus and the contributions I received from people.

Open as in…

  • door (you are free to enter)
  • for business (you are invited to buy/sell/trade)
  • unlocked (you have access to a thing)
  • to ideas (you are willing to change your mind)
  • transparency (you can see into the ‘inside’ of something)
  • love (you are willing to be vulnerable to others)
  • space (you are free to use this resource)
  • amendments (you are happy to take on board other people’s suggestions)
  • exploring (you can discover new things)
  • open-ended (you can keep going, potentially forever)
  • flexible (you can change this to your own needs)
  • no barriers (you do not have to overcome hurdles to get started)

Some of these obviously overlap and, to be honest, some are just better metaphors than others.

Serendipitously, having started this post a few days ago, I saw that just yesterday Jim Groom posted about the ‘overselling’ of the open movement:

I’m quite ambivalent about the open movement more generally these days. What seemed like a movement defined by an anarchic spirit of revolution from 2004-2011 (at least for me—this was a fairly personal narrative) morphed into a fairly tame, almost conservative approach to education: massive lectures and free textbooks. I’m oversimplifying here of course, but at the same time the mad scramble around corporate sponsored MOOCs for elite universities from 2012 until just about now, coupled with the re-branding of OER, at least in the U.S., as predominantly a cost-saving measure left me fairly depressed.

Part of the problem, I think, is that we have so many different definitions of ‘open’ that it’s just not a useful term to use. We get ‘openwashing’ by big corporates, who — consciously or unconsciously — attempt to move a term like ‘open’ from something that is a basis for creative ambiguity within a community towards the realm of ‘dead metaphors’.

Continuum of ambiguity

Other times, we’ve just shot ourselves in the foot. As Jim Groom mentioned above, there’s been far too much focus on access when it comes to ‘open’ and not enough on ethos. Yes, it’s great that we’ve got so much openly-licensed stuff to use, but have we got an equal number of advocates for open education? I’d actually say that number is on the decline.

Instead, and this is something I keep coming back to, I’d use the diagram below to provide a simple way to show how the open education movement needs to move beyond — well beyond — mere Open Educational Resources (OERs).

Beetham & Sharpe (2009)

This is my version of a diagram that’s explained in this post and comes from original work by Helen Beetham and Rhona Sharpe. It’s ostensibly about digital literacies, but I think it’s much more widely applicable. It’s a development model that we can apply to educators becoming more familiar with, and more at ease with, open education.

Right now, there’s been enough work done around the emerging area of ‘Open Educational Practices’ for me to state with some confidence that at least pockets of the wider ecosystem are moving beyond just OERs. There’s even a badged online course for those who are curious.

What we need to do (and, like many things, this is an identity issue) is move to the top of Beetham and Sharpe’s pyramid and think about what it means for people to identify as an ‘open educator’. It’s great having a fairly loose definition that appeals to those in the know within the extant community, but it’s more than a little confusing for those new to the whole thing.

Ideally, I’d like to see ‘open education’ move into the realm of what I term productive ambiguity. That is to say, we can do some work with the idea and start growing the movement beyond small pockets here and there. I’m greatly inspired by Douglas Rushkoff’s new Team Human podcast at the moment; it justifies the stance that I and others have taken in favour of using technology to make us more human (e.g. setting up a co-operative) and against the reverse (e.g. blockchain).

One way we can do this, working at the top end of the pyramid, is to make it easier for people to reclaim their identity on the web. Reclaim Hosting is definitely doing a great job on the technical side, but we need something equally awesome (and not just short-term project-funded) on the cultural side of things.

So yes, in short…

The barrier to being an open educator is too damn high

Why brand will always trump process

In conversation with Audrey Watters and Kin Lane yesterday, I managed to articulate something that’s been bothering me for a while. This is my attempt to record that so I can refer to it later.

Continuum of ambiguity

I refer to the above diagram a lot on this blog, as it’s my way of thinking about this space. I’m not going to explain it in depth here, so for those new to it, check out the slides from this seminar I did at MMU last year, or the chapter on ambiguities in my ebook.

Ambiguity is a relative term. For those who have shared touchstones (e.g. people who work in a given sector), terms can carry more of a connotative aspect. They get what’s going on without having it all spelled out for them. However, there are those new to a sector, and there are also those who, quite rightly, want to take ideas, processes, and terminology and use them in other sectors.

Collectively, we come up with processes to do things, or names for particular approaches, or perhaps even a term for a nebulous collection of practices. For example, ‘agile development’, or ‘human-centered design’, or ‘digital literacy’. These work for a while, but then start to lose their explanatory power as they move along the continuum of ambiguity. Eventually, they become ‘dead metaphors’, or clichés.

In that conversation with Audrey and Kin yesterday, I described this in terms of the way that, eventually, we’re forced to rely on brand rather than processes.

Let me explain.

Take the term ‘agile development’ — which is often shortened to just ‘agile’. This is an approach to developing software which is demarcated more by a particular mindset than a set of rules about how that development should happen. It’s more about ‘agility’ than ‘agile’.

That, of course, is lost when you try to take the term out of its original setting (amongst the people who are using it to be creatively ambiguous). All sorts of things are said to be ‘agile’. I even heard of one person who was using ‘agile’ to mean simply ‘hotdesking’!

The problem is that people will, either purposely or naïvely, use human-invented terms in ‘incorrect’ ways. This can lead to exciting new avenues, but it also spells the eventual death of the original term as it loses all explanatory power. A dead metaphor, as Richard Rorty says, is only good as the ‘coral reef’ on which to build other terms.

It’s therefore impossible, as individuals and organisations, to rely on a particular process over the long term. The meaning of the term you use to describe that process will be appropriated and will change over time. This means that the only defensive manoeuvre is to rely on brand. That’s why, for example, people call in McKinsey to do their consulting for them, rather than subscribe to a particular process.

As ever, this isn’t something to bemoan, but something to notice and to take advantage of. If you come up with a name for a process or way of doing things that is successful, then you no longer have control over it. Yes, you could attempt to trademark it, but even this wouldn’t stop it from having a different meaning ‘out in the wild’.

Instead, be ready to define the essence of what is important in your approach. This can then be codified in many different ways. You’re still going to have to rely on your brand (personal/organisational) to push things forwards, but at least you can do so without trying to hang on to an initial way of framing and naming the thing.

Why ontologies are best left implicit (especially for credentials)

Ontology is the philosophical study of the nature of being, becoming, existence or reality as well as the basic categories of being and their relations. Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology often deals with questions concerning what entities exist or may be said to exist and how such entities may be grouped, related within a hierarchy, and subdivided according to similarities and differences. (Wikipedia)

I’d argue that the attempt to define what ‘exists’ within a given system is usually a conservative, essentialist move. It’s often concerned with retro-fitting new things into the current status quo, a kind of Kuhnian attempt to save what might be termed ‘normal science’.

Perhaps my favourite example of this kind of post-hoc ontology is from a reference Jorge Luis Borges makes to a fictional taxonomy in a book he calls Celestial Emporium of Benevolent Knowledge:

On those remote pages it is written that animals are divided into (a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel’s hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance.

This, of course, is meant to be humorous. Nevertheless, we’re in danger when a dominant group sees the current state of play as the ‘natural order of things’. It’s bad enough when this is latent, but even worse when essentialist worldviews are codified into laws — and by ‘law’ I’d include ‘code’.
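To show what I mean by an ontology codified into code, here’s a minimal sketch. It is entirely hypothetical; the category names and example credentials are mine, not taken from any real system:

```python
from enum import Enum

# A hypothetical, hard-coded credential ontology of the kind this post warns
# about. Once the categories are baked into code, everything must fit exactly
# one bucket or be rejected outright.

class Credential(Enum):
    DEGREE = "degree"
    CERTIFICATE = "certificate"
    BADGE = "badge"

# The 'natural order of things', frozen at the moment the code was written.
ONTOLOGY = {
    "BA History": Credential.DEGREE,
    "First Aid Certificate": Credential.CERTIFICATE,
    "Mentoring Badge": Credential.BADGE,
}

def classify(name: str) -> Credential:
    """Assign every credential to exactly one category: no overlaps, no 'other'."""
    try:
        return ONTOLOGY[name]
    except KeyError:
        # Hybrid or novel credentials (a badge recognising a degree module,
        # say) have no place in the taxonomy, so the code simply refuses them.
        raise ValueError(f"{name!r} does not fit the ontology") from None

print(classify("BA History").value)  # degree
try:
    classify("Badge for a degree module")
except ValueError as err:
    print(err)  # 'Badge for a degree module' does not fit the ontology
```

The deeper problem isn’t the missing dictionary entry; it’s that the Enum itself forecloses any category that wasn’t anticipated when the ontology was written.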

The thing that disturbs me most is when people accept the artefacts that have been left for them as the given circumstances of nature… It’s this automatic acceptance of how things are that leads to a sense of helplessness about changing any of them. (Douglas Rushkoff)

This week, a survey was sent out to the Open Badges community on behalf of the Credential Transparency Initiative. This initiative is funded by the Lumina Foundation, an organisation that describes itself as an “independent, private foundation committed to increasing the proportion of Americans with degrees, certificates and other high-quality credentials to 60 percent by 2025.” The Lumina Foundation therefore has a vested interest in deciding what counts as a ‘high-quality credential’.

The problem is, of course, that what one well-funded, high-profile group decides after ‘consulting the community’ is likely to be adopted more widely. This is how de facto standards emerge. They may decide to play the numbers game and equate certain types of badges with degrees. Or, they may choose to go to the other end of the spectrum and ensure that badges do not equate with ‘high-quality’ credentials. Either way, it’s not really up to them to decide.

The survey featured this highly problematic question:

CTI survey

There are all kinds of assumptions baked into this question that need to be unpacked. For example, perhaps the biggest is that all of these have an ‘essence’ independent of one another, rather than in relation to each other. I see this as an attempt, either consciously or unconsciously, to turn the notion of a ‘badge’ into what Richard Rorty termed a ‘dead metaphor’:

Old metaphors are constantly dying off into literalness, and then serving as a platform and a foil for new metaphors. (Contingency, Irony, and Solidarity, 16)

In my doctoral thesis (better consumed as this ebook), I used Rorty’s work along with that of William Empson to come up with a ‘continuum of ambiguity’:

Continuum of ambiguity

The idea behind this continuum is that almost every term we use to describe ‘reality’ is metaphorical in some way. Terms we use to refer to things (e.g. ‘badge’) contain both denotative and connotative aspects, meaning that the person using the term cannot be absolutely certain that the person they are communicating with will understand it in the same way.

Articulation of an idea

Image CC BY-ND Bryan Mathers

The more we try and create a one-to-one relationship between the utterance and the understanding of it, the more we are in danger of terms ‘falling off’ the continuum of ambiguity and becoming dead metaphors. They “lose vitality” and are “treated as counters within a social practice, employed correctly or incorrectly.” (Objectivity, Relativism, and Truth, 171). Such terms have the status of cliché.

The attempt to create a one-to-one relationship between a term as written or spoken, and the term as it is understood by an interlocutor or reader, is an understandable one. It would do away with the real, everyday problems we’re faced with when trying to understand the world from someone else’s point of view. As Rorty puts it, “the world does not provide us with any criterion to choose between alternative metaphors” (The Contingency of Language, 6). The problem is that if we have a single ontology, then we have a single worldview.

Returning to Open Badges, it would be difficult to do any interesting and useful work with the term if it becomes a dead metaphor. For example, I’m quite sure that there’s nothing many of those in Higher Education would like better than to demarcate what a badge ‘counts for’ and the situations in which it can be used. After all, organisations that have histories going back hundreds of years, and which are in the game of having a monopoly on ‘high-quality’ credentials, need to protect their backs. If they can create a dead metaphor-based ontology in which badges count as something much lower ‘quality’ (whatever that means) than the degrees they offer, then they can carry on as normal.

The fading conviction originating with Plato that language can adequately represent what there is in words opens the way for a pragmatic utilization of language as a means to address current needs through practical deliberations among thoughtful people. (Internet Encyclopedia of Philosophy)

At this point, I’m tempted to dive into differential ontology and the work of Derrida and Deleuze. Instead, I’ll simply point out that the reductive attempt to define an essentialist ontology of credentials is doomed from the outset. What we need instead is to ensure that terms such as ‘Open Badges’ remain what I would call ‘productively ambiguous’ — that is to say, in the Pragmatist tradition, ‘good in the way of belief’.

Or, if you like your takeaways more pithy: Keep Badges Weird!

The professional use of metaphor

Seth Godin on metaphor:

The difference between the successful professional and the struggling amateur can often be seen in their respective facility with metaphor. The amateur struggles to accept that metaphor is even acceptable (“are atoms actually building blocks?”) or can’t find the powerful analogy needed to bring home the concept. Because all metaphors aren’t actually true, it takes confidence to use them well.