Decision-making and ambiguity

Given the news of the passing of Terry Jones, it seems appropriate to kick things off with one of my favourite parts of any Monty Python film:


ARTHUR: Who lives in that castle?

WOMAN: No one lives there.

ARTHUR: Then who is your lord?

WOMAN: We don’t have a lord.


DENNIS: I told you. We’re an anarcho-syndicalist commune. We take it in turns to act as a sort of executive officer for the week.


DENNIS: But all the decisions of that officer have to be ratified at a special biweekly meeting.

ARTHUR: Yes, I see.

DENNIS: By a simple majority in the case of purely internal affairs,–

ARTHUR: Be quiet!

DENNIS: –but by a two-thirds majority in the case of more–

ARTHUR: Be quiet! I order you to be quiet!

WOMAN: Order, eh — who does he think he is?

ARTHUR: I am your king!

WOMAN: Well, I didn’t vote for you.

ARTHUR: You don’t vote for kings.

Monty Python and the Holy Grail: Peasant Scene

Recently, I’ve become really interested in how decisions are made. Not personal decisions, such as “shall I change career?” or “who should I marry?”, but organisational decisions, such as “which project management tool should we use?” or “what should our strategy for the next three years be?”

As useful as they can be elsewhere in life, things like The Decision Book don’t really cut it here. What we need is an approach or matrix; a way of deciding, going into the situation, how decisions are going to be made.

Related to this, I think, is the “default operating system” of hierarchy. I’ve cited elsewhere Richard D. Bartlett talking about the bad parts of hierarchy as being ultimately about what he calls “coercive power relationships”. In a hierarchy, people towards the bottom of the pyramid are being paid by the person (or people) at the top of the pyramid, so what they say, goes.

This means that, within a hierarchy, you’ve got a structure for the decision-making process, with power relationships between participants. And then, however democratic the process purports to be, it’s ultimately the Highest Paid Person’s Opinion (HiPPO) that counts.

But what about in other situations, where the decision-making structure hasn’t been created? Who decides then?

For me, this isn’t an idle, theoretical question. I’ve seen the problems it can cause, especially around inaction. You can get so far by meeting up and having a big old discussion, but then how do you come to a binding decision? It’s tricky.

With non-hierarchical forms of organising, even getting into the decision-making process requires two things to happen first:

  • Codification of power relationships
  • Agreement as to how a binding decision can be made by the group

Let’s consider a fictional, but relatable, example. Imagine there’s a group of parents who have voluntarily decided to come together to raise money for their children’s sports team. They are not forming a company, non-profit, charity or any other form of organisation. They do not operate within a hierarchy. Nor have they decided how binding decisions can be made by the group.

Now let us imagine that this group of parents has to decide how best to raise money for the sports team. And once they’ve done that, they have to decide what to spend the money on. How do those decisions get made? What kinds of approaches work?

It is, of course, an absolute minefield, and perhaps why volunteering for these kinds of roles seems to be on the decline. These situations can be particularly stressful without guidance, or some kind of logical approach to non-hierarchical organising.

Continuum of ambiguity

What has this got to do with ambiguity, and more specifically, the continuum of ambiguity shown above? I’d suggest that what is required in our fictional example is a way of organising that strikes a balance. In other words, one that is Productively Ambiguous.

Hierarchies are a form of organising that can work well in many situations: high-stakes environments, times when execution matters more than deliberation, and the military. For everything else, hierarchical organising can be a dead metaphor. It doesn’t represent how things are on the ground, and doesn’t allow any productive work to happen.

Imagine the situation if that volunteer group of parents decided to organise into a hierarchy. I should imagine they would spend more time thinking about and discussing power relationships and status than they would doing the work they’ve come together to achieve.

To the left of Productive Ambiguity lies creative ambiguity:

Whilst a level of consensus can exist within a given community within this Creative ambiguity part of the continuum, it nevertheless remains highly contextual. It is dependent, to a great extent, upon what is left unsaid – especially upon the unspoken assumptions about the “subsidiary complexities” that exist at the level of impression. The unknown element in the ambiguity (for example, time, area, or context) means that the term cannot ordinarily yet be operationalised within contexts other than communities who share prior understandings and unspoken assumptions.

Digital literacy, digital natives, and the continuum of ambiguity

Creative ambiguity relies on unspoken assumptions and previous tight bonds between people. This approach might work extremely well if, for example, the parents had themselves been part of a sports team together in their youth.

The chances are, however, that there would be at least a minority in the group who do not share this commonality. As a result, those unspoken assumptions would become a stumbling block and a barrier.

Far better, then, to focus on the area of productive ambiguity:

Terms within the Productive part of the ambiguity continuum have a stronger denotative element than in the Creative and Generative phases. Stability is achieved through alignment, often due to the pronouncement of an authoritative voice or outlet. This can take the form of a well-respected individual in a given field, government policy, or mass-media convergence on the meaning of a term. Such alignment allows a greater level of specificity, with rules, laws, formal processes and guidelines created as a result of the term’s operationalisation. Movement through the whole continuum is akin to a substance moving through the states of gas, liquid and solid. Generative ambiguity is akin to the ‘gaseous’ phase, whilst Creative ambiguity is more of a ‘liquid’ phase. The move to the ‘solid’ phase of Productive ambiguity comes through a process akin to a liquid ‘setting’.

Digital literacy, digital natives, and the continuum of ambiguity

Instead of hierarchy or unspoken assumptions, progress happens by following a path between over-specifying the approach, and allowing chaos to ensue.

In practice, this often happens by one or a small number of people exerting moral authority on the group. This occurs through, for example:

  • Successfully having done this kind of thing before
  • Being very organised and diligent
  • Having the kind of personality that puts everyone at ease

I have more to write on all of this at some point in the future, but I will leave it here for now. It’s interesting that this is at odds with the way that I see many attempts at decision-making happen – either inside or outside organisations…

On the lack of ambiguity at the heart of ‘Open Core’

There are a couple of articles it might be worth reading to give some background to this post:

Continuum of ambiguity

As I’ve argued many times over the last few years, ambiguity is really useful… until it isn’t. As soon as a concept becomes a dead metaphor it’s in trouble. We may be witnessing this with the term ‘Open Core’:

The open-core model is a business model for the monetization of commercially-produced open-source software. Coined by Andrew Lampitt in 2008, the open-core model primarily involves offering a “core” or feature-limited version of a software product as free and open-source software, while offering “commercial” versions or add-ons as proprietary software.


Let’s zoom out and define our terms, as the above definition lumps together ‘Free Software’ (actually ‘Free Libre Open Source Software’, or FLOSS) and ‘Open Source Software’ (or OSS). All FLOSS is OSS, but not all OSS is FLOSS. The open source part is a necessary, but not a sufficient, condition.

Free software… [FLOSS] is computer software distributed under terms that allow users to run the software for any purpose as well as to study, change, and distribute it and any adapted versions. [FLOSS] is a matter of liberty, not price: users—individually or in cooperation with computer programmers—are free to do what they want with their copies of a free software (including profiting from them) regardless of how much is paid to obtain the program. Computer programs are deemed free if they give users (not just the developer) ultimate control over the software and, subsequently, over their devices.


FLOSS is as much a political approach as it is a technological one. It’s pretty hardcore, and represents a positive (as opposed to a negative) form of liberty:

It is useful to think of the difference between the two concepts in terms of the difference between factors that are external and factors that are internal to the agent. While theorists of negative freedom are primarily interested in the degree to which individuals or groups suffer interference from external bodies, theorists of positive freedom are more attentive to the internal factors affecting the degree to which individuals or groups act autonomously.

Stanford Encyclopedia of Philosophy

That’s a good way to think about OSS, which is more concerned with rights and licenses, and allows collaboration to happen without interference:

Open-source software (OSS) is a type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. [OSS] may be developed in a collaborative public manner. [OSS] is a prominent example of open collaboration.


Because OSS focuses on the negative form of liberty, the temptation can be to find loopholes in it for commercial gain. This has happened with a bunch of companies using the Open Core model.

The definition of Free Software specifically allows software to be sold by anyone, for as much as they want. Open Core attempts to limit who can make money from software — which seems to be at odds not only with Free Software, but OSS too.

The concept of open-core software has proven to be controversial, as many developers do not consider the business model to be true open-source software. Despite this, open-core models are used by many open-source software companies.


I’m uneasy with the Open Core approach, but can understand the pressure to apply the business model when a company has investors.

The reason why people don’t like the Open Core approach is that it doubles down on the negative form of liberty, turning Open Source into a dead metaphor. It’s extractive, and focused on making capitalists richer instead of enriching the commons.

A philosophical approach to joining organisations

Consultants like me are sometimes engaged by clients on a very short-term basis, and sometimes embedded inside organisations for much longer periods of time. Yesterday, I started a period of ‘embedding’, this time with Moodle.

While I’ll discuss elsewhere the things I’ll be working on over the coming weeks and months, in this post I want to consider what it’s like to start working with a new organisation from a philosophical perspective.

The areas of enquiry represented by what we call ‘Philosophy’ can be sub-divided in many ways and, of course, people differ as to how this should be done. When I think about Philosophy, over and above the (valuable) ‘history of ideas’ courses taught to undergraduates, I tend to use the following buckets:

  • Epistemology – what can we know?
  • Ethics – how should we act/behave?
  • Ontology – what exists in the world?
  • Metaphysics – what else exists?

I’m sure some people reading this may disagree with these simple definitions, and with separating out metaphysics and ontology, but hopefully you’ll see why I tend to do this in a moment.

When a person joins a new organisation, there’s often a mad rush to get them ‘up to speed’ as quickly as possible. Not one minute is to be wasted in ensuring that they reach full operating efficiency as soon as possible. Taken to its logical conclusion, I’m sure there are plenty of organisations that would like to be able to send the required knowledge directly into a new employee’s brain, Matrix-style.

Plugged into the Matrix

I was ‘onboarded’ by Moodle HR yesterday and, while the experience could be improved, it was the first time an organisation has actually set out its information landscape for me. It sounds like such a simple thing, but this ontology of the organisation, as it sees itself, is a hugely valuable thing to share. After all, the inverse — finding out about things in a piecemeal way — can be rather anxiety-inducing.

In the past, I’ve joined organisations that explicitly don’t share things like org charts and what technologies they use. The reason given for this is often that such documents would be ‘out of date as soon as they’re created’, but in reality it’s usually because there’s a huge disconnect between different parts of the organisation. There is no map or shared reality.

Ontology is an easy one for organisations to focus upon. They can point to things and draw employees’ attention to them, even if it’s just directing people to a URL or a particular app. What’s harder is getting to the other three: the epistemology, ethics, and metaphysics of an organisation.

In an organisational context, the question of ethics isn’t solved simply by having a mission or a values statement. It has to be lived and demonstrated. There are large, organisation-wide ethical issues that can only really be solved by the leadership team. Examples of these might include the type of investment that the organisation takes, diversity issues in hiring, or the way it interacts with the natural environment.

As well as these large, organisation-wide issues, there are also much smaller, everyday ethical issues. In fact, some of these might not even be put under the banner of ‘ethics’ by most people. For example, I’d include in this list of smaller ethical issues things like the extent to which line managers and senior management ‘check up’ on employees.

“If you don’t have your own time, then you have no control of your day. And if you have no control of your day then you end up working longer than you should.” (Jason Fried)

These also include the way in which members of the organisation interact with one another. A lot of this would perhaps traditionally go under ‘culture’ or perhaps ‘etiquette’ but, actually, I think considering this as part of the wider ethics of an organisation is a better way to think about it.

It obviously takes time to figure out the lived reality of ethics within an organisation. The same is even more true of its epistemology and metaphysics. We’re going a stage deeper here.

You can see the ontology of an organisation; you can point to different things that exist. To a great extent you can see the outputs of the ethics of an organisation; you can point to the outputs it creates. When it comes to epistemology and metaphysics, however, we’re in a less tangible realm: what can we know? what else exists?

It takes a while to understand the epistemology of an organisation. A new employee (or contractor) isn’t going to be able to figure out an organisation’s approach to the above questions in the first few days, or even weeks, after joining. Again, a lot of these issues are lumped within the rather unhelpful category of ‘culture’.

Epistemological questions are particularly interesting for organisations that deal primarily in bits and bytes and digital ‘stuff’ that can affect people’s lives in a material way. The frontiers offline are physical, whereas the frontiers online are conceptual.

Consequently, when organisations ask ‘what can we know?’ it’s a collection of individuals with hopes, dreams, and inbuilt biases making value judgements. More than that, it’s a collection of people responding to a particular set of pressures affecting them individually and corporately, and coming to collective epistemological decisions.

I’d include examples such as Facebook’s algorithmic approach to matching ‘people you may know’ here, as well as edtech companies gathering brainwave data to measure ‘student engagement’. As the Contrafabulists show with their work around predictions, when you say that the world is, or will be, a certain way, you’re revealing your epistemology.

Metaphysics is a harder thing to pin down, and perhaps the most difficult thing for an organisation to access directly. Some schools of philosophy, in fact, believe that metaphysics is meaningless and not worth studying. I disagree.

If we conceptualise metaphysics as asking the question what else exists? then we can see that this is the kind of question that can drive organisations forward and help them to improve. This is particularly important for organisations who create digital products and services, as they can literally invent these from ones and zeroes.

Martin Dougiamas, CEO of Moodle, shared with me a book called Reinventing Organizations that’s inspired him recently. Although I’m yet to read it, even the book’s website gives an example of the kind of thing I mean when talking about organisational metaphysics. There’s a ‘pay-what-feels-right’ option for the book, instead of a single price. If we step back and think about it for a moment, this is an acknowledgement that the full value of something like a book can’t be captured in a financial transaction. It also makes us question what a ‘book’ actually is when it’s digital and the distribution value drops close to zero.

I haven’t finished thinking about this philosophical approach to joining organisations. No doubt, any comments I receive below and on various networks of which I’m part will help inform my thinking. As, of course, will my experiences as I spend more time with Moodle.

The important thing for me is to realise that when you’re joining an organisation, what you’re doing is plugging yourself (a complex mixture of thoughts, emotions, and biases) into something that isn’t necessarily an easy thing to understand. To hurry and try to ‘get up to speed’ quickly, therefore, might actually waste more time than it saves.

Business bullshit and ambiguity

In this week’s BBC Radio 4 programme Thinking Allowed, there’s an important part about ambiguity:

Laurie Taylor explores the origins and purpose of ‘Business Bullshit’, a term coined by Andre Spicer, Professor of Organizational Behaviour at Cass Business School, City University of London and the author of a new book looking at corporate jargon. Why are our organisations flooded with empty talk, injuncting us to “go forward” to lands of “deliverables”, stopping off on the “journey” to “drill down” into “best practice”? How did this speech spread across the working landscape, and what are its harmful consequences? They’re joined by Margaret Heffernan, an entrepreneur, writer and keynote speaker, and by Jonathan Hopkin, Associate Professor of Comparative Politics at the LSE.

The particular part is the second section of the programme, in which Margaret Heffernan explains that organisations attempt (in vain) to eliminate ambiguity. As such, they play a constant game of inventing new terms and initiatives, which not only work no better than the previous ones, but serve to justify inflated salaries.

The episode is available online here.

What we know about ‘knowledge’

There’s an ongoing flamewar between traditionalists, who believe that education should be about ‘knowledge’, and progressives, who believe it should be about ‘skills’. This has been going on, in various forms, at least since Thomas Henry Huxley and Matthew Arnold squared off in the 19th century about what kind of education is required to foster ‘true culture’.

As Bruce Chatwin demonstrates in his modern-day classic The Songlines, there are ways of knowing that are based on action rather than ‘head knowledge’. He details how Australian aboriginal ‘knowledge’ is interwoven with their physical environment, is passed on primarily in an oral way, and comes with certain prohibitions as to who is allowed to ‘have’ such knowledge.

The Internet Encyclopedia of Philosophy’s entry on knowledge lists four main types:

  1. Knowing by acquaintance
  2. Knowledge ‘that’
  3. Knowledge ‘wh’ (i.e. whether, who, what, why)
  4. Knowing ‘how’

I’ve always been of the opinion that the second type of knowledge listed here, knowledge ‘that’, is of limited value. If I was coming up with my own personal hierarchy of the relative importance of these kinds of knowledge, I’d put this one at the bottom. It’s the kind of knowledge that may be foundational but, taken to absurd lengths, just means you’re good at pub quizzes.

For me, it’s knowing ‘how’ that’s of central importance, and what we should focus on in education. From the IEP’s entry on knowledge, citing the celebrated ‘ordinary language’ philosopher Gilbert Ryle:

What Ryle meant by ‘knowing how’ was one’s knowing how to do something: knowing how to read the time on a clock, knowing how to call a friend, knowing how to cook a particular meal, and so forth. These seem to be skills or at least abilities.

This is why I think that ‘knowledge’ vs. ‘skills’ is a false dichotomy. The article continues:

Are they not simply another form of knowledge-that? Ryle argued for their distinctness from knowledge-that; and often knowledge-how is termed ‘practical knowledge’. Is one’s knowing how to cook a particular meal really only one’s knowing a lot of truths — having much knowledge-that — bearing upon ingredients, combinations, timing, and the like?

Going back to the aboriginal example, this is where ‘knowledge’ that can’t be tested using a pencil-and-paper examination comes in. Knowing ‘how’ is usually described as a set of ‘skills’ in our culture, labelled as ‘vocational’, and given a back seat to the ‘more important’, ‘academic’ forms of knowledge. I think this is incorrect and should be remedied as soon as possible.

If Ryle was right, knowing-how is somehow distinct: even if it involves having relevant knowledge-that, it is also something more — so that what makes it knowledge-how need not be knowledge-that… Might knowledge-that even be a kind of knowledge-how itself, so that all instances of knowledge-that themselves are skills or abilities?

While reading to my six-year-old daughter last night, the word ‘instinctively’ was used by the author. We had a brief conversation about it, which revealed that, even at her young age, she understands the difference between the knowledge ‘that’ which is acceptable at school, versus the knowing ‘how’ which is valuable currency at home. In other words, she’s playing the game.

Vagueness, ambiguity, and pragmatism

One of my favourite things about the Web is the ease with which serendipity occurs. We take it for granted these days, but occasionally wonderful things happen that make us rediscover the joy of connection.

I was browsing The Setup, a wonderful site that interviews people about the hardware and software they use. Being particularly interested in those using Linux (as I do these days) I was delighted to come across John MacFarlane’s interview.

MacFarlane is a Professor of Philosophy at UC Berkeley and his website details both his interests and academic papers. I was delighted to come across a paper entitled Vagueness as Indecision, which is available as a preprint download.

It’s been a while since I studied formal logic at university, but I managed to get by while reading this paper, especially as MacFarlane gives homely examples. He takes an expressivist position, holding that “vagueness… is literally indecision about where to draw lines”.

On page seven, and quoting a character from Spider-Man in passing, MacFarlane states:

In principle, I can use ‘that’ to refer to any object. But with this freedom comes great responsibility. I must provide my hearers with enough cues to enable them to associate my use of ‘this’ with the same object I do, or communication will fail. Sometimes this doesn’t require anything extra, because it is mutually known that one object is salient, so that it will be assumed to be the referent in the absence of cues to the contrary. Other times it requires pointing. And in some cases simply pointing isn’t enough. But in every case, we’re obliged to do whatever is required to get our hearers to associate the same object with the demonstrative that we do. If we fail to do this, it will be sheer luck if they understand us.

In my book, The Essential Elements of Digital Literacies, I pointed out that productive discourse involves interaction at the overlap of the denotative and connotative aspects of a term or phrase:

Connotative-denotative

What MacFarlane is pointing out is something similar, but outside the realm of ambiguity. For something to be ambiguous, it cannot be merely vague — although right on the left-hand boundary of that overlap is where the most vague ambiguous terms and phrases reside.

Within that overlap resides the continuum of ambiguity:

Continuum of ambiguity

In other words, just as ‘dead metaphors’ are to the right of Productive Ambiguity and occur when there’s denotation but little connotation, so ‘vagueness’ lies to the left of Generative Ambiguity and, as MacFarlane would put it, happens due to semantic indecision.

The crux of MacFarlane’s position is that to have a meaningful interaction, two people have to agree where the ‘boundaries’ are to what they’re discussing:

Here is the upshot. While in using a bare demonstrative like ‘this,’ one must have a definite object in mind, and successful uptake requires recognizing what object that is, there are no analogous requirements for the use of ‘large.’ The speaker need not have in mind a particular delineation (even a ‘fuzzy’ one), and the hearer need not associate the speaker’s use with a particular delineation. What we get instead are constraints on delineations. (p. 11)

He continues with his example of a trainee at an apple-sorting factory learning what counts as a ‘large’ apple:

Indecision is a practical state; it concerns plans and intentions, not belief. Just as one might plan to buy toothpaste, but not yet have settled on which toothpaste one will choose when confronted with a rack of them at the store, so one might have settled on counting apples greater than 84mm in diameter as large, without having settled on whether one would count a 78mm apple as large.

So for an interlocutor or reader to consider something ‘ambiguous’ they must ‘tolerate’ the semantic decisions made by the person with whom they’re interacting. If they don’t, then the lack of delineation leads to vagueness.

I suggest, then, the ‘tolerance’ intuition can be explained as an awareness that a proposal to draw a sharp line in any particular place would be rejected for pragmatic reasons. Nothing about the meanings of vague words is inconsistent with drawing a sharp boundary; it’s just that the cases in which a proposal to draw a sharp boundary would be sensible are few and unusual.

In other words, we humans are pretty good at getting by using heuristics. We agree to suspend disbelief (and therefore enter the continuum of ambiguity) to see whether doing so is, to use the words of William James, ‘good in the way of belief’.


On vagueness, or, when is a heap of sand not a heap of sand?

Nothing new here for anyone who’s studied Philosophy, but still worth sharing for a general audience:

A vague word such as ‘heap’ is used so loosely that any attempt to locate its exact boundaries has nothing solid and reliable to go on. Although language is a human construct, that does not make it transparent to us. Like the children we make, the meanings we make can have secrets from us. Fortunately, not everything is secret from us. Often, we know there’s a heap; often, we know there isn’t one. Sometimes, we don’t know whether there is one or not. Nobody ever gave us the right to know everything!

Vagueness is an annoying, elusive concept — unlike ambiguity, which can be a much more productive one.


Team Human

The Team Human podcast is a recent must-listen for me. One of the most recent episodes features Mushon Zer-Aviv on the concept of ‘reambiguation’. His starting point is that we should resist attempts to treat only what can be represented by digital data as ‘real’, as well as attempts to deprecate anything too messy (i.e. human).

To me, what Mushon discussed with Douglas Rushkoff, the host of Team Human, dovetails nicely with the continuum of ambiguity I’ve come up with. The idea is to maintain ‘creative ambiguity’, not reduce everything to ‘dead metaphors’.

Continuum of ambiguity

Robert Greene on the importance of ambiguity in creative endeavours

I’m re-reading Robert Greene’s The Concise Mastery at the moment. Just now, I was struck by this passage:

Perhaps the greatest impediment to human creativity is the natural decay that sets in over time in any kind of medium or profession. In the sciences or in business, a certain way of thinking or acting that once had success quickly becomes a paradigm, an established procedure. As the years go by, people forget the initial reason for this paradigm and simply follow a lifeless set of techniques. In the arts, someone establishes a style that is new and vibrant, speaking to the particular spirit of the times. It has an edge because it is so different. Soon imitators pop up everywhere. It becomes a fashion, something to conform to, even if the conformity appears to be rebellious and edgy. This can drag on for ten, twenty years; it eventually becomes a cliché, pure style without any real emotion or need. Nothing in culture escapes this deadening dynamic.

This is exactly what I’m trying to get at with the continuum of ambiguity:

Continuum of ambiguity

What Greene refers to as ‘cliché’ is represented in this continuum by what Richard Rorty calls ‘dead metaphors’. We should always be looking for new ways to represent our ideas, rather than be wedded to terms and styles, which always end up out-of-date.

What do we mean by ‘open education’?

Socrates must have been one of the most annoying individuals ever to walk the earth. I still don’t get why he didn’t just leave the city instead of drinking the hemlock at the end of his life. Also, his incessant questioning may well have led to a widely-celebrated ‘method’, but the dogmatism he displayed over definitions beggars belief: things had definitions, and people should act in accordance with objective but abstract concepts such as ‘justice’ and ‘virtue’.

I say this by means of introduction, because this is certainly not a post intended to give a single ‘definition’ of open education, but rather to tease apart its meaning and explore how people use the term. As I mentioned in my doctoral thesis (and related ebook), terms such as ‘digital literacy’ and ‘open education’ are examples of zeugmas. In other words, we’re never quite sure on which part of the phrase to place the emphasis: is it ‘open’ education or open ‘education’?

Audrey Watters has already written on this topic and summarises well the problems with considering open education as a prozeugma (i.e. with the emphasis on ‘open’):

And it’s complicated, of course, by the multiple meanings of that adjective “open.” What do we mean when we use the word? Free? Open access? Open enrollment? Open data? Openly-licensed materials, as in open educational resources or open source software? Open for discussion? Open for debate? Open to competition? Open for business? Open-ended intellectual exploration?

The trouble is that it’s not just ‘open’ that’s a contested term, but ‘education’ as well. We tend to conflate ‘learning’ with ‘education’ — confusing something that happens inside us with something that happens to us.

A few months ago, as part of the work we were doing at the start of We Are Open Co-op, I asked people within my community what different kinds of ‘open’ there are in common parlance. I attempted to draw(!) both the examples I’d come up with by way of a stimulus and the contributions I received from people.

Open as in…

  • door (you are free to enter)
  • for business (you are invited to buy/sell/trade)
  • unlocked (you have access to a thing)
  • to ideas (you are willing to change your mind)
  • transparency (you can see into the ‘inside’ of something)
  • love (you are willing to be vulnerable to others)
  • space (you are free to use this resource)
  • amendments (you are happy to take on board other people’s suggestions)
  • exploring (you can discover new things)
  • open-ended (you can keep going, potentially forever)
  • flexible (you can change this to your own needs)
  • no barriers (you do not have to overcome hurdles to get started)

Some of these obviously overlap and, to be honest, some are just better metaphors than others.

Serendipitously, just yesterday (a few days after I started this post) Jim Groom posted about the ‘overselling’ of the open movement:

I’m quite ambivalent about the open movement more generally these days. What seemed like a movement defined by an anarchic spirit of revolution from 2004-2011 (at least for me—this was a fairly personal narrative) morphed into a fairly tame, almost conservative approach to education: massive lectures and free textbooks. I’m oversimplifying here of course, but at the same time the mad scramble around corporate sponsored MOOCs for elite universities from 2012 until just about now, coupled with the re-branding of OER, at least in the U.S., as predominantly a cost-saving measure left me fairly depressed.

Part of the problem, I think, is that we’ve so many different definitions of ‘open’ that it’s just not a useful term to use. We get ‘openwashing’ by big corporates, who — consciously or unconsciously — attempt to move a term like ‘open’ from something that is a basis for creative ambiguity within a community towards the realm of ‘dead metaphors’.

Continuum of ambiguity

Other times, we’ve just shot ourselves in the foot. As Jim Groom mentioned above, there’s been far too much focus on access when it comes to ‘open’ and not enough on ethos. Yes, it’s great that we’ve got so much openly-licensed stuff to use, but have we got an equal number of advocates for open education? I’d actually say that number is on the decline.

Instead, and this is something I keep coming back to, I’d use the diagram below to provide a simple way to show how the open education movement needs to move beyond — well beyond — mere Open Educational Resources (OERs).

Beetham & Sharpe (2009)

This is my version of a diagram that’s explained in this post and comes from original work by Helen Beetham and Rhona Sharpe. It’s ostensibly about digital literacies, but I think it’s much more widely applicable. It’s a development model that we can apply to educators becoming more familiar with, and at ease with, open education.

Right now, there’s been enough work done around the emerging area of ‘Open Educational Practices’ for me to state with some confidence that at least pockets of the wider ecosystem are moving beyond just OERs. There’s even a badged online course for those who are curious.

What we need to do (and, like many things, this is an identity issue) is move to the top of Beetham and Sharpe’s pyramid and think about what it means for people to identify as an ‘open educator’. It’s great having a fairly loose definition that appeals to those in the know within the extant community, but it’s more than a little confusing for those new to the whole thing.

Ideally, I’d like to see ‘open education’ move into the realm of what I term productive ambiguity. That is to say, we can do some work with the idea and start growing the movement beyond small pockets here and there. I’m greatly inspired by Douglas Rushkoff’s new Team Human podcast at the moment; it vindicates the stance that I and others have taken in favour of using technology to make us more human (e.g. setting up a co-operative) and against the reverse (e.g. blockchain).

One way we can do this, working at the top end of the pyramid, is to make it easier to reclaim our identity on the web. Reclaim Hosting is definitely doing a great job around this on the technical side, but we need something equally awesome (and not just short-term project-funded) on the cultural side of things.

So yes, in short…

The barrier to being an open educator is too damn high