“While this crowd-based model of expertise cannot substitute for the highly educated scholar’s years of research and careful consideration of a single topic…”
I have two problems with this. One is that if you’re going to advocate for crowdsourced education then just do it and don’t placate the masses with ‘oh but don’t worry universities, you still know best’. Perpetuating the ‘proper education’ myth isn’t helping anyone. The other is an interesting and highly pervasive assumption that has bugged me for a long time now. My vaguely-tweeted stream of consciousness as I was reading was probably not clear enough and so, et voilà, this post (although thanks to everyone who has already twit-sparred with me on this point).
We have in place, in society at large, education particularly and academia specifically, an assumption that knowledge cannot exist of its own accord; it must be verified by others. Which, generally, is fair enough, because the world is full of idiots who will happily believe anything sans any sort of critical thinking. However, when this assumption is so pervasive that it manifests itself in systems like peer review and beliefs like ‘Wikipedia is not a reliable source’, we have a problem. It strikes me that several of the component assumptions that contribute to this are completely spurious.
‘knowledge must be reviewed by experts’
Great. We certainly want people who know what they’re talking about to review and critique things. The problem I have with this is that we seem to assume that only ‘subject experts’ from within a particular discipline fit this description. It strikes me as a rather narrow way to think about things and an excellent way to promote insularity. I’m not saying we should abandon it entirely, but broadening our definition of ‘expert’ would be entirely beneficial. I also have an issue with the means by which someone comes to be considered an expert, which you can read about here.
‘experts must come from institutions’
So – where does one source these experts? From universities (or, if we’re talking about the Wikipedia issue, encyclopaedia and dictionary companies). The problem with this is that anyone who has worked anywhere, ever, knows that being intelligent, rational and in possession of deep and critical thinking on a subject are not necessarily criteria for employment, even in the upper echelons of prestigious universities. What I’m saying here is that sometimes qualified tradesmen do dodgy, awful work, and sometimes your brother-in-law who’s an accountant but a bit handy does an excellent job repairing your fence and saves you a lot of money.
‘experts must be verified by an authority’
Related to the above is the fact that we assume that gaining qualifications indicates you know what you are talking about. In many cases this is true. However, we all know that Ps get degrees, that all institutions are not created equal, and that some of the smartest people we know don’t have degrees at all. Additionally, I ask – why do we assume that the people at the institution are able to judge someone as competent or knowledgeable? And why do we assume that the metrics we use to do this are reliable (for instance, it seems misguided to me to assume that because somebody can write an essay and complete an online quiz, it follows that they can think critically on a topic and/or have achieved ‘learning’)?
‘crowdsourced knowledge cannot be accurate’
It’s a follow-on from the above – because ‘crowds’ do not contain ‘experts’. What I find stunningly ironic is that the academe routinely shuns Wikipedia for this exact reason but happily endorses peer review, even though crowdsourcing is the exact process by which peer review is conducted. Just because you limit your ‘crowd’ to employed academics does not mean this is not what you are doing. What makes the knowledge of two blind reviewers more accurate or legitimate than two academics editing a Wikipedia article? Substitute the words ‘review panel’ for ‘internet’ in the cartoon above and it is no less relevant. At least on Wikipedia I can pull a named history of edits and contributors. I can’t say the same for the review comments on a paper I’ve submitted.
The next question is, of course, what do I suggest we do about it?
A. Crowd-based non-anonymous reputation mechanisms.
To clarify. Think about the way that you engage in a professional community (or an interest community, or any sort of community at all really), and the metrics you personally use to determine whether a person is credible (or ‘an expert’ or whatever). It’s generally a subtle and multi-layered process, which may or may not include degrees held or institutions worked at, but which more than likely also includes things like how the person engages online, what aspects of themselves they make public, how they behave, what they publish (including blogs and tweets) and so on. Now imagine that 50 or 1000 or 10,000 people (from all sorts of sectors and disciplines) are all engaging in the same process around the same person. Sure, some of those people will probably be idiots or sycophants (just like in the current world of blind peer review, so we’re not losing out on much here). A lot of them won’t be, and the result will be kind of a critical-mass picture of that person’s credibility. This process, to me, holds more water than a handful of letters or a tenure contract. A public process of reputation allows a really transparent and broad definition of expertise via which knowledge and research can be verified.
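If you like, the mechanism I’m gesturing at can be sketched in a few lines of code. This is a toy only – the class, the signal names, the 0-to-1 scores and the averaging are all illustrative assumptions, not a proposal for a real system – but it captures the two properties I care about: every judgement is attributed to a named evaluator (the Wikipedia-style public history), and credibility emerges from aggregating many people’s assessments across many signals rather than from a single credential.

```python
from collections import defaultdict
from statistics import mean

class ReputationLedger:
    """Toy non-anonymous reputation mechanism (illustrative only)."""

    def __init__(self):
        # person -> list of (evaluator, signal, score) records
        self.assessments = defaultdict(list)

    def assess(self, person, evaluator, signal, score):
        """Record a named, attributed judgement (score between 0 and 1)."""
        if not 0.0 <= score <= 1.0:
            raise ValueError("score must be between 0 and 1")
        self.assessments[person].append((evaluator, signal, score))

    def credibility(self, person):
        """Aggregate all signals into one rough credibility figure."""
        records = self.assessments[person]
        if not records:
            return None
        # Average within each signal first, then across signals, so one
        # heavily-assessed signal doesn't swamp the others.
        by_signal = defaultdict(list)
        for evaluator, signal, score in records:
            by_signal[signal].append(score)
        return mean(mean(scores) for scores in by_signal.values())

    def history(self, person):
        """The public, named, edit-history-style record of judgements."""
        return list(self.assessments[person])


# Hypothetical names and signals, purely for illustration.
ledger = ReputationLedger()
ledger.assess("researcher_x", "colleague_a", "public engagement", 0.9)
ledger.assess("researcher_x", "colleague_b", "publications", 0.7)
ledger.assess("researcher_x", "practitioner_c", "publications", 0.5)

print(ledger.credibility("researcher_x"))  # 0.75
print(ledger.history("researcher_x"))      # every judgement, with names attached
```

The point of the toy is the `history` method: unlike two blind reviewers, every contribution to someone’s standing here is inspectable, which is exactly the property Wikipedia has and anonymous peer review lacks.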
It’s not an easy solution. But the more we insist on ignoring the subtle layers that make up ‘expertise’ and the process of knowledge verification, the more in danger we are of making the academe insular and narrow (and yes, I did just hear many of you say ‘more so? Is that even possible?’).