Tomatoes, timers and two-day papers


I wrote a post a few weeks ago about my frustrations with the academic writing process. Comments from Chris and M-H on that post have had my mind ticking over ever since – that we don’t have to engage in academic writing the way we’re taught to, and that the Pomodoro and Shut Up and Write techniques are about giving you the focus to write freely.

Now, don’t get me wrong, my conceptual frustrations with academic writing still stand. But, because this is my line of work and I do sometimes have to play by the rules, I needed some way to engage in the process without getting in my own way. Enter Pomodoro. I downloaded an app and gave myself 25 minutes to give it a shot – 25 minutes to do nothing but write. No referencing, no stopping to check that a source validated what I was saying, no making sure I had a citation for everything I wanted to say. I didn’t look anything up at all. In short, I composed.

After 25 minutes I had 700 words. So I did it again. A few pomodoros later I’d written 2700 words in a day, which is somewhat unheard of for me. Not a single word of it was a citation or quote. Call me obtuse, but the idea that I could just write whatever the hell I wanted was something of a revelation. Obviously one needs to go back and make edits and add citations, but ignoring them completely in the first instance made such a difference to my ability to write. After another session this morning I effectively had the bulk of a paper written, in two days (after editing & referencing it won’t be a two-day paper, but the alliteration worked nicely in the title and you get the idea).

Ultimately it’s not about the pomodoro itself – I don’t think it particularly matters that I do things in 25-minute increments with 5-minute breaks. What it has been is a catalyst for my thinking. I normally have the terrible (or excellent, depending on your viewpoint; I tend to the latter) habit of being very good at the ‘ideas and action’ part of research, but when it comes to writing I get so frustrated – by the idea that I can’t write what I want in whatever form I want, and that actual people won’t read the end result anyway – that I end up giving up in disgust and not writing anything at all. The stupid red tomato timer has at least highlighted a way out of that for me. It’s not a solution to the endemic problem of academic writing at large, but it’s at least a way to let my Trojan horse keep on rolling.
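For the curious, the mechanics are simple enough to sketch in a few lines of Python – a toy timer, not the app I actually used, and the function name and parameters are mine:

```python
import time

def pomodoro(rounds=4, work=25, rest=5, tick=time.sleep):
    """Run `rounds` cycles of `work` minutes of writing and `rest` minutes of break."""
    for i in range(1, rounds + 1):
        print(f"Pomodoro {i}: write for {work} minutes - no references, no lookups.")
        tick(work * 60)   # the writing sprint, in seconds
        print(f"Break: {rest} minutes away from the keyboard.")
        tick(rest * 60)   # the rest

# pomodoro() would run four 25/5 cycles; the `tick` parameter is injectable
# so you can dry-run it without actually waiting half an afternoon.
```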

And yes, I’m writing this on a pomodoro now. 3 minutes to go.

NB: It appears that I have gotten myself into trouble now. After tweeting about starting to do pomodoros, @catspyjamasnz ‘helpfully’ decided August would be pomodoro-a-day month and now I can’t wriggle out of it. Sigh…

Says who?

On the internet, nobody knows you're a dog


I’m going to start with this article, which the ever-prolific @marksmithers tweeted this morning. I’m not even going to touch on the ‘new model of learning’ nonsense; it was this line that struck me:

“While this crowd-based model of expertise cannot substitute for the highly educated scholar’s years of research and careful consideration of a single topic…”

I have two problems with this. One is that if you’re going to advocate for crowdsourced education then just do it, and don’t placate the masses with ‘oh but don’t worry universities, you still know best’. Perpetuating the ‘proper education’ myth isn’t helping anyone. The other is an interesting and highly pervasive assumption that has bugged me for a long time now. My vaguely-tweeted stream of consciousness as I was reading was probably not clear enough and so et voilà, this post (although thanks to everyone who has already twit-sparred with me on this point).

We have in place in society at large, education particularly and academia specifically, an assumption that knowledge cannot exist of its own accord: it must be verified by others. Which, generally, is fair enough, because the world is full of idiots who will happily believe anything sans any sort of critical thinking. However – when this assumption is so pervasive that it manifests itself in systems like peer review and beliefs like ‘Wikipedia is not a reliable source’, we have a problem. It strikes me that several of the component assumptions that contribute to this are completely spurious.

‘knowledge must be reviewed by experts’

Great. We certainly want people who know what they’re talking about to review and critique things. The problem I have with this is that we seem to assume that only ‘subject experts’ from within a particular discipline fit this description. It strikes me as a rather narrow way to think about things and an excellent way to promote insularity. I’m not saying we should abandon it entirely, but broadening our definition of ‘expert’ would be entirely beneficial. I also have an issue with the means by which someone comes to be considered an expert, which you can read about here.

‘experts must come from institutions’

So – where does one source these experts? From universities (or, if we’re talking about the Wikipedia issue, encyclopaedia and dictionary companies). The problem with this is that anyone who has worked anywhere, ever, knows that being intelligent, rational and in possession of deep and critical thinking on a subject are not necessarily criteria for employment, even in the upper echelons of prestigious universities. What I’m saying here is that sometimes qualified tradesmen do dodgy, awful work, and sometimes your brother-in-law who’s an accountant but a bit handy does an excellent job repairing your fence and saves you a lot of money.

‘experts must be verified by an authority’

Related to the above is the fact that we assume that gaining qualifications indicates you know what you are talking about. In many cases this is true. However, we all know that Ps get degrees, that all institutions are not created equal, and that some of the smartest people we know don’t have degrees at all. Additionally, I ask – why do we assume that the people at the institution are able to judge someone as competent or knowledgeable? And why do we assume that the metrics we use to do this are reliable (for instance, it seems misguided to me to assume that because somebody can write an essay and complete an online quiz it follows that they can think critically on a topic and/or have achieved ‘learning’)?

‘crowdsourced knowledge cannot be accurate’

It’s a follow-on from the above – because ‘crowds’ do not contain ‘experts’. What I find stunningly ironic is that the academe routinely shuns Wikipedia for this exact reason but happily endorses peer review, even though crowdsourcing is the exact process by which peer review is conducted. Just because you limit your ‘crowd’ to employed academics does not mean this is not what you are doing. What makes the knowledge of two blind reviewers more accurate or legitimate than two academics editing a Wikipedia article? Substitute the words ‘review panel’ for ‘internet’ in the cartoon above and it is no less relevant. At least on Wikipedia I can pull a named history of edits and contributors. I can’t say the same for the review comments on a paper I’ve submitted.

The next question is, of course, what do I suggest we do about it?

A. Crowd-based non-anonymous reputation mechanisms.

To clarify. Think about the way that you engage in a professional community (or an interest community, or any sort of community at all really), and the metrics you personally use to determine whether a person is credible (or ‘an expert’ or whatever). It’s generally a subtle and multi-layered process, which may or may not include degrees held or institutions worked at, but that more than likely also includes things like how the person engages online, what aspects of themselves they make public, how they behave, what they publish (including blogs and tweets) and so on. Now imagine that 50 or 1000 or 10,000 people (from all sorts of sectors and disciplines) are all engaging in the same process around the same person. Sure, some of those people will probably be idiots or sycophants (just like in the current world of blind peer review, so we’re not losing out on much here). A lot of them won’t be, and the result will be kind of a critical-mass picture of that person’s credibility. This process, to me, holds more water than a handful of letters or a tenure contract. A public process of reputation allows a really transparent and broad definition of expertise via which knowledge and research can be verified.

It’s not an easy solution. But the more we insist on ignoring the subtle layers that make up ‘expertise’ and the process of knowledge verification, the more in danger we are of making the academe insular and narrow (and yes, I did just hear many of you say ‘more so? Is that even possible?’).

Notes on words


It’s occurred to me that I have a problem with research and writing. My problem is that my undergraduate training was as a musician – specifically, a composition major. Let me explain.

When you are working as a composer (or as a performer, or as anything other than a musicologist really), the way you conduct and communicate research is vastly different to standard research practice. Your first task, before doing anything else, is to be a sponge. To listen and listen and listen to everything you can, analysing it and pulling it apart to understand its context, construction, execution and so on. This is a continuous thing that never really stops. But then – you create. You start writing (or practicing). Everything you’ve absorbed informs and shapes what you create, but you are using it as a foundation for a pure act of creation. You can create whatever you like, regardless of whether anyone else has created something similar before or not, and you are accountable only to your own informed and educated sense of aesthetic. You’re not required to stop every five bars and reference somebody else’s work. You’re not required to append a list of everything you listened to before creating the work to the end of each composition or recital. It is assumed that you have listened and read widely and that this is digested through your own creative processes to produce music. It is up to the listener to recognise influences and hear the stylistic shaping of the work of others.

This is my problem. I still behave like a musician, despite the fact that I now work in education research. Every day I am reading and analysing and digesting everything I can get my hands on. Articles and papers and posts and environments and spaces, nothing is safe. But when it comes time to create I come unstuck. I find it incredibly stifling that I cannot just write, cannot just create. I cannot say something without providing very specific references to somebody who has already said something similar. I am required to stop and reference somebody else’s work and append a list of everything I’ve read to the end of each creation. It is not enough to assume I have read and experienced widely, and I cannot leave it up to the reader to recognise influences and hear the shaping of the work of others. It is infuriating.

This became clear to me only recently. I had always just assumed I was a hard slogger when it came to writing and it wouldn’t come easily. But then I started running Coffeecourses and keeping everything in a running ‘syllabus’ document. 11,000 words and counting, no problem. I was free to write as I wanted to and it came easily. Then it occurred to me that this blog passed 50,000 words a while back. Same thing. It’s only when I am forced to write in conventional academic style that I come unstuck. When I cannot just create.

So perhaps this muso bent of mine has me at a disadvantage. I can’t be alone in recognising this as madness, though. Why can’t academic writing be like composing? Why does a fundamentally creative act have to be so stifled by convention? What would happen if we just stopped referencing Someone (2011: 16) every five lines and just relied on our informed and educated sense of academic aesthetic to identify a well-informed, well-constructed work?

On being a bad academic.


I really wasn’t planning to post twice today, or even once frankly, but on the back of my previous post I couldn’t resist. I’m also a terrible academic.

On paper I’m not. I publish stuff and go to conferences and all the proper things. But in terms of how I think and what I value, I’m a bad academic. I do and think a lot of naughty stuff. Like write this blog, for instance.

Here’s the bottom line. What I care about is people doing awesome stuff and telling people about it. Trialling cool new ways to teach? Awesome. Fiddling with soil or chemicals or genes to work out a better way to do things? Awesome. Working out what a 17th century writer or painter was on about? Awesome. But truly, I care not two hoots about the methodology you used to get there. I don’t care if your work is peer-reviewed or considered ‘rigorous’ or published in whatever journal. I don’t care how big your grant was or how hard it was to get. Obviously if you are a scam artist or doing stuff that’s spurious or unethical I’m not going to respect your work, but generally, I just don’t value any of the traditional measures of ‘good research’. I value action and brilliant thinking and creativity, and I value communicating this in immediate and engaging ways.

Here’s some other things I don’t care about. APA. Frameworks. Proper academic language. Word counts. Jargon. Publishing records. Journal rankings. Impact factors. And significantly, people who have built expensive keynoting careers on all of the above without ever actually doing anything awesome at all. We have created this epic monolith of quantifying and accountability measures for an art that is completely subjective and it’s nuts. These measures have resulted in a culture of meeting them at whatever cost and it obscures the fundamental point of what we’re trying to do – do awesome stuff and tell people about it.

So yeah. I’m a bad academic. I write things like ‘so yeah’ and write posts about having no respect for the traditional systems of academia. I tell people they should use Spongebob Squarepants memes in their theses and stop writing papers. And I think there should be more of it. We need more bad academics. We need people to question this stuff and start sneaking some crazy into this system of ours. If enough of us get out our inner nutter we might eventually get to a system where we value what counts in research, rather than counting the value.


PS I hope you have noticed I have started to use pictures. I used to also not care about pictures but I’m trying :).

Memebase your thesis.

Just a quick aside and 5-minute challenge on the back of reading this article about effective dissemination of research. Blyth makes the excellent point: ‘Turn it into things people can understand, let go of the academese, and people will engage’.

To which point – I think there’s value in using interweb zeitgeist to communicate your research in ways that are much more fun, approachable and grokable (is that a word? Is now) than a rambling paper full of polysyllabic guff. Why not memebase your research questions, or knock up a ‘What I Do’ poster instead of a conclusion? Turn your case study methodology into a ’15 amazing’ Twistedsifter post. Chuck Norris your validity analysis. Aside from being amusing, it’s an excellent exercise in brevity and isolating the key features of your research, and an excellent way not to take yourself so seriously. Why not take 5 minutes and give it a shot?

Want to play? Memebase your research question/s, and tag them #thesismeme. Why not. I did.

Counting your eggs after the chickens become nuggets

Last night I helped my husband document his ERA submission. He’s in a creative arts discipline, and in 2012, for the first time, the ERA will count ‘non-traditional outputs’ as valid research outputs – in this case, concerts/performances, CDs, compositions and the like. It’s fabulous that non-traditional research work is finally seen as valid – although, in true government style, they have now demanded 5 years (2005-2010) of outputs be documented ASAP. Fun. Steve had over 20 outputs for those years.

The exercise, though, has me thinking. In real terms, 20+ outputs in 5 years is well above that of many academics, but until the ERA made their decision this year, as far as quantifiable research has been concerned Steve has sat on his hands doing absolutely nothing, because he’s never published a paper. And in an industry that focuses on being ‘research active’, that’s an issue. It might not have come to anything here, but if he had worked at USyd, things might have been very different.

This is the problem with bringing new rules in retrospectively. USyd (details here if you haven’t come across the story) and other institutions are making decisions that directly impact on staff employment, based on arbitrary quantitative definitions of what ‘research active’ is. ERA widening their criteria for accepted publications is meaningless to the person who lost their job several years earlier under the old criteria, no matter how far back ERA’s new decision extends. Suddenly deciding to accept something you’ve previously ignored is useful for the future, but universities are making employment decisions based on the past. I’d hazard a guess that at least some of those 100 axed from USyd were research active, just not in a form recognised by either ERA or their institution, and if ERA decides in 5 years’ time to accept what they’ve done as valid output, no retrospective application of this is going to reinstate their employment.

Don’t get me wrong, I am not aiming to excuse those who are in academic positions but just don’t do research. What I am concerned about is those of us who engage in non-traditional communication of our research – a retrospective decision might not be enough to save us if the horse has already bolted.

Punking practice-led research

It has frustrated me for a rather long time now that, in general, research is about writing about doing stuff, rather than actually doing stuff. Obviously things are done that generate data, but the focus is generally not on building a ‘product’ as the output itself – you can build something, but only the writing about it is counted as research output. This frustration came to a head recently as I just could not justify fitting what I want to do with the University of Awesome into a traditional 50K word model. I’ve always been about ‘doing things’ and can’t reconcile this with traditional research output, however open, fluid and web-based that is.

Enter the practice-led research model. In HDR terms this currently manifests itself as PhDs/Masters in Creative Practice or similar, and is offered almost exclusively in creative arts disciplines – art, design, music, theatre, creative writing etc. The model focuses on ‘researcher as practitioner’ and considers the creation of a ‘product’ the majority of the research output. For HDR purposes an accompanying exegesis is required, but this is effectively a reflection on the building process and relevant issues. It’s an excellent model that focuses on ‘doing’ – building and creating things – but until recently has been a bit of a sideline model of research (this year is the first year ERA has acknowledged the products of practice-led research as ‘countable’ output).

But. It is COMPLETE MADNESS that creative arts seem to be the only disciplines in which this is the norm. On doing a quick lit trawl, it seems that there is virtually no precedent for products themselves to be considered research output in other disciplines. Everywhere else, we focus on the ‘writing about’ being the research output. Any form of doing isn’t part of the word count and isn’t published. It’s indicative of an academic culture of favouring observation over action (which tbh doesn’t win us any fans outside of academia). It’s an issue in HDR and it’s an issue in academic publishing, and it needs to change.

So – I’m calling punk. Practice-led research should be the norm, not the exception, in every discipline. Ditching word count and valuing creation gives us all sorts of possibilities to play around with. It’s where I’ve been going with the University of Awesome – a change in the game of edu research. It’s not about collecting the data any more – it’s about building it.