Second verse, same as the first

So the PhD is a thing again. I’d put it on suspension for a year while I had a kind of mental sabbatical, but now I’m back at it again with the white vans (oh god please stop with the outdated meme references).

The essence of my PhD is unchanged, which for those of you who haven’t been playing along since back in the day when I started a masters is ‘ok how do we do some cool stuff while everyone around us is simultaneously banging on about innovation and not giving anyone any funding or support?’. But out of necessity the structure and focus have changed, so I thought I’d tell you some stories about that.

The key problem I have is twofold: a) the work I do and the sphere I work in mean that my stuff is very much of a time and prone to becoming outdated very quickly, and b) I am angling for what feels like the world’s longest candidature – if I get it in under 10 years I’ll be happy; current trajectory is 8ish years. When I went back to what I’d written and the projects I was doing when I first started out, it was kind of like looking back on photos of what you were wearing in high school, or finding an old journal full of teen emo poetry. There was no way I could submit that as some kind of endorsement of my current thinking.

Since starting again from scratch was approximately as appealing as dental surgery, my solution is to approach my PhD as a kind of longitudinal autoethnography in the form of project case studies. Longitudinal because I’m taking a freaking long time to do this thing, auto because the projects I’m writing about are the ones I’m involved in and ethnography because that’s how the case studies will tell a story. A story of how grassroots innovation happened in the changing climate of higher education and what we might be able to learn, or not, in terms of how to actually do innovative stuff, from each of the case studies.

You’ll all be surprised to learn the structure I have pitched is unconventional. While the thesis will be topped and tailed with normal bits like a lit review, the body of the thesis will be a modular situation, made up of ten tripartite case study sections. Tripartite because each will have an article describing the case study (some may be published, some not, and you may have already read or seen me present on some of them), an accompanying exegesis that positions the case study in the overarching narrative – why this type of project with this type of thinking happened at this particular point in time, and what impact may have been had – and then the project artefact itself. Since all of these projects involve the creation of a site – Moodle Dailies, Coffeecourses etc etc – I feel like it’s essential to include the site within the work, much like a creative practice doctorate includes the creative product as part of the work. We – educators, ed devs/learning designers/academic developers/instructional designers/whatever you want to call our sort of people, researchers – frequently produce creative pedagogical output that is as valid as creative arts and design work and should be acknowledged as such. So I suppose it’s a portfolio of sorts, as well as an unconventional thesis.

At any rate, watch this space. Could be fun, could be terrifying. But importantly, could be a thing.

Tomatoes, timers and two-day papers


I wrote a post a few weeks ago about my frustrations with the academic writing process. Comments from Chris and M-H on that post have had my mind ticking over for the last while on it – that we don’t have to engage in academic writing the way we’re taught to, and that the Pomodoro and Shut Up and Write techniques are about giving you the focus to write freely.

Now, don’t get me wrong, my conceptual frustrations with academic writing still stand. But, because this is my line of work and I do sometimes have to play by the rules, I needed some way to engage in the process without getting in the way of myself. Enter Pomodoro. I downloaded an app and gave myself 25 minutes to give it a shot – 25 minutes to do nothing but write. No referencing, no stopping to check that a source validated what I was saying, no making sure I had a citation for everything I wanted to say. I didn’t look anything up at all. In short, I composed.

After 25 minutes I had 700 words. So I did it again. A few pomodoros later I’d written 2700 words in a day, which is somewhat unheard of for me. Not a single word of it was a citation or quote. Call me obtuse but the idea that I could just write whatever the hell I wanted was somewhat of a revelation. Obviously one needs to go back and make edits and add citations, but ignoring them completely in the first instance made such a difference to my ability to write. After another session this morning I effectively had the bulk of a paper written, in two days (after editing & referencing it won’t be a two-day paper but the alliteration worked nicely in the title and you get the idea).

Ultimately it’s not about the pomodoro itself – I don’t think it particularly matters that I do things in 25-minute increments with 5-minute breaks. What it has been is a catalyst for my thinking. I have the terrible (or excellent, depending on your viewpoint; I tend to the latter) habit normally of being very good at the ‘ideas and action’ part of research, but when it comes to writing I get so frustrated by the idea that I can’t write what I want in whatever form I want and actual people won’t read the end result anyway that I end up giving up in disgust and not writing anything at all. The stupid red tomato timer has at least highlighted a way out of that for me. It’s not a solution to the endemic problem of academic writing at large, but it’s at least a way to let my Trojan horse keep on rolling.
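If you’d rather not download an app at all, the rhythm itself is trivial to sketch in a few lines of code – here’s a toy Python version (the function name and the layout are entirely my own illustration; the 25/5 defaults are just the classic numbers and nothing depends on them):

```python
# A toy sketch of the pomodoro rhythm: blocks of focused writing
# separated by short breaks, with no break after the final block.

def pomodoro_schedule(sessions, work_min=25, break_min=5):
    """Return (label, minutes) pairs for a writing session."""
    schedule = []
    for i in range(1, sessions + 1):
        schedule.append((f"pomodoro {i}", work_min))
        if i < sessions:
            schedule.append(("break", break_min))
    return schedule

if __name__ == "__main__":
    for label, minutes in pomodoro_schedule(4):
        print(f"{label}: {minutes} min")
```

Four of those back to back is under two hours of actual writing, which puts the word counts above into perspective.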

And yes, I’m writing this on a pomodoro now. 3 minutes to go.

NB: It appears that I have gotten myself into trouble now. After tweeting about starting to do pomodoros, @catspyjamasnz ‘helpfully’ decided August would be pomodoro-a-day month and now I can’t wriggle out of it. Sigh…

Says who?

On the internet, nobody knows you're a dog


I’m going to start with this article, which the ever-prolific @marksmithers tweeted this morning. I’m not even going to touch on the ‘new model of learning’ nonsense; it was this line that struck me:

“While this crowd-based model of expertise cannot substitute for the highly educated scholar’s years of research and careful consideration of a single topic…”

I have two problems with this. One is that if you’re going to advocate for crowdsourced education then just do it and don’t placate the masses with ‘oh but don’t worry universities you still know best’. Perpetuating the ‘proper education’ myth isn’t helping anyone. The other is an interesting and highly pervasive assumption that has bugged me for a long time now. My vaguely-tweeted stream of consciousness as I was reading was probably not clear enough and so et voilà, this post (although thanks to everyone who has already twit-sparred with me on this point).

We have in place in society at large, education particularly and academia specifically, an assumption that knowledge cannot exist of its own accord; it must be verified by others. Which, generally, is fair enough because the world is full of idiots who will happily believe anything sans any sort of critical thinking. However – when this assumption is so pervasive that it manifests itself in systems like peer review and beliefs like ‘Wikipedia is not a reliable source’, we have a problem. It strikes me that several of the component assumptions that contribute to this are completely spurious.

‘knowledge must be reviewed by experts’

Great. We certainly want people who know what they’re talking about to review and critique things. The problem I have with this is that we seem to assume that only ‘subject experts’ from within a particular discipline fit this description. It strikes me as a rather narrow way to think about things and an excellent way to promote insularity. I’m not saying we should abandon it entirely but broadening our definition of ‘expert’ would be entirely beneficial. I also have an issue with the means by which someone comes to be considered an expert, which you can read about here.

‘experts must come from institutions’

So – where does one source these experts? From universities (or, if we’re talking about the Wikipedia issue, encyclopaedia and dictionary companies). The problem with this is that anyone who has worked anywhere, ever, knows that being intelligent, rational and in possession of deep and critical thinking on a subject are not necessarily criteria for employment, even in the upper echelons of prestigious universities. What I’m saying here is that sometimes qualified tradesmen do dodgy, awful work and sometimes your brother-in-law who’s an accountant but a bit handy does an excellent job repairing your fence and saves you a lot of money.

‘experts must be verified by an authority’

Related to the above is the fact that we assume that gaining qualifications indicates you know what you are talking about. In many cases this is true. However, we all know that Ps get degrees, and that all institutions are not created equal, and that some of the smartest people we know don’t have degrees at all. Additionally, I ask – why do we assume that the people at the institution are able to judge someone as competent or knowledgeable? And why do we assume that the metrics that we use to do this are reliable (for instance, it seems misguided to me to assume that because somebody can write an essay and complete an online quiz it follows that they can think critically on a topic and/or have achieved ‘learning’)?

‘crowdsourced knowledge cannot be accurate’

It’s a follow-on from the above – because ‘crowds’ do not contain ‘experts’. What I find stunningly ironic is that the academe routinely shuns Wikipedia for this exact reason but happily endorses peer review, even though crowdsourcing is the exact process by which peer review is conducted. Just because you limit your ‘crowd’ to employed academics does not mean this is not what you are doing. What makes the knowledge of two blind reviewers more accurate or legitimate than two academics editing a Wikipedia article? Substitute the words ‘review panel’ for ‘internet’ in the cartoon above and it is no less relevant. At least on Wikipedia I can pull a named history of edits and contributors. I can’t say the same for the review comments on a paper I’ve submitted.

The next question is, of course, what do I suggest we do about it?

A. Crowd-based non-anonymous reputation mechanisms.

To clarify. Think about the way that you engage in a professional community (or an interest community or any sort of community at all really), and the metrics you personally use to determine whether a person is credible (or ‘an expert’ or whatever). It’s generally a subtle and multi-layered process, which may or may not include degrees held or institutions worked at, but more than likely also includes things like how the person engages online, what aspects of themselves they make public, how they behave, what they publish (including blogs and tweets) and so on. Now imagine that 50 or 1000 or 10,000 people (from all sorts of sectors and disciplines) are all engaging in the same process around the same person. Sure, some of those people will probably be idiots or sycophants (just like in the current world of blind peer review, so we’re not losing out on much here). A lot of them won’t be, and the result will be kind of a critical-mass picture of that person’s credibility. This process, to me, holds more water than a handful of letters or a tenure contract. A public process of reputation allows a really transparent and broad definition of expertise via which knowledge and research can be verified.
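To make that a little more concrete, here’s a deliberately naive sketch in Python of how such a mechanism might aggregate named endorsements. Everything here is my own hypothetical illustration, not an existing system: the function, the toy names, and the damping idea (borrowed from PageRank-style ranking) are all assumptions, and a real mechanism would need recency, context and some resistance to sycophants.

```python
# A naive model of crowd-based, non-anonymous reputation: everyone
# starts with equal standing, then each round your credibility becomes
# a baseline plus a damped sum of your named endorsers' credibility,
# so an endorsement from a credible person counts for more.

def reputation_scores(endorsements, rounds=10, damping=0.5):
    """endorsements: list of (endorser, endorsee) name pairs, all public."""
    people = {name for pair in endorsements for name in pair}
    scores = {name: 1.0 for name in people}
    for _ in range(rounds):
        scores = {
            person: (1 - damping) + damping * sum(
                scores[endorser]
                for endorser, endorsee in endorsements
                if endorsee == person
            )
            for person in people
        }
    return scores

if __name__ == "__main__":
    # Ann is endorsed by two people, one of whom is himself endorsed.
    crowd = [("Bob", "Ann"), ("Cat", "Ann"), ("Cat", "Bob")]
    print(reputation_scores(crowd))
```

The point of the sketch is simply that credibility can emerge from many named, public judgements rather than from two anonymous reviewers – the scores come with a visible trail of who endorsed whom.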

It’s not an easy solution. But the more we insist on ignoring the subtle layers that make up ‘expertise’ and the process of knowledge verification, the more in danger we are of making the academe insular and narrow (and yes, I did just hear many of you say ‘more so? Is that even possible?’).

Notes on words


It’s occurred to me that I have a problem with research and writing. My problem is that my undergraduate training was as a musician – specifically, a composition major. Let me explain.

When you are working as a composer (or as a performer, or as anything other than a musicologist really), the way you conduct and communicate research is vastly different to standard research practice. Your first task, before doing anything else, is to be a sponge. To listen and listen and listen to everything you can, analysing it and pulling it apart to understand its context, construction, execution and so on. This is a continuous thing that never really stops. But then – you create. You start writing (or practising). Everything you’ve absorbed informs and shapes what you create, but you are using it as a foundation for a pure act of creation. You can create whatever you like, regardless of whether anyone else has created something similar before or not, and you are accountable only to your own informed and educated sense of aesthetic. You’re not required to stop every five bars and reference somebody else’s work. You’re not required to append a list of everything you listened to before creating the work to the end of each composition or recital. It is assumed that you have listened and read widely and that this is digested through your own creative processes to produce music. It is up to the listener to recognise influences and hear the stylistic shaping of the work of others.

This is my problem. I still behave like a musician, despite the fact that I now work in education research. Every day I am reading and analysing and digesting everything I can get my hands on. Articles and papers and posts and environments and spaces, nothing is safe. But when it comes time to create I come unstuck. I find it incredibly stifling that I cannot just write, cannot just create. I cannot say something without providing very specific references to somebody who has already said something similar. I am required to stop and reference somebody else’s work and append a list of everything I’ve read to the end of each creation. It is not enough to assume I have read and experienced widely, and I cannot leave it up to the reader to recognise influences and hear the shaping of the work of others. It is infuriating.

This became clear to me only recently. I had always just assumed I was a hard slogger when it came to writing and it wouldn’t come easily. But then, I started running Coffeecourses and keeping everything in a running ‘syllabus’ document. 11,000 words and counting, no problem. I was free to write as I wanted to and it came easily. Then it occurred to me this blog passed 50,000 words a while back. Same thing. It’s only when I am forced to write in conventional academic style that I come unstuck. When I cannot just create.

So perhaps this muso bent of mine has me at a disadvantage. I can’t be alone in recognising this as madness, though. Why can’t academic writing be like composing? Why does a fundamentally creative act have to be so stifled by convention? What would happen if we just stopped referencing Someone (2011: 16) every five lines and just relied on our informed and educated sense of academic aesthetic to identify a well-informed, well-constructed work?

On being a bad academic.


I really wasn’t planning to post twice today, or even once frankly, but on the back of my previous post I couldn’t resist. I’m also a terrible academic.

On paper I’m not. I publish stuff and go to conferences and all the proper things. But in terms of how I think and what I value, I’m a bad academic. I do and think a lot of naughty stuff. Like write this blog, for instance.

Here’s the bottom line. What I care about is people doing awesome stuff and telling people about it. Trialling cool new ways to teach? Awesome. Fiddling with soil or chemicals or genes to work out a better way to do things? Awesome. Working out what a 17th century writer or painter was on about? Awesome. But truly, I care not two hoots about the methodology you used to get there. I don’t care if your work is peer-reviewed or considered ‘rigorous’ or published in whatever journal. I don’t care how big your grant was or how hard it was to get. Obviously if you are a scam artist or doing stuff that’s spurious or unethical I’m not going to respect your work, but generally, I just don’t value any of the traditional measures of ‘good research’. I value action and brilliant thinking and creativity, and I value communicating this in immediate and engaging ways.

Here’s some other things I don’t care about. APA. Frameworks. Proper academic language. Word counts. Jargon. Publishing records. Journal rankings. Impact factors. And significantly, people who have built expensive keynoting careers on all of the above without ever actually doing anything awesome at all. We have created this epic monolith of quantifying and accountability measures for an art that is completely subjective and it’s nuts. These measures have resulted in a culture of meeting them at whatever cost and it obscures the fundamental point of what we’re trying to do – do awesome stuff and tell people about it.

So yeah. I’m a bad academic. I write things like ‘so yeah’ and write posts about having no respect for the traditional systems of academia. I tell people they should use Spongebob Squarepants memes in their theses and stop writing papers. And I think there should be more of it. We need more bad academics. We need people to question this stuff and start sneaking some crazy into this system of ours. If enough of us get out our inner nutter we might eventually get to a system where we value what counts in research, rather than counting the value.


PS I hope you have noticed I have started to use pictures. I used to also not care about pictures but I’m trying :).

Memebase your thesis.

Just a quick aside and 5-minute challenge on the back of reading this article about effective dissemination of research. Blyth makes the excellent point: ‘Turn it into things people can understand, let go of the academese, and people will engage’.

To which point – I think there’s value in using interweb zeitgeist to communicate your research in ways that are much more fun, approachable and grokable (is that a word? Is now) than a rambling paper full of polysyllabic guff. Why not memebase your research questions, or knock up a ‘What I Do’ poster instead of a conclusion? Turn your case study methodology into a ’15 amazing’ Twistedsifter post. Chuck Norris your validity analysis. Aside from being amusing, it’s an excellent exercise in brevity and isolating the key features of your research, and an excellent way not to take yourself so seriously. Why not take 5 minutes and give it a shot?

Want to play? Meme your research question/s, and tag them #thesismeme. Why not. I did.

Counting your eggs after the chickens become nuggets

Last night I helped my husband document his ERA submission. He’s in a creative arts discipline, and for the first time, the 2012 ERA will count ‘non-traditional outputs’ as valid research outputs – in this case, concerts/performances, CDs, compositions and the like. It’s fabulous that non-traditional research work is finally seen as valid – although, in true government style, they have now demanded 5 years (2005-2010) of outputs be documented ASAP. Fun. Steve had over 20 outputs for those years.

The exercise, though, has me thinking. In real terms, 20+ outputs in 5 years is well above that of many academics, but until the ERA made their decision this year, as far as quantifiable research is concerned Steve has been sitting on his hands doing absolutely nothing, because he’s never published a paper. And in an industry that focuses on being ‘research active’, that’s an issue. It might not have come to anything here, but if he worked at USyd, things might have been very different.

This is the problem with bringing new rules in retrospectively. USyd (details here if you haven’t come across the story) and other institutions are making decisions that directly impact on staff employment, based on arbitrary quantitative definitions of what ‘research active’ is. ERA widening their criteria for accepted publications is meaningless to the person who lost their job several years earlier based on old criteria, no matter how retrospectively ERA’s new decision extends. Suddenly deciding to accept something you’ve previously ignored is useful for the future, but universities are making employment decisions based on the past. I’d hazard a guess that at least some of those 100 axed from USyd were research active, just not in a form recognised by either ERA or their institution, and if ERA decides in 5 years’ time to accept what they’ve done as valid output, no retrospective application of this is going to reinstate their employment.

Don’t get me wrong, I am not aiming to excuse those who are on academic positions but just don’t do research. What I am concerned about is those of us who engage in non-traditional communication of our research – a retrospective decision might not be enough to save us if the horse has already bolted.

Punking practice-led research

It has frustrated me for a rather long time now that in general research is about writing about doing stuff, cf actually doing stuff. Obviously things are done that generate data, but the focus is generally not on building a ‘product’ as the output itself – you can build something, but only the writing about it is counted as research output. This frustration came to a head recently as I just could not justify fitting what I want to do with the University of Awesome into a traditional 50K word model. I’ve always been about ‘doing things’ and can’t reconcile this with traditional research output, however open, fluid and web-based that is.

Enter the practice-led research model. In HDR terms this currently manifests itself as PhDs/Masters in Creative Practice or similar, and is offered almost exclusively in creative arts disciplines – art, design, music, theatre, creative writing etc. The model focuses on ‘researcher as practitioner’ and considers the creation of a ‘product’ the majority research output. For HDR purposes an accompanying exegesis is required, but this is effectively a reflection on the building process and relevant issues. It’s an excellent model that focuses on ‘doing’ – building and creating things – but until recently has been a bit of a sideline model of research (this year is the first year ERA has acknowledged the products of practice-led research as ‘countable’ output).

But. It is COMPLETE MADNESS that creative arts seem to be the only disciplines in which this is the norm. On doing a quick lit trawl, it seems that there is virtually no precedent for products themselves to be considered research output in other disciplines. Everywhere else, we focus on the ‘writing about’ being the research output. Any form of doing isn’t part of the word count and isn’t published. It’s indicative of an academic culture of favouring observation over action (which tbh doesn’t win us any fans outside of academia). It’s an issue in HDR and it’s an issue in academic publishing, and it needs to change.

So – I’m calling punk. Practice-led research should be a norm, not an exception in every discipline. Ditching word count and valuing creation gives us all sorts of possibilities to play around with. It’s where I’ve been going with the University of Awesome – a change in the game of edu research. It’s not about collecting the data any more – it’s about building it.

Some of the best learning I’ve done

I graduated the other day. It’s a qualification I’ve been working towards for the last 18 months. Very few teachers or academics achieve this qualification – let me tell you a little about it.

It’s a fairly affordable course. It’s not HECS-supported but after the initial $100 to purchase the courseware it’s only cost me around $13 a month, depending on the US dollar. It’s open to anyone and the application process was very simple.

The course direction itself was entirely up to me. I was able to design my own learning path and outcomes. The workload was entirely up to me also, but I found I was so engaged in the material I willingly spent at least an hour working on it most days. I was able to work on what was most interesting and relevant to me at any given time. This was the same for every other student in the course – it was entirely student-driven and the instructors were students also. There was no syllabus, no framework, no predefined outcomes and no pressure to do things in a certain manner.

The course was completely hands-on. A theoretical component was available if I wanted to engage in it, but this still was required to be backed up by practice. All outcomes were achieved by doing, not writing about doing. Those who only wrote about the material without engaging in practice found their status with other students dropped and they weren’t able to complete the course.

There were no set readings, but whenever I felt I needed to do some research, there was a rich wealth of information available – all written by students, many of whom had become experts in their discipline. All this literature was freely available online and written in accessible language.

There were opportunities for me to work alone or in groups. Some projects allowed me to work with a group of over 20 people to achieve outcomes, sometimes I worked with only one or two others and often I worked alone. All groups were student-assigned and purpose-designed, and communication in groups was always efficient. I also had the opportunity to compete against other students as a way to hone my skills. Collaboration was such a powerful part of this course.

The course cohort was an incredibly diverse range of people, and all of them contributed to my learning as effective teachers. Many of these ‘teachers’ were children, and many again were much older than me. All had come to the course from an incredibly wide range of backgrounds and there was always someone with a different perspective to learn from.

All my work in the course was publicly visible – anyone could track my progress via the course website. I could easily talk with, seek advice from and share successes with people, whether they were taking the course or not. There was always someone to provide support if I needed it.

I learned so much doing the course. I developed a very diverse skillset and learned much about myself as well as the course material. And while I may have finished this course, there are still hundreds of opportunities for postgraduate learning and gaining higher qualifications.

Unfortunately, this isn’t a qualification I can ever put on my CV. Most people tell me it was a complete waste of time. Neither DEEWR nor NSWIT will recognise it as professional development, nor can I use it on a promotion application. None of the skills I’ve developed are recognised as valid. Which is a shame, because the courses that I *can* use for this have few or none of the features I’ve described above.

What is this qualification? Level 85 in World of Warcraft. It’s not a Masters, it’s not a PhD, but it is some of the most valuable learning I’ve ever done and an achievement I’m quite proud of. With any luck, one day the rest of the world will recognise this.

Chasing unicorns – adventures in academic publishing

I’ve recently proposed to the eResearch committee an agenda item on investigating non-traditional online publishing (blogs, social media, self-published eBooks, curation etc etc) as valid research output. And by valid I mean recognised by the powers that dole out the money. I’ve previously written a paper on alternative academic publishing as a conceptual issue, but didn’t address the fact that the government will not recognise these alternative forms either in the ERA or via DEEWR funding ticks – which is, of course, a whole other kettle of fish. It’s an issue I’ve talked about with several people previously (@thesiswhisperer, @gsyoung et al) but haven’t followed through on until now. While I still haven’t decided if I am an idiot for both deciding to address this from an administrative POV and persevere with a traditional committee, it’s at any rate something that needs to be done before we all die in a quagmire of journal articles. The following is my outline of the issue as it currently stands.

Non-traditional outputs

The 2012 ERA has, for the first time, allowed that some disciplines may count ‘non-traditional research outputs’. The allowable types are listed as the following:

  • Original Creative Works;
  • Live Performance of Creative Works;
  • Recorded/Rendered Creative Works; and
  • Curated or Produced Substantial Public Exhibitions and Events.

Being married to a musician, composer and sometimes-academic, I am the first one to say that it is fabulous that creative works are finally being recognised as valid research output (at least, providing that one submits an official ERA form explaining in depth how the work meets the criteria for research…). However, for the purposes of those of us who work in non-creative-arts disciplines, there is a rather big oversight here. The ERA seems to define ‘non-traditional’ in terms of format rather than medium – it recognises non-traditional formats, but not traditional formats (ie text) published in non-traditional media (ie most of the things the internet facilitates – see my aforementioned paper for more elaboration on this). This is the type of recognition that needs to happen in order for, say, blogging, to be recognised as valid research activity. I’m holding out some hope that a precedent of the one will aid a case for the other, but at this point in time the only acceptable form of online text is the closed ‘container’ of text published via traditional publication avenues.

Peer Review

To be acknowledged by both ERA and DEEWR as research output, the output must have gone through ‘expert peer review’ in a standardised form. While I get that external validation is necessary to ensure that nutters don’t go around publishing whatever they like, I have three problems with this. The first is ‘expert’, the second is ‘peer’ and the third is ‘review’. ‘Expert’ is nice in theory but how many times does it actually mean ‘a colleague of the editor/organiser’ or ‘somebody in a roughly similar discipline area’? The notion of ‘peer’ seems to me quite limited since it assumes that only other academics are worthy to undertake review, and the concept of ‘review’ itself can be slanted in infinite ways by the experience, bias, inclination and available time of the reviewer. If we are going to propose that open online forms of text be considered output, we also need to propose that open, crowdsourced peer review (which is standard in these media) be considered to satisfy the peer review condition. This is likely a tougher sell than the point above since we as a culture have an obsession with qualification, ivory towers (and thus no individual accountability) and the textbook concept of ‘expert’.

Academic writing

This, I think, is the most significant issue of the three. Currently, the only form of text that is accepted as research output is academic text – a book, chapter or article written in academic language using an accepted citation format (although, the ERA website does make reference to how things like public policy reports may be counted under the new ‘non-traditional outputs’ allowance). But if we consider that the fundamental point of research is dissemination, that the internet is one of the best ways to facilitate this, and that academic text is one of the least effective forms of online text (these were the central points to my above paper), then we have to start arguing that the concept of academic writing needs to diversify. Because online text often functions more as a conversation than a standalone document, where does that conversation fit into research dissemination? In this regard, we’re verging more into the ‘creative work’ territory but it’s a long bow to draw for an audience that worships at the altar of APA 5th (thanks @pcoutas and @cfellows65536 for the twitsparring on this point). As an addendum it’s interesting to note that APA style guidelines do make provisions for the citation of social media and other online text, but that citation is subsequently not recognised when tracking research impact.


So there are the current issues I see that need to be conquered in the quest for the unicorn of research recognition. Maybe I’m nuts. Taking on conceptual issues is one thing, but actually getting administrative recognition is another entirely. For those of you who feel like some fun reading (!), you can peruse the below: