Dropping tablets and other edu-memes

So this article has been doing the rounds today. TL;DR version: One Laptop Per Child dropped a bunch of tablet devices off in an Ethiopian village and left the kids to it to see if they would learn to read. What I find interesting is the extraordinary number of assumptions and cultural memes around education inherent in the project, which to me are symptomatic of endemic thinking generally in education. I thought it was worth unpacking a few of these:

Learning doesn’t happen without teachers

The fact that this project was designed as a rogue experiment illustrates how widespread the notion is that learning is something specific that happens in a school with facilitation by teachers (otherwise, why bother making a point of their absence?). Apparently the concept that kids might learn on their own without input is completely alien and requires an experiment to find out if it can actually happen.

Learning only means certain things

I found it quite telling that the tablets were ‘preloaded [with] alphabet-training games, e-books, movies, cartoons, paintings, and other programs’ and locked down to prevent most kinds of customisation or ‘non-intended use’ (props to the kids for swiftly working out how to hack around this). Not only does this indicate that those running the project felt the need to heavily direct the kids’ potential interactions and learning experiences with the devices, it highlights how ingrained our thinking is around what constitutes learning. Literacy. Reading. Etc. No non-serious games were loaded on the devices, and I’ll hazard a guess no internet connectivity was provided. Obviously literacy is important (although see my point below), but I find the rather narrow definition of ‘learning’ provided for in the design of this project frankly astonishing. And what bothers me is that the results – kids quickly designing their own learning pathways by bypassing restrictions – will probably not prompt anyone to reconsider this. I very much doubt that any repeat (or similar) of this project will see the devices configured or preloaded any differently. The success of this project still appears to be measured solely by the fact that kids started to learn to read, not by the fact that they learned to thwart restrictive software and hack into an OS.

We must fix the broken people

I’ll preface this one by saying I am no sociologist, linguist or expert in cultural studies. However. I find it strange that the idea of education in other cultures generally (and this project specifically) is approached as a deficit model according to western concepts. The students in this project were ‘poor’ and had ‘no access to schooling’. The medium for both determining and teaching literacy appeared to be English. But who are we to assume that other cultures have the same attitudes to and definitions of what constitutes learning, education and literacy? The concept of westerners appearing in another country to ‘fix the natives’ or the ‘poor kids’ by bestowing them with western technology and principles just doesn’t sit well with me. The article uses the quote ‘if they can learn to read, they can read to learn’ – but does this hold true in a non-western linguistic tradition? How is the written word approached in Ethiopian culture?

We must set some goals and measure them

It strikes me that both the expectations and the metrics used to measure outcomes in this project were remarkably shallow. When first delivered, the project founder thought the kids would just ‘play with the boxes’ and expressed surprise when they got the devices out and powered them up. Way to set the bar low. These aren’t cats or babies, these are children with vast cognitive capabilities. Then, after several months, a measure of success was determined to be the fact that kids were singing the alphabet song. The fact that a child can recall and sing a song verbatim is indicative of approximately nothing other than the fact that kids are natural imitators. Why not instead investigate the analytic, problem-solving and creative thinking abilities that were utilised and developed in working around the anti-customisation software as a measure of outcomes? I question both the need to have defined goals at all and the limited nature of the goals and metrics in this project. Why not just stop at dropping off the devices and seeing what happens?

The last paragraph of the article kind of sums these issues up for me:

Giving computers directly to poor kids without any instruction is even more ambitious than OLPC’s earlier pushes. “What can we do for these 100 million kids around the world who don’t go to school?” McNierney said. “Can we give them tools to read and learn—without having to provide schools and teachers and textbooks and all that?”

I’m not convinced that we are going to see any kind of significant changes in education while this kind of thinking still holds. Our unwillingness to redefine the parameters of education is viral – it’s incredibly pervasive on a global scale. And until it starts to change, projects like these aren’t going to yield anything more than superficial results that get keynoted at shiny edtech events.

Little boxes

[Image: ‘Pre Trip Checklist’ by OregonDOT on Flickr, CC BY 2.0]

On my mind lately has been the concept of benchmarking, standardisation and QA. I’ve got to this point through watching several things – benchmarking projects like ACODE and Quality Matters, the issues students have when they are subject to a poor learning experience (props to @UNESupport here for awesome work getting student voices heard), and posts like this one from @markdrechsler on how we negotiate the line between standardisation and customisation in systems like Moodle. What I keep coming up against is the overlap between assuring quality (whatever ‘quality’ means) and promoting innovation, because to my mind there’s not a whole lot of overlap at all.

For those not in an area in which we advertise, the title of this post comes from the soundtrack to our TV ads – little boxes made of ticky tacky, and they all look just the same. Aside from being possibly the world’s most irritating song, it also captures the fundamental premise of benchmarking/QA processes – a series of tick boxes that will ensure all learning experiences look more or less the same. Yet the strength of this is also its weakness – the fact that the ‘outliers’ will fail the rubric. If you have as your rubric some kind of consensus on what makes a ‘good course’, then (theoretically) you can easily identify ‘bad courses’. Now, while I have some fairly significant reservations about the measures we use for ‘good’ vs ‘bad’, there are certainly some truly crap examples of learning design out there, and we absolutely want to identify and get rid of them. Little will kill a love of learning faster than a really poorly designed or taught unit.

But.

If I asked you what truly innovative learning looked like, could you tell me? Could you give me a list of features that innovative learning design should have so I can check them off? What about innovative products or systems? Do we even have a metric by which we can define innovation at all? The answer to all of these should be no (and IMHO half of the problem is when people believe the answer is yes). Sometimes the best, most innovative things that turn up are things that we could never conceptualise – except for the one person who did. Which brings us neatly to the problem – as soon as you apply some kind of universal standardisation measure to insure against bad or stupid, you immediately also rule out the potential for amazing. The car did not get invented by making sure it had all the features of a horse.

Where do we draw the line? I don’t have an answer, but I don’t believe benchmarking and QA are the solution – is it really worth benchmarking ‘good’ at the expense of ‘awesome’? We’re missing the point if we’re aiming for compliance, since all the best, most creative thinking is completely non-compliant. How do we allow for this while keeping the crap to a minimum?

Lame-based learning

[Image: ‘Games’ by Ian D on Flickr, CC BY 2.0]

Just came across this article which, once again, has me despairing about the way people talk about games-based learning. Gates speaks with two fundamental flaws in his thinking (there may have been more, but the second half of the article was auth-walled), which seem fairly common across the board.

“it’s an adjunct to a serious curriculum”

We’re never going to get anywhere with this if we keep assigning it to ‘other’. Anything fun, anything creative, anything outside the box – it’s not serious. We’ll stick it in somewhere because we like that it ‘promotes engagement’ but then we’ll get on with the proper learning in normal ways. It’s this kind of thinking that has driven the gamification trend – ‘we like the engagement games offer but we still want proper learning because games are a bit scary so let’s add some points and badges and make it a thing’. It fundamentally misses the point of why people are engaging in games in the first place.

Games aren’t ‘other’. Games aren’t a layer you can apply to something else. Games *are* proper learning. It just doesn’t look like any other kind of learning we have in our brain’s cultural inbox. Which leads to the next problem:

“Imagine if kids poured their time and passion into a video game that taught them math concepts while they barely noticed, because it was so enjoyable”

They already are, Bill. This is what strikes me as the most fundamental problem with the discussion of games in learning – the idea that games do not contain learning unless you deliberately put it in there. Commercial game designers design for engagement and nothing else, which appears to terrify most people in the business of education – after all, you can’t possibly be learning unless you’re planning and talking about it explicitly. However, the fact that you can’t ‘see’ the learning doesn’t mean it’s not there. If you discard Angry Birds or PvZ or Minecraft or WoW as merely ‘entertainment’ you’re discarding a whole host of rich learning in not just maths but physics, design, literacy, social skills, resilience and too many others to keep listing. This is why IMHO the entire genre of ‘educational games’ needs to die – the design is usually for learning first, and usually in a one-dimensional way (the example given in the article has the objective of ‘manipulating fractions’). And then people wonder why kids aren’t spending hours playing them, setting up Vent servers, building wikis, making machinima and so on – all the hallmarks of ‘engagement’ in commercial games.

Learning in games is messy. It’s implicit and incidental and you can’t control it. It’s the elephant in the room when we talk about games. It’s daunting to think that the people who are producing some of the best learning environments/experiences didn’t actually mean for anyone to learn anything at all (at least not in an outcomes & curriculum sense). I really do wish that people – particularly rather high-profile ones – would consider this before banging on about ‘levels’ and winning being a motivator.

Elephant. Meet room.

A conversation with @steve_collis this morning brought up something that for me is a huge, pervasive issue in edu but something most of us aren’t talking about. We talk a lot about changing the way educators do education, changing the way institutions work, how broken things are from a delivery point of view. I do it all the time. What we almost never talk about is the fact that education (and particularly higher education, which is a non-compulsory fee-based sector) is a consumer industry and our consumers’ (students, but also parents – aka ‘the voting public’) concept of education is a very, very big elephant.

It’s the thing with desks. It’s the thing with teacher versus student. It’s the thing with readings and homework and tests and forums and exams. Steve’s point this morning was around this video, which shows (among other things) an “innovative vision” of future education – kids sitting at desks watching a teacher. The only difference was that they all had sexy bits of touchscreen glass with stuff whizzing around on it. It’s pretty standard fare as far as stuff like this goes.

It’s the thing where any non-traditional models get poor reviews, low attendance or low engagement. It’s the thing where you ask students about higher education and they talk about printing readings and downloading podcasts and quizzes. It’s the thing where you read parents commenting on news sites about the value of high scores and discipline and doing what you’re told and homework. It’s the thing where every single instance of education in media and entertainment involves desks, papers, exams and, above all, study. It’s the thing where schools and universities promote themselves via grades and scores and achievements. It’s nobody’s fault, but it’s a culture and it’s a problem and we’re not talking about it.

So. Choices. We can continue to work in a demand-driven model. We can innovate ruthlessly and damn the customers (anyone remember the backlash when Apple stopped putting FireWire in stuff?). We can ignore it all because thinking about education as a commercial industry is soulless and horrible. Or we can start talking about it and thinking about how we might go about a culture shift from the consumers’ perspective. What we have in our favour is the fact that we know that education could be better, and we have some pretty hardcore and convincing examples of when it is better (MassivelyMinecraft, SCIL/Anarchy in Learning, BIE etc). What we need is a way to sell it. Convincingly and pervasively.

So. Let’s talk. How do we start changing minds? Not of teachers, but of students. Of parents. Of the Herald-commenting public. Of the media. How do we sell them a better picture?

Coffeecourses are go!

A couple of weeks ago I wrote a post on my thought processes around building a completely different PD model. This week, it’s a reality. The site is built and ready to be used. And the part that pleases me most is that you lot can all play too.

To describe what exactly Coffeecourses are, I’ll just quote myself:

‘a kludgy hybrid of Codecademy-style feed-based courses, games-based/task-based learning a la The Moodle Dailies (which is offered as part of the project) and online shopping sites’

It’s basically an anytime, anywhere, anyone model. No workshops that are really lectures in disguise, no ‘click here and do this’, no ‘groupwork’, no requirements. I finally feel like I’m doing something that jibes with my approach to edu rather than being all punk online and then turning around, selling out and giving trad-delivery workshops because it’s what people keep asking for. It was essentially zero-cost to create (the only cost is $20/mo for hosting, paid just to keep the demand on central IT support at nil) and, with the exception of the Moodle Dailies, is completely open – anyone from anywhere can sign up.

If you’ve got a few minutes, take a look – would love feedback:

http://adu.une.edu.au/coffeecourses

You’re all welcome to sign up if one of the courses tickles your fancy (none particularly revolutionary but some were requests from colleagues and seem important ‘small steps’ for those just starting to get their feet wet). The subscription courses all kick off on Monday so if you like your content served a la Codecademy you’ll need to do the Feedburner thing before then, otherwise just wait and watch and play with it afterwards.

Building out-of-the-box PD models – some behind-the-scenes thinking

Most of you are probably aware that part of my job involves running training and professional development for lecturers. For the last few years, just like everywhere else on earth, this has meant running workshops. And for the last few years I have been banging my head against the metaphorical wall, because workshops are very, very broken. I’ve said before that geography and time are very poor criteria for just about anything, and workshops are a case in point. Those of you who’ve ever been involved in giving or taking PD know there are a whole host of other reasons workshops are broken, which I won’t go into because it’s not really the point of this post and I’m probably preaching to the choir anyway. However, the nature of demand-driven systems and status-quo thinking means that workshops are the expected form of PD delivery, and doing anything different is a long, hard sell – which is why we, and everyone else, have adhered to this model for so long. But. One can only put up with it for so long, so, long story short: this year I’ve said ‘enough’. I’m not running workshops. Which opened up a nice array of possibilities of what one might do instead (and which happens to be the answer to the question ‘what would I do as part of the University of Awesome?’).

What I’m exploring is the idea of a kludgy hybrid of Codecademy-style feed-based courses, games-based/task-based learning a la The Moodle Dailies (which is offered as part of the project) and online shopping sites. The working title is ‘Coffeecourses’ – the idea being that, instead of having to get to a workshop in a given place at a given time and then remember everything that was covered, the courses are run as a series of short (c. 10 min) tasks that you can complete anywhere, any time, over a cup of coffee. Tasks are fed out via RSS so whenever there’s a new task it lands in your inbox or feed reader (although anyone is free to grab any or all of the course content in retrospect via the site itself). All of this is being built in self-hosted WordPress – slightly naughty of me, but Moodle just can’t cope with this model of delivery. [EDIT: I’m reminded here of @jimgroom‘s post here and must credit @macalba for being an excellent sysadmin & tolerator of my edupunkness]
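If you’re curious what the subscriber end looks like, here’s a minimal sketch in Python (purely for illustration – the actual build is plain WordPress plus standard RSS, and the feed URL below is hypothetical). It just polls the course feed with the feedparser library and surfaces any tasks it hasn’t seen before:

```python
import feedparser  # third-party library: pip install feedparser

# Hypothetical feed URL, for illustration only; the real Coffeecourses feed may differ.
FEED_URL = "http://adu.une.edu.au/coffeecourses/feed/"

def fetch_new_tasks(seen_ids):
    """Poll the course feed and return (title, link) pairs for unseen tasks."""
    feed = feedparser.parse(FEED_URL)
    new_tasks = []
    for entry in feed.entries:
        # RSS items normally carry a guid/id; fall back to the link if they don't.
        entry_id = entry.get("id", entry.get("link"))
        if entry_id not in seen_ids:
            seen_ids.add(entry_id)
            new_tasks.append((entry.get("title", "Untitled task"), entry.get("link", "")))
    return new_tasks

if __name__ == "__main__":
    seen = set()
    for title, link in fetch_new_tasks(seen):
        print(f"New ~10-minute task: {title} -> {link}")
```

Nobody actually needs to write this, of course – a feed reader or the Feedburner email subscription does exactly the same job – but it shows how simple the delivery mechanism really is.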

The online shopping component fits into the picture via the registration system. For a long time I’ve been fascinated by the idea of a course catalogue run as an online store using a shopping cart. It allows for nice possibilities like crowd-sourced course choosing (‘students who bought this course also bought…’) and bundling (‘purchase these three commonly purchased courses as a bundle…’), easy registration (email, credit card, done), tracking (from both the institution and student end) and so on. Since online shopping is so ubiquitous, it has the added bonus of bucketloads of purpose-developed sites, plugins and add-ons that are all well maintained and properly coded. Now, while this idea has Buckley’s of being adopted by universities any time soon, internal staff PD is an ideal test case for exploring this kind of model. I’ve been using the e-Commerce WP plugin, which, with a bit of hacky fiddling, has worked out nicely. Courses are listed as ‘products’, which staff can add to a cart, complete a simple checkout process to register, then get subscription instructions as a ‘digital download’.
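For what it’s worth, the ‘students who bought this course also bought…’ idea is just co-purchase counting underneath. Whether or not that feature ends up switched on in the plugin, here’s a toy sketch of the logic in Python – the people and course names are made up for illustration, not lifted from the actual catalogue:

```python
from collections import Counter

# Toy registration data; the people and course names are invented for this example.
registrations = {
    "alice": {"Moodle Gradebook Basics", "Turnitin in 10 Minutes"},
    "bob":   {"Moodle Gradebook Basics", "Echo360 Quick Start"},
    "carol": {"Moodle Gradebook Basics", "Turnitin in 10 Minutes", "Echo360 Quick Start"},
    "dave":  {"Moodle Gradebook Basics", "Turnitin in 10 Minutes"},
}

def also_bought(course, top_n=3):
    """Rank the courses most often registered for alongside `course`."""
    counts = Counter()
    for person_courses in registrations.values():
        if course in person_courses:
            counts.update(person_courses - {course})
    return [name for name, _ in counts.most_common(top_n)]

print(also_bought("Moodle Gradebook Basics"))
# -> ['Turnitin in 10 Minutes', 'Echo360 Quick Start']
```

The point isn’t the algorithm – it’s that shopping-cart platforms give you this sort of thing, along with checkout and tracking, more or less out of the box.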

I won’t lie, it’s going to be a hard sell to convince people this model of PD has merit. Certainly it addresses the concerns most people have about getting to, and recalling content from, workshops, but it demands a high level of self-direction, and we are in the somewhat ironic position of being a major distance ed provider where many (most?) staff still do not accept online learning as a model for their own learning. It’s a battle we desperately need to have, though, so I’m willing to jump in and annoy a few people for the sake of really starting to change perceptions on this. In my favour is the fact that my partner in crime, @stuffy65, is offering a webinar-based model of online PD, which still has some of the benefits above without completely blowing everyone’s minds, so it’s a good complementary strand that functions as a conceptual intermediary.

So – feedback. Nuts or sheer genius? What’s this kind of model missing? What’s it doing well? Would you take this kind of course in preference to a F2F workshop if you had the option?

Some of the best learning I’ve done

I graduated the other day. It’s a qualification I’ve been working towards for the last 18 months. Very few teachers or academics achieve this qualification – let me tell you a little about it.

It’s a fairly affordable course. It’s not HECS-supported but after the initial $100 to purchase the courseware it’s only cost me around $13 a month, depending on the US dollar. It’s open to anyone and the application process was very simple.

The course direction itself was entirely up to me. I was able to design my own learning path and outcomes. The workload was entirely up to me also, but I found I was so engaged in the material that I willingly spent at least an hour working on it most days. I was able to work on whatever was most interesting and relevant to me at any given time. This was the same for every other student in the course – it was entirely student-driven and the instructors were students also. There was no syllabus, no framework, no predefined outcomes and no pressure to do things in a certain manner.

The course was completely hands-on. A theoretical component was available if I wanted to engage with it, but even this had to be backed up by practice. All outcomes were achieved by doing, not by writing about doing. Those who only wrote about the material without engaging in practice found their status with other students dropped, and they weren’t able to complete the course.

There were no set readings, but whenever I felt I needed to do some research, there was a rich wealth of information available – all written by students, many of whom had become experts in their discipline. All this literature was freely available online and written in accessible language.

There were opportunities for me to work alone or in groups. Some projects allowed me to work with a group of over 20 people to achieve outcomes, sometimes I worked with only one or two others and often I worked alone. All groups were student-assigned and purpose-designed, and communication in groups was always efficient. I also had the opportunity to compete against other students as a way to hone my skills. Collaboration was such a powerful part of this course.

The course cohort was an incredibly diverse range of people, and all of them contributed to my learning as effective teachers. Many of these ‘teachers’ were children, and many again were much older than me. All had come to the course from an incredibly wide range of backgrounds and there was always someone with a different perspective to learn from.

All my work in the course was publicly visible – anyone could track my progress via the course website. I could easily talk with, seek advice from and share successes with people, whether they were taking the course or not. There was always someone to provide support if I needed it.

I learned so much doing the course. I developed a very diverse skillset and learned much about myself as well as the course material. And while I may have finished this course, there are still hundreds of opportunities for postgraduate learning and gaining higher qualifications.

Unfortunately, this isn’t a qualification I can ever put on my CV. Most people tell me it was a complete waste of time. Neither DEEWR nor NSWIT will recognise it as professional development, nor can I use it on a promotion application. None of the skills I’ve developed are recognised as valid. Which is a shame, because the courses that I *can* use for this have few or none of the features I’ve described above.

What is this qualification? Level 85 in World of Warcraft. It’s not a Masters, it’s not a PhD, but it is some of the most valuable learning I’ve ever done and an achievement I’m quite proud of. With any luck, one day the rest of the world will recognise this.