Something very odd happened when I finally got around to updating my
Facebook profile: suddenly the ads appearing on the right-hand side of my wall
were, shall we say, just a little bit more targeted. The mention of my love of
sailing produced ads for tee-shirts with sailing themes. My use of the word
"education" produced a slew of ads promoting degrees and lesson plans. And, of
course, the politics -- the suggested "likes" and connections
were selected through Facebook's best prediction of what I already believed and
thought. The same has been true of my perusals through the pages of Amazon.com,
where "suggestions" are obviously coming from data gathered from the very few
things I've purchased, and the dozens of things I've examined. (I'm one of those
who uses user reviews there to help evaluate things I really have no interest in
buying.) Netflix tries to do the same, though it's even further off the mark.
I admit to the traditional conspiratorial take on such processes, but I also know that, because I understand how they work, I can avoid the problem simply by avoiding the services. After all, since I use these services to gather data, it is probably unfair to insist that they have no right to gather data from me in return. And since I have never purchased a tee-shirt suggested by Facebook, the mere presence of the ads does not feel like a loss of control.
But there are implications here for how we do the business of education. After all, as education professionals in the world of "data-driven instruction," high-stakes student (and teacher performance) assessment, so-called classroom performance systems, and lots of other data-gathering tools, we are, in fact, encouraging our teachers to produce and analyze such data about their students, and then using that data to drive our interactions with them. Is that a bad thing? No, as long as we know its limits. And, even more important, as long as we know the implications such data-driven activities have for the nature of learning, interacting, and being human.
A brief article in this week's New York Times Magazine (Sept. 19, 2010), written by Microsoft engineer Jaron Lanier, helped bring this home for me just a bit. In celebrating the personal way in which his father taught in public schools, he decried how information access seems to be doing damage to the way in which students invent themselves...
But then, that's what we're encouraging our teachers to do with them, and what they're watching us do to them as they negotiate the world we (and the commercial interests we endorse and embrace) present them online. As Lanier puts it: "We see the embedded philosophy bloom when students assemble papers as mash-ups from online snippets instead of thinking and composing on a blank piece of screen... What is really lost when this happens is the self-invention of a human brain. If students don't learn to think, then no amount of access to information will do them any good" (p. 35).
It is not my intent to make this yet another call for the teaching of critical thinking (though it is that), or an indictment of our over-indulgence in data-driven assessment cycles (though it is that, as well). What we should do, as teachers, is take a very long look at the technology tools we choose, and the ways in which we choose to use them, to see what sorts of pedagogical and learning-level implications they carry.
A simple online computer does not, in itself, come with much implied pedagogy. But Google, of course, does -- it assumes that the user's interests, needs, and understandings can be well predicted by algorithms written by (as Lanier calls them) geeks pushing key words and concepts around. The kinds of questions Google answers can only be basic and informational, the lowest sorts of learning goals on Bloom's taxonomy of learning or Webb's depth-of-knowledge scales. At the secondary or post-secondary level, even at an informational level, Google won't be able to reach out to the best minds on many topics, even if the student asks a well-founded question of it. But unless the results of this work are used to encourage a student to start with a blank screen, it is unlikely that they will reflect much of a change in the student.
Probably the hottest types of educational technologies these days are so-called smart classroom tools. These tools are becoming quite clever in their presentation, the types and quality of the questions they present to students, and the speed with which the answers are processed and presented back to teachers. But what pedagogy do they most often imply? What opportunities do such tools present to allow students to invent themselves?
That is not to say that the questions a classroom performance system asks of a student are not worth asking, or the answers not worth knowing. There are lots of pedagogical goals that are excellently served by such systems. But Arthur C. Clarke once stated that "Any teacher that can be replaced by a machine should be!" This rather callous assessment of bad instruction might be revised somewhat: any pedagogical or learning goal that could best be delivered by a machine should be. Most of the questions a teacher-driven performance system asks could be asked and taught quite successfully in the absence of the teacher. If, however, our instructional goal is for a student to process and present information, opinions, and products drawn onto a blank slate, then a teacher is required, and different tools should be selected.
This sort of examination of goals and implied pedagogy has, at its core, huge implications for instructional delivery and for the selection of appropriate tools. It also has huge implications for the current structure of instructional delivery. If we are selecting tools to place in the hands of teachers simply to support the ways in which they already teach, then we're missing out on huge opportunities to improve efficiency and to make better use of the advantages a teacher brings to a student. If we are selecting tools which simply support a student's ability to mash up learning fragments rearranged for teacher consumption, then we have squandered an enormous opportunity for students to create, and in creating, re-define themselves.