Sunday, November 21, 2010

Autumn Leaves, Rhubarb, and Student Attention Span

         On most topics, Rose, a former next door neighbor of mine, was sweet and wonderful, a feisty, diminutive old lady who would leave a grocery bag full of rhubarb hanging from our back fence every week or so throughout June and much of July. (Rose and her husband had a massive bed of the stuff – they didn’t even like it much, so we got it all, and bunches of tomatoes and squash to boot.)
        But every fall, leaves brought out a very different side of Rose. Our massive oak and maple trees would provide a multi-colored blanket neatly covering several back yards. Ours were not the only large trees in the neighborhood, but Rose’s property was different: she had only one small ornamental tree within reach of her back porch. More than once we woke up to the sound of Rose raking up those big yellow maple leaves and throwing big piles of them over our fence. She was quite happy to tell us exactly what she thought of those leaf-spewing behemoths, and tried hard to convince us that we should cut them down.
        My wife and I, of course, were quite proud of those trees, and couldn’t imagine anything crazier than killing off two living things much older than we were, which contributed shade and nesting sites, not to mention carbon-dioxide scrubbing and water retention. It was the height of silliness. Also, being the “young moderns” we considered ourselves to be, we knew the law: leaves are the responsibility of the person who owns the property on which they fall, regardless of how, and from where, they came. We did occasionally help her rake, but we were certainly not swayed.
        My wife and I have moved, and aged, and Rose and her dear husband are no longer “with us.” After spending the last three weekends cleaning up yard trash and dealing with other people’s leaves, I’m just a little more sympathetic to her problem. But I’m trying to hold onto my previous slant, even as I hang up the rakes and break out the ibuprofen.
       A cover article in this Sunday’s New York Times (Growing Up Digital, Wired for Distraction, Sunday, Nov. 21, 2010) is yet another story on tech-savvy young people whose lives are seemingly one text and hyperlink away from academic inattention and failure. “Several recent studies show that young people tend to use home computers for entertainment, not learning, and that this can hurt school performance, particularly in low-income families…Research also shows that students often juggle homework and entertainment…using the Internet, watching TV, or using some other form of media either ‘most’ (31%) or ‘some’ (25%) of the time that they are doing homework…” (p. 20, print version). I guess it isn’t terribly surprising that many teachers are just a bit reluctant to open the floodgates – to provide flexible use of student computers in the classroom, or worse, allow students to use their cell phones and other personal gadgets. It would be a little like giving every student their own TV on which they could watch anything, right there in the classroom. Only, of course, this is worse, since current tools also provide them with a means to engage with anyone, anywhere, on anything. And they do…
        The article is also about a school embracing the idea that engaging students means leveraging the same technologies they use. But the end of the article describes an English teacher there who has finally resorted to having students read aloud in class. This upside-down approach (read in class, engage outside of class) is this teacher’s attempt to counter her inability to induce students to read at all. Although I’m not quite Rose in this instance (I actually use the same tools the students in the story use), there are times when I find myself shaking my fist at the “stand of tall trees next door,” just as the English teacher in the story shakes hers at these tools. Pursuits which require extended time and attention, and the products which reflect them, seem to be disappearing, and there are lots of folks who point to the tools themselves – “slates,” smart phones, social networking sites, even just plain old hyperlinked Web delivery – as the culprits.
        Are we, like Rose, just old geezers whining about change and inconvenience? There are a few things missing from this discussion, and I’ll mention two here.
Chickens vs. Eggs
        There is a good explanation for why young people have sorted these technologies out as entertainment platforms, even as much of the rest of the world plunges into their use for productivity, commerce, and learning. Most kids, of course, in the absence of other forces, will naturally look for the entertainment value in anything. After all, if they didn’t, they’d be adults. Teachers, for a variety of reasons (some good, some perhaps less so), have not exactly rushed headlong towards embracing these tools for their own personal use. As a result, they can’t model effective use of these tools for their students, and, more importantly, have little interest in requiring such use from their students. It’s not surprising, then, that, if given access, students use the technology in school for what they always use it for elsewhere.
        Although there are implications for us here, this, of course, does not directly address what we should do to change things, or why…
Who’s Distracted, Really?
       Interestingly, in the same edition of the Times, in her weekly Magazine column, “The Medium,” Virginia Heffernan makes the case that, in fact, the whole issue of short attention span and distraction is a myth. She contends that we, as humans, attend to that which we view as important. The ability to stay focused on something isn’t a product of a cultural (or, by extension, a technological) context. It’s much more deeply embedded than that. Hence, if people (or students) are distracted, it’s for good reasons, or reasons of boredom.
        For the short term, engagement can be enhanced by a gadget or sexy delivery method, but such engagement will have a very short shelf life, and will not produce the same results that true engagement in the underlying content or goals would. That is, we should not expect a technology tool itself to tip the balance towards engaged learning. But that works both ways – we also cannot indict our technology tools for distracting students from the interest and engagement of an assignment either. Yes, their capabilities can be distracting, but following a distraction implies more than an avenue of escape – it also implies the need to escape in the first place.
        And that is the key to how to dig ourselves out of this conundrum. Technology tools have the ability to support our students in doing things they can’t do without them – connect, create, share, and construct in completely new ways. That is the reason why these tools are so powerful in the workplace, and, not incidentally, why kids find them so entertaining. But we cannot simply decide to credit, or blame, these tools for providing engagement or distraction. The topic, activity, and our personal involvement in it as educators and advocates must provide that. The real proof of engagement comes from making an assignment one that a student is interested in doing.
So, yes, these “trees” will impose an obligation on us to clean up after their excesses. They will not take care of themselves, nor will they induce our students into doing so. But we will not be served by simply “cutting them down,” either. If we do so, we may have produced a leafless fall, but the rest of the seasons will be blanched and dry.
        Here's to you, Rose. You’re still wrong, but I do miss those rhubarb pies.

Friday, November 12, 2010

Implied Pedagogy Part 2: To a hammer...

Scenario One:
It wasn’t all that long ago that I finally got around to getting a [new cool tool]. It really was a revelation. I purchased it to replace something I’d been using for quite a while, but the expanded capabilities it represented had a profound influence on me in two ways: it greatly expanded and strengthened the main purpose of the original technology. But, more importantly, it began to reveal a myriad of ways in which its capabilities could be used for behaviors and experiences of which I hadn’t even thought…

Scenario Two:
It wasn’t all that long ago that I finally got around to getting a [new cool tool]. It was a little constricting when compared to what I normally used, but I was willing to forgive those limitations due to some important advantages. But, over time, I found it was slowly impacting my behavior in two ways. I was using the new device much more often – it was beginning to completely replace my normal device. But even more important, I was tending to abandon my attention to and interest in the sorts of things my previous device easily supported, but the new one didn’t. It was, in fact, affecting how much I attended to things I knew to be important…

        The “new cool tool” in each of the above scenarios is, in fact, the same device – a smart phone. The difference in outcomes, of course, is in what other technology the device tended to replace. In the first instance, it replaced an ordinary cell phone. In the second case, it was displacing a computer.
       An ordinary phone is actually quite powerful, allowing its user to connect, in real time, to almost anyone with a similar device anywhere in the world. But a smart phone brings with it a huge collection of bonus capabilities. Connections to other people can be through voice, text, image, even video, and delivered in real time or in formats consumable at any other time. Besides connections to people, it provides access to masses of information, delivered in easily consumable and easy-to-manage pieces through simple and intuitive applications. All of this from a device that fits into your pocket, and works almost anywhere in the world.
       Of course, with the exception of the “fits into your pocket” part, a computer can do all of that as well. What a computer lacks in portability and ease of use, it gains in quality of delivery, increased versatility, more powerful user interfaces, and simple real estate. That “real estate” isn’t just the size of the screen (though that’s very important as well). It’s the scope and size of the things a computer can access. A smart phone’s apps generally reside on the device, helping to slice up the outside world into pieces the small screen and limited processing power can digest. A computer’s very complex and versatile operating system (and equally powerful browser) allows it to support and deliver a mass of capabilities living elsewhere on the so-called cloud – from office tools to content and learning management – without any help from an installed application, and without any need to reduce their size and complexity. The “easily consumable and easy-to-manage pieces” of smart phone information are, in fact, a restriction, one which profoundly impacts the behaviors and expectations of its users, and the possible outcomes from its use.
        We’re ready to look at what all this looks like to the learner, educator, and education technology coordinator. A regular theme of mine is that the selection of a technology can have profound implications for how we teach (pedagogy), as well as why we teach (hoped-for outcomes). Previously we looked at human behaviors (“doing vs. watching”), and compared devices to those behaviors. Since a smart phone tends to replace technologies we already use, we need to measure how it changes existing behaviors: how it impacts the manner in which a student interacts with the learning process, and how it impacts the scope and sequence of a teacher’s instructional practice. This discussion could also be applied to any device running a cell phone operating system, including personal digital assistants (PDAs – iPods are an example) and “slate” devices such as the iPad.
       To make our analysis somewhat better embedded in our instructional interests, we’ll select an arbitrary assignment: a critical analysis of an online resource, in this case a YouTube video.
       Since an ordinary cell phone wouldn’t actually support such an activity, replacing it with a smart phone (or similar device) would immediately open up a new world of possibilities – students would be able to view the video, read comments made about it, and access support materials relating to the content of the video, alone, and on their own time. In addition, the smart phone would provide a platform through which students could text remarks about the video directly to their peers, as well as the teacher. They could even contribute these remarks to a thread hosted online through any of a dozen social networking platforms, thereby making the assignment more collaborative. This is the “…myriad of ways in which the technology could be used for [new] behaviors and experiences…”
       Now, let’s see what happens to this activity when the smart phone is used to replace a computer. As you might guess, since the computer can, in fact, do everything the smart phone can do and more, the impact in this case is restrictive. Watching a video on a very small screen limits the detail and impact that a computer screen or larger display might deliver (though an iPad-like device would improve that). Computer-savvy students would surely miss the ability to read comments and reference materials in real time as the video played. But the most important restriction would be in the mode and manner in which the student actually did his analysis. With no traditional keyboard and no access to true word processing, the writing process native to a smart device is “Twitter-friendly,” encouraging small amounts of text with no formatting. Writing a several-page analysis of the video on a smart phone (even an iPad) would be unthinkable. The process of collaborating between peers would be similarly limited.
       Of course, our mistake is in assuming that “…the smart phone is used to replace a computer.” It can’t, so, for this assignment, we would be wrong in selecting this technology. But the larger problem is well illustrated by Scenario Two above. When we purchase a device, or acquire a technology for classroom use, we spend hours trying to figure out how to induce it to do what we can already do elsewhere. In this case, the device really isn’t up to the task, and our increased use results in a change in the way in which we consume information, and even more important, how we communicate information to others. It’s the old adage, “To a hammer, everything looks like a nail.” To a smart phone, everything looks like a Tweet.
       In the world of consumer technologies, this is unfortunate, but otherwise probably not that interesting. For a social studies teacher teaching the subtleties of human thinking, or a language arts teacher teaching the entire range of human expression, the presence and overuse of this technology puts yet another barrier between them and their instructional goals. There are dozens of appropriate applications for such devices, and the fact that they are becoming ubiquitous is an exciting prospect for teachers who want to encourage their students to be connected and interactive with the world of peers, experts and information, at any time and from anywhere. If the devices are supplied by students, super – you’ve leveraged new capabilities you didn’t have before. But more likely the school will have to supply them. I’m already hearing from school tech coordinators that they intend to stop buying computers and focus on iPads. Before running into the arms of a very seductive new technology, you should look long and hard at the sorts of things you want your students to do and learn, and pick the tool best suited to as many of them as you can.
       It may very well be that these technologies will expand and improve, changing this discussion. But we already have devices which cover our needs as educators to support large, in-depth, complex and subtle learning activities for our students. Whether the presence of small, low-power devices has a positive impact on educational practice will depend on how, for what – and most importantly, in place of what – we choose to use them. Our enthusiasm for them should not decide for us what and how we want our students to learn.

Monday, November 1, 2010

Doers and Watchers: A Tool's Implied Pedagogy

Are you a doer, or a watcher?
A doer creates what a watcher consumes. We all are both at various times of any day, but one could argue that, in a specific context, most people are primarily one or the other. Since there are a lot more television watchers than actors, the presumption is there are a lot more watchers in that context.
Are your students doers, or watchers?
      This question is a great deal different from the previous one, influenced by what we might infer from the word "student." Here's what Dictionary.com says about that word:
stu-dent. [stood-nt, styood-] - noun
  1. a person formally engaged in learning, esp. one enrolled in a school or college; pupil: a student at Yale.
  2. any person who studies, investigates, or examines thoughtfully: a student of human nature.
The first definition points to a state of being (formal or informal), the second to a behavior. But even the first uses the word "engaged," implying that being a student is, primarily, an activity requiring one's conscious participation.
               That’s actually different from what one might guess, since most people (even many teachers) presume that a teacher is the doer, and students watch. That’s the traditional “lecture” instructional paradigm. But current research on learning indicates that knowledge is constructed by a student, rather than induced by a teacher. Research in effective technology use and integration into instruction goes further, pointing to student-directed work in knowledge construction, especially in the case of higher-order learning: "depth of knowledge 4," or "synthesis" or “evaluation” from Bloom's Taxonomy (see LoTi, ACOT2).
From this perspective, the act of teaching is the act of providing the tools, materials and environment whereby students can successfully engage, interactively participate in, and direct the learning process.
       So, we're back to the question.
Are your students doers, or watchers?

...and the answer should be that, if they're truly students, they must be doers. That isn't to say that watching must never happen in the classroom, but if it does, it should be aimed at lower-order goals, or as a preparation for doing.

Are your educational technology purchase decisions aimed at doing, or watching?

       Every education technology purchase carries with it an implied pedagogy, and sorting that out has never been more complicated than it is in the digital age. Fifty years ago, when television first became one of the available technologies for the classroom, the implied pedagogy was “watching,” and many teachers were upset that we'd be turning an entire generation of students into passive consumers. But no consistent and measurable negative impact was ever found. One might speculate that, since the primary instructional paradigm at the time was lecture, TV just replaced one "watching" context with another. The “TV in the classroom” controversy simply ran out of steam.
       But that's quite different from today. Students at MIT -- one of the best universities in the country (and, not incidentally, one of the most digital) -- spend, on average, over 50 hours a week engaged in digital media (see Digital Nation, a PBS FrontLine special). This media is interactive: email, Skype, Facebook, texting, Twitter, etc. Clearly, when such students are left to their own devices (pun intended), they are usually doers. So when we try to work out how best to allocate limited educational resources and tech purchase budgets, we’re not doing it in the same context as the teachers of 50 years ago. The selection of education technologies today is taking place against a backdrop of interactive "doing" by almost every young person, as soon as their school day ends. The expectation for engagement, and the social and intellectual presence of a student in such engagement, makes the selection of any classroom technology very different from fifty years ago. We're no longer competing with the lecture, we're competing with Facebook.
       We’ll now look at the underlying implications of "watching" vs. "doing" for several popular categories of instructional technology, to see what their implied pedagogy actually is.

Classroom Response Systems.
       "Clickers" are all the rage. They make assessment fun. They give instant feedback, which can provide direction to instruction. They are very engaging for students (at least for now, while they're still new).
       For our discussion, they're a really useful illustration of what we mean by students "doing" the business of learning. Assessments, whether delivered by paper or classroom response systems, do ask students to do something. But it is impossible to avoid the implications of response systems -- students do not inherently build knowledge interactively through any assessment tool. They do not control the process, and usually interact with the content in a teacher-directed manner.
"Smart" Classroom Tools
      
These tools are associated with digital projection systems and interactive whiteboards, as well as hand-held devices such as the Smart Slate. These systems differ a great deal from classroom response systems in that their effectiveness is in direct student manipulation. Like classroom response systems, these tools can be very effective in producing engaging and interactive activities for students in a classroom setting.
       However, once again, all students in the class will usually be doing the same thing. As a matter of fact, even when students are interacting directly with the technology, the number doing so will be small (usually one, often zero when a teacher uses it exclusively as a presentation tool), and all others will be truly watching.
       Before you conclude that I am against such technologies, let me qualify. As any student of Norman Webb and Benjamin Bloom will tell you, there are important learning goals associated with each of the levels they describe, even the lowest ones, with activities (some of which are just watching) appropriate for each. In addition, a great teacher can very effectively use any tool to encourage a wide array of instructional approaches, just as they can turn an ordinary chalkboard into a student-driven knowledge construction tool. But in current instructional practice, higher-order thinking and learning are usually the neglected goals. Not incidentally, they’re also the ones which benefit the most from student-directed, socially engaging learning activities. So we need to make sure that we deliberately provide technologies which inherently support these higher goals (and, not incidentally, reflect the practices students are using outside of school). The implied pedagogy of the above tools means that they will not, in themselves, satisfy higher-order learning goals.
       In the 21st Century, where information and interactivity are delivered in large part over digital networks, that usually means an individual computing device. There are dozens of ways a classroom can provide such devices to students: PDAs/iPods, iPads and eReaders, netbooks/laptops, classroom workstations/terminals, even smart phones. All have advantages and disadvantages (a topic for a future blog entry).
       So when you map out how your classroom, your school, or your district supports and purchases technologies, ask yourself…
Are at least some of your educational technology purchase decisions aimed at "doing"?