Sunday, December 28, 2008

I've been wanting to write about an open science teaching experiment I've been doing at U. New Mexico that I think has been really successful. I've been thinking about presenting it at a local teaching conference on campus (and I should still do that) and at other venues, but I think blogging about it here will be an effective way of letting others know about it.
What it is in a nutshell
The course is Junior Lab (a modern physics lab course) at the U. New Mexico. We run the course as close to "open science" as we can, with a course wiki on OpenWetWare as the foundation. The students post everything on the wiki: their primary lab notebooks, analysis notes, Matlab code and Excel sheets, and rough and final drafts of formal reports. Further, all of my instructor feedback is posted in the margins of their work. The only written things we don't put on the wiki are letter grades and a few confidential emails. (We also have no mechanism for putting video or audio conversations on the wiki...though that's an interesting and scary idea that just occurred to me.)
Does it work?
I say emphatically: YES! I have just finished teaching the course for the second time and I have been very pleased both times. I've also only received positive feedback about the open science style of the course--including many unsolicited emails from students saying the wiki was very helpful for them. Unfortunately, I only have anecdotal evidence...next fall I want to implement some kind of pre- and post-testing for assessment, as I talked about in yesterday's blog.
What have been some good outcomes?
Good science. When I decided to teach this course on OWW (OpenWetWare), I purposefully didn't put much planning into it (my style of time management). I also decided to just give it a whirl and "be bold," figuring the worst that could happen would probably not be much worse than an average lab course. Thus, I didn't really imagine all of the wonderful things that would happen as we got started. The thing I have enjoyed the most is seeing students read the lab notebooks of other students to get hints for how to set up an experiment, how to do the analysis, etc....and then citing and linking the help they got! I was just so delighted to see them practicing science the way it should be practiced. For some of them this was natural; for others, I could tell they felt like they were cheating or something, because it seemed too easy. I just constantly reminded them that this is how science works, and to keep looking at previous work and to keep citing it.
Open science training. I don't have evidence for this, but I feel like these students will be much more likely to practice open science later in their careers. I suspect they'll be required by at least one future instructor or advisor to go back to paper and pen and "science 1.0" ... and having been through this course, I also suspect they will rebel and lead changes wherever they are.
Much better instructor / student communication. One of the principles from "The One Minute Manager" (a cheesy little book that is very much worth reading) is to give feedback as close to instantaneously as possible. After reading that book, I think it's pretty obvious that early feedback is much more valuable than delayed feedback. But in traditionally run lab courses, where labs are handed in on paper, the feedback is necessarily delayed quite a bit. Having the course in public on the wiki allows me to leave feedback and "grade" any time I have internet access. My goal is to give the feedback very quickly, but to be honest, I didn't do so hot this semester. For maybe the first half of the course I was able to provide written feedback within one week, but attending the New Faculty Workshop in November completely derailed me, and I wasn't happy with sometimes being two to four weeks behind. Fortunately, I think the feedback is much more important early on in the course. Even with my failures, I still think this was a very positive aspect of the course compared to the paper alternative.
What have been some of the challenges?
It takes a lot of time. With 14 students in the lab, I'd say I spent at least 10 hours / week (a lot more at certain times) providing written feedback (aka "grading"), plus the 6 hours in lab, and about 3 hours / week preparing low-quality lectures. (We have one hour of lecture on statistical data analysis and other science topics; see the agenda here.) Adding those numbers up gives about 19 hours / week, which doesn't seem like a lot, so maybe I'm estimating incorrectly. In any case, it feels like a lot of time, and the way I teach it is for sure not at all scalable to more students. I feel like the one-on-one interactions (both real and virtual) are a critical aspect of the course--the students are apprentices. This is in contrast to the other course I teach (Conceptual Physics for >100 non-scientists), where I also value the personal interactions with the students, but where they are not as essential as in this course.
Technical difficulties. OpenWetWare is a fantastic resource, and I am enormously grateful for everything the OWW founders and Bill Flanagan have provided to help with this course. It is a very solid foundation and pretty much has everything we need for this course to work very well and to be far superior to the traditional version of a lab course. That said, there are many technical improvements that could add a lot of value. Some of these were brought up at the Lab Notebook brainstorming session we had in October 2007. For example, integrated spreadsheets would be great (but time-intensive to implement). Also, some kind of "auto-save" is necessary...a few students throughout the semester lost data due to glitches, being logged out, or even their own mistakes. As we all know, losing data is a crushing blow, so it needs to be kept very much to a minimum. Two good students lost this battle and resorted to paper and pencil followed by uploading later, which of course is not a desirable outcome. (One idea I have to solve this with MediaWiki is to just install a "Save and Keep Editing" button, which would essentially be a "preview" button that actually saves the entry in the database in addition to keeping the editing window open.) There are all kinds of other things, but I think I'm getting into a broader discussion of electronic lab notebooks in general.
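In the meantime, here is a minimal sketch of a stopgap in that spirit: a little script that periodically pushes a local draft file to a wiki page through MediaWiki's edit API, so a browser glitch can cost at most a few minutes of work. This is not the "Save and Keep Editing" button itself, just my illustration; the endpoint URL, page title, and filename are hypothetical, and a real version would need to log in first.

```python
# A stopgap auto-save sketch (not an actual OWW feature): push a local
# draft file to a MediaWiki page every few minutes via the edit API.
# Assumes api.php is enabled and the session is already authorized.
import time
import requests

API = "https://openwetware.org/api.php"    # assumed endpoint
PAGE = "User:Example/Lab_notebook_draft"   # hypothetical draft page

session = requests.Session()

def fetch_csrf_token():
    """Ask the wiki for an edit (CSRF) token."""
    r = session.get(API, params={
        "action": "query", "meta": "tokens", "type": "csrf", "format": "json",
    })
    r.raise_for_status()
    return r.json()["query"]["tokens"]["csrftoken"]

def save_draft(text):
    """Overwrite the draft page with the current local text."""
    r = session.post(API, data={
        "action": "edit", "title": PAGE, "text": text,
        "summary": "periodic auto-save", "token": fetch_csrf_token(),
        "format": "json",
    })
    r.raise_for_status()

if __name__ == "__main__":
    while True:
        with open("notebook_draft.txt") as f:  # hypothetical local draft
            save_draft(f.read())
        time.sleep(300)  # push every five minutes
```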
What do I want to do differently next year?
I'm very happy with the way the course has worked so far. Most of the ideas I have for changing things next year are not related to the open science aspects of the course (for example, improving or adding new experiments). I would love to hear about any ideas you have, so please post them in the comments here! Also, if you would like to emulate the open science aspects of this course at your institution, I would be really happy to help you get started.
Saturday, December 27, 2008
Learning versus teaching
I sort of introduced this teaching blog with my first post on my science blog. As I said, I've been inspired by Rosie Redfield's research and teaching blogs, and thus I've set mine up so I have separate teaching, research, and "other science thoughts" blogs. I'm still not sure whether that's a good idea, after reading this "how to blog" article from Slate that I saw on A Blog Around the Clock. Well, in any case, it probably doesn't matter too much, and I'm wasting too much time thinking about it.
I'm pretty excited about having a blog about teaching (my field is physics) and connecting with others out there to exchange ideas. So far I've taught two courses at U. New Mexico: Physics 102 (conceptual physics, aka physics without math, aka physics for non-scientists, aka why the sky is blue) and Physics 307L (Junior Lab, aka modern physics lab). These courses are quite different in terms of the student population, what we do in class, and how I spend my time. I have really loved teaching both courses, much more than I would have predicted before I started as an assistant prof. in 2006. I'm looking forward to sharing (hopefully over the next few weeks) many of the things I have tried out that I think have been successful. For example, I've been really happy with the "open science" aspect of Junior Lab (see the course site on OpenWetWare). Another thing I'm looking forward to talking about is a really successful experience collaborating with the course TA via a private wiki for the conceptual physics course.
So far, all of my "evidence" for success in teaching is anecdotal or non-scientific, which leads me to what I thought would be most appropriate to talk about in my first teaching blog. A couple months ago, I was really fortunate to attend the "New Faculty Workshop" for physics and astronomy faculty. I cannot recommend this workshop strongly enough: if you're an assistant professor of physics or astronomy in your first couple years, you absolutely should have your chair nominate you for it. Ironically, in my case, I had to miss a class and got way behind in grading due to the workshop...but it was very much worth it. (By the way: thank you to everyone who contributed their time to leading this workshop! I won't try to list all the names for fear of leaving someone out.)
One of the main things I learned at this workshop is that anecdotal evidence for good teaching techniques isn't a good measure of student learning. For example, I took the end-of-semester student evaluations seriously without ever stopping to think about them scientifically: what do they tell me about student learning or any other goals I have set for the course (besides student happiness)? Furthermore, even the exams aren't really assessing student learning, since I don't have a baseline pre-test. A pre-test is pretty common sense as far as scientific thinking goes...but I don't think it ever really occurred to me before this workshop. The pre-test is one of the main things I want to implement this coming semester (conceptual physics)...choosing the pre-test will be the subject of future blogs, hopefully.
In addition to learning about assessment in physics, I learned a bunch of other stuff. I actually took a lot of notes on my private wiki (back in November), and below I'm going to transfer over a lot of those thoughts. Yeah, I am breaking some rule about long blogging, but I'll throw some headings in there to increase the chance of anyone reading parts of this post. I'll also edit it slightly.
Older Notes from New Faculty Workshop
Assessment
The number one thing I want to implement is real assessment: pre- and post-testing so I can assess the effectiveness of my teaching. There are all sorts of existing standardized tests, one of the more popular being the FCI (Force Concept Inventory), although that may not be relevant for my version of 102. For Junior Lab, I will probably have to use my own free-response questions, since the goals are not easily tested by multiple choice, and there are few enough students that I can assess what they say and try to measure learning.

If you care about learning, then assessment really should be a precursor to any kind of new educational thing you're going to try. Many of the results are counter-intuitive. For example, strategies that have a major, significant effect on learning seem to decrease students' end-of-semester attitudes towards physics. That is to say: the student feedback scores (while important for tenure) are actually not a good indicator of learning. Attitudes are of course an important outcome too, and there is less data on long-term attitudes, but the point is that you can't rely on students' opinions. A corollary of this is that there is no correlation at all between learning outcomes and lecturing "ability." Basically all lecturing is ineffective, whether it's from award-winning lecturers or really crappy ones.

The thing to google here is the "Hake plot" and Eric Mazur at Harvard. (A schematic of the Hake plot accompanied this post.) It takes a minute to understand, so I recommend going to Redish's site about the research. What the plot shows is that no matter the student background or lecturer ability, only about 25% of the possible learning occurs with plain-old lecture. There were a number of stunning things to me from Eric Mazur's talk (which in addition to being very informative was very entertaining as well). I was really surprised when I saw this plot, for two reasons. First, I was embarrassed to realize that there is a lot of good research (with real data) out there about how to effectively teach. Second, I was surprised that the data very strongly indicated that any kind of passive lecture is really not very effective. This was fun to see, because I had intuitively already been thinking this.
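For concreteness, the quantity behind Hake-style plots is the "normalized gain": the fraction of the possible pre-to-post improvement a class actually achieves, g = (post - pre) / (100 - pre) for scores in percent. Here's a minimal sketch of computing it from matched pre/post scores (purely illustrative; the example numbers are made up):

```python
# Normalized gain, as used in Hake-style plots: the fraction of the
# *possible* improvement actually achieved. Scores are percentages.
def normalized_gain(pre, post):
    if pre >= 100:
        raise ValueError("pre-test already at ceiling")
    return (post - pre) / (100.0 - pre)

def class_gain(pairs):
    """Class-average gain from matched (pre, post) score pairs."""
    pre_avg = sum(pre for pre, _ in pairs) / len(pairs)
    post_avg = sum(post for _, post in pairs) / len(pairs)
    return normalized_gain(pre_avg, post_avg)

# Hypothetical class: averages move from 40% to 54%, so g = 14/60,
# roughly the "about 25% of the possible learning" for plain lecture.
print(round(class_gain([(35, 50), (45, 58), (40, 54)]), 2))  # 0.23
```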
Some active learning tidbits
It turns out a lot of the things I was already doing in P102 are similar to some of the effective strategies, although perhaps not implemented correctly: for example, JiTT (Just-in-Time Teaching), peer instruction, think-pair-share (I don't remember the definitions of all of these), and interactive lecture demos. I should read more about these, decide whether to implement them "correctly" based on the existing research, and also choose assessments that will really measure the learning.
Setting effective course goals
Someone (I don't remember who) asserted that three goals is pretty much the maximum number a course should have. Actually, I'm not really sure anyone said this directly, but it sticks in my mind. I realized that my list of goals for Junior Lab is way too long--and thus none of the students could possibly remember it while actually working in lab. So, I need to reduce these to three or four clear and measurable goals. Plus, of course, I need to work these into a pre- and post-test for assessment.
Ideas about a Biophysics 101 or "Conceptual Biophysics" course
I am continually asked by faculty here what I think the biophysics curriculum should be at UNM. I have consistently resisted implementing courses willy-nilly, thinking we don't yet have critical mass here, and it's not clear what an effective course to implement would be. My own bias is that our graduate students definitely do not need another required course. But now I finally have an idea for what "biophysics" course should be taught, or at least one that I would like to develop.

It fits with something I've been realizing: having "biophysics" in the department is really no stranger than having "astronomy" (aka astrophysics). Many departments (such as ours) are "Physics and Astronomy," and there's usually (as far as I can tell) no discussion over whether astronomy is really physics. In contrast, there is frequent discussion and / or discomfort over how biophysics fits in our department. But when I step back and think about it, astronomy is really just physics applied to many non-physics things. Consider optical biophysics versus observational astronomy: both are done by physicists who have to know a lot about light, spectroscopy (atomic physics), optics, etc. in order to get the information they want. Anyway, I think this is maybe a valuable parallel so that other faculty in the department don't get so confused about biophysics.

One of the best sessions was taught by Ed Prather from Arizona (I think), who teaches Astronomy 101. He pointed out how they really teach the students physics, and the students love it--they just don't tell them that they're being taught physics. The physics is there so they can understand the astronomy--so they learn physics because they like astronomy. I think there could be a perfect parallel in a "biophysics" course that was all about biology, but really required learning a bunch of physics to understand the biology. I am thinking first at the conceptual level, though it could possibly extend to algebra- or even calculus-based physics I suppose; the audience would start with non-physics majors. I already do a little of this in 102. For example, talking about the applications of fluorescence in biology fits in with the atomic physics, light, etc. topics. Brownian motion is also dominant in biology, and of course, there are all kinds of forces inside cells.

When talking about this with people at dinner, it occurred to me that it would be funny learning about forces first in the cell-biology world, because instead of having to imagine frictionless surfaces, you instead have to get used to "massless" systems with tons of friction. Why would that be any worse than assuming no friction? In fact, one misconception students usually have concerns Newton's first law, because we all grow up in a world where things stop pretty quickly when the external force apparently goes away. Anyway, this is the idea, and it seems completely achievable to me, perhaps even without too much more effort as an evolution of P102. However, I would be very smart to wait until post-tenure to try this.
Resources I don't want to forget about
- Resources that look very promising:
- COMPADRE -- the leader of this is also very receptive to ideas for improvement
- PhET (very nice applets)
- Physlets (and related) -- this would be for more advanced students, as it allows changing the code at various levels (I'm not sure if this is the correct link: http://webphysics.davidson.edu/applets/DownLoad_Files/default.html)
- I really should have a simulation component as much as possible in Junior Lab
I saw an example of the Oregon State physics education system (presented by Kenneth Krane), which was really impressive. Their classrooms had computers for groups of students, and personal whiteboards (you can get the material at Home Depot for like $12 and cut it into little pieces, but I forget the name of it...not called whiteboard, though). They ask the students something like "write down a feature of vector dot products," for example, and then walk around, collect the whiteboards, and bring them to the front. (This would seem to parallel our P102 "brainstorming" sessions.) (Paradigms in Physics wiki at OSU.)
A book I want to read
Somebody recommended a book, "How People Learn"--though checking Amazon, it might be "How the Brain Learns," judging by popularity, or it might be "How People Learn" (2000) from the National Academies Press. I think this book had the recommendation to tell students to pause and write down something that they just learned. I should use something like this in P102 (and maybe even Junior Lab)...all students are going to want to take notes. In this case, I'll tell them to spend time thinking about what they learned, and there (supposedly?) is data that this has an effect on retention.
Interactive lecture demos
I think it was during the interactive lecture demo talk (which was run by two physics profs from different universities) that they said, "The physical world is the authority" (that is: the instructor is not the authority; the actual way the physical world operates is). I like this, and it's the point of doing interactive lecture demos. I think on my first day of P102 I use a quote from Feynman saying that theory without experiment is useless, which is similar.
- Also, I thought at this point (and at other times during the workshop) that it would probably be quite effective if I could figure out a way of having the students "bet" or "invest" on the clicker questions (a toy sketch of what I mean follows below). This is biased by my own love of gambling and investing, of course. But I feel like it could be quite effective...it seems to me that people really change their perspective when they have money riding on it. This relates to a quote Eric Mazur shared from when he first gave his Harvard students the FCI exam (which they did not do so hot on, after scoring very well on his exams)...a frustrated "A" student asked, "Professor Mazur...are we supposed to answer the way you taught us, or the way we really think about the problem???"
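To make the idea concrete, here is a toy sketch of what wager-based clicker scoring could look like. This is purely my own illustration of the idea above, not anything presented at the workshop, and the point values are arbitrary:

```python
# Toy wager-based clicker scoring: each student stakes part of a points
# bankroll on their answer; a correct answer wins the stake, a wrong one
# loses it. Arbitrary numbers, just to illustrate the incentive.
def settle_wager(bankroll, stake, correct):
    stake = min(stake, bankroll)  # can't stake more points than you have
    return bankroll + stake if correct else bankroll - stake

print(settle_wager(100, 20, correct=True))   # 120
print(settle_wager(100, 20, correct=False))  # 80
```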
Peer instruction
We learned a lot about peer instruction throughout the workshop, but my notes are too scattered to copy over for the most part. Peer instruction techniques were at the heart of the Hake plot above (first presented to us by Mazur), with research showing this simple technique significantly improved student learning. Both Mazur and Prather were adamant that you need good peer instruction questions (such that about half the students know the answer to begin with), but I am not convinced. I still believe a question for which 0% of the students know the correct answer can result in improved long-term learning via the peer instruction method. I don't feel like I was shown data showing that long-term learning was only improved by these "50%" questions. While thinking of this, I was also reminded of very cool education research I saw in Science Magazine earlier this year (The Critical Importance of Retrieval for Learning).
Kinesthetic Learning
During both the Oregon State session and the Prather astronomy session, I was very impressed by the power of kinesthetic learning. Unlike some of the other things, this is something in which I haven't dabbled, but which I really want to try to implement. Here's a brief Wikipedia article about kinesthetic learning. The idea is to have the students act out physics concepts that are otherwise challenging to visualize. In Prather's session, he showed how to use several student actors to demonstrate light traveling 50 million light years (or whatever) from a supernova, and when events happen relative to observation due to the finite speed of light. In the OSU session, we tried acting out unit vectors in spherical coordinates with our arms, and I easily saw the benefit of a group of students doing this together. Two ideas that occurred to me for Physics 102: having students act out the photoelectric effect (perhaps even picking people with blue versus red shirts), and having students jump between seats in different rows to model electrons in energy shells.
Context is important to learning
Chandralekha Singh gave a talk about improving the teaching of quantum mechanics. Thankfully I haven't had to teach this course yet--thankfully from both my and the students' perspectives! She had some good general points, though. One was an example that I want to use in my P102 course. I think it's called the Wason selection task, and it's a really great way of demonstrating how important the context of a problem is to the way students will view it. You can read the link...I think the version she used was F, K, 3, 7 (instead of colored cards) and Coke, Beer, 25 years, 16 years. The two framings are logically identical, yet people tend to fail badly on the abstract version and solve the drinking-age version almost effortlessly. I could use this exercise on my first day of P102, combined with the "Traxoline" lecture that Ed Prather gave.
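For reference, the normative answer comes straight from the logic of a conditional rule: "if P then Q" can be violated only by a card with P on one side and not-Q on the other, so only cards showing P or showing not-Q need to be turned over. A minimal sketch using the drinking-age framing (rule: if drinking beer, then 21 or over):

```python
# Wason selection task, drinking-age framing: "if drinking beer, then
# 21 or over." Only a card showing the antecedent (Beer) or a violation
# of the consequent (an age under 21) can falsify the rule.
def must_flip(face):
    if face in ("Beer", "Coke"):       # drink side is visible
        return face == "Beer"          # P visible: the back might violate Q
    return int(face) < 21              # age side visible: not-Q means check back

faces = ["Coke", "Beer", "25", "16"]
print([f for f in faces if must_flip(f)])  # ['Beer', '16']
```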
Modeling, Coaching, Practicing
Kenneth Heller gave a good lecture (the irony of this workshop is that half of it was lectures convincing us that lectures are not an effective learning technique...but only a pseudo-irony, if that's even a word) about viewing learning physics from the perspective of modeling, coaching, and practicing. (This page maybe describes what he said; I'm not sure, I didn't read it.) He drew a good analogy between teaching someone to play golf and teaching someone to solve physics problems. If we were to teach someone to play golf the same way we often teach physics, it'd be something like this: Tiger Woods hits a 6-iron 220 yards, straight and high, in front of his students. He then says, "OK, that's how you do it. Your homework is to do it. Also, figure out how to hit the 2 through 9 irons. On the exam, we'll surprise you with a 3-wood." But the way you would actually teach golf, of course, is to show them how it looks when done well, then have them try it out while you coach them, and then iterate the process. I liked this way of thinking about learning problem solving.