I'm pretty excited about having a blog about teaching (my field is physics) and connecting with others out there to exchange ideas. So far I've taught two courses at U. New Mexico: Physics 102 (conceptual physics, aka physics without math, aka physics for non-scientists, aka why the sky is blue) and Physics 307L (Junior Lab, aka modern physics lab). These courses are quite different in terms of the student population, what we do in class, and how I spend my time. I have really loved teaching both courses, much more than I think I would have predicted before I started as an assistant prof. in 2006. I'm looking forward to sharing (hopefully over the next few weeks) many of the things I have tried out that I think have been successful. For example, I've been really happy with the "open science" aspect of Junior Lab (see the course site on OpenWetWare). Another thing I'm looking forward to talking about is a really successful experience collaborating with the course TA via private wiki for the conceptual physics course.
So far, all of my "evidence" for success in teaching is anecdotal or non-scientific, which leads me to what I thought would be most appropriate to talk about in my first teaching blog post. A couple months ago, I was really fortunate to attend the "New Faculty Workshop" for physics and astronomy faculty. I cannot recommend this workshop strongly enough: if you're an assistant professor of physics or astronomy in your first couple years, you absolutely should have your chair nominate you for this workshop. Ironically, in my case, I had to miss a class and got way behind in grading due to the workshop...but it was very much worth it. (By the way: Thank you to everyone who contributed their time to leading this workshop! I won't try to list all the names for fear of leaving someone out.)
One of the main things I learned at this workshop is that anecdotal evidence for good teaching techniques isn't a good measure of student learning. For example, I sort of took the end-of-semester student evaluations seriously without ever stopping to think about them scientifically: what do they tell me about student learning or any other goals I have set for the course (besides student happiness)? Furthermore, even the exams aren't really assessing student learning, since I don't have a baseline pre-test. A pre-test is pretty common sense as far as scientific thinking goes...but it never really occurred to me before this workshop, I don't think. The pre-test is one of the main things I want to implement this coming semester (conceptual physics)...choosing the pre-test will be the subject of future posts, hopefully.
In addition to learning about assessment in physics, I learned a bunch of other stuff. I actually took a lot of notes on my private wiki (back in November), and below I'm going to transfer over a lot of those thoughts. Yeah, I'm breaking some rule about overly long blog posts, but I'll throw in some headings to increase the chance of anyone reading parts of this post. I'll also edit it slightly.
Older Notes from New Faculty Workshop
The number one thing I want to implement is real assessment: pre- and post-testing so I can assess the effectiveness of my teaching. There are all sorts of existing standardized tests, one of the more popular being the FCI (Force Concept Inventory), although that may not be relevant for my version of 102. For Junior Lab, I will probably have to write my own free-response questions, since the goals are not easily tested by multiple choice, and there are few enough students that I can assess what they say and try to measure learning. If you care about learning, then assessment really should be a precursor to any kind of new educational thing you're going to try. Many of the results are counter-intuitive. For example, strategies that have a major, significant effect on learning seem to decrease students' end-of-semester attitudes towards physics. That is to say: the student feedback scores (while important for tenure) are actually not a good indicator of learning. Now, attitude is of course an important outcome sometimes, and there is less data on long-term attitudes. But the point is you can't rely on students' opinions. A corollary of this is that there is no correlation at all between learning outcomes and lecturing "ability." Basically all lecturing is ineffective, whether it's from award-winning lecturers or really crappy ones. The things to google here are the "Hake plot" and Eric Mazur at Harvard. Above and to the right is a schematic of the Hake plot. It takes a minute to understand, so I recommend going to Redish's site about the research. What the plot shows is that no matter the student background or lecturer ability, only about 25% of the possible learning occurs with plain-old lecture. There were a number of stunning things to me from Eric Mazur's talk (which in addition to being very informative was very entertaining as well). I was really surprised when I saw this plot, for two reasons.
First, I was embarrassed to realize that there is a lot of good research (with real data) out there about how to effectively teach. Second, I was surprised that the data very strongly indicated that any kind of passive lecture is really not very effective. This was fun to see, because I intuitively had already been thinking this.
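For anyone who hasn't seen it: the Hake plot is usually drawn in terms of "normalized gain," the fraction of the possible pre-to-post improvement a class actually achieves. Here's a minimal sketch of that calculation; the scores below are made-up numbers for illustration, not data from the workshop:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain <g>: the fraction of the possible
    improvement actually achieved (scores in percent correct)."""
    if pre_pct >= 100:
        raise ValueError("pre-test is already at ceiling")
    return (post_pct - pre_pct) / (100 - pre_pct)

# Hypothetical class averages on a concept inventory:
lecture = normalized_gain(40, 55)      # plain lecture
interactive = normalized_gain(40, 70)  # interactive engagement
print(f"lecture: {lecture:.2f}, interactive: {interactive:.2f}")
# -> lecture: 0.25, interactive: 0.50
```

The nice thing about this measure is that it roughly controls for student background: a class starting at 40% and one starting at 70% can be compared on the same footing, which is what makes the ~25% ceiling for pure lecture so striking.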
Some active learning tidbits
It turns out a lot of the things I was already doing in P102 are similar to some effective strategies, although perhaps not implemented correctly. For example: JiTT (Just-in-Time Teaching), peer instruction, think-pair-share (I don't remember the exact definitions of all of these), and interactive lecture demos. I should read more about these, decide whether to implement them "correctly" based on the existing research, and also choose assessment that will really measure the learning.
Setting effective course goals
I don't remember who, but someone asserted that three is pretty much the maximum number of goals a course should have. Actually, I'm not really sure anyone said this directly, but it sticks in my mind. I realized that my list of goals for Junior Lab is way too long--and thus none of the students could possibly remember it while actually working in lab. So I need to reduce these to 3 or 4 clear and measurable goals. Plus, of course, I need to work these into a pre- and post-test for assessment.
Ideas about a Biophysics 101 or "Conceptual Biophysics" course
I am continually asked by faculty here what I think the biophysics curriculum should be at UNM. I have consistently resisted implementing courses willy-nilly, thinking we don't yet have critical mass, and it's not clear what an effective course would be. My own bias is that our graduate students definitely do not need another required course. But now I finally have an idea for a "biophysics" course that should be taught, or at least one that I would like to develop. It fits with something I've been realizing: having "biophysics" in the department is really no stranger than having "astronomy" (aka astrophysics). Many departments (such as ours) are "Physics and Astronomy," and there's usually (as far as I can tell) no discussion over whether astronomy is really physics. In contrast, there is frequent discussion and/or discomfort over how biophysics fits in our department. But when I step back and think about it, astronomy is really just physics applied to a particular set of non-physics things, and so is biophysics. Consider optical biophysics versus observational astronomy: both are done by physicists who have to know a lot about light, spectroscopy (atomic physics), optics, etc. in order to get the information they want. Anyway, I think this is a valuable parallel, so that other faculty in the department don't get so confused about biophysics. One of the best sessions was taught by Ed Prather from Arizona (I think), who teaches Astronomy 101. He pointed out how they really teach the students physics, and the students love it -- they just don't tell them that they're teaching them physics. The physics is there so they can understand the astronomy -- so they learn physics because they like astronomy. I think there could be a perfect parallel in a "biophysics" course that was all about biology, but really required learning a bunch of physics to understand the biology.
I am thinking first at the conceptual level, though it could possibly extend to algebra-based or even calculus-based physics; the audience would start with non-physics majors. I already do a little of this in 102. For example, talking about the applications of fluorescence in biology, which fits in with the atomic physics, light, etc. topics. Brownian motion is also dominant in cells, and of course there are all kinds of forces inside cells. When talking about this with people at dinner, it occurred to me that it would be funny learning about forces first in the cell-biology world, because instead of having to imagine frictionless surfaces, you instead have to get used to "massless" systems w/ tons of friction. Why would that be any worse than assuming no friction? In fact, one misconception students usually have is about Newton's first law, because we all grow up in a world where things stop pretty quickly when the external force apparently goes away. Anyway, this is the idea, and it seems completely achievable to me, perhaps even without too much more effort as an evolution of P102. However, I would be smart to wait until after tenure to try this.
Resources I don't want to forget about
- Resources that look very promising:
- COMPADRE -- the leader of this is also very receptive to ideas for improvement
- PHET (very nice applets)
- Physlets (and related) -- this would be for more advanced students, as it allows changing the code at various levels (I'm not sure if this is the correct link: http://webphysics.davidson.edu/applets/DownLoad_Files/default.html)
- I really should have a simulation component as much as possible in Junior Lab
I saw an example of Oregon State's physics education system (presented by Kenneth Krane), which was really impressive. Their classrooms have computers for groups of students, and personal whiteboards (you can get the material at Home Depot for like $12, but I forget the name of it...not called whiteboard, though...and then cut it into little pieces). They ask the students something like "write down a feature of vector dot products," for example, then walk around, collect the whiteboards, and bring them to the front. (This would seem to parallel our P102 "brainstorming" sessions.) (Paradigms in Physics wiki at OSU.)
A book I want to read
Somebody recommended a book, "How People Learn" (I believe the 2000 edition by the NRC, National Academies Press, though checking Amazon, it might have been "How the Brain Learns," judging by popularity). I think this book had the recommendation to tell students to pause and write down something they just learned. I should use something like this in P102 (and maybe even Junior Lab)...all students are going to want to take notes. In this case, I'll tell them to spend time thinking about what they learned; there is (supposedly?) data that this has an effect on retention.
Interactive lecture demos
I think it was during the interactive lecture demo talk (which was run by two physics profs from different universities) that they said, "The physical world is the authority" (that is: the instructor is not the authority; the actual way the physical world operates is). I like this, and it's the point of doing interactive lecture demos. On the first day of P102 I use a quote from Feynman saying that theory without experiment is useless, which makes a similar point.
- Also, I thought at this point (and other times during the workshop) how it would probably be quite effective if I could figure out a way of having the students "bet" or "invest" on the clicker questions. This is biased by my own love of gambling and investing, of course. But I feel like it could be quite effective...it seems to me that people really change their perspective when they have money riding on it. This relates to a quote Eric Mazur shared from when he first gave his Harvard students the FCI exam (which they did not do so hot on, after scoring very well on his exams)...a frustrated "A" student asked, "Professor Mazur...are we supposed to answer the way you taught us, or the way we really think about the problem???"
Peer instruction
We learned a lot about peer instruction throughout the workshop, but my notes are too scattered to copy over for the most part. Peer instruction techniques were at the heart of the Hake plot above (first presented to us by Mazur), with research showing this simple technique significantly improves student learning. Both Mazur and Prather were adamant that you need good peer instruction questions (such that about half the students know the answer to begin with), but I am not convinced. I still believe a question to which 0% of the students know the correct answer can result in improved long-term learning via the peer instruction method. I don't feel like I was shown data showing that long-term learning was only improved by these "50%" questions. While thinking of this, I was also reminded of very cool education research I saw in Science Magazine earlier this year (The Critical Importance of Retrieval for Learning).
During both the Oregon State session and the Prather astronomy session, I was very impressed by the power of kinesthetic learning. Unlike some of the other things, this is something in which I haven't dabbled, but which I really want to try to implement. There's a brief Wikipedia article about kinesthetic learning. The idea is to have the students act out physics concepts that are otherwise challenging to visualize. In Prather's session, he showed how to use several student actors to demonstrate light traveling 50 million light years (or whatever) from a supernova, and when events happen relative to observation due to the finite speed of light. In the OSU session, we tried acting out unit vectors in spherical coordinates with our arms, and I easily saw the benefit of a group of students doing this together. Two ideas occurred to me for Physics 102: having students act out the photoelectric effect (perhaps even picking the people with blue versus red shirts), and having students jump between seats in different rows to model electrons in energy shells.
Context is important to learning
Chandralekha Singh gave a talk about improving the teaching of quantum mechanics. Thankfully I haven't had to teach this course yet. I mean thankfully from both my and the students' perspectives! She had some good general points, though. One was an example I was reminded of that I want to use in my P102 course. I think it's called the Wason selection test, and it's a really great way of demonstrating how important the context of a problem is to the way students will view it. You can read the link...I think the version she used was F, K, 3, 7 (instead of colored cards) and Coke, Beer, 25 years, 16 years. I could use this exercise on my first day of P102, combined with the "Traxoline" lecture that Ed Prather gave.
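For my own future reference, here's a sketch of the logic behind the abstract version of the task, using the classic vowel/even-number rule (the specific cards below are illustrative; I don't remember exactly which rule Singh's F, K, 3, 7 version used):

```python
# Wason selection task, abstract form. Rule to test: "if a card shows
# a vowel on one side, it shows an even number on the other." Only the
# cards that could falsify the rule need to be flipped: visible vowels
# (the back might be odd) and visible odd numbers (the back might be a
# vowel). Visible consonants and even numbers can't violate the rule.

def must_flip(card):
    """True iff this card's hidden side could make the rule fail."""
    if card.isalpha():
        return card.lower() in "aeiou"  # vowel: back must be even
    return int(card) % 2 == 1           # odd number: back must not be a vowel

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # -> ['E', '7']
```

The striking part is that the beer version is logically identical (check the beer drinker and the 16-year-old, ignore the Coke drinker and the 25-year-old), yet people do far better on it--which is exactly Singh's point about context.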
Modeling, Coaching, Practicing
Kenneth Heller gave a good lecture (the irony of this workshop is that half of it was lectures convincing us that lectures are not an effective learning technique...though perhaps only a pseudo-irony, if that's even a word) about viewing learning physics from the perspective of modeling, coaching, and practicing. (This page may describe what he said; I'm not sure, I didn't read it.) He drew a good analogy between teaching someone to play golf and teaching someone to solve physics problems. If we were to teach golf the way we often teach physics, it'd be something like this: Tiger Woods hits a 6-iron 220 yards, straight and high, in front of his students. He then says, "OK, that's how you do it. Your homework is to do it. Also, figure out how to hit the 2 through 9 irons. On the exam, we'll surprise you with a 3-wood." But the way you would actually teach golf, of course, is to show them how it looks when done well, then have them try it out while you coach them, and then iterate the process. I liked this way of thinking about learning problem solving.