Yesterday was the fourth week of Junior Lab and the third lecture (no class during Labor Day week). We started discussing some of the fundamentals that underlie much of data analysis. To do so, we began with a group exercise that involved flipping coins and recording the numbers of heads and tails. One of the main goals of the group exercise was to get students involved and contributing to the discussion. In my opinion, that goal was met: out of 16 students, I can recall at least six who were actively contributing to the discussion for the whole hour.

The other goal was to use the coin flip as a launching point to discuss probability distributions, probability functions, parent distributions, the law of large numbers, central limit theorem(s), independent measurements, random error, systematic error, etc. I think this goal was achieved as well. Below are two photos of the chalkboard, following our 30- or 40-minute discussion.

Almost all of the words / equations on the left chalkboard were contributed by students. To spur discussion, I collected the results of their coin flip trials and then asked an open-ended question: "What can we say about these data? What should we do with them?" Mean and standard deviation were suggested by Alex. I used this to define Xi and Xbar as we will use them during this semester. One group of students recorded ten heads in a row. I pointed out that set of Xi and asked, "How come you were reluctant to report this? Who cares that it was 10 heads in a row?" This got students to mention things like "unlikely" and "probability," and even to bring up binomial distributions. One student, I think Kirstin, described in words how the probability function for 10 coin flips would look, which led to the drawing in the right photo above.
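For readers who want to play with the distribution Kirstin described in words, here is a short sketch in Python (we used chalkboards and a spreadsheet in class, not code; `binom_pmf` is just an illustrative name). It computes the binomial probability function for 10 flips of a fair coin, including the probability of the ten-heads-in-a-row result that one group was reluctant to report:

```python
from math import comb

def binom_pmf(k, n=10, p=0.5):
    """Probability of exactly k heads in n flips of a coin with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Ten heads in a row with a fair coin: (1/2)^10 = 1/1024
print(binom_pmf(10))  # 0.0009765625

# The full distribution for 10 flips: peaked at k = 5, roughly bell-shaped
for k in range(11):
    print(k, round(binom_pmf(k), 4))
```

Unlikely, but far from impossible: across many groups each flipping many times, a run of ten heads is the kind of thing that does occasionally happen with perfectly fair coins.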

My memory is hazy at this point, but we started talking about how to test whether the coins were actually fair. I asked what the measurements would look like if I asked them to measure the widths of the coins. Students' intuition, not surprisingly, was that most sets of observations we could think of would have a bell-shaped distribution. I asked if anyone knew why this was, and there were many good intuitive explanations. Alex brought up the Central Limit Theorem, which delighted me. At that point, I wrote down in words / symbols a version of the central limit theorem, and we ended class pretty much on that note. I then showed them a Google Spreadsheets example of the Central Limit Theorem in action for uniformly distributed random numbers from 0 to 1; see below.

Central Limit Theorem Spreadsheet
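The spreadsheet demo can also be sketched in a few lines of Python (an equivalent simulation, not the spreadsheet itself; the sample size and trial count here are my own choices). Each "measurement" is the mean of n uniform(0,1) draws; by the CLT, those means should cluster around 1/2 with a spread of sqrt(1/12)/sqrt(n), even though the individual draws are nothing like bell-shaped:

```python
import random
import statistics

random.seed(0)  # reproducible for this sketch

n, trials = 30, 10_000
# Each entry is the mean of n uniform(0,1) draws
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]

# CLT prediction: mean ~ 1/2, standard deviation ~ sqrt(1/12)/sqrt(n)
print(statistics.fmean(means))   # close to 0.5
print(statistics.stdev(means))   # close to (1/12)**0.5 / 30**0.5, about 0.053
```

Plotting a histogram of `means` gives the bell shape the students predicted, which is essentially what the spreadsheet showed.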

All in all, I think the class was successful, though I don't have any real measurement of that. Many students were very engaged, and almost all of the terminology and principles were contributed by students rather than by me. I told them my goal is not for them to memorize any formulas or theorems, but rather to gain an understanding and an intuition, so that when they encounter data analysis tasks in the future, they will know there is an underlying theoretical framework they can go read about and relearn. I feel like the kind of discussions we had yesterday will likely achieve that goal for most of them.

Next week, we'll continue along these lines. We'll look at their coin-flip data and approach the question, "how do we test whether the coins are fair?" Or, we may do another group exercise that generates new measurements we can look at. Or, a third option is to use data that students generate during the lab sessions. My gut tells me to continue doing small-group exercises at the beginning of class, since it does seem to be boosting student engagement quite a bit relative to prior years. Thanks to TA Katie Richardson for suggesting I do the group exercises!
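As an aside, one standard way to approach the fair-coin question is a two-sided binomial test; this is a sketch of the idea in Python, not necessarily how we'll do it in class (the function name and the epsilon tie-breaking are my own). We ask: assuming the coin is fair, how probable is an outcome at least as extreme as the one observed?

```python
from math import comb

def two_sided_binomial_pvalue(heads, flips, p=0.5):
    """Probability, under a coin with P(heads) = p, of any outcome whose
    probability is no larger than that of the observed count of heads."""
    pmf = [comb(flips, k) * p**k * (1 - p)**(flips - k) for k in range(flips + 1)]
    observed = pmf[heads]
    # Sum over all outcomes at least as "surprising" as the observed one
    # (small epsilon guards against floating-point ties)
    return sum(pr for pr in pmf if pr <= observed + 1e-12)

# Ten heads in ten flips: p-value = 2/1024, roughly 0.002
print(two_sided_binomial_pvalue(10, 10))
```

A small p-value doesn't prove the coin is unfair; it just quantifies how surprising the data would be if it were fair, which is exactly the kind of reasoning I hope the coin-flip data will motivate.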

FriendFeed thread: