Category Archives: modeling

Fluids Paradigm Lab

During my first five years of teaching, I taught a one-semester Advanced Physics class that culminated in the AP Physics B exam. For the past two years, I taught an official AP Physics B course. Both of these courses were packed with content. Despite being a proponent of [Modeling Instruction](http://modelinginstruction.org) and incorporating it into other courses, I never felt I could make it fit in these courses.

This year, I’m teaching the new AP Physics 2 course. Its focus on inquiry, deep understanding of physics, and science practices (and less content) aligns wonderfully with Modeling Instruction.

We just started the first major unit, fluids. I guided my students through a paradigm lab to model pressure vs. depth in a fluid. We started by watching [this video](https://www.youtube.com/watch?v=fqWL5FsQXRI) of a can being crushed as it descends in a lake. I was worried students would find the demonstrated phenomenon too simple, but that definitely wasn’t the case. Like any paradigm lab, we started by making observations:

* the can gets crushed
* the can gets crushed more as it gets deeper
* the top of the can appears to be sealed
* the can must be empty (student commented that if full, it wouldn’t be crushed)

Students then enumerated variables that may be related to the crushing of the can:

* water pressure
* volume of water above the can
* strength of can
* air pressure inside of can
* gravitational field strength (student said “gravity” and I went on a tangent about fields…)
* temperature of water
* atmospheric pressure
* type (density) of fluid
* water depth
* speed of descent
* dimensions, surface area, shape of can
* motion of water

Students readily agreed that it was the water pressure that crushed the can and that it was the dependent variable. In hindsight, I could have focused the discussion better by directing students to the water pressure rather than the can itself. They had a lot of good ideas about what properties of the can would affect whether it was crushed, which I didn’t expect. I had to admit that I didn’t have any cans and we would have to focus on the fluid instead… I was amazed that no one in my first class proposed that the depth of the fluid would play a role. Everyone in that class identified the volume of fluid in the container above the can as a variable to measure. This was fascinating to me and led to a surprising result for the students as the experiment was conducted. I think this illustrates the power of the modeling cycle and guided inquiry labs.

We next determined which of the above variables we could control (independent variables) and measure in the lab given the resources available at the moment:

* volume of water above the can
* type (density) of fluid
* water depth
* speed of descent

The materials we planned on using were Vernier LabQuest 2 interfaces, pressure sensors with glass tube attachments, three different sized beakers (for the volume variable), graduated cylinders, fluids (water, canola oil, saturated salt water).

We then defined the purpose of our experiment:

To graphically and mathematically model the relationship between (TGAMMTRB) pressure, volume of fluid above, depth below the surface of the fluid, descent rate, and type of fluid (density).

We divided these various experiments among the lab groups, and groups started designing their particular experiment.

At the start of class the next day, groups shared their results. I was particularly impressed with the groups investigating pressure vs. volume of fluid above a point. While they measured a relationship between pressure and volume, their experimental design was sufficiently robust that they also noticed that the same volume above the measurement point resulted in different pressures in different beakers! That is, the pressure with 400 mL of water above the sensor in the 600 mL beaker is different than in the 1000 mL beaker and different again from that in the 2000 mL beaker. After further investigation they concluded that the relationship was based on depth, not volume.

The groups investigating pressure vs. depth in fluid were confident that the pressure at a point depended on the depth below the surface of the fluid, and they had sufficient data that they were also confident that there was a linear relationship between pressure and depth.

The groups that investigated pressure vs. fluid density at constant depth/volume had inconclusive results. The pressure they measured varied by less than 1% between the three types of fluids. This provided an opportunity to discuss how experimental technique can affect the uncertainty of a measurement. We discussed that, with the new understanding of the relationship between pressure and depth, these groups could gather several measurements at various depths in each of the three fluids and compare the slopes of the resulting graphs to see if density has an effect. While we were discussing measurement uncertainty, we also discussed how the depth is defined not by the position of the bottom of the glass tube, but by the water level within the glass tube. I learned of this important experimental technique in the article “[Pressure Beneath the Surface of a Fluid: Measuring the Correct Depth](http://scitation.aip.org/content/aapt/journal/tpt/51/5/10.1119/1.4801356)” in The Physics Teacher. While the groups investigating the effect of fluid density on pressure applied their new experimental technique, the rest of the groups gathered pressure vs. depth data again while carefully watching the fluid level in the glass tube.
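For reference, the model that all of these experiments converge on is the standard hydrostatic relation (the target of the paradigm lab, not something we stated up front):

$$P = P_0 + \rho g h$$

where $P_0$ is the pressure at the surface, $\rho$ is the fluid density, and $h$ is the depth below the surface. On a pressure vs. depth graph, the slope is $\rho g$ and the intercept is $P_0$, which is why comparing slopes across fluids isolates the effect of density.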

After a second day of measurements, students confirmed the linear relationship between pressure and depth. In addition, with the improved experimental design, students confirmed a relationship between pressure and fluid density. The results were not as accurate as I had expected, and we identified an additional source of error that may have contributed: a couple of groups lost the seal between the glass tube and the plastic tube connected to the pressure sensor while the glass tube was in the fluid. When this happens, fluid fills the glass tube, and subsequent measurements are incorrect if the glass tube is reconnected without removing it from the fluid.

I asked my TA to minimize the known sources of measurement uncertainty, perform the experiment, and determine how accurately pressure vs. depth could be measured. The slope of his pressure vs. depth graph was within 3.16% of the expected value. This is quite a reasonable result. If we used a taller graduated cylinder, I expect the error could be reduced further.
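For anyone checking the numbers, assuming the fluid was water ($\rho \approx 1000\ \mathrm{kg/m^3}$) and $g = 9.8\ \mathrm{N/kg}$, the expected slope is

$$\rho g \approx (1000\ \mathrm{kg/m^3})(9.8\ \mathrm{N/kg}) \approx 9.8\ \mathrm{kPa/m},$$

so a 3.16% error corresponds to a measured slope within roughly 0.3 kPa/m of that value.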

I’ll definitely do this paradigm lab again next year!

AP Physics 1 Unofficial Pilot

This past school year, my colleagues and I restructured our Honors Physics course to unofficially pilot the AP Physics 1 course. This was motivated by several factors. We wanted to get a jump on the new AP Physics 1 course so that this summer we would only have to revise it, since we also have to create the new AP Physics 2 course. We wanted to create a pipeline of students prepared for the AP Physics 2 course. We were also dissatisfied with the structure and emphasis of our existing Honors Physics course.

We’ve structured our course around Standards-Based Assessment and Reporting (a.k.a. Standards Based Grading) for many years, and we continued to do so this year. We did make some changes to the specifics. We transitioned from a binary mastery / developing mastery system to a 1-5 scoring system. All of the details are captured in [my syllabus](https://docs.google.com/document/d/196vqlKb3J6SzFSGo5JqNTssJynBj3iXXxxUr-D5hr0c/pub).

The vast majority of the units follow [Modeling Instruction](http://modelinginstruction.org) and leverage a combination of the official Modeling Instruction materials and derived versions. A notable exception is the electric circuits unit, for which we leveraged a combination of [Physics by Inquiry](http://depts.washington.edu/uwpeg/pbi) materials and the [Modeling Instruction CASTLE](http://www.pasco.com/prodCatalog/EM/EM-8624_castle-kit/#overviewTab) materials. The current model is based on the Physics by Inquiry investigations, and the electric pressure (voltage) model is based on the Modeling Instruction CASTLE materials.

Below are our [AP Physics 1 standards](https://docs.google.com/document/d/1iZUjDYGAIrKrTkCSv1N7m2I5RA4HlF73nGiG0x4k7bs/pub) for the 2013-2014 school year. Standards that we felt were more significant were weighted twice as much and are designated by the “B” suffix as opposed to the “A” suffix. We will certainly revise these somewhat for next year after reviewing the College Board materials, attending AP workshops, and integrating our new textbook.

Overall, I am extremely pleased with how the AP Physics 1 pilot class went and with what our students learned. The incorporation of Modeling Instruction; the focus on in-depth, guided inquiry-based experiments; peer instruction-style discussion and debate of conceptual questions; and a great team of teachers with which to collaborate were the keys to a successful year.

Whiteboard Holders

I’m very excited that we are replacing the individual desks in one of the classrooms in which I teach physics with tables. I’m anticipating much more effective collaboration among students with the tables.

However, one of my projects for this summer was to build something that would discourage “collaboration” during exams. So, I built some very simple whiteboard holders that can serve that function as well as, well, hold whiteboards for display.

I cut 24 holders from an eight-foot-long 4″ x 4″ using a chop saw. I then used a table saw to cut a kerf in the middle of each block just slightly wider than a whiteboard is thick and almost 2″ deep. (I actually cut the 4″ x 4″ into 2′ sections, cut the kerf in the 2′ sections, and then chopped them into 4″ blocks to be more efficient.) Finally, I cleaned up the rough edges with a belt sander. It only took a couple of hours to make enough holders for 36 whiteboards (a set of 12 for each of our three physics classrooms). Here are two individual blocks to illustrate how they are constructed:

whiteboard holder blocks

Here they are holding a whiteboard:

whiteboard holder

One summer project completed; many more to go!

Questions to Ask During Whiteboarding

Developing productive Socratic dialogues when whiteboarding is, for me personally, the biggest challenge. I printed three posters to prompt students to contribute effectively to whiteboard discussions and to help them prepare excellent whiteboards. One of these posters ended up in the background of a photo I tweeted and generated a couple of questions. The content of the posters is directly from the article [Engaging students in conducting Socratic dialogues: Suggestions for science teachers](http://www.phy.ilstu.edu/pte/publications/engaging_students.pdf) by Carl Wenning, Thomas Holbrook, and James Stankevitz. (Jim was the leader of the Modeling Instruction workshop that I attended.)

Based on this article, I created three posters: Questions to Ask (prompts for questions), Lab Presentation (tips for whiteboarding labs), and Whiteboard Presentation (tips for whiteboarding problems).


Mechanics Modeling Instruction Reflection

I just finished my second year of Modeling Instruction for mechanics in my regular physics class.

While I attended a mechanics modeling workshop a few years ago, I remember when I first decided to jump into modeling with both feet. I was looking at a problem involving electromagnetic induction that required use of the equation F = BIl. All students had to do was find three numbers, one in units of tesla, one in amps, one in meters, and multiply them together without any understanding of physics. This was reinforced when I saw students in the next question trying to solve for resistance using Ohm’s Law and plugging in a velocity instead of a voltage. Many of my students weren’t understanding physics; they were learning to match variables with units and plug-and-chug. Our curriculum was much wider than it was deep, and I felt that I had to make a change.

Fortunately, my desire to change the emphasis of the curriculum coincided with a county-wide effort to define a core curriculum for physics. While it wasn’t easy, the team of physics teachers at my school agreed that we had to at least cover the core curriculum as defined by the county effort. This was the opportunity to reduce the breadth of the curriculum, focus on understanding and critical thinking, and use Modeling Instruction for mechanics.

I felt that the first year of Modeling Instruction was a huge improvement in terms of student understanding. This past semester was even better. While just one measure, FCI gains reinforce my beliefs. In 2009, the year before introducing Modeling Instruction, my students’ average FCI gain was 0.33. In 2010, the first year of Modeling Instruction, it was 0.43. This year, the FCI gain was 0.47. While I don’t credit Modeling Instruction as the sole factor that produced these improvements in students’ conceptual understanding, it is probably the most significant. We also started standards-based assessment and reporting in 2010 and, hopefully, I’m improving as a teacher in other ways. For me, the most important confirmation that I was on the right path was that I couldn’t imagine going back to the way that I was teaching before.
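(For readers unfamiliar with the metric: these are normalized gains in the sense of Hake,

$$\langle g \rangle = \frac{\%_{\text{post}} - \%_{\text{pre}}}{100\% - \%_{\text{pre}}},$$

the fraction of the possible improvement between the FCI pre-test and post-test that students actually achieved.)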

The three most important changes that I made this year were: [goalless problems](http://quantumprogress.wordpress.com/2010/11/20/goal-less-problems/), sequencing of units (CVPM, BFPM, CAPM, UBFPM, PMPM), and [revised Modeling Worksheets](http://kellyoshea.wordpress.com/physics-materials/) based on the work of [Kelly O’Shea](http://kellyoshea.wordpress.com/), [Mark Schober](http://science.jburroughs.org/mschober/physics.html), and Matt Greenwolfe.

There is still plenty of room for improvement, however. Pacing was a big issue. We still have to finish mechanics in one semester. As a result of the time spent on other units, I really had to rush energy and momentum. While students could connect many concepts in the momentum unit to previous models, energy was completely different. However, this experience had a silver lining in that it may provide hope for other teachers who want to adopt Modeling Instruction but are concerned that they won’t have time to cover their curriculum. I decided at the beginning of the semester that I would spend the time I felt was needed on each unit to develop the underlying skills of critical thinking, problem solving, and conceptual understanding. When I got near the end of the semester and had to fly through energy, I didn’t introduce it as another modeling unit. Instead, I presented it to the students as another representation of mechanics. I encouraged them to apply their critical thinking and problem solving skills to this different approach. I was pleasantly surprised when they did as well as previous years’ classes on the energy summative exam despite the incredibly short amount of time we spent on the unit. I think this supports the idea that students versed in Modeling Instruction have a strong foundation that allows them to readily understand unfamiliar topics as well as, if not better than, students who covered those topics in a traditional fashion.

Whiteboarding continues to be an area that requires improvement. I made a couple of changes that improved the level of discourse among students. When whiteboarding labs, I either explicitly jigsawed the lab activities or guided groups to explore different areas such that each group had unique information to present to the class. This variety improved engagement and discussion. When whiteboarding problems, we played the mistake game on several occasions. This too increased engagement and discussion. However, I feel that I still have a long way to go to achieve the Socratic dialogue that I believe is possible.

Next fall, I will dramatically shorten the first unit, which focuses on experimental design and analysis. I will probably still start with the bouncing ball lab but then immediately move on to the constant-velocity buggies. That should allow enough time to explore energy and momentum in a more reasonable time frame.

At least I feel like I’m on the right path.

The Danger of Misapplying Powerful Tools

When I was a software engineer, I frequently used powerful tools such as C++ and techniques such as object-oriented analysis and design to implement software that performed complex operations in an efficient and effective manner. I also spent a lot of time sharing these with others. However, I learned to provide a caveat: if misapplied, these tools and techniques can result in a much more significant problem than would result when applying less powerful ones. That is, if you are not skilled in the deployment of these tools and techniques, the risk is much larger than the benefit.

Other engineers didn’t always appreciate this caveat. So, I would try to communicate with an analogy. You can build a desk with a saw, hammer, screwdriver, and drill. You can build a desk more efficiently using a table saw, drill press, and nail gun. If you make a mistake with the hammer, you may lose a fingernail. If you make a mistake with the table saw, you may lose a finger. If you are not adept at deploying the tools and techniques, maybe you should stick with the hand tools until you are.

In reality, the risk of misapplying these tools and techniques is more significant than the impact on the immediate project. The broader risk is that others who observe the troubled project associate the failure with the tools and techniques instead of with their application. People get the impression, and share their impression, that “C++ and object-oriented analysis and design is a load of crap. Did you see what happened to project X?” Rarely do people, especially people not skilled with these tools and techniques, recognize that the problem is the application of the tools and techniques rather than the tools and techniques themselves. This is, in fact, a much more serious risk: it threatens future, proficient applications of the tools and techniques due to their now tarnished reputation.

A series of articles and posts recently reminded me of my experience writing software and this analogy. I feel compelled to start with a disclaimer since this post has the potential to come across as arrogant, which is certainly not my intention. I have not performed any longitudinal studies that support my conclusions. My conclusions are based on a few observations and my gut instinct. I tend to trust my gut instinct since it has served me well in the past. So, if you find this post arrogant, before you write me off, see if these ideas resonate with your experience.

**SBAR**

Let’s start with Standards-Based Assessment and Reporting (SBAR) (a.k.a. Standards-Based Grading (SBG)). Last year, my school started [adopting SBAR school-wide](https://pedagoguepadawan.net/23/growingsbarschoolwide/). SBAR is a powerful methodology that requires proficient deployment. It is not easy to adapt SBAR to a classroom in a way that resonates with parents, students, teachers, and administrators. Proper deployment requires a fundamental change in the teacher’s and students’ philosophy of learning. While the effect of a failed deployment on the individual classes is unfortunate, the larger problem is that teachers and parents attribute the problems to SBAR and not its application. It takes much less effort to convince a parent confused about SBAR of its value than it does to convince a parent livid about SBAR due to a poor experience in another class. At my school, one early SBAR adopter stopped referencing SBAR or SBG at all in his class to distance his methodology from the problematic applications. Fortunately, my school has pulled back a bit this year. This is the risk of mandating application of a powerful tool by those not proficient in its deployment. This is not [a unique experience](http://t-cubed-teaching.blogspot.com/2011/10/sbg-goes-up-in-smoke.html).

Two years ago, another teacher and I decided to apply SBAR to our Honors Physics class. We mitigated the risk by limiting deployment to six sections of a single class taught by just the two of us. We sent letters to parents, talked to parent groups, and discussed the system with students during class. Only after gaining a year of experience did we attempt to adapt SBAR to our General Physics class, which contained ten sections and was taught by four different teachers. The risk of trying to deploy SBAR on this scale initially was too great given our proficiency.

**Technology**

Someone recently shared [this New York Times article](http://www.nytimes.com/2011/09/04/technology/technology-in-schools-faces-questions-on-value.html?_r=2&pagewanted=all) that questions the value of technology in the classroom. In general, a given piece of technology on its own is neither effective nor ineffective. Whether technology is effective depends as much on its application as on the technology itself. It depends on the teacher and the students and the class. Personally, I’ll stick with my [$2 interactive whiteboards](http://fnoschese.wordpress.com/2010/08/06/the-2-interactive-whiteboard/). This isn’t because SMART Boards are inherently ineffective. It is because they aren’t effective for me and my students given my classroom and my expertise. I expect there are teachers out there who use SMART Boards quite effectively. They are probably sick of hearing how they are a complete waste of money.

I hope to have a class set of iPads at some point this year. My school isn’t going to buy iPads for every student. Instead, we’ll put iPads in the hands of 25 General Physics students in my classroom and see what we can do together. Start small, reflect, adjust, expand.

**Modeling**

I participated in a [Modeling Instruction Physics](http://modeling.asu.edu/) workshop in the summer of 2008. I didn’t dare to really start modeling in my classroom until last fall. Why? I believed that the potential risk to my students due to a misapplication of the modeling methodology was tremendous. I decided that it was better for my students to learn what they could via more traditional instruction than to risk what I foresaw as a potential disaster if I botched the deployment of modeling. Even more importantly, I was concerned that I could put Modeling Instruction at risk of never being adopted if my failed deployment was interpreted as a failure of Modeling Instruction itself. Only after more research, practice of Modeling Instruction techniques, and discussions with others did I feel comfortable deploying Modeling in my class last fall. In an attempt to shield modeling from my potential deployment failures, this is the first year that I’ve attached the label “Modeling Instruction” to my class.

I used to be surprised at how adamantly some Modelers warned teachers not to do Modeling Instruction unless they had taken a workshop. I now believe they are worried about the same potential risk that I am. Modeling Instruction is a collection of powerful tools and techniques. Done well, by a skilled practitioner, Modeling Instruction can be incredibly effective. Applied ineffectively, Modeling Instruction can be a disaster and tarnish its reputation. I think students are better served by traditional instruction than by Modeling Instruction applied ineffectively. Traditional instruction may result in a lost fingernail. Ineffective modeling instruction may result in a lost finger. There, I said it. Disagree in the comments. Just don’t take that quote out of context.

While not directly related to modeling, I believe [this recent article](http://www.palmbeachpost.com/news/schools/science-teachers-at-loxahatchee-middle-school-strike-back-1916851.html?viewAsSinglePage=true) supports my conclusions. The problem isn’t that hands-on labs are ineffective; it is that ineffective deployment of hands-on labs is ineffective.

**Conclusion**

I don’t want the thoughts I’ve shared here to paralyze you into inaction. Rather, I hope that I’ve encouraged you to make sure that you have sufficient expertise to apply your powerful tools and techniques in an effective manner. Your students will benefit, and the reputation of these powerful tools and techniques will benefit as well.

How do you do this?

* Attend professional development opportunities (e.g., [Modeling Instruction Workshops](http://modeling.asu.edu/MW_nation.html)) that increase your skill with these powerful tools and techniques.
* Apply these powerful tools and techniques in a limited manner as you gain experience and expertise.
* Participate on Twitter, start a blog, read a bunch of blogs, participate in online discussions (e.g., [Global Physics Department](http://globalphysicsdept.posterous.com/#!/)), and subscribe to email lists to accelerate your knowledge of these powerful tools and techniques.
* Observe [skilled practitioners](http://quantumprogress.wordpress.com/2011/08/25/my-grading-sales-pitch/) of these tools and techniques, [find a coach](http://quantumprogress.wordpress.com/2011/10/06/taking-my-pln-to-the-next-level—virtual-coaching/) to observe you, welcome feedback from everyone.

CV Buggy Lab

Last week, I participated in a great discussion on Twitter about the various ways Modelers perform the Constant-Velocity Buggy Lab in their classrooms. The CV Buggy Lab is the paradigm lab for constant-velocity and, as a result, Modeling classrooms are filled with toy cars in the fall. I’m not sure why, but it seems that the red cars are always configured to go “fast” and the blue cars configured to go “slow”.1

CV buggies

We’ve always done a CV buggy lab, even before I started modeling, but this year we did something different. To provide some context: before we do the CV buggy lab, students have already completed a mini-modeling cycle involving the bouncing ball and explored non-linear relationships with the sliding box of mass and rubber bands. We have also briefly discussed the concept of position in terms of specifying the location of something relative to a commonly defined point. For example, “my chair is 5 floor tiles from the south wall and 10 floor tiles from the west wall.” Another teacher and I were discussing that, since students were rocking these labs, our typical buggy lab involving only one car might not be as engaging or beneficial. She decided to have students work with both cars from the start. I thought this was a great idea and decided that I also wanted each group to analyze a different scenario, which would make the post-lab whiteboard discussion more interesting.

As a class, we go through the usual process of making observations, determining what we can measure, and, eventually, coming up with the purpose for the lab:

To graphically and mathematically model the relationship between position and time for two buggies traveling at different speeds.

At this point, I had to constrain the lab more than I usually would by specifying the starting position and direction for each car. I assigned each lab group one of the following scenarios, which allowed some degree of differentiation in terms of difficulty (a worked version of scenario 1 appears after the list):

1. red positive direction, blue negative direction; red at 0 m, blue at 2 m
2. red positive direction, blue negative direction; red at -1 m, blue at 1 m
3. red negative direction, blue positive direction; red at 2 m, blue at 0 m
4. red positive direction, blue positive direction; red at 0 m, blue at 0.5 m
5. red positive direction, blue positive direction; red at -1 m, blue at -0.5 m
6. red negative direction, blue negative direction; red at 2 m, blue at 1.5 m
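To make these concrete, scenario 1 as a pair of constant-velocity models looks like this (the speeds here are made up for illustration; each group measures their own):

$$x_{\text{red}} = \left(0.35\ \mathrm{m/s}\right)t \qquad\qquad x_{\text{blue}} = 2\ \mathrm{m} - \left(0.20\ \mathrm{m/s}\right)t$$

Setting the two positions equal predicts where the buggies meet: $0.35t = 2 - 0.20t$ gives $t \approx 3.6\ \mathrm{s}$ at $x \approx 1.3\ \mathrm{m}$.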

Their homework was to draw a picture of their scenario and brainstorm how they would design the experiment.

The next day, groups designed their experiments. I didn’t provide any additional restrictions; I only verified that their pictures matched the scenarios that I had specified. Some groups decided that their independent variable would be time; others, position; others, distance. One group decided to gather data from both cars at the same time! Another group taped a marker to the back of each car, which traced its path on butcher paper and allowed more accurate measurements of the actual distance traveled.

When groups started graphing their data, I requested that they plot time on the horizontal axis. Some objected and remarked that if time was their dependent variable it should be plotted on the vertical axis. I explained that I wanted all the groups to be able to share their results which would be easier if we used a common set of axes. I reassured them that the graph police would not come and get them for plotting their dependent variable on the horizontal axis. (Anyone know why this is the convention?)

Some expected and unexpected issues emerged as students began to graph their data. As expected, those groups who chose to measure distance instead of position soon realized that their graph wasn’t going to convey everything they wanted. They went back and, using their pictures, calculated positions corresponding to each distance. We use LoggerPro for graphing, and those groups who made time their independent variable simply added a new column for the position of the second buggy. LoggerPro makes it super simple to graph multiple sets of values on the vertical axis (click on the vertical axis label and choose More…). However, those groups that made position their independent variable had more trouble since LoggerPro only allows one column to be plotted on the horizontal axis. These groups required more assistance and, in the end, I discovered that it was best to create two data sets and name the time columns identically for each. LoggerPro would then plot this “common” time column on the horizontal axis and the two position columns on the vertical axis. Not super simple, but doable.

2 data sets in LoggerPro
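For anyone who wants to play with this kind of graph outside of LoggerPro, here is a minimal Python/matplotlib sketch of the same idea: two position columns plotted against a common time column. The starting positions and speeds below are made up, not measured data.

```python
# Minimal sketch: two buggies' positions plotted against a common time axis.
# The starting positions and speeds are hypothetical, not measured data.
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0, 6.5, 0.5)    # common time column (s)
x_red = 0.0 + 0.35 * t        # red buggy: starts at 0 m, moving in +x
x_blue = 2.0 - 0.20 * t       # blue buggy: starts at 2 m, moving in -x

plt.plot(t, x_red, "ro-", label="red buggy")
plt.plot(t, x_blue, "bs-", label="blue buggy")
plt.xlabel("time (s)")
plt.ylabel("position (m)")
plt.legend()
plt.show()
```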

Each group drew their picture, graph, and equations on a whiteboard. We did a “circle whiteboard” discussion rather than having each group formally present their results. At first, the discussion focused on how the graph described the motion of the buggies. As students became more comfortable with those ideas, the discussion shifted to comparing and contrasting the different whiteboards. This was the best whiteboard discussion for the CV Buggy Lab that I have ever had. At the end of class, I confidently shared that their whiteboards captured everything that we would learn about constant velocity. We just needed more time to digest, appreciate, and refine what they had already created.

I’ll definitely do this again next year, but I hope to find a way to not assign each group a scenario and yet still end up with a variety of initial positions, directions, and relative motion. Perhaps, if I ask each group to design their own scenario, I can subtly encourage small changes to ensure the variety still exists. Plus, students usually create scenarios that I never would consider!

1 There are many ways to make the blue buggy slow. I have used wooden dowels wrapped in aluminum foil and wooden dowels with thumbtacks and wire. Others have shared that they use dead batteries, electrical tape, and aluminum foil. This year, I tried something completely different. I found these wires with magnetic ends while cleaning last spring (I have no idea who sells them). While in previous years it seemed that in every class someone’s blue buggy had an intermittent connection, I had no problems at all this year.

making a slow car

Teaching Energy

For the last couple of years, I’ve approached teaching energy from a conservation of energy perspective, deemphasized work, and focused on energy storage modes and transfer mechanisms. I think this has been very helpful for students, at least compared to starting with work and the work-energy theorem like I used to do. They understand the analogy as I pour water from the gravitational potential energy beaker into the kinetic energy beaker while the cart rolls down the ramp. Students seem to more readily appreciate the idea that energy is always conserved and that, if a system doesn’t have as much energy as it used to have, we simply need to find where it was transferred. It’s like a mystery.

This year, I’m trying to leverage as much of the [modeling methodology](http://modeling.asu.edu/) as I possibly can, which includes energy pie charts and bar charts. As usual, I started conceptually and avoided numbers. We drew energy pie charts for various scenarios. Here’s an example from the Modeling curriculum:

energy pie chart example

Students readily understood and easily created these visual models and seemed to appreciate that they could actually handle real-world aspects like friction. If an object was sliding across the floor, we would include the floor in our system so that the total energy in our system, and, therefore, the size of the pie chart, would remain constant as energy is transferred from the kinetic energy storage mode to the internal energy storage mode. No problems here.

We then moved to energy bar charts but continued to postpone introducing numbers in Joules calculated from equations. Students had little trouble with this visual representation. For the object sliding across the floor scenario, most groups continued to include the “surface” as part of their system such that the total energy in the system remained constant and no energy flowed out of their system. For a scenario where someone pushes a box up a ramp, some groups wanted to include the person in their system, but after a discussion of the complex energy transfers that occur within the human body, they decided to keep people out of the system and include energy flowing into the system.

The problems started when we began calculating specific energies. Students continued to want to account for energy being transferred to the internal energy storage mode. So, for example, when asked to calculate “the average force exerted by a ball on a glove,” they would get stuck trying to calculate how much of the kinetic energy of the ball is transferred to the internal energy of the ball and how much is transferred out of the system by working. I felt like an idiot when my response was, “well, since we don’t have a model that can help us calculate how much energy is transferred to the internal energy of the ball and how much energy is transferred outside of the system, we’ll have to assume that all of the energy is transferred outside of the system.” The students looked at me with that expression of, “you’ve gotta be kidding me; if that is the case, why have we been including internal energy all this time?”
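For the record, here is that calculation under the assumption we settled on, namely that all of the ball’s kinetic energy is transferred out of the system by the glove working on the ball over a recoil distance $d$:

$$\bar{F} d = \tfrac{1}{2} m v^2 \quad\Rightarrow\quad \bar{F} = \frac{m v^2}{2d}$$

With illustrative (made-up) numbers, a 0.145 kg baseball moving at 40 m/s and stopped over 0.10 m gives $\bar{F} \approx 1160\ \mathrm{N}$.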

Basically, we stopped including internal energy in our quantitative energy bar charts and always had energy be transferred out of the system. With the aid of this visual model, students would consistently solve relatively complicated roller coaster problems without making the typical mistakes. I could honestly tell my classes, “those of you who drew the energy bar charts solved this problem correctly, and those of you who didn’t bother, didn’t.” Despite this clear improvement over previous years, not having a clear rationale for why we handled internal energy differently in the quantitative bar charts compared to the conceptual visual models was disappointing. I’m sure the students were confused by this.

Suggestions for next year?