Category Archives: lab activities

Physical Representations of Variables, Objects, and References in APCSA

Historically, my AP Computer Science A students have struggled with the concept that the value of a variable in Java can be a reference to an object. Actually, even before I started teaching APCSA, students would find me during the week of the AP exam and ask for help on this concept. Over the years, I’ve improved both my instruction and student understanding in several ways. I’ve utilized tools like BlueJ’s debugger and the Java Visualizer to help students see the relationship between these concepts. I’ve created practice programming activities that create the cognitive dissonance necessary to spur reconsideration of understanding. I’ve woven these concepts throughout the curriculum to provide students with multiple and increasingly complex opportunities to explore these relationships. Finally, I’ve tried to keep my language purposeful, precise, and consistent since, too often, we refer to variables as if they are objects. Despite these actions, students still struggled with these concepts. Last year, I had a new idea, and this year I implemented it.

I decided that these abstract concepts are further abstracted by the computer, and that’s too much abstraction for students at this point. Therefore, I removed the computer, and we explored these concepts via an extended analogy in a CS Unplugged manner:

  • variables are represented by small sticky notes
  • objects are represented by full sheets of paper
  • full sheets of paper are numbered in their upper-right corner; these numbers are references


**Variables:** When a variable is declared, a new small sticky note is used, and the variable’s name is written at the top of the sticky note. The value of the variable is written below the name. There isn’t a lot of room on a small sticky note. There’s only enough room for values that don’t take too much space to store, such as primitive values and references to objects. You cannot write all of the properties of a Turtle object on a small sticky note! When the value of a variable changes, the previous value is crossed off, and the new value is written.

**Objects:** When a new object is created, a new sheet of paper is used. To keep track of all the sheets of paper, they are numbered in their upper-right corners. The name of the class of the object is written at the top of the sheet. The names and values of the object’s properties are written below the class name. There is plenty of room on a full sheet of paper to write all the properties of a Turtle object. As the value of an object’s property is changed, the previous value is crossed off, and the new value is written.

**References:** References are the numbers in the upper-right corner of the sheets of paper. They are used to find the desired sheet of paper from among all the sheets of paper. When a variable is assigned the reference to a newly created object, the value of the variable is the number in the upper-right corner of the new sheet of paper for that object.

**Methods:** When a method is invoked on a variable that references an object, the value of the variable is used to find the desired sheet of paper. Once the sheet of paper is found, the method can operate on the object: it may change the object’s properties by updating values on the sheet of paper, or it may return a value.

In one lesson, I modeled this extended analogy for the class in the context of a simple example involving two Turtle objects, two variables that reference Turtle objects (turtle1, turtle2), and two variables of type int (number1, number2). In the next lesson, pairs of students, with my assistance, traced through a couple of other examples using small sticky notes and sheets of paper.
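As a rough illustration, here is a minimal Java sketch of the kind of example we traced. The Turtle class below is a simplified stand-in I wrote for this post (the course library’s Turtle has different constructors and methods), and the sheet-of-paper numbers in the comments are arbitrary.

```java
// Tracing variables (sticky notes), objects (sheets of paper), and references (sheet numbers).
public class StickyNoteDemo {

    // Simplified stand-in for a Turtle class; not the course library.
    static class Turtle {
        int xPos;
        int yPos;

        Turtle(int x, int y) {
            xPos = x;
            yPos = y;
        }

        void forward(int distance) {
            yPos += distance; // update a value on this object's sheet of paper
        }
    }

    public static void main(String[] args) {
        int number1 = 5;                   // sticky note "number1" holds the value 5
        int number2 = number1;             // the value 5 is copied onto sticky note "number2"

        Turtle turtle1 = new Turtle(0, 0); // new sheet of paper, say #1; sticky note holds 1
        Turtle turtle2 = new Turtle(3, 4); // new sheet of paper, say #2; sticky note holds 2

        turtle2 = turtle1;                 // cross off 2 on the "turtle2" sticky note and write 1;
                                           // no sheet of paper changes

        turtle2.forward(10);               // follow reference 1 to its sheet of paper and update yPos

        System.out.println(turtle1.yPos);  // prints 10: both variables reference the same object
    }
}
```

The last two statements are where the analogy pays off: tracing `turtle2 = turtle1` on paper shows that only a sticky note changes, so the call through `turtle2` updates the one sheet of paper that both variables now reference.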

Based on a short formative assessment, student understanding appears to be improved compared to previous years. I will continue to use this extended analogy as we revisit these concepts throughout the year. When students start implementing their own methods, I may extend the analogy to include a paper form that represents a method invocation. Values would be copied onto the form to complete it. When the form is returned after the method returns, the returned value would be copied as well.

If you have any feedback, please share!

Contextual Feedback Using GitHub Pull Requests

After reading @dondi’s workflow for using pull requests to provide feedback to students, I wanted to try it this semester. I wasn’t exactly sure what steps were involved, but I found a workflow that worked for me and wanted to share it. I decided that a screencast would be an easier way to illustrate the steps than trying to type out each one.

In general, the key is to edit one of the student’s files (the edit simply provides an opportunity to comment in the pull request) so that a branch and pull request can be created. At that point, comments can be left right where each change was made.
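For anyone who wants the gist without the screencast, the steps are roughly the following, assuming the student’s work is on the default branch of their repository; the branch name `feedback` and the file name are only examples.

```
git clone <student-repo-url>     # grab the student's repository
cd <student-repo>
git checkout -b feedback         # create a branch for feedback
# make a trivial edit to one of the student's files
git add StudentFile.java
git commit -m "Start feedback branch"
git push -u origin feedback      # publish the branch
# on GitHub, open a pull request from feedback into the default branch,
# then leave line-by-line comments under the "Files changed" tab
```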

In the past, I’ve provided feedback as comments via Canvas’ SpeedGrader. This new approach is much better in that the code on which I’m providing feedback is adjacent to the specific comment. Once students see that the assignment is marked complete in Canvas, they check the “feedback” pull request to review my comments. If they have questions or if they have answers to questions that I’ve asked, they can continue this conversation in the pull request. While this isn’t a traditional use for pull requests, it works well and it’s good for students to be familiar with participating in conversations for pull requests.

Please comment if you have any suggestions to improve this workflow or if you have any questions!

Polar Bears Dice Activity in AP Computer Science

A few years ago, computers weren’t ready for AP Computer Science on the first day of school. So, on the first day of class, I resurrected an opening-day activity I had used in Physics: the Polar Bears Around an Ice Hole puzzle. There are a lot of similarities between my physics and computer science classes; so, I thought the activity would be a good fit.


I was surprised that not only was the activity a good fit, but it was even more successful in the context of computer science than it was in physics. I’ve done it every year since, even when the computers are working on the first day.

Our discussion after everyone solved the puzzle focused on how this activity was an analogy for how our class will solve computer science challenges throughout the year:

  • You may feel frustrated as you try to figure out computer science concepts and challenges. That’s okay.
  • Computer science concepts and challenges are hard to understand until you know the “rules of the game.”
  • But, once you discover the rules, computer science concepts and challenges often seem easy, and you may be surprised that others don’t understand.
  • However, remember that you didn’t always understand.
  • When you discover the rules and understand without someone just telling you the “answer”, you are excited.
  • The journey to understanding is very important. So, no one is going to tell you the answer, but we’re all here to support each other on our journeys.
  • Being told the “answer” at most gives you one answer that you didn’t know.
  • Learning to think critically and arrive at the answer with support develops a skill that you will use to find many answers.
  • Students who solve the challenge should offer suggestions for reframing the game that help others solve the puzzle without directly telling them the answer.

The reason that this activity worked even better in the context of computer science is that it also serves as an excellent example of best practices for algorithm design and debugging. As we continued to try and solve the puzzle, students introduced several important ideas:

  • if you just keep guessing, you may never solve the puzzle
  • reduce the scope of the problem (roll three dice instead of six)
  • test special cases (make all dice show five)
  • change a variable and see the effect (change one die from a five to a one).

I still end the activity by “assigning” grades based on how quickly a student solved the puzzle as described in the original post.

It was a great first day, and we didn’t spend any time on the computers!

Electronic Lab Portfolios Aligned to AP Physics Science Practices

*[Updated 15/7/2016, 10:54 PM: added links to two student lab portfolios.]*

As I mentioned briefly in [my reflection](https://pedagoguepadawan.net/435/ap-physics-2-reflection/) of the 2014-2015 school year, this past year, students created electronic lab portfolios for AP Physics 2. In summary:

* many students demonstrated deeper metacognition than I have ever observed
* several students struggled and their portfolios were incomplete
* providing feedback and scoring consumed a huge amount of my time
* structural changes made in the spring semester helped considerably

Structure
---

I was inspired to have students create electronic lab portfolios based on [Chris Ludwig’s work](http://see.ludwig.lajuntaschools.org/?p=1197) and his presentation and our discussion at NSTA last year.

Before the start of the school year, using [siteMaestro](https://sites.google.com/a/newvisions.org/scripts_resources/add-ons/sitemaestro), I created a Google Site for each student based on [a template](https://sites.google.com/a/naperville203.org/nnhsapp2portfolio/) that I created. I made both myself and the student owners of the site and kept the site otherwise private. The template consisted of two key portions of the site: a Lab Notebook, which provides a chronological accounting of all labs; and a Lab Portfolio, which is the best representation of the student’s performance. I [shared a document](https://docs.google.com/document/d/19aUFLSk93LIJUuKWwJh1IHHNDFUDt8U4QMtcfMImI1k/edit) with the students that explained the purpose of and distinction between the Lab Notebook and Lab Portfolio.

The lab portfolios were structured around the [seven AP Physics Science Practices](https://docs.google.com/document/d/1bcIO-B8RT73DM99zMC7R53SstuWF-MjrPz3OhAUdWG0/edit). I wanted students to evaluate and choose their best work that demonstrated their performance of each Science Practice. I also wanted the most critical and significant labs to be included; so, [some labs were required](https://docs.google.com/document/d/1bcIO-B8RT73DM99zMC7R53SstuWF-MjrPz3OhAUdWG0/edit#bookmark=id.u7kv4o1dcpux) to be in the lab portfolio. In the fall semester, I required that each student publish at least two examples of their demonstration of each of the seven Science Practices.

I wanted students to think more deeply about the labs than they had in the past, and I didn’t want the lab portfolio to just be a collection of labs. So, in addition to the lab report necessary to demonstrate a given Science Practice, students also had to write a paragraph in which they reflected on why this lab was an excellent demonstration of their performance on the specific Science Practice.

The lab portfolio comprised 40% of the coursework grade for each semester. For the fall semester, the lab portfolio was scored at the end of the semester. I provided a few formal checkpoints throughout the fall semester at which students would submit their portfolio (just a link to their site) and I would provide feedback on their labs and paragraphs.

Fall Semester
---

Many students wrote excellent paragraphs demonstrating a deeper understanding of Science Practices than anything I had previously read. Other students really struggled to distinguish between writing a lab report and writing a paragraph that provided evidence that they had performed a given Science Practice. I did [create an example](https://sites.google.com/a/naperville203.org/nnhsapp2portfolio/lab-portfolio/science-practice-4) of both a lab report and lab portfolio reflection paragraph based on the shared experiment in first-year physics of the Constant Velocity Buggy Paradigm Lab. However, several students needed much more support to write these reflection paragraphs.

In general, those students who submitted their site for feedback had excellent portfolios by the end of the semester; those who didn’t underestimated the effort required and ended up with incomplete or poor-quality portfolios.

What I liked:

* The metacognition and understanding of Science Practices demonstrated by many students.
* Students deciding in which labs they most strongly performed each Science Practice.

What I Didn’t Like:

* Several students struggled to distinguish a lab report from a paragraph providing evidence of performing a Science Practice.
* Several students didn’t have enough support to complete a project of this magnitude and ended up with incomplete lab portfolios.
* Providing feedback and scoring all of the lab portfolios over winter break consumed a huge amount of time.

Spring Semester
---

The spring semester has some different challenges and constraints:

* We focus more on preparing for the AP exam and less on lab reports.
* I don’t have the luxury of a two-week break to score lab portfolios at the end of the semester.

Based on these constraints and our experience during the fall semester, I made some changes for the spring semester. I selected seven required labs in the spring semester, one for each Science Practice. Each lab and reflection paragraph was due a few days after performing the lab, not at the end of the semester.

This had some advantages:

* the portfolio was scored throughout the semester
* students had more structure, which helped them stay current

and disadvantages:

* no student choice in selection of labs to include in portfolio
* no opportunity to revise a lab or reflection paragraph (the feedback could help them in labs later in the semester)

With these changes *and* students’ experience from the fall semester, the lab portfolios in the spring semester were largely successful. I think it is important to emphasize that both the changes *and* the students’ experience contributed to this success. I do not believe that the structure for the spring semester would lead to a more successful fall semester. The feedback I received from students at the end of the year was much more favorable concerning the structure in the spring semester than the structure in the fall semester.

Next Fall
---

I had the wonderful experience of being coached this year by [Tony Borash](https://about.me/tborash). Tony provided guidance in many areas, one of which was making these adjustments for the spring semester and, more importantly, planning for next year. Together we were able to come up with a structure that will hopefully combine the strengths of the structure in the fall semester with the structure in the spring semester. My goals for these changes are to:

* provide more structure for students
* provide student choice
* incorporate peer feedback

Here’s the plan for next fall:

1. I choose the first lab. Students complete and submit the lab and the reflection paragraph. I provide feedback. Students make revisions and re-submit the lab and reflection paragraph. We review the best examples as a class.
2. I choose the second lab. Students complete the lab and the reflection paragraph. Students provide peer feedback to each other. Students make revisions and submit the lab and reflection paragraph.
3. Students choose the next lab to include in the portfolio. Students complete the lab and the reflection paragraph. Students provide peer feedback to each other. Students make revisions and submit the lab and reflection paragraph.
4. Students choose some of the remaining labs, and I choose some of the remaining labs. Students complete the labs and reflection paragraphs. Students specify a subset of Science Practices on which they want formal feedback from me and on which they want feedback from their peers. Students make revisions and re-submit.

This past year, students included a link to their lab report in their lab portfolio and shared the lab report (as a Google Doc) with me. Next year, I will have students embed their lab report into the Google site. This will facilitate peer feedback and enable everyone to use comments within the Google site to provide feedback. I may still have students share the actual doc with me, as well as include a link, so I can provide more detailed suggestions directly within the document.

Student Examples
---

* [Nicole’s AP Physics 2 Lab Portfolio](https://sites.google.com/a/naperville203.org/nicoles-ap-physics-2-lab-portfolio-1-1/)
* [Vincent’s AP Physics 2 Lab Portfolio](https://sites.google.com/a/naperville203.org/vincents-ap-physics-2-lab-portfolio/)

Conclusion
---

I’m pleased that my students and I are heading down this path and believe my students will gain a much deeper understanding of Science Practices as a result. While I shared this with my colleagues this past year, I also cautioned them that I didn’t have it figured out, and it wasn’t a smooth ride. I think electronic lab portfolios are an excellent way to assess student performance, and I hope that they will be used in other science courses in the future as they are a natural fit to the NGSS Science and Engineering Practices. I hope that after this next year, I will have something that will provide my colleagues with a stronger framework to adapt to their classes.

Fluids Paradigm Lab

For my first five years of teaching, I taught a one-semester Advanced Physics class that culminated in the AP Physics B exam. For the past two years, I taught an official AP Physics B course. Both of these courses were packed with content. Despite being a proponent of [Modeling Instruction](http://modelinginstruction.org) and incorporating it into other courses, I never felt I could make it fit in these courses.

This year, I’m teaching the new AP Physics 2 course. The focus on inquiry, deep understanding of physics, and science practices (and less content) aligns wonderfully with Modeling Instruction.

We just started the first major unit, fluids. I guided my students through a paradigm lab to model pressure vs. depth in a fluid. We started by watching [this video](https://www.youtube.com/watch?v=fqWL5FsQXRI) of a can being crushed as it descends in a lake. I was worried students would find the phenomenon demonstrated too simple, but that definitely wasn’t the case. Like any paradigm lab, we started by making observations:

* the can gets crushed
* the can gets crushed more as it gets deeper
* the top of the can appears to be sealed
* the can must be empty (student commented that if full, it wouldn’t be crushed)

Students then enumerated variables that may be related to the crushing of the can:

* water pressure
* volume of water above the can
* strength of can
* air pressure inside of can
* gravitational field strength (student said “gravity” and I went on a tangent about fields…)
* temperature of water
* atmospheric pressure
* type (density) of fluid
* water depth
* speed of descent
* dimensions, surface area, shape of can
* motion of water

Students readily agreed that it was the water pressure that crushed the can and it is the dependent variable. In hindsight, I could have better focused the discussion by directing students to focus on the water pressure rather than the can itself. They had a lot of good ideas about what properties of the can would affect it being crushed, which I didn’t expect. I had to admit that I didn’t have any cans and we would have to focus on the fluid instead…. I was amazed that no one in my first class proposed that the depth of the fluid would play a role. Everyone in that class phrased it as the volume of the fluid in the container above the can was a variable to measure. This was fascinating to me and led to a surprising result for the students as the experiment was conducted. I think this illustrates the power of the modeling cycle and guided inquiry labs.

We next determined which of the above variables we could control (independent variables) and measure in the lab given the resources available at the moment:

* volume of water above the can
* type (density) of fluid
* water depth
* speed of descent

The materials we planned on using were Vernier LabQuest 2 interfaces, pressure sensors with glass tube attachments, three different-sized beakers (for the volume variable), graduated cylinders, and fluids (water, canola oil, and saturated salt water).

We then defined the purpose of our experiment:

To graphically and mathematically model the relationship between (TGAMMTRB) pressure, volume of fluid above, depth below surface of fluid, descent rate, and type of fluid (density).

We divided these various experiments among the lab groups, and groups started designing their particular experiment.

At the start of class the next day, groups shared their results. I was particularly impressed with the groups investigating pressure vs. volume of fluid above a point. While they measured a relationship between pressure and volume, their experimental design was sufficiently robust that they also noticed that the same volume above the measurement point resulted in different pressures in different beakers! That is, the pressure with 400 mL of water above the sensor in the 600 mL beaker is different than in the 1000 mL beaker and different again from that in the 2000 mL beaker. After further investigation they concluded that the relationship was based on depth, not volume.

The groups investigating pressure vs. depth in fluid were confident that the pressure at a point depended on the depth below the surface of the fluid, and they had sufficient data that they were also confident that there was a linear relationship between pressure and depth.

The groups that investigated pressure vs. fluid density at constant depth/volume had inconclusive results. The pressure they measured varied by less than 1% between the three types of fluids. This provided an opportunity to discuss how the experimental technique can affect the uncertainty of the measurement. We discussed that with the new understanding of the relationship between pressure and depth, these groups could gather several measurements at various depths in each of the three fluids and compare the slopes of the resulting graphs to see if density has an effect. While we were discussing measurement uncertainty, we also discussed how the depth is defined not by the position of the bottom of the glass tube, but the water level within the glass tube. I learned of this important experimental technique in the article “[Pressure Beneath the Surface of a Fluid: Measuring the Correct Depth](http://scitation.aip.org/content/aapt/journal/tpt/51/5/10.1119/1.4801356)” in The Physics Teacher. While the groups investigating the effect of fluid density on pressure applied their new experimental technique, the rest of the groups repeated gathering pressure vs. depth data while carefully examining the fluid level in the glass tube.

After a second day of measurements, students confirmed the linear relationship between pressure and depth. In addition, with the improved experimental design, students confirmed a relationship between pressure and fluid density. The results were not as accurate as I had expected. We identified a couple of additional sources of error that may have contributed. For example, a couple of groups lost the seal between the glass tube and the plastic tube connected to the pressure sensor while the glass tube was in the fluid. When this happens, fluid fills the glass tube, and future measurements are incorrect if the glass tube is reconnected without first removing it from the fluid.
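For reference, the model the data converge on is the standard hydrostatic relationship (standard symbols, not notation from our handouts):

$$P = P_0 + \rho g h$$

where $P_0$ is the pressure at the surface of the fluid, $\rho$ is the fluid density, $g \approx 9.8\ \text{N/kg}$, and $h$ is the depth below the surface. The expected slope of a pressure vs. depth graph is therefore $\rho g$, roughly $9.8\ \text{kPa/m}$ for water, and comparing fluids amounts to comparing slopes.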

I asked my TA to minimize the known sources of measurement uncertainty, perform the experiment, and determine how accurately pressure vs. depth could be measured. The slope of his pressure vs. depth graph was within 3.16% of the expected value. This is quite a reasonable result. If we used a taller graduated cylinder, I expect the error could be reduced further.

I’ll definitely do this paradigm lab again next year!

Projectile Motion Lab Practicum and Computational Modeling

In my AP Physics B class, I’m reviewing all of the material on the AP exam even though all of the students studied some of this material last year in either Physics or Honors Physics. When we have a review unit, I try to keep it engaging for all students by studying the concepts from a different perspective and performing more sophisticated labs.

When reviewing kinematics, I took the opportunity to introduce computational modeling using [VPython](http://www.vpython.org/index.html) and the [physutil](https://per.gatech.edu/wiki/doku.php?id=projects:hscomp:physutil) package. I started with [John Burk’s](http://quantumprogress.wordpress.com/) [Computational Modeling Introduction](http://quantumprogress.wordpress.com/2011/08/17/computational-modeling-assignment-draft-3/) and extended it with my experiences at Fermilab, where computational modeling plays a role in everything from the optics of interferometers to the distribution of dark matter in the galaxy. I then provided students with a working example of a typical projectile motion model and let them explore. I encouraged them to extend the model to have the projectile launched with an initial vertical displacement.
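The heart of such a model is just a time-stepping loop; in symbols (my notation, not the physutil API), each step of duration $\Delta t$ updates

$$\vec{v} \;\leftarrow\; \vec{v} + \vec{a}\,\Delta t, \qquad \vec{r} \;\leftarrow\; \vec{r} + \vec{v}\,\Delta t,$$

with $\vec{a} = \langle 0, -g, 0 \rangle$ for a projectile, repeated until the projectile reaches the ground. Extending the model to launch from an initial vertical displacement only requires changing the initial position and the stopping condition.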

Later that unit, I introduced the lab practicum which was based on a lab shared by my counterpart at our neighboring high school. The goal of the lab was to characterize the [projectile launcher](http://www.pasco.com/prodCatalog/ME/ME-6800_projectile-launcher-short-range/index.cfm) such that when the launcher is placed on a lab table, the projectile will hit a constant velocity buggy driving on the floor, away from the launcher, at the specified location. The location would not be specified until the day of the lab practicum. No procedure was specified and students decided what they needed to measure and how they wanted to measure it. I also used this as practice for writing clear and concise lab procedures like those required on the free response section of the AP exam.

All groups figured out that they needed to determine the velocity of the car (which some had done the previous year) and the initial velocity of the projectile. Some groups used a technique very similar to the previous year’s projectile motion lab, where a marble is rolled down a ramp and launched horizontally. These groups fired the projectile horizontally from atop the table and measured the horizontal displacement. Groups that calculated the flight time based on the vertical height were more accurate than those that timed the flight with a stopwatch. Another group fired the projectile straight up, measured the maximum height, and calculated the initial velocity. This group was particularly successful. Another group attempted to use a motion sensor to measure the initial velocity of the ball as they fired it straight up. The motion sensor had trouble picking up the projectile, and this group’s data was suspect. A couple of other groups fired the projectile at a variety of angles, timed the flight, and measured the horizontal displacement. Some of these groups later realized that they didn’t really need to perform measurements at a variety of angles.

After each group had gathered its data and calculated the initial velocity of the projectile, I asked the students to practice calculating their launch angle based on a sample target distance. I hadn’t really thought this lab through and didn’t appreciate how challenging it would be to derive an equation for the launch angle as a function of horizontal displacement when the projectile is launched with an initial vertical displacement. It wasn’t until that night that I appreciated the magnitude of this challenge and then realized how this challenge could be used to dramatically improve the value of this lab.
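For anyone curious why the algebra is so unpleasant, here is a sketch in my own notation (launch height $h$, launch speed $v$, horizontal distance to the target $R$). The trajectory is

$$y = h + x\tan\theta - \frac{g x^2}{2 v^2 \cos^2\theta},$$

and requiring $y = 0$ at $x = R$, along with $1/\cos^2\theta = 1 + \tan^2\theta$, turns this into a quadratic in $\tan\theta$:

$$\frac{g R^2}{2 v^2}\tan^2\theta - R\tan\theta + \left(\frac{g R^2}{2 v^2} - h\right) = 0.$$

The quadratic formula and an arctangent finish the job, but keeping that algebra straight is exactly the kind of bookkeeping a computational model sidesteps.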

Most students returned the next day a bit frustrated but with an appreciation of how hard it is to derive this equation. One student, who is concurrently taking AP Physics B and AP Physics C, used the function from his AP Physics C text successfully. Another student amazed me by completing pages of trig and algebra to derive the equation. No one tried to use the range equation in the text, which pleased me greatly (the [found candy](http://www.wired.com/wiredscience/2011/06/dont-eat-candy-you-find-on-the-ground/all/1) discussion must have made an impact on them). As we discussed how challenging it was to solve this problem, I dramatically lamented, “if only there was another approach that would allow us to solve this complex scenario…” The connection clicked and students realized that they could apply the computational model for projectile motion to this lab. Almost all of the groups chose to use the computational model. One student wrote his own model in Matlab since he was more familiar with that than Python. With assistance, all groups were able to modify the computational model and most were successful in hitting the CV buggy. One group dressed for the occasion:

students ready to launch

Students’ reflections on this lab were very positive. They remarked how they appreciated learning that there are some physics problems that are not easily solved algebraically (they are accustomed to only being given problems that they can solve). They also remarked that, while they didn’t appreciate the value of computational modeling at first, using their computational model in the lab practicum showed them its value. I saw evidence of their appreciation for computational modeling a couple of weeks later when a few of the students tried to model an after-school Physics Club challenge with VPython. For me, I was pleased that an oversight on my part resulted in a much more effective unit than what I had originally planned.

Updated Measurement Uncertainty Activities

Like [last year](https://pedagoguepadawan.net/124/measurementuncertaintyactivities/), we started Honors Physics with Measurement Uncertainty activities. Based on last year’s experience, last fall’s Illinois Science Education Conference, and this summer’s QuarkNet workshop “[Beyond Human Error](http://quarknet.fnal.gov/fnal-uc/human-error.html),” we made some minor modifications.

With all of the attention on the LHC’s five-sigma result, there was more context in which to introduce the concept of measurement uncertainty. I mentioned how [calculus and Monte Carlo techniques](http://www.wired.com/wiredscience/2011/08/measurement-and-uncertainty-smackdown/all/1) could be used, but we stuck with the Crank-Three-Times method for this algebra-based class.
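As I use the term, Crank-Three-Times means evaluating a calculated quantity three times: once with the best-estimate values, and once each with the inputs shifted within their uncertainties to make the result as large and as small as possible. With made-up numbers for a distance calculation:

$$d = vt, \qquad v = (2.0 \pm 0.1)\ \text{m/s}, \qquad t = (3.0 \pm 0.2)\ \text{s}$$

$$d_{\text{best}} = 6.0\ \text{m}, \qquad d_{\text{max}} = (2.1)(3.2)\ \text{m} \approx 6.7\ \text{m}, \qquad d_{\text{min}} = (1.9)(2.8)\ \text{m} \approx 5.3\ \text{m},$$

so the result is reported as $d = (6.0 \pm 0.7)\ \text{m}$.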

What was really missing in last year’s activities was how to estimate the measurement uncertainty when performing computer-based experiments. There are many factors that contribute much more significantly to the measurement uncertainty than the computer-based measurement devices themselves. David Bonner presented on “Learning Physics Through Experiments: Significance of Students’ Interpretation of Error” at the Illinois Science Education Conference last fall. One great idea I took away from his session was a simple and effective approach to this challenge: students perform many trials to establish a range of values from which the measurement uncertainty is determined.

We rewrote the fifth station to introduce students to this method. Rather than using stopwatches, we set up two daisy-chained photogates connected to a LabQuest 2 to measure the elapsed time as a cart travels from the first gate to the second. The uncertainty of the LabQuest 2 is insignificant compared to other factors that affect the motion of the cart. Students performed ten trials and determined the measurement uncertainty from the range of values that they measured. We will use this technique throughout the year to estimate the measurement uncertainty.
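Concretely, if the ten trials give elapsed times $t_1, \dots, t_{10}$, one common way to turn the spread into an uncertainty estimate is half the range:

$$t_{\text{best}} = \bar{t}, \qquad \delta t \approx \frac{t_{\max} - t_{\min}}{2},$$

and the measurement is reported as $\bar{t} \pm \delta t$.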

Download (PDF, 35KB)

Holography Resources

This post is primarily for those teachers attending the Summer 2012 QuarkNet Workshop at Fermilab. However, other teachers interested in making holograms may find it useful; if you have questions, please contact me as you won’t have the experience of making your own hologram during the workshop.

Teachers at my school, and most recently I, have learned how to make holograms in the classroom from Dr. Tung H. Jeong, a recipient of the Robert Millikan Medal from the American Association of Physics Teachers for his work in holography. After attending an AAPT workshop led by Dr. Jeong, I refined our techniques for making holograms, and we started making transmission holograms in addition to reflection holograms.

When introducing holography to students, I start with a video from the [How It’s Made TV show](http://www.schooltube.com/video/be196dd21b70b927aea6/) about holography.

I then introduce holography and advise students on how to select objects from which to make a hologram. The slides I use are below.

Download (PDF, 810KB)

We order all of our supplies from Integraf, which is associated with Dr. Jeong. Integraf has [several tutorials](http://www.integraf.com/newsarticles.htm) on their web site which are essential reading:

* [Simple Holography](http://www.integraf.com/a-simple_holography.htm) should be read first. It describes all of the basics of making reflection holograms with many aspects applicable to transmission holograms as well.
* [How to Make Transmission Holograms](http://www.integraf.com/a-make_transmission_hologram.htm). I prefer to make transmission holograms as they have several advantages over reflection holograms. The only disadvantage is that they require laser light to view. However, given how affordable laser pointers (green are best) have become, this disadvantage is becoming less significant.
* [Instructions for JD-4 Processing Kit](http://www.integraf.com/Downloads/JD-4.pdf). I believe this PDF file is the most recent version of the instructions. Similar directions are on the website, but the timings in this document are slightly different.

Supplies

* PFG-03M Holographic Plates (2.5″ x 2.5″, 30 plates, $105, Item #[S3P-06330](http://www.integraf.com/hologram_film_pfg-03m.htm))
* JD-4 Processing Kit ($17, Item #[JD4](http://www.integraf.com/hologram_developer.htm))
* Holography Diode Laser (650nm (red), 4mW, $36, Item #[DL-4B](http://www.integraf.com/hologram_diode_laser.htm))

A [previous post](https://pedagoguepadawan.net/49/holography/) on holography describes how I set up a station and has several pictures.

Reflection and Refraction Activities

We are currently in the midst of the geometric optics unit in my honors physics class and just finished waves, which includes reflection and refraction, in my regular physics class.

My colleagues and I have developed a series of reflection and refraction activities that provide a shared experience that can be leveraged as we explore reflection and refraction of light. In addition, students find these activities engaging and they generate a lot of great questions.

I hope you find a new activity that you can use in class.

Here are the handouts.

Download (PDF, 41KB)

Download (PDF, 38KB)

I don’t have photos of the reflection activities, but I think they are pretty self-explanatory. If not, ask, and I’ll clarify.

I do have photos of the refraction activities. I need to give credit for the first activity, which is a recreation of an AAPT Photo Contest winner from a few years ago.

Colored Paper behind Water Glasses

Pencil in Air, Oil, and Water

Toy Car in Round Beaker

Masses Hiding in Fish Tank (Total Internal Reflection)
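In case it is useful when discussing the fish tank activity, the hiding effect follows from Snell’s law (standard notation, not from my handouts): light traveling in water is totally internally reflected at a water–air boundary when it strikes the boundary at more than the critical angle,

$$\theta_c = \arcsin\!\left(\frac{n_{\text{air}}}{n_{\text{water}}}\right) \approx \arcsin\!\left(\frac{1.00}{1.33}\right) \approx 49^\circ,$$

so from certain viewing angles no light from the masses escapes toward the observer.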

Resources for Middle School Science Activities

When I visited National Instruments and [shared my experiences with STEM in high school](https://pedagoguepadawan.net/164/stem-talk-at-ni/), I talked to a few friends who were involved in various types of science programs for middle school youth. They were interested in activities they could use to help develop fundamental scientific understandings (such as scale) while also being engaging and providing an opportunity to learn about various phenomena. I don’t have any experience at the middle school level, but I reviewed the various projects that I’ve done (or hope to do) with high school students either in class or as part of Physics Club.

If you have a few favorite activities that would be appropriate for these students, please leave a comment. I’ll pass along a link to this post to my friends back in Austin.

Science and Engineering Projects
---

* [Naked Egg Drop](http://noschese180.posterous.com/day-68-naked-egg-drop)
* [Blinkie LED kits (learn to solder)](http://www.2dkits.com/zencart/)
* [Compressed Air Rockets](http://blog.makezine.com/archive/2011/10/how-to-compressed-air-rockets.html)
* [Hovercraft](http://blog.makezine.com/archive/2011/06/some-assembly-required-leaf-blower-hovercraft.html)
* [Brushbots](http://blog.makezine.com/archive/2011/05/in-the-makershed-brushbots.html)
* [Bristlebot](http://blog.makezine.com/archive/2007/12/how-to-make-a-bristlebot.html)
* [Camera phone spectrometer](http://www.wired.com/gadgetlab/2010/10/in-high-school-chem-labs-every-camera-phone-can-be-a-spectrometer/)
* [Glow sticks](http://blog.makezine.com/archive/2010/07/how-to_make_glow_sticks.html)
* [Vortex cannon](http://www.make-digital.com/make/vol15/#pg116)
* [Simple Laser Communicator](http://www.make-digital.com/make/vol16/#pg1)
* [Paper Plate Speakers](http://www.josepino.com/circuits/index.php?howto-speaker.jpc)
* [Tweet-a-Watt](http://blog.makezine.com/archive/2009/01/tweetawatt_our_entry_for_the_core77.html?CMP=OTC-0D6B48984890)
* [Homopolar Motor](http://blog.makezine.com/archive/2009/07/homopolar_motor_from_make_volume_01.html?CMP=OTC-0D6B48984890)
* [Holograms](http://www.integraf.com/a-simple_holography.htm)
* [Cosmic Ray Cloud Chamber](http://quarknet.fnal.gov/resources/QN_CloudChamberV1_4.pdf)

Scale
---

* [Secret Worlds: The Universe Within](http://micro.magnet.fsu.edu/primer/java/scienceopticsu/powersof10/)
* [Building a sense of scale in the classroom](http://quantumprogress.wordpress.com/2011/09/14/building-a-sense-of-scale-in-the-classroom/)
* [The Scale (and Limits) of the Universe](http://scienceblogs.com/startswithabang/2010/10/the_scale_and_limits_of_the_un.php)
* [Scale of the Universe](http://primaxstudio.com/stuff/scale_of_universe/)
* [The Universe by Orders of Magnitude](http://freshphotons.tumblr.com/post/1255372595/natarie-badass)

Citizen Science
---

* [scistarter](http://scistarter.com/index.html)
* [Scientific American’s Citizen Science Initiative](http://www.scientificamerican.com/blog/post.cfm?id=welcome-to-scientific-americans-cit-2011-05-02)

Great Resources
---

* [MAKE Magazine](http://www.makezine.com/)
* [Howtoons](http://www.howtoons.com/)
* [QuarkNet](http://quarknet.fnal.gov/)
* [Science Olympiad](http://soinc.com/)