
Reflection and Refraction Activities

We are currently in the midst of the geometric optics unit in my honors physics class and just finished waves, which includes reflection and refraction, in my regular physics class.

My colleagues and I have developed a series of reflection and refraction activities that provide a shared experience that can be leveraged as we explore reflection and refraction of light. In addition, students find these activities engaging and they generate a lot of great questions.

I hope you find a new activity that you can use in class.

Here are the handouts.

Download (PDF, 41KB)

Download (PDF, 38KB)

I don’t have photos of the reflection activities, but I think they are pretty self-explanatory. If not, ask, and I’ll clarify.

I do have photos of the refraction activities. I need to give credit for the first activity, which is a recreation of an AAPT Photo Contest winner from a few years ago.


Colored Paper behind Water Glasses


Pencil in Air, Oil, and Water


Toy Car in Round Beaker

Masses Hiding in Fish Tank (Total Internal Reflection)

The Physics of Art and the Art of Physics

At the end of the year, we make time for a final project in our General Physics class. We purposefully define a very nebulous standard to provide the ultimate flexibility in this project:


Understand the relationships among science, technology, and society in historical and contemporary contexts.

Last year, due to the topical nature of the Fukushima nuclear disaster, we chose the [topic of nuclear energy](https://pedagoguepadawan.net/45/nuclearphysicsproject/).

This year, a colleague had the fantastic idea to choose a cross-discipline topic: the Physics of Art. I suggested extending the topic to include the Art of Physics. This combined topic, The Physics of Art and the Art of Physics, will allow students to pick one of their passions and explore both its physics and its artistic elements. I expect some fantastic projects.

My colleague created the following introduction document:

Download (PDF, 55KB)

Another created the rubric:

Download (PDF, 50KB)

I created an exemplar:

Download (PDF, 251KB)

I’m using the new (at least to me) feature of [WikiSpaces](http://www.wikispaces.com/) where I can define a project and teams. Each class is its own team, but they can view and comment on other classes’ projects. This will make maintenance of the wiki manageable over multiple years.

I’ll share some of my favorites and let everyone know how this year’s project goes. I have high expectations!

Mechanics Modeling Instruction Reflection

I just finished my second year of Modeling Instruction for mechanics in my regular physics class.

While I attended a mechanics modeling workshop a few years ago, I remember when I first decided to jump into modeling with both feet. I was looking at a problem involving electromagnetic induction that required use of the equation F = BIl. All students had to do was find three numbers (one in teslas, one in amps, one in meters) and multiply them together, without any understanding of physics. This was reinforced when I saw students in the next question trying to solve for resistance using Ohm’s Law and plugging in a velocity instead of a voltage. Many of my students weren’t understanding physics; they were learning to match variables with units and plug and chug. Our curriculum was much wider than it was deep, and I felt that I had to make a change.

Fortunately, my desire to change the emphasis of the curriculum coincided with a county-wide effort to define a core curriculum for physics. While it wasn’t easy, the team of physics teachers at my school agreed that we had to at least cover the core curriculum as defined by the county effort. This was the opportunity to reduce the breadth of the curriculum, focus on understanding and critical thinking, and use Modeling Instruction for mechanics.

I felt that the first year of Modeling Instruction was a huge improvement in terms of student understanding. This past semester was even better. While just one measure, FCI gains reinforce my beliefs. In 2009, the year before introducing Modeling Instruction, my students’ average FCI gain was .33. In 2010, the first year of Modeling Instruction, it was .43. This year, the FCI gain was .47. While I don’t credit Modeling Instruction as the sole factor that produced these improvements in students’ conceptual understanding, it is probably the most significant. We also started standards-based assessment and reporting in 2010 and, hopefully, I’m improving as a teacher in other ways. For me, the most important confirmation that I was on the right path was that I couldn’t imagine going back to the way that I was teaching before.
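(A quick note on the metric: FCI gains are typically reported as normalized gains, the fraction of the possible improvement a class actually achieves from pre-test to post-test. A minimal sketch of that calculation, with made-up class averages rather than my actual scores:)

```python
def normalized_gain(pre_percent, post_percent):
    """Hake-style normalized gain: (post - pre) / (maximum possible - pre)."""
    return (post_percent - pre_percent) / (100.0 - pre_percent)

# Made-up class averages for illustration, not the actual data behind the
# gains quoted above:
print(round(normalized_gain(30.0, 63.0), 2))   # 0.47
```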

The three most important changes that I made this year were: [goalless problems](http://quantumprogress.wordpress.com/2010/11/20/goal-less-problems/), sequencing of units (CVPM, BFPM, CAPM, UBFPM, PMPM), and [revised Modeling Worksheets](http://kellyoshea.wordpress.com/physics-materials/) based on the work of [Kelly O’Shea](http://kellyoshea.wordpress.com/), [Mark Schober](http://science.jburroughs.org/mschober/physics.html), and Matt Greenwolfe.

There is still plenty of room for improvement, however. Pacing was a big issue. We still have to finish mechanics in one semester. As a result of the time spent in other units, I really had to rush energy and momentum. While students could connect many concepts in the momentum unit to previous models, energy was completely different. However, this experience had a silver lining in that it may provide hope for other teachers who want to adopt Modeling Instruction but are concerned that they won’t have time to cover their curriculum. I decided at the beginning of the semester that I would spend the time I felt was needed on each unit to develop the underlying skills of critical thinking, problem solving, and conceptual understanding. When I got near the end of the semester and had to fly through energy, I didn’t introduce it as another modeling unit. Instead, I presented it to the students as another representation of mechanics. I encouraged them to apply their critical thinking and problem solving skills to this different approach. I was pleasantly surprised when they did as well as previous years’ classes on the energy summative exam despite the incredibly short amount of time we spent on the unit. I think this supports the idea that students versed in Modeling Instruction have a strong foundation that allows them to readily understand unfamiliar topics as well as, if not better than, students who covered those topics in a traditional fashion.

Whiteboarding continues to be an area that requires improvement. I made a couple of changes that improved the level of discourse among students. When whiteboarding labs, I either explicitly jigsawed the lab activities or guided groups to explore different areas such that each group had unique information to present to the class. This variety improved engagement and discussion. When whiteboarding problems, we played the mistake game on several occasions. This too increased engagement and discussion. However, I feel that I still have a long way to go to achieve the Socratic dialogue that I believe is possible.

Next fall, I will dramatically shorten the first unit, which focuses on experimental design and analysis. I will probably still start with the bouncing ball lab but then immediately move on to the constant-velocity buggies. That should allow enough time to explore energy and momentum in a more reasonable time frame.

At least I feel like I’m on the right path.

Honors Physics Über Review Problem

Honestly, I never look forward to reviewing before exams. We have a dedicated review day at our school and I have never found it particularly engaging or effective for students. A few students have a list of specific questions to ask, and they benefit from the answers and discussions, but many do not.

This year, in Honors Physics, the calendar was such that we ended up having three days to review for the semester exam. My colleague had a great idea: create the Über Physics problem (also known as the problem that never ends). Our goal was to review every one of our twelve [more-challenging standards](https://pedagoguepadawan.net/119/honorsphysicsstandards/). We brainstormed a sequence of events that could be woven into a story. At the start of class, we introduced the story for that day and then left students to work through the problems with each other, ask questions about needed information, and check answers. The next day, we would summarize the previous day’s events, associated standards, and solutions before introducing the next “chapter” of the story. For the past three days, students were the most engaged during review that I have ever witnessed. They were interested in the story and excited by what the next “chapter” might bring. These problems were challenging, which I believe also contributed to the interest.

Some simplifying assumptions were made but the students weren’t too critical. Unfortunately, I made a calculation error that affected the third day’s problems. When the error was corrected, the final coefficient of friction was ridiculous. I’ll have to adjust the story if I do this again next year.

While much of the story was conveyed verbally, I’ll share the rudimentary pictures that I drew and some of the specified variables. Each page corresponds to one day’s part of the story. The perspective of the diagram changes at times to show the necessary information. The answers are written in green or red and were provided one day after that part of the story was presented.

Download (PDF, 315KB)

The Danger of Misapplying Powerful Tools

When I was a software engineer, I frequently used powerful tools such as C++ and techniques such as object-oriented analysis and design to implement software that performed complex operations in an efficient and effective manner. I also spent a lot of time sharing these with others. However, I learned to provide a caveat: if misapplied, these tools and techniques can result in a much more significant problem than would result when applying less powerful ones. That is, if you are not skilled in the deployment of these tools and techniques, the risk is much larger than the benefit.

Other engineers didn’t always appreciate this caveat. So, I would try to communicate with an analogy. You can build a desk with a saw, hammer, screwdriver, and drill. You can build a desk more efficiently using a table saw, drill press, and nail gun. If you make a mistake with the hammer, you may lose a fingernail. If you make a mistake with the table saw, you may lose a finger. If you are not adept at deploying the tools and techniques, maybe you should stick with the hand tools until you are.

In reality, the risk of misapplying these tools and techniques is more significant than the impact on the immediate project. The broader risk is that others who observe the troubled project associate the failure with the tools and techniques instead of the application of those tools and techniques. People get the impression, and share their impression, that “C++ and object-oriented analysis and design are a load of crap. Did you see what happened to project X?” Rarely do people, especially people not skilled with these tools and techniques, have the impression that the problem is the application of the tools and techniques rather than the tools and techniques themselves. This is a much more serious risk because the now tarnished reputation threatens future, proficient applications of the tools and techniques.

A series of articles and posts recently reminded me of my experience writing software and this analogy. I feel compelled to start with a disclaimer since this post has the potential to come across as arrogant, which is certainly not my intention. I have not performed any longitudinal studies that support my conclusions. My conclusions are based on a few observations and my gut instinct. I tend to trust my gut instinct since it has served me well in the past. So, if you find this post arrogant, before you write me off, see if these ideas resonate with your experience.

**SBAR**

Let’s start with Standards-Based Assessment and Reporting (SBAR) (a.k.a. Standards-Based Grading (SBG)). Last year, my school started [adopting SBAR school-wide](https://pedagoguepadawan.net/23/growingsbarschoolwide/). SBAR is a powerful methodology that requires proficient deployment. It is not easy to adapt SBAR to a classroom in a way that resonates with parents, students, teachers, and administrators. Proper deployment requires a fundamental change in the teacher’s and students’ philosophy of learning. While the effect of a failed deployment on the individual classes is unfortunate, the larger problem is that teachers and parents attribute the problems to SBAR and not to its application. It takes much less effort to convince a parent confused about SBAR of its value than it does to convince a parent livid about SBAR due to a poor experience in another class. At my school, one early SBAR adopter stopped referencing SBAR or SBG at all in his class to distance his methodology from the problematic applications. Fortunately, my school has pulled back a bit this year. This is the risk of mandating application of a powerful tool by those not proficient in its deployment. This is not [a unique experience](http://t-cubed-teaching.blogspot.com/2011/10/sbg-goes-up-in-smoke.html).

Two years ago, another teacher and I decided to try applying SBAR to our Honors Physics class. We mitigated the risk by limiting deployment to six sections of a single class taught by just the two of us. We sent letters to parents, talked to parent groups, and discussed the system with students during class. Only after gaining a year of experience did we attempt to adapt SBAR to our General Physics class, which contained ten sections and was taught by four different teachers. The risk of trying to deploy SBAR on this scale initially was too great given our proficiency.

**Technology**

Someone recently shared [this New York Times article](http://www.nytimes.com/2011/09/04/technology/technology-in-schools-faces-questions-on-value.html?_r=2&pagewanted=all) that questions the value of technology in the classroom. In general, a given piece of technology on its own isn’t inherently effective or ineffective. Whether technology is effective depends as much on its application as on the technology itself. It depends on the teacher and the students and the class. Personally, I’ll stick with my [$2 interactive whiteboards](http://fnoschese.wordpress.com/2010/08/06/the-2-interactive-whiteboard/). This isn’t because SMART Boards are inherently ineffective. It is because they aren’t effective for me and my students given my classroom and my expertise. I expect there are teachers out there who use SMART Boards quite effectively. They are probably sick of hearing how they are a complete waste of money.

I hope to have a class set of iPads at some point this year. My school isn’t going to buy iPads for every student. Instead, we’ll put iPads in the hands of 25 General Physics students in my classroom and see what we can do together. Start small, reflect, adjust, expand.

**Modeling**

I participated in a [Modeling Instruction Physics](http://modeling.asu.edu/) workshop in the summer of 2008. I didn’t dare to really start modeling in my classroom until last fall. Why? I believed that the potential risk to my students from a misapplication of the modeling methodology was tremendous. I decided that it was better for my students to learn what they could via more traditional instruction than to risk what I foresaw as a potential disaster if I misapplied the methodology. Even more importantly, I was concerned that I could put Modeling Instruction at risk of never being adopted if my failed deployment was interpreted as a failure of Modeling Instruction itself. Only after more research, practice of Modeling Instruction techniques, and discussions with others did I feel comfortable deploying Modeling in my class last fall. In an attempt to shield modeling from my potential deployment failures, this is the first year that I’ve associated the label “Modeling Instruction” with my class.

I used to be surprised at how adamantly some Modelers warned teachers not to do Modeling Instruction unless they had taken a workshop. I now believe they are worried about the same potential risk that I am. Modeling Instruction is a collection of powerful tools and techniques. Done well, by a skilled practitioner, Modeling Instruction can be incredibly effective. Applied ineffectively, Modeling Instruction can be a disaster and tarnish its reputation. I think students are better served by traditional instruction than by Modeling Instruction applied ineffectively. Traditional instruction may result in a lost fingernail. Ineffective modeling instruction may result in a lost finger. There, I said it. Disagree in the comments. Just don’t take that quote out of context.

While not directly related to modeling, I believe [this recent article](http://www.palmbeachpost.com/news/schools/science-teachers-at-loxahatchee-middle-school-strike-back-1916851.html?viewAsSinglePage=true) supports my conclusions. The problem isn’t that hands-on labs are ineffective; it is that ineffective deployment of hands-on labs is ineffective.

**Conclusion**

I don’t want the thoughts I’ve shared here to paralyze you into inaction. Rather, I hope that I’ve encouraged you to make sure that you have sufficient expertise to apply your powerful tools and techniques in an effective manner. Your students will benefit, and the reputation of these powerful tools and techniques will benefit as well.

How do you do this?

* Attend professional development opportunities (e.g., [Modeling Instruction Workshops](http://modeling.asu.edu/MW_nation.html)) that increase your skill with these powerful tools and techniques.
* Apply these powerful tools and techniques in a limited manner as you gain experience and expertise.
* Participate on Twitter, start a blog, read a bunch of blogs, participate in online discussions (e.g., [Global Physics Department](http://globalphysicsdept.posterous.com/#!/)), and subscribe to email lists to accelerate your knowledge of these powerful tools and techniques.
* Observe [skilled practitioners](http://quantumprogress.wordpress.com/2011/08/25/my-grading-sales-pitch/) of these tools and techniques, [find a coach](http://quantumprogress.wordpress.com/2011/10/06/taking-my-pln-to-the-next-level—virtual-coaching/) to observe you, welcome feedback from everyone.

N3L Activity Stations

While the [Newton’s 1st Law activities](https://pedagoguepadawan.net/147/n1lactivitystations/) serve as a short, fun introduction, the Newton’s 3rd Law activities provide a shared experience that spans several classes. The activities that the students explore are selected to highlight the most common preconceptions that students have about Newton’s 3rd Law. I stress how important free-body diagrams are as a tool in their physics toolbox and that, once they are adept at drawing free-body diagrams and actually trust them, they will be able to explain a number of counterintuitive situations. I introduce these activities by stating that Newton’s 3rd Law is one of the most easily recited laws of physics and yet the least understood. Here are the activities:

Download (PDF, 35KB)

**Sequential Spring Scales**


The spring scales are initially hidden under the coffee filters. Only after students make their prediction are the coffee filters removed. Most students do not predict that the spring scales will read 10 N. Some predict 5 N (the spring scales split the weight). Some predict 20 N (10 N each way adds up to 20 N). In addition to drawing the free-body diagrams, this scenario can be explored further by asking students to predict the reading on the scales if one of the weights is removed and the string is tied to a clamp instead.

**Bathroom Scale**

This station provides an important shared experience that we will refer back to when discussing the elevator problems later in the unit. This station also generates a number of excellent questions such as “would the scale work on the moon?” and “how could you measure mass on an unknown planet?”

**Twist on Tug-of-War**


Students were very interested in this station this year since they were in the midst of Homecoming Week and inter-class tug-of-war competitions were being held. It may have been the first time free-body diagrams were used in the planning of a tug-of-war team’s strategy. The dynamics platform in the photo is a cart built from plywood and 2x4s with rollerblade wheels, and it has very little friction. Most students claim that whoever wins the tug-of-war pulls harder on the rope than the person who loses. Only after drawing the free-body diagram, and trusting it, do they realize this is not the case.

**Medicine Ball Propulsion**


This is a fairly straightforward station. I often wander by and ask the students exploring it why they don’t move backwards when playing catch under normal circumstances. I also check at this point to see whether they are convinced that the force on the ball by them is equal to the force on them by the ball.

**Computerized Force Comparison**

*This is the most important station in that it helps students truly appreciate Newton’s Third Law.* I set up several of these stations to make sure that everyone has an opportunity to watch the graph in real time as they pull on the force sensors. This is the standard Modeling activity for Newton’s 3rd Law. For students still struggling to accept Newton’s 3rd Law while working through this activity, I challenge them to find a way to pull on the two sensors such that the forces are not equal in magnitude and opposite in direction. This activity also counters the misconception promoted by some textbooks (perhaps unintentionally) that the “reaction” force follows the “action” force. Students can clearly see that both forces occur at the same time. (We refer to paired forces according to Newton’s 3rd Law, not action-reaction forces.)

**WALL-E and the Fire Extinguisher**

Who doesn’t love WALL-E? I repeatedly loop through a clip from the [WALL-E trailer](http://youtu.be/ZisWjdjs-gM?t=2m26s). In addition to the questions on the handout, I ask students what is incorrect about the physics in the scene. This year, I also showed students this clip that [Physics Club](http://physicsclub.nnscience.net/) filmed several weeks ago:

Next-Time Questions

One of my favorite resources for developing conceptual understanding of physics is Paul Hewitt’s Next-Time Questions. Older ones are [hosted by Arbor Scientific](http://www.arborsci.com/Labs/CP_NTQ.aspx), and every month a new one is published in [The Physics Teacher](http://tpt.aapt.org/).

These questions often appear deceptively simple. However, a student’s first impression is often incorrect. I find that these are a great way to discuss and refine preconceptions. These questions are intended to be presented during one class and not discussed until the next. I always have students who are so excited to share their answer that they are practically bouncing in their seats. I have to remind them that these are “next-time” questions and, therefore, we will discuss them the next time we meet. I encourage them to discuss the questions with their friends over lunch or after school.

Hewitt implores us to use them as he intends:


Although these are copyrighted, teachers are free to download any or all of them for sharing with their students. But please, DO NOT show the answers to these in the same class period where the question is posed!!! Do not use these as quickie quizzes with short wait times in your lecture. Taking this easy and careless route misses your opportunity for increased student learning to occur. In my experience students have benefited by the discussions, and sometimes arguments, about answers to many of these questions. When they’d ask for early “official” answers, I’d tell them to confer with friends. When friends weren’t helpful, I’d suggest they seek new friends! It is in such discussions that learning takes place.

Here is one that I recently used during the Balanced Force Particle Model unit.

Next-Time Question

The next time my class met, the discussion of this question consumed almost the entire class time. The discussion started with a review that the forces must be balanced since the book is at rest (the special kind of constant velocity where the velocity is zero). We practiced drawing the free-body diagram for the book, which was a good review of the force of friction and the normal force. We were just beginning to explore vector components, and this was a great introduction since the force from the woman’s hand is directed both upward and to the right. We then debated whether the force of friction should be directed upward or downward. Students had valid arguments for each. Another student asked whether there was a force of friction at all. Eventually, we drew three different free-body diagrams for the cases where there is no friction, where there is friction directed upward, and where there is friction directed downward. A fantastic discussion, all centered around a single drawing and a simple question.
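If you want to see the three cases with numbers attached, here is a rough sketch. It assumes my reading of the setup (a book held at rest against a wall, with the hand pushing up and toward the wall at some angle above horizontal) and uses illustrative values rather than anything from Hewitt’s actual question:

```python
import math

def wall_friction_needed(weight_N, hand_force_N, hand_angle_deg):
    """Vertical force balance for a book held at rest against a wall by a hand
    pushing up and toward the wall at hand_angle_deg above horizontal.
    Returns the friction force the wall must supply: positive means friction
    acts upward, negative means downward, zero means no friction is needed."""
    hand_vertical = hand_force_N * math.sin(math.radians(hand_angle_deg))
    return weight_N - hand_vertical

# Illustrative numbers only (not the values in the Next-Time Question):
for push in (5.0, 14.1, 20.0):   # hand force in newtons, applied at 45 degrees
    friction = wall_friction_needed(weight_N=10.0, hand_force_N=push, hand_angle_deg=45.0)
    print(f"push = {push:4.1f} N  ->  friction needed = {friction:+5.1f} N")
```

Push gently and the friction points up, push hard enough and it flips downward, and at one particular push there is no friction at all, which is exactly the set of cases we drew.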

Some time ago, I reviewed every next-time question, downloaded those that aligned with concepts we cover, and copied them into unit folders so I would remember to use them when the time was appropriate. Now, I just review each month’s next-time question in The Physics Teacher and file it appropriately.

Give one a try in class. I think you and your students will love it.

Physics Club and the Row-Bot Challenge

Three years ago my instructional coordinator encouraged me and another physics teacher to start an after-school club for students to “do cool physics stuff.” That first year, we focused on building small projects related to physics. We built candle-powered steam engines, homopolar motors, LED throwies, vibrobots, and styrofoam plate speakers. Two years ago, we started with the small projects, but then the students were inspired to launch a near-space balloon. Once the students set their minds to launching their own near-space balloon, the club transitioned from a primarily teacher-led organization to a student-led one.

Last year, we started with a ping pong ball launcher challenge. After this kickoff, students decided to build a large hovercraft in the fall and then take it on tour to share with the community and excite people, especially younger students, about STEM. In the spring, we [launched our second near-space balloon](https://pedagoguepadawan.net/60/nearspaceballoon/).

While Physics Club has increased in popularity and size in the past three years, we were amazed when over fifty students stayed after school on Friday to join Physics Club. We’re still figuring out how to keep this many students engaged and what our big project will be for the fall. To keep everyone active while we figure this out, we introduced the 2011 Physics Club Row-Bot Challenge:

The club will document this project on [its web site](http://physicsclub.nnscience.net/rowbots). I’ll let you know how it goes.

Why the Row-Bot Challenge? Well, we are considering building some sort of remote-controlled craft that can film video hundreds of feet underwater. This challenge may be a good precursor for that.

In addition to kicking off the challenge, the students had a great time filming with the high-speed camera. They are still trimming the footage and preparing the website, but here’s one of my favorites:

We also borrowed a thermal imaging camera that is normally used to diagnose computer hardware issues. While we don’t let the students use this camera, we still found some interesting things to image. One of my favorites was this comparison of incandescent, CFL, and LED light bulbs:

thermal images of light bulbs

While not planned, we also debunked those ghost TV shows. One student noticed that the camera was picking up what appeared to be a thermal ghost inside the adjacent room. This was puzzling until another student realized that the “ghost” was simply my infrared reflection off the glass door in the adjacent room. Science for the win!

Holometer: Computer-Based Measurements

As I described in the [last post](https://pedagoguepadawan.net/94/holometercorrelatedinterferometers/), the holographic noise can be detected if we measure the correlated noise between two adjacent interferometers. In order to do this effectively, the following requirements were specified:

* Digitize two analog signals, one from each interferometer, at 50 MS/s (50 mega-samples per second). (This is faster than is actually required since we will be looking for the holographic noise near 4 MHz.)

* Stream all data to disk for offline analysis (2 channels at 50 MS/s = 200 MB/s). (This is critical for traceability and external verification of results.)

* Perform spectral analysis in real time for experiment tuning. (Eliminating sources of noise and tuning the experiment will take considerable time and is only feasible if the effect of changes can be observed in near real time.)

This last requirement is the kicker. Calculating power spectra and cross power spectra for multiple signals in software is time consuming. Today’s computers simply do not have the processing power to perform these calculations at a rate of 50 MS/s. In the past, if a software solution wasn’t possible, the only option was designing custom hardware, which wasn’t feasible for most people and most projects. However, there is now a middle ground between software and hardware: Field Programmable Gate Arrays (FPGAs). An FPGA is a hardware device that can take on multiple “personalities.” It consists of a variety of logic resources and the ability to interconnect these resources based on a specified description. While the tools to develop FPGAs have come a long way, they are still beyond the grasp of most scientists and engineers.
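To get a feel for why this requirement is the kicker, here is some back-of-the-envelope arithmetic. The channel count and sample rate come from the requirements above; the two-bytes-per-sample figure and the 4096-point FFT length are my own assumptions for illustration:

```python
# Rough scale of the real-time requirement (illustrative numbers only).
channels = 2
sample_rate = 50e6        # samples per second, per channel
bytes_per_sample = 2      # assumed: 14-bit samples stored as 16-bit integers
fft_length = 4096         # hypothetical block size for the spectral analysis

stream_rate = channels * sample_rate * bytes_per_sample   # bytes per second to disk
ffts_per_second = channels * sample_rate / fft_length     # FFTs needed to keep up

print(f"streaming: {stream_rate / 1e6:.0f} MB/s, about {stream_rate * 3600 / 1e9:.0f} GB per hour")
print(f"spectral analysis: roughly {ffts_per_second:,.0f} FFTs of {fft_length:,} points, every second")
```

Sustaining tens of thousands of FFTs per second, plus the complex multiplications for the cross power spectrum, continuously and without dropping samples, is what pushes this beyond a software-only solution.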

Let’s take a step back and look at this in a historical context. Fifteen years ago I was designing integrated circuits using VHDL (the VHSIC Hardware Description Language, where VHSIC stands for very-high-speed integrated circuit) in a graduate class. If software wasn’t fast enough, you designed your own hardware. You would simulate and test your design and then send it off to be manufactured. Shortly thereafter, I started working in the field of computer-based measurements when I joined [National Instruments](http://ni.com/), developing driver software for DAQ (Data AcQuisition). At that time we had just started supporting the PCI bus along with NuBus (Mac), PCMCIA, and AT (Windows) as well as some other buses I would rather not mention. Our high-end DAQ card was a multifunction card that could acquire analog signals at 20 kS/s.

Fast forward to the present. A lot of instrumentation is used in the development of this experiment:

instrumentation

However, almost all that is required to satisfy the above requirements is contained in just part of this PXIe chassis:

PXI Chassis

Slots 2 and 3 contain R-Series devices that are used to run control loops to keep the laser locked. While this is an incredibly interesting and sophisticated application, it is not related to the above requirements. Slot 6 contains an NI PXIe-5122 digitizer. It is a 2-channel, 14-bit digitizer that can sample at 100 MS/s. While we only need to sample at 50 MS/s, we actually run at 100 MS/s because that allows us to leverage the built-in 35 MHz antialiasing filter. The binary data is streamed from the digitizer to the controller (slot 1) and then to the NI HDD-8265 12-drive RAID array (not pictured). These devices satisfy the first two requirements quite well.

As I mentioned, the most challenging and interesting part of this application is computing the power spectra and cross power spectrum at 50 MS/s. In slot 7 is an NI PXIe-7965R FlexRIO device, which contains the largest FPGA available from National Instruments. Often this device is used in conjunction with an analog front-end module. However, since we already had the 5122 and wanted to take advantage of its calibration, filtering, and synchronization features, we used the 5122 as the analog front end for the FlexRIO device. The 5122 and the 7965R support peer-to-peer streaming; the controller isn’t involved in this streaming, and the data doesn’t even leave the bus segment. I programmed the FPGA on the 7965R using the FPGA Toolkit for LabVIEW. This enabled me to write familiar-looking LabVIEW code (within certain constraints) and then leverage the LabVIEW compiler and Xilinx tools to produce the FPGA bitfile. The FPGA calculates and accumulates the power spectra for the two channels and the cross power spectrum between them using an optimized and sophisticated algorithm based on the approach used for the GEO 600 experiment. We hope to publish this IP through National Instruments in some way to share it with the community. The accumulated power spectra and cross power spectrum are streamed from the FlexRIO to the controller for normalization, display, and logging.
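For readers who don’t live in LabVIEW or FPGA land, here is a rough, host-language sketch of the division of labor just described: one piece accumulates raw power spectra and the cross power spectrum block by block (the FPGA’s job), and another normalizes the accumulated results for display and logging (the controller’s job). The function names and the Python itself are mine; this only illustrates the structure, not the actual implementation or the GEO 600 algorithm.

```python
import numpy as np

def accumulate_block(acc, block_a, block_b, window):
    """FPGA-side idea: fold one block's power spectra and cross power
    spectrum into the running accumulators."""
    A = np.fft.rfft(window * block_a)
    B = np.fft.rfft(window * block_b)
    acc["aa"] += (A * np.conj(A)).real   # power spectrum, channel A
    acc["bb"] += (B * np.conj(B)).real   # power spectrum, channel B
    acc["ab"] += A * np.conj(B)          # cross power spectrum (complex)
    acc["n"] += 1
    return acc

def normalize(acc):
    """Controller-side idea: average the accumulated spectra."""
    n = acc["n"]
    return acc["aa"] / n, acc["bb"] / n, acc["ab"] / n

# Simulated data standing in for the two digitizer channels:
block_size = 4096
window = np.hanning(block_size)
acc = {"aa": np.zeros(block_size // 2 + 1),
       "bb": np.zeros(block_size // 2 + 1),
       "ab": np.zeros(block_size // 2 + 1, dtype=complex),
       "n": 0}
rng = np.random.default_rng(1)
for _ in range(100):
    acc = accumulate_block(acc, rng.standard_normal(block_size),
                           rng.standard_normal(block_size), window)
psd_a, psd_b, csd_ab = normalize(acc)
```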

This past Friday was my last day at Fermilab working on the Holometer experiment. It was quite satisfying to watch the noise floor of the cross power spectrum of the two photodiodes drop as the application ran. While I didn’t finish everything I wanted to, and I expect there are certainly bugs left to find and fix, I at least left them with a solid application that satisfies the requirements. I look forward to stopping in over the next year and seeing their progress!

*Disclaimer. I obviously used to work at National Instruments. I have a lot of friends who work at NI. As a shareholder, I want NI to be successful. This post may seem a bit evangelical, but you have to admit that it is pretty amazing what a high school teacher can do in six weeks with off-the-shelf hardware and software.*


This post is one in a series about The Holometer experiment and my work at Fermilab in the Summer of 2011:

* [Holometer: Holographic Noise](https://pedagoguepadawan.net/66/holographicnoise/)
* [Holometer: Interferometer](https://pedagoguepadawan.net/68/holometerinterferometer/)
* [Holometer: Spectral Analysis](https://pedagoguepadawan.net/81/holometerspectralanalysis/)
* [Holometer: Transverse Jitter](https://pedagoguepadawan.net/83/holometertransversejitter/)
* [Holometer: Correlated Interferometers](https://pedagoguepadawan.net/94/holometercorrelatedinterferometers/)
* Holometer: Computer-Based Measurements


Holometer: Correlated Interferometers

In the [previous post](https://pedagoguepadawan.net/83/holometertransversejitter/), I attempted to explain why an interferometer is susceptible to the holographic noise. However, this holographic noise is just one of many sources of noise that would be detected by the photodiode and digitizer. Some of these other sources of noise are more powerful than the holographic noise. Given that, how do we measure the holographic noise with an interferometer? We don’t. We measure the correlated noise between two adjacent interferometers. I [previously explained](https://pedagoguepadawan.net/81/holometerspectralanalysis/) how correlated noise that would otherwise be undetected can be measured by calculating a cross power spectrum. That is exactly how the Holometer, which is a pair of adjacent interferometers, measures the holographic noise.
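If you’d like to see the idea in action without any hardware, here is a small simulation (entirely my own toy numbers, not Holometer data). Each of the two simulated detectors has its own independent noise plus a weak signal common to both; averaging the cross power spectrum over many blocks makes the shared signal stand out while the uncorrelated noise averages away:

```python
import numpy as np

rng = np.random.default_rng(0)
block, n_blocks = 4096, 2000
n = np.arange(block)
common = 0.1 * np.sin(2 * np.pi * 200 * n / block)   # weak signal shared by both detectors (bin 200)

acc = np.zeros(block // 2 + 1, dtype=complex)
for _ in range(n_blocks):
    a = rng.standard_normal(block) + common   # detector 1: its own noise + the common signal
    b = rng.standard_normal(block) + common   # detector 2: its own noise + the common signal
    acc += np.fft.rfft(a) * np.conj(np.fft.rfft(b))

cross = np.abs(acc) / n_blocks
print("strongest bin:", cross.argmax())                        # 200 -- the shared signal
print("peak vs. typical bin:", round(cross[200] / np.median(cross)))
```

Each detector’s own spectrum is dominated by its independent noise, yet the averaged cross spectrum picks out the buried common signal. That, in miniature, is what the pair of interferometers is for.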

The elephant in the room and, in my limited experience, the most challenging idea related to this experiment is: why would the holographic noise from two adjacent interferometers be correlated? I’ve struggled trying to answer this question more than I’ve struggled with anything else this summer. I finally realized that one reason I was struggling so much is that I set myself up to fail from the start. I’ve been trying to explain this correlation from the perspective of quantum mechanics and general relativity. That approach is a nonstarter because **quantum mechanics and general relativity don’t explain the holographic principle**. This is New Physics, Planckian Physics! Another reason that I’ve struggled trying to answer this question is, quite simply, that this is crazy stuff. Think about double-slit experiments with electrons, Bell inequalities, the Einstein–Podolsky–Rosen experiment, or the Hanbury Brown and Twiss interferometer. All of these are crazy. Most people wouldn’t believe them except for the fact that they have been demonstrated experimentally. The same can be said of the Holometer: most people may not believe that the holographic noise is present or correlated until we measure it.

Let’s take a step back and look at light cones. A light cone contains the volume that defines the possible paths of light, originating at some event, through spacetime. Two of the dimensions of the cone are spatial and the third is time. The upward-opening cone is the future light cone that describes the potential paths of the light after the event, and the downward-opening cone is the past light cone. Only events within the past light cone can affect the event. This is called causality.

LightCones

(*figure 1: past and future light cones (source: Wikipedia)*)

The following diagram illustrates what is referred to as the causal diamond for an interferometer. The top half of the diamond is the past light cone of the beamsplitter reflection. The bottom half of the diamond is the future light cone of another reflection from the beamsplitter. The causal diamond is the intersection of these light cones. The red lines are the arms of the interferometer.

inteferometer space-time diagram

(*figure 2: interferometer causal diamond (source: Professor Craig Hogan)*)

This is the most important diagram of the experiment. Someone inscribed it on the concrete slab that will support one end of the interferometers:

causal diamond inscription

(*figure 3: interferometer causal diamond preserved in concrete*)

We are going to focus on the “wedge” of the causal diamond defined by the arms of the interferometer. The greater the overlap of these causal-diamond wedges for a pair of interferometers, the greater the correlated holographic noise. Therefore, these two adjacent interferometers would exhibit uncorrelated holographic noise:

uncorrelated interferometers

(*figure 4: interferometer pair with non-overlapping causal diamonds*)

While these two adjacent interferometers would exhibit highly correlated holographic noise:

correlated interferometers

(*figure 5: interferometer pair with overlapping causal diamonds*)

Why? The key idea is explained in the [Holometer Proposal](http://holometer.fnal.gov/presentations/holometer-proposal-2009.pdf).

In the holographic effective theory built on light sheets, time and longitudinal position are identified. Measurement of a position at one point on a light sheet collapses the wavefunction at other points on the wavefront, even though they have spacelike separation. The apparent motion is thus in common across a significant transverse distance— not only across a macroscopic beamsplitter, say, but even between disconnected systems.

Let’s look at a top-view perspective of the two overlapping interferometers to examine this idea:

correlated wavefronts

(*figure 6: wavefront in pair of correlated interferometers*)

As the wavefront travels to the right in interferometer #1, the collapsing wave function results in a common motion of the two interferometers where their two causal diamonds overlap. The same occurs with the adjacent arms in the perpendicular direction. Therefore, these two interferometers will have highly correlated holographic noise. However for these two interferometers:

uncorrelated wavefronts

(*figure 7: wavefront in pair of uncorrelated interferometers*)

As the wavefront travels to the right in interferometer #1, the collapsing wave function doesn’t affect interferometer #2. There would not be any common motion between the arms in the x direction and, therefore, the two interferometers would have uncorrelated holographic noise.

The Holometer experiment tests the hypothesis illustrated by figures 6 and 7. We build a pair of interferometers as illustrated in figure 6. We then isolate the correlated holographic noise between the two interferometers. Then, to prove that the noise is truly due to the Holographic Principle, we adjust the beamsplitter to send the light down a fifth arm so that the experiment is arranged as illustrated in figure 7. We should then be unable to identify correlated holographic noise between the two interferometers. QED.

All that is left for me to present is how we are going to measure the correlated holographic noise. That will be the topic of the final post. Ah, finally something in which I have some expertise!


This post is one in a series about The Holometer experiment and my work at Fermilab in the Summer of 2011:

* [Holometer: Holographic Noise](https://pedagoguepadawan.net/66/holographicnoise/)
* [Holometer: Interferometer](https://pedagoguepadawan.net/68/holometerinterferometer/)
* [Holometer: Spectral Analysis](https://pedagoguepadawan.net/81/holometerspectralanalysis/)
* [Holometer: Transverse Jitter](https://pedagoguepadawan.net/83/holometertransversejitter/)
* Holometer: Correlated Interferometers
* [Holometer: Computer-Based Measurements](https://pedagoguepadawan.net/111/holometercomputerbasedmeasurements/)