Common Concerns About the Force Concept Inventory

Charles Henderson

The Force Concept Inventory (FCI) is currently the most widely used assessment instrument of student understanding of mechanics [1]. This 30-item multiple-choice test has been very valuable to the physics education community by helping to show that students can solve common types of quantitative problems without a basic understanding of the concepts involved [2]. Since the test is so easy and quick to administer, many physics instructors have given it to their classes and have been surprised by their students' low scores. This has, in part, helped to fuel the growing interest in physics education research. Although there have been some concerns about how to interpret FCI scores in terms of relating them to deep-seated student misconceptions or coherent ideas of force [3], the value of the test for helping to evaluate the effect of curricular changes is widely accepted. One of the most useful things about the FCI is that national data are available showing that test results depend, to a large degree, on instructional practices; these data give curricular reformers a scale on which to measure their success [4].

At the University of Minnesota we have been using the FCI since 1993 to gather data about our introductory calculus-based physics courses. The University of Minnesota is in an unusual position in that we have been modifying the curriculum of all five sections of introductory calculus-based physics (rather than a single experimental section, as is commonly done at other universities). Because of the large scale and duration of our reform efforts, we are accountable to many faculty members, and we frequently make use of FCI scores in describing course outcomes. We find that faculty members often have questions concerning the administration and interpretation of the FCI.
[Author biography: Charles Henderson is an assistant professor of physics at Western Michigan University. He has taught physics in a variety of settings, including high school, community college, four-year college, and research university. Henderson's research interests lie in the area of physics education, particularly in understanding physics teachers' beliefs and values about teaching and learning. Such research will help improve physics teaching by facilitating communication between the physics education research community and the larger community of physics teachers. The data for this paper were collected while he was a doctoral student with the Physics Education Research Group at the University of Minnesota. Western Michigan University, Kalamazoo, MI 49008-5252; Charles.Henderson@wmich.edu]

THE PHYSICS TEACHER, Vol. 40, December 2002, p. 542

We are well positioned to address many of these questions, since we have an unusually large amount of student FCI data that come from students who were all given fairly similar instruction. This paper addresses the following questions:

1. Can the FCI be used as a placement test?
2. How do we know that students take the FCI seriously when it is not graded?
3. Doesn't giving the FCI as a pre-test influence the post-test results?

All of the data gathered for this paper are from the introductory calculus-based physics courses at the University of Minnesota, and it is important for readers from other institutions to keep in mind that results obtained in their instructional settings might not be similar to those presented in this paper (as mentioned previously, it is well known that FCI scores depend on instructional practices).

Setting

About 850 students enroll in introductory calculus-based physics each fall semester at the University of Minnesota. The goal of our course is to have students learn physics by solving problems. We attempt to accomplish this goal through an instructional practice known as cooperative group problem solving that has been described elsewhere [5]. Students select one of five lecture sections meeting at various times during the day. Each lecture section has between 80 and 250 students. The course follows a traditional model of three lecture hours, two lab hours, and one recitation hour each week. The focus of the lectures varies somewhat with the lecturer; however, all labs and recitation sessions use cooperative group problem solving. Each lecturer is responsible for writing quizzes during the semester, and all five lecturers collaborate to write a common final exam.

FCI Testing

The first thing that students do during the first lab session is take the FCI as a pre-test. The test is offered to all students who come to the first lab. Students are told that the test is voluntary and that their participation in the testing will not affect their course grade. The post-test has been given on the final exam (1997 and 1998) or during the last lab session (1999). When it was given on the final exam, it counted toward the student's final exam grade.
When it was given in the lab, it was voluntary. At the end of the semester, students' pre-test and post-test scores are matched. Student scores are eliminated from this study if the student left 20% (six questions) or more of the FCI items blank on either the pre- or post-test. There are two versions of the FCI. We began using the revised 30-item version of the FCI [6] in 1997. To avoid possible problems involved in comparing scores on the original and revised FCI, only scores from the revised FCI are used in this paper.

Can the FCI Be Used as a Placement Test?

Some universities have placement tests that physics students must take in order to decide what level of physics course would be most appropriate for them. The goal of giving a placement test is to identify students who are very likely to do poorly in a given class and to suggest alternative or supplementary classes for them to take. A placement test is effective if it is able to distinguish between students who will do well in the class and students who will do poorly. To determine whether this is an appropriate role for the FCI in our introductory calculus-based physics course, we looked at the relationship between FCI pre-test scores and success in the course. Based on their final grades, students were put into one of six grade categories (A, B, C, D, F-I-W, drop), where students who fail the class, take an incomplete, or withdraw are lumped together. In addition to the letter grades, there were also some students who took the FCI pre-test but subsequently dropped the course (dropping differs from withdrawing in that it occurs earlier in the term and no record is placed on the student's transcript). These students could have dropped the course because they were doing poorly or for other reasons.
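The score-matching and exclusion rule described above under FCI Testing can be sketched as follows. This is a hypothetical illustration, not the study's actual code; the response encoding (one character per item, with "." marking a blank) is an assumption.

```python
# Sketch (not the study's code) of the matching and exclusion rule under
# "FCI Testing": pair each student's pre- and post-test responses, then drop
# any student who left six or more of the 30 items blank on either test.
# The dict-of-response-strings format is a hypothetical encoding.
def matched_usable(pre, post, max_blanks=5):
    """pre, post: {student_id: response string, '.' marking a blank item}.
    Keep only students with both tests and at most max_blanks blanks each."""
    kept = {}
    for sid in pre.keys() & post.keys():          # match pre- and post-tests
        if pre[sid].count(".") <= max_blanks and post[sid].count(".") <= max_blanks:
            kept[sid] = (pre[sid], post[sid])
    return kept

# Hypothetical toy data, not the Minnesota dataset:
pre  = {"s1": "A" * 30, "s2": "B" * 24 + "." * 6, "s3": "C" * 30}
post = {"s1": "D" * 30, "s2": "E" * 30}
print(sorted(matched_usable(pre, post)))   # s2 excluded (6 blanks), s3 unmatched
```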
For the purposes of interpreting our data, we have assumed that a student who gets an A, B, or C in the class has been successful, and that a student who gets a D, F, I, or W, or who drops the class, has been unsuccessful. Figure 1 breaks the class into 10 groups based on FCI pre-test score. In each FCI pre-test group, the percentage of students who fall into each of the six grade categories is shown. As the graph shows, the FCI pre-test can do a reasonable job of predicting success in the class (almost all students, 94%, who scored 19 or higher on the pre-test were successful in the class, most earning an A or B), but it does not do a good job of predicting failure in the class (even in the lowest FCI pre-test score group, more than 60% of the students were successful in the class). Regardless of what cutoff score might be chosen, if we were to use the FCI as a placement test, many students would be inappropriately advised not to take our introductory calculus-based physics course.

[Fig. 1. Student course performance as a function of FCI pre-test score (30 max). All student pre-test data from fall 1997, fall 1998, and fall 1999 (N = 2178). Bin sizes, from scores 1-3 up to 28-30: N = 37, 264, 432, 444, 382, 260, 153, 101, 80, and 25.]

[Fig. 2. The relationship between each student's (N = 500) score on the graded FCI on the fall 1997 final exam and his/her score on the ungraded FCI at the beginning of winter 1998 (three weeks later). The number of students in each group is represented by the dot size. Parallel lines mark the 95% confidence interval for the test-retest comparison.]

How Do We Know that Students Take the FCI Seriously When It Is Not Graded?

Our FCI pre-tests are always ungraded, and sometimes the FCI post-tests are ungraded. Since students are not penalized for not taking the test seriously, the question naturally arises as to how meaningful these test scores are. If we are to treat these ungraded scores as meaningful, we must determine to what extent an ungraded FCI test represents a student's best work. This question was examined using two different methods.

Examining student answer patterns for signs of lack of seriousness

There are several types of student answer patterns that may indicate that a student is not taking the FCI seriously.
The five types of patterns we looked for were (1) refusing to take the test; (2) drawing a picture on the Scantron answer sheet; (3) answering all A's, all B's, etc.; (4) leaving six or more items blank; and (5) other patterns (such as ABCDE, EDCBA, AAAAA, BBBBB, etc.) anywhere in the responses. Table I shows the percentage of students who fall into each of these categories based on the conditions under which the student took the FCI. We might expect to find differences between the pre-test and the post-test based on students' different knowledge, but the table shows that for the ungraded pre-test and the ungraded post-test, the percentage of students in each of these categories is very similar. The main differences in the table depend on the conditions under which the test was administered, i.e., whether or not it was graded. To determine the maximum percentage of students not taking the test seriously when the FCI is given ungraded, we found the difference between the percentages of students in each group when the FCI was given graded and when it was given ungraded. Using this method, we can estimate that the maximum percentage of students who don't take the FCI seriously when it is ungraded is 2.8%. Thus, the response patterns of our students on the FCI indicate that almost all of them are taking the test seriously. Further, since it is relatively easy to identify students who refuse to take the test (0.5%) or leave a lot of blanks (1.4%), these 1.9% of students can be eliminated from the sample, leaving at most 0.9% of students who might have lower scores on ungraded tests due to lack of seriousness.
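The kind of answer-sheet screen described above can be sketched in code. This is illustrative only, not the study's actual procedure; the response encoding (one letter per item, "." for a blank) is an assumption, and the picture-drawing check cannot be captured from response strings alone.

```python
# Sketch of a screen for the warning signs listed in the text: refusing the
# test, many blanks, answering all one letter, or stereotyped runs such as
# ABCDE or EDCBA anywhere in the responses.
def lack_of_seriousness_flags(responses, n_items=30, max_blanks=5):
    """responses: string of length n_items over 'A'-'E', with '.' for blanks.
    Returns a list of warning signs found (empty list = no signs)."""
    flags = []
    blanks = responses.count(".")
    if blanks == n_items:
        flags.append("refused to take the test")
    elif blanks > max_blanks:
        flags.append("six or more blanks")
    answered = responses.replace(".", "")
    if len(answered) == n_items and len(set(answered)) == 1:
        flags.append("answered all one letter")
    for run in ("ABCDE", "EDCBA", "AAAAA", "BBBBB", "CCCCC", "DDDDD", "EEEEE"):
        if run in responses:
            flags.append("contains pattern " + run)
    return flags

# An ABCDE run anywhere in the responses is flagged:
print(lack_of_seriousness_flags("ABCABDECABDEABCDEABCADBECADBAE"))
```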

Table I. Signs of lack of seriousness in student answer patterns on the FCI. The last column gives the maximum percentage of students not taking the test seriously as a result of the grading option.

                                      Pre-Ungraded        Post-Ungraded   Post-Graded    Maximum %
                                      (N = 1856;          (N = 524;       (N = 1332;
                                      1997, 1998, 1999)   1999)           1997, 1998)
Refuses to take test                  0.5%                0.5%            0.0%           0.5%
Draws a picture                       0.2%                0.2%            0.0%           0.2%
Answers all A's, B's, etc.            0.0%                0.0%            0.0%           0.0%
Leaves a lot of blanks (six or more)  1.5%                1.5%            0.1%           1.4%
Other patterns: ABCDE, EDCBA          0.8%                1.0%            0.5%           0.5%
Other patterns: six A's, B's, etc.    0.2%                0.2%            0.0%           0.2%
Total                                 3.2%                3.4%            0.6%           2.8%

Giving the same group of students the FCI twice, in a graded and an ungraded situation, where it is unlikely that they learned new physics in between

The students identified in the previous section were ones who obviously did not take the test seriously. There may, however, be students who, when the test is ungraded, just don't bother to think about their responses as carefully as they would have had the test been graded. It is plausible that such a lapse in careful thought would decrease a student's test score. We wanted to find out whether this effect exists and, if so, how large it is. In the fall of 1997, students took the FCI on the final exam of their first quarter of introductory physics. Three weeks later, during the first week of the second quarter of physics in the winter of 1998, students were asked to take the FCI as an ungraded test during the first lab session. Since students were done with their physics course and on winter break during the intervening three weeks, it is unlikely that they attempted to learn any new physics between the two administrations of the test. Figure 2 shows the relationship between the graded and ungraded tests. As you can see, there is a high correlation (r = 0.88) between the two sets of scores. Some students had higher scores when the test was given ungraded, some students had lower scores, and some students received the same score.
There is a line on the graph showing the expected result that a student's score on both tests would be the same. Surrounding this line are two parallel lines representing the 95% confidence interval of an individual student's FCI score (±4.0 items), as determined by a separate analysis of FCI scores based on a measurement of the reliability of the test [7]. Since most of the data points fall within this band, it is clear that most of the deviation in test scores can be attributed to measurement error rather than to the conditions of testing. There does, however, appear to be a small effect due to the conditions of testing. Comparing the averages of the two administrations, we find a difference of about half an FCI item between the graded test (21.4 ± 0.2) and the ungraded test (20.9 ± 0.2). This difference is statistically significant at the 5% level on a matched-sample t-test. This half of an FCI item can then be considered the maximum difference in FCI scores that can be attributed to lack of seriousness. We think of this as a maximum since other factors, such as forgetting material over winter break, might also lead to a decrease in score on the ungraded test. We don't consider this potential half-item decrease in the class average to be of much concern, since it is approximately the same as the statistical uncertainty in the FCI average for a class of 100 students [8]. Combining these two methods of looking for lack of seriousness on ungraded tests, we can say that there is strong evidence that almost all of our students take

the test seriously when it is not graded and that there do not appear to be substantial problems in comparing results of graded and ungraded tests.

Doesn't Giving the FCI as a Pre-Test Influence the Post-Test Results?

A common concern about giving the FCI as both a pre-test and a post-test is that students' post-test scores might be inflated because students have already been exposed to the material on the pre-test. There are a number of possible reasons why this might occur. By taking the pre-test, for example, students may be sensitized to certain topics and then pay closer attention to these topics when they come up in class. On the other hand, the pre-test is taken very early in the semester and students have no idea that they will ever see the same test again. We decided to see whether taking the FCI as a pre-test has any influence on post-test scores. In two years (1998 and 1999), approximately one-quarter of the students were not given the FCI as a pre-test. In 1998 these students were given a different conceptual test (the TUGK) [9], and in 1999 these students were not given any conceptual test. As you can see from Fig. 3, there are no statistically significant differences in post-test scores between these two groups. Thus, taking a pre-test does not appear to bias post-test results.

[Fig. 3. A comparison of FCI post-test scores for groups of students who did and did not take the pre-test. 1998: pre-test N = 440, no pre-test N = 161; 1999: pre-test N = 355, no pre-test N = 170. No significant difference on a pooled-variance t-test in either year (P = .29 in 1998; P = .63 in 1999).]

Conclusions

Because of the large number of students who take the FCI each year at the University of Minnesota, we have been able to address some common concerns about using the FCI. The data presented have shown that, for students at the University of Minnesota:

1. The FCI is not appropriate for use as a placement test [10].
2. There is little difference between FCI scores when the test is given graded versus ungraded.
3. Giving the FCI as a pre-test does not affect the post-test results.

References

1. Lillian C. McDermott and Edward F. Redish, "Resource Letter PER-1: Physics education research," Am. J. Phys. 67, 755-767 (Sept. 1999).
2. See, for example, Eric Mazur, Peer Instruction: A User's Manual (Prentice Hall, Upper Saddle River, NJ, 1997).
3. Several papers have been published in The Physics Teacher regarding the interpretation of the results of the FCI: Richard Steinberg and Mel Sabella, "Performance on multiple-choice diagnostics and complementary exam problems," Phys. Teach. 35, 150-155 (March 1997); David Hestenes and Ibrahim Halloun, "Interpreting the Force Concept Inventory: A response to March 1995 critique by Huffman and Heller," Phys. Teach. 33, 502, 504-506 (Nov. 1995); Pat Heller and Doug Huffman, "Interpreting the Force Concept Inventory: A reply to Hestenes and Halloun," Phys. Teach. 33, 503, 507-511 (Nov. 1995); Doug Huffman and Pat Heller, "What does the Force Concept Inventory actually measure?" Phys. Teach. 33, 138-143 (March 1995).
4. Richard Hake, "Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses," Am. J. Phys. 66, 64-74 (Jan. 1998).
5. See P. Heller, R. Keith, and S. Anderson, "Teaching problem solving through cooperative grouping. Part 1: Group versus individual problem solving," Am. J. Phys. 60, 627-636 (July 1992), and P. Heller and M. Hollabaugh, "Teaching problem solving through cooperative grouping. Part 2: Designing problems and structuring groups," Am. J. Phys. 60, 637-644 (July 1992).

6. The Force Concept Inventory was originally published by David Hestenes, Malcolm Wells, and Gregg Swackhamer, "Force Concept Inventory," Phys. Teach. 30, 141-158 (March 1992). It was revised in 1995 by Ibrahim Halloun, Richard Hake, Eugene Mosca, and David Hestenes and is available online at http://modeling.la.asu.edu/r&e/research.html.
7. The standard measurement error of a test is related to the reliability of the test and the standard deviation of the obtained test scores by σ_e = σ_t √(1 - ρ), where σ_e is the measurement error of the test, σ_t is the standard deviation of the distribution of obtained test scores, and ρ is the reliability of the test (measured with Cronbach's alpha). For our students, σ_t is 5.17 items and ρ is 0.85, making the standard error of measurement 2.0 items. This means that the 95% confidence interval around a student's FCI score (two standard errors of measurement) would be ±4.0 items. Details are available in many statistics books, or see R.L. Thorndike and R.M. Thorndike, "Reliability," in Educational Research, Methodology, and Measurement: An International Handbook, 2nd ed., edited by John P. Keeves (Cambridge University Press, Cambridge, UK, 1997), pp. 775-790.
8. The standard error of the average FCI score for a class of 100 students would be given by Standard Error = Standard Deviation / √(Number of Students) = 5.17 / √100 ≈ 0.52 FCI items.
9. Robert J. Beichner, "Testing student interpretation of kinematics graphs," Am. J. Phys. 62, 750-762 (Aug. 1994).
10. This supports the recommendation of the FCI authors; see Ref. 6.
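The measurement-error relations in notes 7 and 8 can be checked numerically. The values below are the ones reported in the paper; the 100-student class size follows note 8's example.

```python
# Numerical check of notes 7 and 8: standard error of measurement for an
# individual score, sigma_e = sigma_t * sqrt(1 - rho), and the standard
# error of a class-average score for N students, sigma_t / sqrt(N).
import math

sigma_t = 5.17    # std. dev. of obtained FCI scores (note 7)
rho = 0.85        # test reliability, Cronbach's alpha (note 7)

sigma_e = sigma_t * math.sqrt(1 - rho)   # individual measurement error
ci95 = 2 * sigma_e                       # ~95% confidence interval half-width
se_class = sigma_t / math.sqrt(100)      # class of 100 students (note 8)

print(round(sigma_e, 1), round(ci95, 1), round(se_class, 2))  # 2.0 4.0 0.52
```

The result reproduces the paper's figures: a 2.0-item measurement error, a ±4.0-item 95% confidence interval, and a roughly half-item uncertainty in a 100-student class average.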