How to calculate percent error/deviation?
✓ Look at the front of the ESRT
✓ You will see a formula for calculating “percent deviation,” also called “percent error”.
The most difficult part is figuring out which number is the “accepted value” in a problem (the amount it is supposed to be) versus the amount that was measured or given. If you can read the problem and tell those two apart, everything else is easy: look at your reference table, plug the numbers into the equation, and solve.
It is really simple!

Let's see why we do this:
Students often assume that each measurement they make in the laboratory is true and accurate. Likewise, they often assume that the values they derive (calculate) through experimentation are very accurate.
However, sources of error often prevent students from being as accurate as they would like. Percent error calculations are used to determine how close to the true values, or how accurate, their experimental values really are.
The value that the student comes up with (data from an experiment or measurement) is usually called the “observed value”, or the “experimental value”.
A value that can be found in reference tables is usually called the true value, or the accepted value. (That is the value that is correct! What your measurement should be!)
The percent error can be determined when the true value is compared to the observed value according to the equation below:
                (Observed Value - True Value)
Percent Error = ----------------------------- x 100%
                         True Value
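The formula above can be sketched as a short Python function. The function name and the sample numbers (105 g measured vs. an accepted 100 g) are made up for illustration; the arithmetic is exactly the equation shown:

```python
def percent_error(observed, accepted):
    """Percent error: (observed - accepted) / accepted x 100%."""
    return (observed - accepted) / accepted * 100.0

# Example: a student measures 105 g for an object whose accepted mass is 100 g.
print(percent_error(105.0, 100.0))  # 5.0 (the measurement is 5% too high)
```

Note that the result keeps its sign: a positive answer means the observed value was too high, a negative answer means it was too low.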

Let's look at an example of how the formula would be used in a real-life situation.
Ex. 1 A student measures the mass and volume of a piece of copper in the laboratory and uses his data to calculate the density of the metal. According to his results, the copper has a density of 8.37 g/cm³.
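To finish this example we need the accepted density of copper from the reference tables; a commonly listed value is 8.96 g/cm³ (an assumption here; always check your own ESRT). Plugging both numbers into the formula:

```python
observed = 8.37   # student's calculated density (g/cm^3)
accepted = 8.96   # assumed accepted density of copper (g/cm^3); check your ESRT

percent_error = (observed - accepted) / accepted * 100.0
print(round(percent_error, 1))  # -6.6 (the student's value is about 6.6% too low)
```

So the student's density was off from the accepted value by about 6.6%, and the negative sign tells us his measurement came out low.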
