Averages
- Feb 2, 2015
- 4 min read
Okay, so this probably is not actually as exciting as I thought when I decided to write this, but here goes:
there are multiple types of average.
The usual way we average things is by adding them up and dividing by the number of entries: Sigma x over n, if you do stats. The most obvious generalisation of this principle is the concept of weighting, which, if you couldn't work out any more elegant way, you could do simply by repeating an element in the summation step. Here's an example:
Average the data set "1, 2, 3, 10". What is the average if you weight the 3 twice as much as all the other numbers?
Answer: mean=(1+2+3+10)/4=4; weighted mean=(1+2+3+3+10)/5=3.8
Weighting the average towards the 3 dragged it down from 4 to 3.8. This kind of repetition is, I think, quite an intuitive approach to weighting. However, it gets a little tricky when you want to use your average for something useful and the numbers aren't quite so clear cut. Consider this example:
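If you like seeing this in code, here's a quick Python sketch of both calculations (using the numbers from the example above):

```python
data = [1, 2, 3, 10]

# Plain arithmetic mean: add everything up, divide by the count.
mean = sum(data) / len(data)                             # 4.0

# Weighting by repetition: just write the 3 twice.
repeated = [1, 2, 3, 3, 10]
mean_by_repetition = sum(repeated) / len(repeated)       # 3.8

# The tidier way: explicit weights, sum(w*x) / sum(w).
weights = [1, 1, 2, 1]
weighted = sum(w * x for w, x in zip(weights, data)) / sum(weights)

print(mean, mean_by_repetition, weighted)                # 4.0 3.8 3.8
```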
A driver drives at a velocity of 10m/s for one minute, 20m/s for 5 minutes and 30m/s for 3 minutes. While you could write that the average velocity is given by:
V=(10+20+20+20+20+20+30+30+30)/9=22.2
it's much quicker (especially for larger data sets) to assign weights of 1/9, 5/9 and 3/9 to the three speeds. You can then work out a weighted average by multiplying each speed by its weight and simply summing:
V=(10/9)+(100/9)+(90/9)=22.2
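The same trick in Python, with the fractional weights from the driving example:

```python
speeds = [10, 20, 30]    # m/s
minutes = [1, 5, 3]      # time spent at each speed

# Weights of 1/9, 5/9 and 3/9: multiply each speed by its weight, then sum.
total_time = sum(minutes)
v_avg = sum(v * t / total_time for v, t in zip(speeds, minutes))
print(v_avg)             # 200/9, roughly 22.2 m/s
```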
So that's how to take weighted averages. That's not really the exciting part though; the exciting part is when we think about what we actually mean when we say 'mean'. The mean of a set of numbers has some very interesting properties, which is why it's so useful. A brief summary:
1) It always selects a number greater than or equal to the lowest number, and less than or equal to the highest number. That is, it lies between the least and greatest values.
2) It preserves the value if all the data points are equal. The mean of the set {2,2,2,2,2,2} is 2, without having to calculate anything.
3) It doesn't matter which order you input your data.
The obvious question to ask, then, is: what other functions do all of these? You should have met one already when doing AC electricity, the root mean square. Squaring all of your data, taking the ordinary mean (called an arithmetic mean) and then square rooting the answer is another way to get all of the above properties. But it's (usually) a different number from the arithmetic mean.
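Here's what that recipe looks like as a small Python function (a minimal sketch, nothing fancy):

```python
from math import sqrt

def rms(data):
    """Square everything, take the arithmetic mean, then square root."""
    return sqrt(sum(x * x for x in data) / len(data))

print(rms([1, 2, 3, 10]))   # about 5.34, compared with an arithmetic mean of 4
```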
When do we use an RMS mean? Usually when our data is part positive and part negative and we only care about the absolute value (i.e. the sign doesn't matter). This makes sense for AC power. Another good reason to use RMS current is that it provides the correct average for power: because P=I^2R, the best average is the one which averages I^2, which is just what an RMS does. So what we've learned is that sometimes one average is better than another.
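To see that numerically, here's a rough check in plain Python, with a made-up sine-wave current of 2A peak through a 10 ohm resistor (all the numbers are invented for the example):

```python
from math import sin, pi, sqrt

R = 10.0                                   # resistance in ohms (invented)
# One full cycle of a 2A-peak sine current, sampled at 10,000 points.
I = [2.0 * sin(2 * pi * k / 10_000) for k in range(10_000)]

mean_power = sum(i * i * R for i in I) / len(I)   # true average of P = I^2 R
I_rms = sqrt(sum(i * i for i in I) / len(I))      # RMS current

print(mean_power)        # ~20 W
print(I_rms ** 2 * R)    # also ~20 W; the plain mean of I is ~0, which is useless
```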
Interestingly, there are a few more ways to take a mean as well. The two most famous others are called the geometric and harmonic means. Geometric means aren't that useful in physics so far as I know, but I expect to be proved wrong on that soon enough. Google them if you want, though.
Harmonic means, however, are incredibly useful. What's a harmonic mean? Well, you take the number of elements that you have and divide by the sum of the reciprocals of your data. That sounds complicated, so I'll break down what you need to do (there's a code sketch after the steps):
1) Convert all your data to reciprocals. 2 goes to 1/2, 9/2 goes to 2/9 and so on.
2) Sum your new data, just like in an arithmetic mean.
3) Instead of dividing by the number of pieces of data, do the division the other way around, i.e. compute n/Sigma y (where y=1/x and your data are called x1, x2 etc). This partially 'undoes' the taking of the reciprocals earlier.
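Here's that three-step recipe as a Python sketch (it assumes none of your data points are zero):

```python
def harmonic_mean(data):
    # Step 1: convert every data point to its reciprocal.
    reciprocals = [1 / x for x in data]
    # Steps 2 and 3: sum them, then divide the count n by that sum.
    return len(data) / sum(reciprocals)

print(harmonic_mean([1, 2, 3, 10]))   # about 2.07, below the arithmetic mean of 4
```

(Python's statistics module also ships a ready-made harmonic_mean, if you'd rather not roll your own.)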
This new mean is always lower than (or equal to) the other means we've looked at so far. But in some cases it's actually the correct average to use. The classic example is the speed one: above, we averaged speeds weighted by their times. You can prove that the average we wanted there was the arithmetic mean (which is the one we used) by noting that the average speed is distance over time:
v=s/t
v=(v1t1+v2t2)/(t1+t2)
This is one way of representing a weighted arithmetic mean: the sum of the data times their weights, over the sum of the weights.
But what if we don't know the times, only the distances? If we make the two distances equal for simplicity then we get:
v=s/t
v=2s/(s/v1 + s/v2)
You can factorise the s out of the denominator and it cancels with the one in the numerator, to give
v=2/(1/v1+1/v2)
Which is an unweighted harmonic mean. If we had used two more general distances s1 and s2, then we would have derived a weighted harmonic mean, which is much uglier when you can't typeset Sigma signs.
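Here's a quick numerical sanity check of that derivation, with made-up speeds and an arbitrary leg length:

```python
v1, v2 = 10.0, 30.0          # speeds in m/s (invented)
s = 100.0                    # length of each equal leg in m (arbitrary)

t1, t2 = s / v1, s / v2      # time taken on each leg
v_avg = 2 * s / (t1 + t2)    # total distance over total time

harmonic = 2 / (1 / v1 + 1 / v2)
print(v_avg, harmonic)       # both 15.0 (not the arithmetic mean, 20.0)
```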
Anyway, the reason this is so exciting is that I had a test on materials today that was mainly focussed on Young's modulus. If you attach two wires of different Young's modulus end to end, what do you guess the resulting total Young's modulus will be?
Well, the equation is E=stress/strain, or written out in all its variables:
E=(FL)/(Ax), where x is extension because I can't get a \Delta symbol either :-(
This post is already getting very long, so I'll leave it to you to prove that if F and A are the same for both pieces of wire, then the combined E of the system is the harmonic mean of the E's for the two metals, weighted by their lengths.
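I won't spoil the proof, but here's a numerical check in Python with invented values (roughly steel and aluminium moduli) that you can play with:

```python
F, A = 100.0, 1e-6           # same force (N) and cross-section (m^2) for both wires
E1, L1 = 200e9, 2.0          # wire 1: modulus in Pa, length in m (invented)
E2, L2 = 70e9, 1.0           # wire 2

# From E = F*L/(A*x), each wire's extension is x = F*L/(A*E).
x1 = F * L1 / (A * E1)
x2 = F * L2 / (A * E2)

# Combined modulus of the joined wire: total length over total extension.
E_combined = F * (L1 + L2) / (A * (x1 + x2))

# Length-weighted harmonic mean of the two moduli.
E_harmonic = (L1 + L2) / (L1 / E1 + L2 / E2)

print(E_combined, E_harmonic)   # identical, up to floating-point rounding
```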