Coastdown Testing Revisited

 

In my previous post about measuring changes in drag, I discounted coastdown testing as an unreliable method. I called it that because I had tried coastdowns several times and never got usable results. There were several reasons for this that I have been thinking about recently as I search for a method of measuring drag changes on my car. First, I was not good about keeping test parameters consistent; second, the tests themselves were not constructed properly; and third, I was trying to use the results to calculate things (like drag coefficient) that relied too heavily on assumptions and on differences too small to resolve reliably with this sort of testing on public roads, which naturally has high variability.
 
I decided to revisit coastdown testing and see if I could design and execute a test in such a way that the results could be trusted and could show me whether the aerodynamic drag of my car has changed. Here's how I did it, and how you can too.
 
Concept
 
The concept behind coastdown testing is simple, and it can be illustrated with a basic engineering method. In engineering dynamics, we can calculate the forces and moments acting on a body by using diagrams and relating them to Newton's Second Law of Motion (F = ma). Basically, the left side (a "free body diagram") represents all the external forces and moments acting on a body as vectors (free body diagrams do not include internal forces! So we don't draw, for example, the sprung weight of the car acting on the suspension). The right side (a "kinetic diagram") represents masses, moments of inertia, and accelerations:

Here, "FD" is aerodynamic drag force, "FR" is rolling resistance, "T" is the traction force pushing the car forward, "W" is the car's weight (not mass—weight is a force), "NA" is the normal force pushing the car's front tires up, and "NB" is the normal force at the rear tires. To figure out a dynamics problem, then, simply set both sides equal (and pay attention to the vectors' directions along your axes. Here I've set positive to the right/up and negative, left/down):
 
{x: -FD - FR + T = ma
{y: NA + NB - W = 0
 
In y, of course, the car isn't moving (in this simplified picture)—so we set those forces equal to zero (you can use this to calculate, for example, the position of the center of gravity of your car by also setting the moment around one wheel to 0, if you have measured axle weights separately). In x, however, the car is moving, and we can rearrange that equation to move the negative signs around:
 
{x: m(-a) = FD + FR - T
 
The concept of coastdown testing is to put the vehicle in neutral so that there is no traction force pushing it forward, i.e. T = 0. The vehicle then decelerates, so a is negative and -a is positive; writing the magnitude of that deceleration as a leaves only drag and rolling resistance in the balance:
 
{x: ma = FD + FR
 
You can estimate acceleration by measuring the time it takes for your car to slow from one speed to another:

acceleration = (final velocity - initial velocity) / (elapsed time)

When I calculate a by subtracting initial from final velocity, it will come out with the correct sign, negative, since final v is less than initial v.
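If you'd rather let a script do that arithmetic, here's a minimal Python sketch of the calculation. The speeds, time, and 1500 kg mass below are made-up example numbers, not measurements from my car.

# Estimate deceleration from one coastdown interval.
# Example numbers only; substitute your own measurements.
v_initial_kph = 110.0   # speed entering the interval
v_final_kph = 100.0     # speed leaving the interval
elapsed_s = 8.0         # stopwatch time for the interval
mass_kg = 1500.0        # assumed vehicle mass (hypothetical)

KPH_TO_MS = 1000.0 / 3600.0  # convert km/h to m/s

# (final - initial) / time comes out negative, as expected for a deceleration
accel_ms2 = (v_final_kph - v_initial_kph) * KPH_TO_MS / elapsed_s

# With T = 0, the total resistive force FD + FR equals m * |a|
resistive_force_n = mass_kg * abs(accel_ms2)

print(f"acceleration: {accel_ms2:.3f} m/s^2")
print(f"FD + FR     : {resistive_force_n:.0f} N")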

If we then make a change to the car that alters its aerodynamic characteristics and assume that rolling resistance doesn't change (another simplification), this should show up as a change in its acceleration. If the car takes longer to slow, the magnitude of acceleration (its size regardless of sign) is smaller and drag has gone down—and vice versa. Easy peasy.
 
Reality
 
Not so fast. In the real world, a lot of things can completely mess up coastdown tests. Wind, changes in temperature, subtle changes in grade of the road (up or down), passing cars, irregularities in the road surface, structures and wind blocks along the road, etc. What's more, these changes compound the longer a test is, and a long test that allows the car to slow to a low speed also means that for much of that test, aerodynamic drag isn't the predominant force acting on it.
 
To get a reliable coastdown test, I think we have to keep things as consistent as possible, leave as little room for changing conditions (an errant gust of wind, temperature going up or down, etc.) as possible, and ensure that aerodynamic drag is the largest force acting on the car. This means we must:
·         keep the test as short as possible
·         test at the highest (legal) speed possible
·         test on the flattest road
·         test in the calmest conditions
·         start the test at exactly the same place
·         start the test at exactly the same speed
 
I used a short, flat section of a road I've tested on before, in one direction only, on a day with calm winds, entering the test section at 115 kph before shifting to neutral, and then using my phone's stopwatch and lap timer function to measure the deceleration time from 110 to 100 kph and 100 to 90 kph. I did six tests in each configuration, which ended up taking about two hours. Why six? So that I would have enough data to statistically analyze it—the results of which you can find below. You can do more, but I wouldn't do fewer.
 
Why not both directions, for wind- and grade-averaged results? That's a good question. My thinking is that, in minimizing or eliminating as many variables as possible, we should keep wind direction (if any) and road grade (if any) the same. These two can have much more effect on the resistance of the car than even large changes in drag. For example, some quick back-of-the-napkin calculations show that a grade of just 2% exerts a force an order of magnitude larger than even a 10% change in aerodynamic drag at highway speeds on a typical modern car (CD = 0.25-0.30, 3500 lbs).
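If you want to check that napkin math yourself, here's the rough arithmetic as a Python sketch. I've assumed a frontal area, air density, and speed (2.3 m², 1.2 kg/m³, ~105 kph) just to make the comparison concrete, so read the output as an order-of-magnitude check, not a precise figure.

import math

# Rough comparison: force from a 2% grade vs. a 10% change in aero drag
# on a typical modern car. Frontal area, air density, and speed are
# assumed values for illustration.
mass_kg = 3500 * 0.4536        # 3500 lb converted to kg
cd = 0.28                      # drag coefficient (middle of 0.25-0.30)
frontal_area_m2 = 2.3          # assumed frontal area
air_density = 1.2              # kg/m^3, assumed
v_ms = 105 / 3.6               # ~105 kph highway speed, assumed

grade = 0.02                   # 2% grade
grade_force_n = mass_kg * 9.81 * math.sin(math.atan(grade))

drag_force_n = 0.5 * air_density * cd * frontal_area_m2 * v_ms**2
drag_change_n = 0.10 * drag_force_n   # a 10% change in drag

print(f"force from 2% grade      : {grade_force_n:.0f} N")
print(f"10% change in drag force : {drag_change_n:.0f} N")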

Keep everything as consistent as possible. As I see it, the best way to get that consistency is to test in the same direction over the exact same stretch of road (the flattest road you can find!) in a short period of time. Additionally, I waited for a calm day so that any wind effects were minimized or eliminated (Aerodynamics of Road Vehicles recommends testing with no wind as the best method, better than wind-averaging). Next time I do this, I might try running tests in opposite directions, in sets (i.e. standard/windows down in one direction, then standard/windows down in the other direction) on this same road and see what the difference is.
 
Did it work?
 
My question going into this was, would controlling as many factors as I could and asking the right questions get me reliable results this time? I first tested the car in "standard" configuration ("nothing added, nothing taken away"), then rolled down all the windows; this is a good check, since we know it will increase drag by 6-12% on most cars. Then I taped on the Hellcat spoiler I've been testing (with the windows rolled back up). My estimate of drag change from pressure measurements indicates that this spoiler should increase drag by 15% or more, significantly more than rolling down the windows. That serves as an additional check: if my results are trustworthy, they should show that the increases in drag from the windows down and from the Hellcat spoiler, compared to standard, are measurable and consistent, and they should also show that the spoiler increases drag more than the windows down.

I really hate the test results I keep getting for this spoiler because I think it looks damn cool.

Here are the numbers: coastdown times in seconds for each run, plus each column's mean and the acceleration estimated from it ("a (est.)", in m/s²). The two "--" cells are missing splits (more on that below):

                 110-100 kph                100-90 kph                 110-90 kph
Run       Standard  Windows  Hellcat Standard  Windows  Hellcat Standard  Windows  Hellcat
1             7.51     7.72     7.15     9.48     8.71     8.37    16.99    16.43    15.52
2             8.35     7.85     7.15     9.67     9.37     8.52    18.02    17.22    15.67
3             8.16     8.01     7.20     9.95     8.96     8.04    18.11    16.97    15.24
4             8.48     7.57     7.19     9.94     8.94     8.40    18.42    16.51    15.59
5             8.31     8.02     6.94     9.89     9.25     8.12    18.51    17.27    15.06
6             8.62     8.03     6.98       --     8.90     8.14       --    16.93    15.12
mean          8.24     7.87     7.10     9.79     9.02     8.27    18.01    16.89    15.37
a (est.)     -0.34    -0.36    -0.39    -0.27    -0.31    -0.34    -0.31    -0.33    -0.36
 
My technique got better as the test went on; you can see this in the Hellcat data, which have a smaller standard deviation than the Windows data, which in turn have a smaller standard deviation than the Standard data. During the first few runs, my finger slipped once when I tried to press the "lap" button on the phone, so I didn't get data for part of that Standard run (the 100-90 kph and 110-90 kph splits, marked "--" in the table). And on the very first run, I think I was late starting the timer, since it looks like an outlier (I'm including it in my analysis anyway, rather than tossing it out on that assumption, just in case I'm wrong).
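If you want to check the standard-deviation claim or recompute the mean and a (est.) rows yourself, here's a minimal Python sketch using the 110-100 kph columns. The other intervals work the same way (the speed change is 20 kph instead of 10 for the 110-90 kph columns), and tiny rounding differences from the table are possible.

from statistics import mean, stdev

# Coastdown times in seconds for the 110-100 kph interval (from the table above)
times_s = {
    "Standard": [7.51, 8.35, 8.16, 8.48, 8.31, 8.62],
    "Windows":  [7.72, 7.85, 8.01, 7.57, 8.02, 8.03],
    "Hellcat":  [7.15, 7.15, 7.20, 7.19, 6.94, 6.98],
}

delta_v_ms = (100 - 110) / 3.6   # speed change over the interval, m/s (negative)

for config, runs in times_s.items():
    t_mean = mean(runs)
    t_sd = stdev(runs)             # sample standard deviation
    a_est = delta_v_ms / t_mean    # estimated acceleration from the mean time
    print(f"{config:8s}  mean {t_mean:.2f} s   std dev {t_sd:.2f} s   a {a_est:.2f} m/s^2")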
 
Even with those caveats, the simple averages of each data set are clearly different, and we can see that rolling down the windows increases drag but not as much as fitting the Hellcat spoiler. Success, right?
 
Not quite. As I've explained before, comparing simple averages of a bunch of tests with high natural variability may or may not tell you that something has actually changed. These eighteen tests represent samples drawn from populations: all possible tests of this car in these conditions at this location. I want to know whether the population the Standard samples came from is the same as or different from the Windows population and the Hellcat population; if the populations are different, it means the tests show that something changed. To claim that the population averages here have changed, we must confidence test the data; that is, compare the data and their spread using the normal distribution.

I used a graphing calculator to do this (I believe all the TI graphing calculators have functions for statistical testing; look for a "STAT" button. On my TI-84 it's a hard button; on my TI-nspire CX, it's under Menu>>Statistics), comparing the samples with a test that returns a p-value. The p-value tells us "the probability of getting a sample at least as extreme as the one we got assuming the null hypothesis is true" (this definition comes from my statistics class notes; the "null hypothesis" always defaults to "there is no change in the population means"). In other words, the p-value tells us how surprising a difference this large between sample means would be if both sets really came from the same population. So a low p-value (usually the limit is set at 0.15 or lower, and commonly 0.05 for rigorous testing) indicates that there is sufficient evidence to conclude that the change in sample mean reflects a change in the population mean, and the null hypothesis (that there is no difference in the population means that produced each sample set) must be rejected.
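If you don't have a graphing calculator handy, the same kind of comparison can be run in a few lines of Python. The sketch below uses scipy's Welch two-sample t-test with a one-sided alternative, since the question is directional (did the windows-down runs get shorter?). The calculator test described above may be set up slightly differently (one- vs. two-sided, pooled vs. unpooled), so the exact p-value can differ a little, but the idea is the same.

from scipy import stats

# Two-sample comparison of the 110-100 kph coastdown times (data from the table)
standard = [7.51, 8.35, 8.16, 8.48, 8.31, 8.62]
windows  = [7.72, 7.85, 8.01, 7.57, 8.02, 8.03]

# Welch's t-test (unequal variances); alternative="greater" asks whether the
# Standard times are longer than the Windows times, i.e. whether opening the
# windows increased drag.
result = stats.ttest_ind(standard, windows, equal_var=False, alternative="greater")

print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# A p-value below the chosen threshold (0.05 here) is sufficient evidence
# that the population means differ.

Repeat the same comparison for each pair of configurations and each speed interval.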
 
Every single comparison here (Standard/Windows, Standard/Hellcat, Windows/Hellcat, at each speed interval) returns a p-value less than 0.05 (p = 0.036 was the highest, for Standard/Windows 110-100 kph). Because of this, I can state, "There is sufficient evidence to conclude that the coastdown tests show an increase in drag with the windows down, and a further increase with the Hellcat spoiler fitted," and be confident that this reflects reality. This method of coastdown testing, it turns out, was reliable at identifying whether a change to my car increased its drag, decreased its drag, or had no effect, since the expected changes from known alterations showed up in the data and are statistically significant.

Okay, you might be saying, but what about changes in drag smaller than opening all the windows? If the change in drag is too small to measure this way (or there is no change at all), the statistical test will return a p-value greater than 0.05, and there will be insufficient evidence to claim that the population means are different. There is a lower limit to what these coastdown tests can show, and when I hit it I'll fall back on other methods, such as pressure measurement, to determine what's happening to the airflow over the car and extrapolate the changes to drag and lift as I've been doing. Additionally, when I coastdown test in the future I'll always start with a "windows down" test to verify that my results are trustworthy in that session.
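Just to put a rough number on that lower limit, here's a sketch using the statsmodels power calculator. It estimates the smallest difference in mean coastdown time that six runs per configuration can reliably detect, assuming a run-to-run spread of about 0.4 s (roughly what my Standard 110-100 kph runs showed); treat the output as a ballpark, not a hard rule.

from statsmodels.stats.power import TTestIndPower

# Smallest standardized effect (Cohen's d) detectable with 6 runs per
# configuration, alpha = 0.05, 80% power, one-sided two-sample t-test.
analysis = TTestIndPower()
min_d = analysis.solve_power(nobs1=6, alpha=0.05, power=0.8,
                             ratio=1.0, alternative="larger")

run_to_run_sd_s = 0.4  # assumed spread of repeated coastdown times, in seconds
print(f"minimum detectable effect: d = {min_d:.2f} "
      f"(~{min_d * run_to_run_sd_s:.2f} s difference in mean time)")

That's part of why the windows-down check is useful: it's a known, fairly large change that should always clear this bar.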
 
Conclusion
 
In the past, I tried to use coastdown measurements to do things like calculate my car's drag coefficient, following a popular Instructable technique. That does not work. There's too much inconsistency in the real world to put any stock in those numbers, especially when the method relies on assumed values for frontal area, air density, and rolling resistance, and depends on a long coast. The first time I tried it, the graph spit out CD = 0.21; the second time, with no change to the car, CD = 0.30. Don't bother.
 
By changing the questions you ask and ensuring consistency as much as possible, it looks like you can get usable information from coastdown testing. These will be questions that can be answered "yes" or "no." Did this change to my car increase/decrease drag? Yes or no. Did Change #1 increase/decrease drag more/less than Change #2? Yes or no. Those questions can be answered by coastdown testing, as long as the change is large enough.
"What's the drag coefficient of my car?" cannot, at least not without sophisticated measuring equipment to account for wind, road grade, barometric pressure, and other factors.
 
To conclude, you should try this yourself if you're looking to identify changes in drag and your car doesn't work with throttle-stop testing or another method of measuring drag changes. Just remember to:
·         test from a high speed to a high speed (so that aero drag predominates and test time is kept short)
·         test on as flat and smooth a road as you can find and wait for a calm day
·         enter the test section at the same spot and same speed each time
·         do it lots of times
·         be realistic about what the test can show you
·         be consistent—as exactly consistent as you can!
·         confidence test your results to ensure that what you think you see is really there

Good luck!
