Thursday, May 10, 2012

How much water do the trees use? How do you believe the data? (Floodwater Spreading part 2)

In the middle of the desert, floodwaters are diverted into terraces so that more water can seep into the ground and be used by people later. Immediately adjacent to scrubby wastelands, these experimental plots support a thriving eucalyptus forest. But how much water do the trees use? The answers are not always easy. Near the end I write about the subjective (and sometimes controversial) process that scientists use to question data.

Read part one of this series for more background on the Kowsar Floodwater Spreading Experimental Research Station.

To show us how deep these eucalyptus tree roots go, research scientist Mojtaba Pakparvar brought us to a metal door on the ground. He lifted the lid to a large hole, a hand-dug well wide enough to climb down, but so deep we couldn’t hear a dropped pebble hit the bottom. The darkness was impenetrable. The walls of the well were lined with layers upon layers of fine roots. This is how I imagined Rapunzel’s hair to look. Mojtaba has found roots at the bottom of this 30 meter (100 foot) well, so who knows how deep they eventually go?


Metal door to well under the forest


Hairy roots that got thicker with depth

The experimental station has many different configurations, some using floodwater spreading with trees and some without. When the floods come through, they carry fine silt that settles into the ground. In the plots without trees, this silt clogs the soil’s pores, making it harder for the water to pass through. The ground beneath the trees, however, is churned and reworked so much (largely by bug burrows) that this clogging doesn’t happen.

Of course there’s a downside: trees use water, and the whole purpose of floodwater spreading is to get as much water as possible to soak into the ground. At the end of the day, is the water taken from the river simply growing a forest, with no extra water reaching the aquifer? There’s debate about how much water is truly saved, making this technology somewhat controversial.

Even the supporters think that eucalyptus may not be the best tree to use; perhaps something else would use less water but provide the same benefit. The experimental station picked eucalyptus because that’s what the inventors of floodwater spreading used back in Australia. Maybe a different species would be better for Iran. More research is required.


Trench in the experimental plot

Regardless, Mojtaba contends that the evaporation is going to be less than from a dam, which is just open water exposed to the sky. Furthermore, the benefits of the experimental station to the environment are undeniable. “Before the project, you couldn’t stand here because of the sandstorms,” Mojtaba said. Now the area was a thriving and complex ecosystem.

To show us how the trees’ water use is quantified, Mojtaba led us to a stand of eucalyptus where one tree had a gouge in its trunk and some metal wires attached. He was measuring its sap flow, the rate of water moving up or down the trunk. The rate differs from one tree to another and varies through the day and the seasons, according to the weather and the amount of water in the soil.

Mojtaba pointing to his sap-flow sensor.

Basically, three needles are inserted into the trunk, lined up vertically. The middle needle sends out pulses of heat that warm the sap. If the sap is moving quickly upward, then the upper needle will get warmer than the lower one. This is called the “heat ratio method”. Imagine three friends floating in a line in the ocean, and the middle friend wets his suit: you could tell the direction and speed of the current by how quickly one of the neighboring friends feels warm.
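For the curious, the arithmetic behind the heat ratio method is simple: the sap velocity comes from the ratio of the two needles’ temperature rises (Burgess et al., 2001). Here is a minimal Python sketch; the thermal diffusivity and needle spacing are illustrative defaults, not the station’s actual settings:

```python
import math

def heat_pulse_velocity(dT_below, dT_above, k=2.5e-3, x=0.6):
    """Heat ratio method: sap velocity from the ratio of temperature rises.

    dT_below: temperature rise at the needle below the heater (deg C)
    dT_above: temperature rise at the needle above the heater (deg C)
    k: thermal diffusivity of fresh wood, cm^2/s (illustrative value)
    x: distance from the heater to each needle, cm (illustrative value)
    Returns heat pulse velocity in cm/hr; positive means sap is moving up.
    """
    return (k / x) * math.log(dT_above / dT_below) * 3600.0

# If the upper needle warms 20% more than the lower one,
# the sap is moving upward at roughly 2.7 cm/hr:
print(round(heat_pulse_velocity(1.0, 1.2), 2))
```

Notice that equal warming gives a velocity of zero, and a warmer lower needle gives a negative number: the method can detect sap flowing downward, too.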

When asked how much water the tree was using, Mojtaba said he doesn’t know for sure. The sensor was reading far too high (100 mm per day in summertime), and he was going to send it back to Australia for repairs. Our conversation went something like this (paraphrased):

But how do you know it’s too high? How do you know that you’ve installed it correctly? How do you know it’s not a little too high? It’ll always be either too high or too low, these things are not exact… So you mean you don’t just go out there and measure truth?

We never measure truth, we just make observations and then try and guess which ones are bad. They’re all bad, of course, it’s more a matter of more bad or less bad. This is just one tree in a forest. The others could give a different answer. 

Mojtaba compared his results with what other scientists had measured for this species of eucalyptus, and his reading was well outside that range. Maybe nobody else had measured this exact tree, or even this type of tree in this type of climate, but by comparing with others he guessed his sensor was reading a factor of 10 too high.
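That kind of sanity check can be sketched in a few lines of code. Everything here is hypothetical — the function name, the thresholds, and the literature range are made up for illustration:

```python
def check_against_literature(reading, lit_low, lit_high, gross_factor=5.0):
    """Classify a measurement by comparing it to a published range.

    lit_low, lit_high: range reported by other studies (same units).
    gross_factor: how far past the range before we blame the sensor.
    """
    if lit_low <= reading <= lit_high:
        return "plausible"
    if reading > lit_high * gross_factor or reading < lit_low / gross_factor:
        return "likely sensor error"
    return "suspect - needs a closer look"

# A 100 mm/day reading against a hypothetical 2-10 mm/day literature range:
print(check_against_literature(100, 2, 10))  # likely sensor error
```

The middle category is the uncomfortable one: a reading just outside the range could be a miscalibrated sensor or a genuinely unusual tree, and no line of code can tell you which.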

There are so many things that could have caused the bad readings. It could have been a bad sensor from the factory. It could have been damaged on its way to the site. Mojtaba has years of experience, so he is skilled at installing sensors like this and knows how to interpret the results, but what about someone doing it for the first time?

When I was an early grad student, I thought installing environmental sensors in the field was simple and that all data was created equal. It turns out that putting a sensor out in the wild and getting it to collect good data is about as difficult and individual as taking a proper photograph. Sure, sometimes the automatic settings are enough to get something passable. And of course, like photography, some scientists get caught up in a lust for expensive equipment. But there are also skills like composition, lighting, and retouching that separate the professionals from the amateurs. There is a long process of learning from mistakes. As Niels Bohr said, “An expert is a person who has found out by his own painful experience all the mistakes that one can make in a very narrow field.”

Let’s just consider for a second that the high readings were real and the tree itself was unusual, an outlier.

The hardest thing about outliers is that they are sometimes real. No theory can survive if it denies the existence of strange things that have actually been found. Case in point: roly-poly bugs caused a four-fold increase in infiltration at the experimental site in recent years. Scientists would never have guessed the ground could suck down water so quickly if they were just looking at maps of soils and geology.

As much as they are scorned like unpopular teenagers in high school, outliers are the ones that pose the biggest challenges to our theories and therefore teach us the most. I say, if you don’t fit in, then happily let your freak flag fly; you’re one of the most valuable people out there.

That said, not every outlier can be right, and not every neighborhood crank can be a prophet. Otherwise, it would be total chaos. Someone has to keep the trains running on time.

Scientists collecting data are extremely reluctant to declare that they have found something new and different from what everyone else has found. The potential for embarrassment is high, and nobody wants to say “I’m right and everyone else is wrong” when they have simply miscalibrated a sensor or installed something upside down. The problem is all the harder when the measurements are taken out in the environment, where factors can’t be controlled like they can in the lab. Most of the field scientists I know medicate themselves with extremely heavy doses of self-doubt and are never completely satisfied with their work.

Yet breakthrough discoveries in science are the baseball equivalent of home runs. There are powerful personal and professional incentives to be the one who finds the thing that everyone else has overlooked. This causes some scientists to swing for the fences, often vigorously and dramatically striking out in the process. One can’t help but dream of knocking one out of the park and popping the celebratory champagne. That World Series victory ring is going to look great on my finger. Good science doesn’t work like baseball, however, where the individual personalities are larger than life. A team sport like soccer is a better model… but that’s a different story.

Steven Goldman, lecturer of “The Science Wars: What Scientists Know and How They Know It,” has a great description of this phenomenon of questioning data and declaring results to be “true” or not. He discusses a book written by two sociologists who watched scientists in a medical research lab as they were making a Nobel-Prize-winning discovery. They documented the scientists’ behaviors and interactions with each other the way anthropologists would observe a tribe living in the wilderness. To quote Goldman (my emphasis added):

“What they show in the book [Laboratory Life] is the way in which doing science is a process that is not merely a function of reasoning about data. There are all kinds of complexities associated with, "Is the instrument working correctly? How do we know that that's working correctly? I don't think that data's reliable; I didn't like the way the meter was fluctuating there. Or somebody put the reagent in too quickly." Then the way they talk about, "What's Schally doing today? Does anybody know what Schally is planning to publish? I hear he's giving a paper here." Then, on the phone and saying, "Look, we have some interesting results. I want you to know about it first. I want you to know about it second." …[Scientists] create a framework of allies who, by the time you make your announcement, are already committed to say, "That's good work!"

“So, what [the sociologists] argued, was that …individual scientists working within the community, in some sense make scientific knowledge…Truth is determined by the scientific community standing up and saying, "Yes! Yes, they did it!” Then we give them the Nobel Prize for that, which guarantees that what they did was correct, as it were—it doesn't of course; the Nobel Prize has sometimes been given for work that subsequently was decided was not correct, but they don't take back the Nobel Prize.”

Mojtaba’s case was a no-brainer; he thought the data was obviously bad, completely implausible. But what about subtler problems with data? Instead of 1000% errors, what about 10% errors? What about sensors that start out fine but slowly drift and decay?
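Drift is sneakier than a gross error, but it leaves a fingerprint: the gap between the sensor and an independent reference grows steadily over time instead of bouncing around zero. A minimal sketch of that check, assuming you have occasional reference measurements to compare against (the helper function and its inputs are hypothetical):

```python
def drift_per_day(days, residuals):
    """Least-squares slope of (sensor minus reference) over time.

    days: measurement times, in days since installation
    residuals: sensor reading minus an independent reference, same units
    A slope consistently far from zero suggests drift, not random noise.
    """
    n = len(days)
    mean_t = sum(days) / n
    mean_r = sum(residuals) / n
    num = sum((t - mean_t) * (r - mean_r) for t, r in zip(days, residuals))
    den = sum((t - mean_t) ** 2 for t in days)
    return num / den

# A sensor creeping upward by 0.1 units per day:
print(drift_per_day([0, 1, 2, 3], [0.0, 0.1, 0.2, 0.3]))
```

Of course, this just moves the problem back a step: now you have to trust the reference.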

Not far away, there was just such a case, where the technicians were struggling to measure the flow of the river…

[Hang on for part 3!]
