[Stoves] Advocacy action: ask the GACC to stop promoting the WBT

Crispin Pemberton-Pigott crispinpigott at outlook.com
Sun Jan 22 15:36:34 CST 2017


Dear Frank



I can add to the uncertainty, which I think has been pretty well covered in the journal articles, the issue of the validity of the reporting metrics themselves. This was addressed squarely in Zhang, Y. et al. (2014), which challenged the validity of all three IWA low-power metrics on the grounds that the numbers reported (emissions per litre simmered, etc.) do not depend directly on the performance of the stove. The reason is that the number of litres simmered has no influence on the emissions from the fire needed to keep the hot pot hot. Jiddu already commented on this list that it is, in effect, taking a valid number (emissions) and dividing it by a random number (litres in the pot).



The origin of this metric was, I think, the IWA, but it was incorporated into the WBT. Please correct me if the WBT had it first. What matters is that the experiment, done to high precision by Yixiang Zhang et al., proved definitively what had been 'understood' for a long time (e.g. Rani et al. 1992) but never demonstrated beyond all doubt: the amount of water in the pot (which is of course variable when boiling or simmering) has no effect on the heat required to keep it hot. The experiment was reproduced by Jim Jetter with the same result. This is true whether a lid is used or not. Prof Annegarn has produced a theoretical analysis from first principles showing why this is true.
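To make the physics concrete, here is a minimal steady-state heat-balance sketch in Python. Every number in it (areas, loss coefficient, evaporation rate) is assumed purely for illustration and is not taken from the WBT, the IWA, or any of the papers mentioned above; the point is only that once the pot is at temperature, the power the fire must supply equals the surface and evaporation losses, and the mass of water in the pot appears in none of those terms.

# Minimal heat-balance sketch: holding power is independent of water volume.
# All values are assumed for illustration only.

AREA_WALL = 0.06        # m^2, exposed pot wall area (assumed)
AREA_TOP = 0.05         # m^2, water surface area (assumed)
H_LOSS = 12.0           # W/(m^2 K), combined surface loss coefficient (assumed)
T_POT, T_AMB = 95.0, 20.0   # deg C, pot and ambient temperatures (assumed)
EVAP_RATE = 0.0002      # kg/s, evaporation at a gentle simmer (assumed)
H_FG = 2.26e6           # J/kg, latent heat of vaporisation of water

def holding_power(litres_in_pot):
    """Power (W) needed to hold the pot at T_POT; the argument is never used."""
    surface_loss = H_LOSS * (AREA_WALL + AREA_TOP) * (T_POT - T_AMB)
    evaporation_loss = EVAP_RATE * H_FG
    return surface_loss + evaporation_loss

for litres in (2.5, 5.0, 7.5):
    print(f"{litres} L in the pot -> {holding_power(litres):.0f} W to keep it hot")

All three volumes come out at the same wattage, which is the result Zhang, Jetter and Annegarn each reached by experiment or analysis.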



Therefore all three IWA low-power metrics carry no clear performance information, and in fact introduce variability into the reported result that is not present in the raw data - something quite unusual, if you check around. The increase in variability is the result of a conceptual error, not of anything in the experiment such as operator inconsistency or a breeze.
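The arithmetic behind that claim is easy to show. The sketch below uses made-up numbers (they are not from any test report): a tight set of raw emissions figures divided by a run-to-run 'litres simmered' figure produces a reported metric with far more spread than the raw data.

# How dividing a stable emissions figure by 'litres simmered' injects spread.
# All numbers are assumed, purely to illustrate the arithmetic.
import statistics

emissions_g = [10.1, 9.9, 10.0, 10.2, 9.8]      # raw PM per test run: tight
litres_simmered = [2.1, 3.4, 2.7, 4.0, 1.9]     # varies from run to run

raw_cv = statistics.stdev(emissions_g) / statistics.mean(emissions_g)

per_litre = [e / l for e, l in zip(emissions_g, litres_simmered)]
metric_cv = statistics.stdev(per_litre) / statistics.mean(per_litre)

print(f"CV of raw emissions:    {raw_cv:.1%}")    # about 2%
print(f"CV of emissions/litre:  {metric_cv:.1%}") # around 30% with these numbers

The stove did not become less repeatable; the divisor did that.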



Tami: I will offer this comment as my response to your earlier post about testing the tests, and also to Ranyee's earlier post about waiting for some process of review. I think the reviews completed to date (and two more are arriving soon) are sufficient to say that the best thing we can do at the moment is to stop using the WBT and compare the results of other tests, particularly those that have been reviewed, to see what should replace it.



No one is bound in any way by the IWA. There is no ISO standard, and when there is one it too will be reviewed to see how the test, metrics, etc. stand up to scrutiny. I was shown a comparison chart for stoves produced using a different approach to the testing (not the WBT), and the relative performance of all products shifts significantly, which is to say, they are re-classified onto different tiers.



A conversation with the Gold Standard technical staff gave me an insight into how they accommodate the difference between field performance and the WBT ratings. It was illuminating, but disconcerting, because they had taken the position that the WBT ratings were 'correct' reports of comparative fuel consumption and that all departures from them were 'suppressed demand', not consistent errors in the rating. In no case did a stove perform above its WBT rating for fuel consumption in the 'standard comparison mode', meaning standard working conditions and standard fuel - which is effectively the same as relying on the stove comparison chart produced by Berkeley.



One major reason for the failure to predict consumption correctly is of course the deduction of char energy from the denominator in the efficiency calculation. So the source of the problem was easy enough to spot. What to do about it is now clear. We should all start using test protocols that report something close to the performance obtained in 'normal conditions', whatever that entails. If 'normal' cannot be defined then we are on a hiding to nothing, as they say in South Africa (taking a beating for no benefit).
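For anyone who has not worked through that calculation, here is a small sketch of the char-energy issue. The heating values and test quantities are assumed for illustration only and are not from any particular WBT report; the structure of the two formulas is the point.

# Sketch of the char-energy deduction issue (all values assumed).
LHV_WOOD = 18.0e6      # J/kg, heating value of the fuel as fired (assumed)
LHV_CHAR = 29.0e6      # J/kg, heating value of the residual char (assumed)

fuel_burned_kg = 1.0   # wood fed to the stove during the test (assumed)
char_left_kg = 0.15    # char remaining at the end of the test (assumed)
useful_heat_J = 4.0e6  # heat delivered to the pot (assumed)

# WBT-style: energy in the leftover char is subtracted from the denominator.
eta_char_deducted = useful_heat_J / (fuel_burned_kg * LHV_WOOD
                                     - char_left_kg * LHV_CHAR)

# As-fired: the household consumed the whole kilogram of wood.
eta_as_fired = useful_heat_J / (fuel_burned_kg * LHV_WOOD)

print(f"efficiency with char energy deducted: {eta_char_deducted:.1%}")  # ~29%
print(f"efficiency as fired:                  {eta_as_fired:.1%}")       # ~22%

Since the fuel needed for a task scales with the inverse of the efficiency, the char-deducted figure under-predicts the wood a real kitchen burns unless the char really is recovered and used as fuel.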



So I am dropping my call for a review of WBT 4.2.3. Looking through the available literature, such a review has already been done by several groups, and all a new one would show is more detail and more defects. We should concentrate on evaluating the others. There are several: EPTP, MWBT, CSI, BST, HTP, IS 15132 and so on. Let's get on with it.



Regards

Crispin





Dear Frank,

> What makes a test method good or bad is only how the results are
> interpreted and used.

The results WILL be interpreted and used. There's no way around that. I test something because I want results. I want results because they allow me to take a decision, on which time, money, and people depend. Humanitarian agencies, development agencies, companies and NGOs have been testing stoves and disseminating them for decades. That is not going to change in the future.

If the results are irrelevant, it doesn't matter how they are interpreted and used. The results are useless. By pure chance the results may be correct and you may have developed a stove that is improved. It's like playing Russian roulette.

What makes a test method bad is if the validity of its results is highly uncertain. That is the case for the WBT.

Best regards,

Xavier



