Additives are often the focus of research efforts worldwide, as their trade plays a major part in the international animal nutrition business. Yet for most additives, results under practical conditions remain largely inconclusive and are often difficult to evaluate.
By Ioannis Mavromichalis
Trial results may be inconclusive because the conditions under which an additive is beneficial are difficult to identify without a rigorous programme of scientific trial design and interpretation, which of course requires extensive funding. At farm level, deciding which additive to use can be a daunting experience, as it requires not only a sound knowledge of basic nutrition but also a good understanding of statistics. Things get even more perplexing when trying to distinguish between two similar additives on the basis of unpublished results. Today, a plethora of trials on additives are never published under the scrutiny of scientific journals, and as such their validity can and should be questioned. In this light, it often pays to think ahead when designing trials, so that embarrassing questions can be avoided later. Similarly, when evaluating research results provided by suppliers, it is handy to have a quick checklist to ensure trials were done properly and results are thus valid. Common pitfalls in the experimental design of trials used to promote commercial additives include (but are not limited to) the following cases.
No negative control
It is often assumed that a long-established additive (for example, an antibiotic) is always effective when used under standard conditions. Thus, when such a product is replaced by a novel additive (for example, organic acids) without any loss in performance, it is often claimed that the two additives are of equal potency in enhancing animal performance or health. However, there is no value in comparing the two additives without a negative control (a diet without either additive) to guard against the event of no response to the standard additive, which would render the whole trial ineffectual. For example, pigs raised under high levels of sanitation would not benefit from the inclusion of antibiotics in their feed. Growth rate of pigs in such a trial would be equal to that of pigs receiving, say, essential oils in their feed instead of the antibiotics. This equal performance does not mean that essential oils can replace antibiotics. In truth, if a third treatment (a negative control without antibiotics or essential oils) were included, it would be revealed that all three treatments were equal. Thus, only when the ‘old’ additive treatment gives a positive response over the negative control can we safely discuss the performance (or lack thereof) of the ‘new’ additive (Figure 1). Otherwise, the trial was a failure!
No positive control
Without a positive control, on the other hand, it is quite difficult to evaluate the return on investment from a novel additive that may or may not support performance equal to the standard additive. Let’s use the example of butyric acid in piglet diets. Say a trial is presented in which a diet without any organic acid is compared with the exact same diet plus butyric acid. Butyric acid can be an effective additive under certain circumstances, but it happens to be quite expensive. Let’s further assume that butyric acid enhances piglet growth by 5% (not an unreasonable expectation under less-than-ideal health conditions) and that this is clearly demonstrated in the trial results. The logical question to ask the supplier providing these results would be how much of a cheaper (but effective) acid would be required to elicit a similar response. Most likely such a treatment is not included, yet this information is vital for the potential user of butyric acid. If it costs more to use butyric acid to get a 5% boost in piglet performance compared with another organic acid, then it makes no sense to switch over! Of course, there is the trap of having endless candidates to compare with, but a short list of the most important competitive products is always in the minds of commercial people.
No dose-response data
Without proper dose-response trials, the dosages recommended by some manufacturers are, at best, educated guesses based on theoretical calculations. It is equally likely for a smaller dose to elicit a similar response, or for a larger dose to elicit an even greater one. The famous, or rather infamous, dosage of 1 kg per metric tonne has readily diminished the credibility of many additives, especially because most additives are recommended at such levels of inclusion. A proper marketing dossier should include trials in which a range of dosages is tested; say 0.25, 0.50, 0.75, 1.00, 1.25, 1.50 and 1.75 kg per metric tonne. Only then can we safely deduce the correct level to use (Figure 2). The statistical analysis of dose-response trials can be a rather tricky business, and there is more than one way to interpret the results, so care should be taken when reviewing such trials.
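A simple way to make sense of such a dose-titration trial is to fit a quadratic curve through the treatment means and read off the dose at the peak. The sketch below does exactly that in plain Python; the doses match the range suggested above, but the daily-gain figures (and their smooth quadratic shape) are invented purely for illustration:

```python
# Hypothetical dose-response sketch: least-squares quadratic fit to trial
# means, then locate the dose at the vertex (the estimated optimum).
# The dose range follows the article; the gains are made-up numbers.

def fit_quadratic(doses, responses):
    """Least-squares fit of y = a*d^2 + b*d + c via the normal equations."""
    n = len(doses)
    Sx = sum(doses)
    Sx2 = sum(d ** 2 for d in doses)
    Sx3 = sum(d ** 3 for d in doses)
    Sx4 = sum(d ** 4 for d in doses)
    Sy = sum(responses)
    Sxy = sum(d * y for d, y in zip(doses, responses))
    Sx2y = sum(d * d * y for d, y in zip(doses, responses))
    # Augmented 3x3 normal-equation system, solved by Gaussian elimination.
    A = [[Sx4, Sx3, Sx2, Sx2y],
         [Sx3, Sx2, Sx, Sxy],
         [Sx2, Sx, n, Sy]]
    for i in range(3):
        for j in range(i + 1, 3):
            factor = A[j][i] / A[i][i]
            A[j] = [aj - factor * ai for aj, ai in zip(A[j], A[i])]
    c = A[2][3] / A[2][2]
    b = (A[1][3] - A[1][2] * c) / A[1][1]
    a = (A[0][3] - A[0][2] * c - A[0][1] * b) / A[0][0]
    return a, b, c

doses = [0.25, 0.50, 0.75, 1.00, 1.25, 1.50, 1.75]              # kg per tonne
gains = [288.75, 295.0, 298.75, 300.0, 298.75, 295.0, 288.75]   # g/day (hypothetical)
a, b, c = fit_quadratic(doses, gains)
optimum = -b / (2 * a)   # vertex of the fitted parabola
print(f"estimated optimum dose: {optimum:.2f} kg/tonne")
```

With real trial data the points would not sit exactly on a parabola, and the fitted optimum would carry a confidence interval; this is one reason the statistical analysis of dose-response trials is trickier than it looks.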
No statistical analysis
Numerical differences without statistical analysis are never sufficient to base a decision on regarding any novel additive. Understanding statistics can be a daunting task, and it is often easier to ignore statistical requirements than to actually perform a proper analysis. Yet only with an accurate statistical analysis can results be trusted with any confidence. Sometimes numerical differences are accepted (wrongly) as real differences, even when a statistical analysis reveals no such relationship. For example, two treatment means may be slightly apart but carry the same superscript (usually a, b, c or x, y, z). A common superscript next to two numbers in a properly conducted trial report indicates that those two responses should be treated as similar. For example, let’s assume we conduct a trial as described in Figure 1 and obtain the following responses for the negative control (NC), positive control (PC), and new additive (NA): NC = 250a g/day, PC = 280b g/day, and NA = 285b g/day. First, we are certain that the trial is valid because the positive control shows a positive response over the negative control: 250 is statistically different from 280, as indicated by the two different superscripts (a versus b). However, although 280 and 285 (the responses for the old and new additive treatments) are not the exact same number, they should be considered statistically the same because they share the same superscript (as decided by prior statistical analysis). Any decision made on the assumption that the new additive is superior (albeit slightly) to the old one is bound to lead to disappointment. By the laws of chance, if the same experiment were repeated, it would be equally possible for the numbers to be reversed, yet still remain statistically not different.
In other words, it is to be expected that if the trial were repeated many (infinite) times, the overall average for each treatment would be the exact same number (both statistically and numerically). Table 1 presents some possible trial outcomes based on statistical analysis of results. Of particular interest is Scenario C, because here the performance of the new additive is lacking compared with the old additive, but it is still better than using no additive at all. In this case, perhaps the new additive is a very inexpensive product, and increasing its dosage might actually enhance performance! This is to emphasise that statistical analysis is not the end in making a business decision, just a tool in the process.
It is widely accepted that a probability value of 5% (denoted as P < 0.05) is sufficient to declare a difference between treatments real. Under commercial conditions, a stretched value of 10% (denoted as P < 0.10) is sometimes acceptable, especially in large-scale experiments, but even then it should be treated as an indication (a trend) and not as a definite result. Conclusions based on greater values are meaningless and should not be used to make business decisions. This probability should always be reported in conjunction with the superscripts described above.
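To illustrate what sits behind those superscripts and the P < 0.05 threshold, here is a minimal one-way ANOVA F test in plain Python, using invented pen means (four pens per treatment) for the NC/PC/NA example discussed above:

```python
# Minimal sketch of the statistics behind the superscripts: a one-way
# ANOVA F statistic for the NC/PC/NA example. Pen means are invented.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-treatment variation: how far group means sit from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-treatment variation: pen-to-pen scatter inside each group.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)   # k - 1 degrees of freedom
    ms_within = ss_within / (n - k)     # n - k degrees of freedom
    return ms_between / ms_within

nc = [245, 250, 255, 250]  # negative control, g/day per pen (hypothetical)
pc = [275, 280, 285, 280]  # positive control (old additive)
na = [280, 285, 290, 285]  # new additive
f = one_way_anova_f([nc, pc, na])
print(f"F = {f:.1f}")
```

The resulting F statistic is compared against the critical value of the F distribution for (2, 9) degrees of freedom, roughly 4.26 at P < 0.05; a larger F means at least one treatment mean differs, and pairwise follow-up tests (such as Tukey's) then assign the superscripts.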
Quite often, experiments are conducted with limited replicates; in most cases, this means the number of pens of animals used to test each treatment. When replicates are few, it is extremely difficult to detect small differences in treatment responses, because such differences may be due to biological variation or even random effects. Trials with fewer than 6-10 replicates per treatment (for example, 10 pens for each treatment) are usually of limited value. In fact, the number of replicates required for each trial (depending on the variation expected in the results) can be easily calculated from proper statistical procedures. This is not an arduous task for the experienced researcher, but it is quite often overlooked for reasons of economy and speed in turning out experiments for marketing purposes.
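As a rough illustration of such a calculation, the sketch below uses the common normal-approximation formula for a two-sample comparison. The 25 g/day pen-to-pen standard deviation and the 30 g/day difference worth detecting are assumptions chosen for the example, not figures from the text:

```python
# Rough sample-size sketch: pens per treatment needed to detect a given
# difference in daily gain, using the standard normal approximation.
# SD and detectable difference below are illustrative assumptions.
from math import ceil
from statistics import NormalDist

def pens_per_treatment(sd, delta, alpha=0.05, power=0.80):
    """Replicates per treatment for a two-sided, two-sample comparison
    (normal approximation; exact t-based answers come out slightly larger)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

print(pens_per_treatment(sd=25, delta=30))  # pens needed per treatment
```

Note how the answer here (11 pens per treatment) sits just above the 6-10 range mentioned above, and how halving the difference to be detected roughly quadruples the required replication.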
Trials conducted under non-commercial conditions should be interpreted with care before their results are applied in commercial practice. For example, results obtained with one or two piglets per pen are not likely to be repeated in larger pens holding 10 or 20 pigs, as feeding behaviour is quite different in large groups. The same logic applies to diets used for experimental purposes. Although such trials are not without value, they should be used as the starting point for more practical trials under conditions similar to those likely to be encountered on the commercial farms where these new additives are expected to be used.
One change at a time
Quite often, trials include more than one variable that is changed, sometimes without design. In our example (Figure 1), only the type of additive should change among treatments. If, however, more changes were implemented at the same time (say, also a reduction in crude protein), then it would be impossible to discern to which factor any response was attributable. This ‘rule’ would not apply if, for example, the new additive were an enzyme that enhanced protein digestibility. In that case, the protein concentration of the positive control and of the new enzyme treatment would have to be lowered in an equal manner; in addition, another negative control would have to be used (no additives and lower protein, to ensure the pigs actually required the extra protein in the first instance).
Applied vs basic research
Data generated in vitro or in basic research programs do not always translate into improved performance or profitability under commercial conditions. For example, additives that improve indices of digestive tract health do not always improve growth performance, health, or profitability on commercial farms. This does not imply that such trials are of no value, but rather that it is best to establish under which conditions these additives are most effective before marketing begins.
Source: FeedMix vol 17 nr 3, 2009