Similarity or Difference – does it matter?

Introduction to Sensory Evaluation training course for a key client

Recently we ran a very enjoyable one-day Introduction to Sensory Evaluation training course for a key client. Like many companies, they run a lot of difference tests to determine the impact of recipe and process improvements. During the training day there was much debate about the actual objectives behind these tests.

Often when running a difference test it is similarity, and not difference, that we want to establish. In fact, when we are re-formulating a product, changing a flavour or some other ingredient, we often don’t want our customers to notice the change, and a conclusion of ‘no significant difference’ is the result we would like our triangle or duo-trio test to give us. So it is tempting to conclude, after a triangle test with 18 or 24 participants in which you found no significant difference between the samples, that they MUST BE THE SAME. Unfortunately this is NOT TRUE, and a conclusion of ‘no significant difference’ does not mean that our samples are, in fact, the same.

To establish similarity you usually need more people to take part in the test than to establish difference, and the reason comes down to the statistics.

When we want to prove a difference we need to have a high level of confidence that the products are perceivably different. This is achieved by choosing a small value for the type 1 error, alpha, i.e. allowing only a small chance of concluding that the samples are different when they are not. Usually we set alpha at 5%. This means that if the samples were actually identical, the test would wrongly declare a difference about 5 times in every 100. In other words, when there is genuinely no difference, there is only a 1 in 20 chance that the test will tell us otherwise.
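To make this concrete, here is a minimal sketch in Python (assuming scipy is available; the panel size of 24 is just illustrative) of what testing at the 5% level means for a triangle test: finding the smallest number of correct answers that pure guessing would reach no more than 5% of the time.

```python
from scipy.stats import binom

n = 24          # illustrative panel size
p_chance = 1/3  # a pure guesser picks the odd sample out 1 time in 3

# Find the smallest number of correct answers that pure guessing would
# reach with probability at most 5% -- the critical value of the test.
for x in range(n + 1):
    if binom.sf(x - 1, n, p_chance) <= 0.05:  # P(X >= x) under guessing
        print(f"With {n} assessors, declare a difference at {x} or more correct")
        break
```

For n = 24 this reproduces the kind of critical value quoted in published triangle-test tables.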

In similarity testing, the interest is reversed. Here it is important that we have a high level of confidence that the samples are not perceivably different. This is achieved by choosing a small value for the type 2 error, beta, i.e. allowing only a small chance that samples which really are perceivably different are wrongly concluded to be indistinguishable.
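This is also where the earlier warning comes from: with a small panel, the type 2 error of a difference test can be enormous. The sketch below (same assumptions as above, plus an assumed 30% of the population able to discriminate) computes beta directly.

```python
from scipy.stats import binom

def triangle_beta(n, alpha=0.05, pd=0.30):
    """Type 2 error of an alpha-level triangle test when a proportion pd
    of the population can genuinely distinguish the samples."""
    p0 = 1/3                   # chance level for guessers
    p1 = pd + (1 - pd) * p0    # P(correct): discriminators plus lucky guessers
    # critical value of the alpha-level difference test
    x_crit = next(x for x in range(n + 1) if binom.sf(x - 1, n, p0) <= alpha)
    # beta: probability the count stays below x_crit despite the real difference
    return binom.cdf(x_crit - 1, n, p1)

print(triangle_beta(24))  # with only 24 assessors, beta is large
```

With 24 assessors the result comes out at roughly 45%: even though 30% of consumers can tell the products apart, the triangle test misses the difference close to half the time, which is why ‘no significant difference’ on its own proves nothing about similarity.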

Tests that control type 2 error carry with them a complicating factor. In addition to choosing a value for beta (e.g. 10%), a value for a second variable must be specified, namely the maximum acceptable proportion of the population that can distinguish between the products (Pd). Whilst our first instinct might be that no one should be able to tell the difference, proving this is practically impossible as, statistically, it would require several hundred panelists or more to take part. To make similarity testing possible, we typically set Pd somewhere between 20% and 35%.
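The reason a Pd near zero is out of reach becomes clear when you translate Pd into test results. In a triangle test even pure guessers answer correctly one time in three, so the expected proportion of correct answers only creeps above the 33.3% chance level as Pd grows. A small sketch (same guessing model as above) shows the mapping:

```python
# Expected proportion of correct triangle-test answers for a given Pd:
# discriminators always answer correctly; everyone else guesses (1 in 3 right).
for pd in (0.0, 0.05, 0.10, 0.20, 0.30):
    p_correct = pd + (1 - pd) / 3
    print(f"Pd = {pd:4.0%}  ->  expected correct answers = {p_correct:.1%}")
```

At Pd = 5% the panel is expected to score under 37% correct against 33.3% by pure chance; reliably separating two proportions that close is what drives the required panel into the hundreds.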

On the practical side, testing for similarity will require more assessors than testing for difference, and 60 is generally the minimum number for triangle testing when similarity is the objective. So when designing a similarity test it is important to set alpha, beta and Pd in advance and then carry out your test accordingly. Tables and statistical software are available to guide these decisions.
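As an illustration of what those tables and packages are doing behind the scenes, the sketch below (same assumptions as the earlier snippets: a triangle test, scipy available, and your chosen alpha, beta and Pd as inputs) searches for the smallest panel whose 5%-level test keeps beta below the target.

```python
from scipy.stats import binom

def triangle_panel_size(alpha=0.05, beta=0.10, pd=0.30, n_max=500):
    """Smallest panel whose alpha-level triangle test has type 2 error
    at most beta against a proportion pd of true discriminators."""
    p0, p1 = 1/3, pd + (1 - pd) / 3
    for n in range(6, n_max + 1):
        x_crit = next(x for x in range(n + 1) if binom.sf(x - 1, n, p0) <= alpha)
        if binom.cdf(x_crit - 1, n, p1) <= beta:
            return n
    return None  # target not reachable within n_max

print(triangle_panel_size())         # Pd = 30%: in the region of the 60 rule of thumb
print(triangle_panel_size(pd=0.10))  # Pd = 10%: several hundred assessors
```

Because the binomial distribution is discrete, the exact minimum wobbles as n changes, so published tables may quote slightly different numbers; but for Pd = 30% the search lands broadly in line with the 60-assessor rule of thumb, and for Pd = 10% it climbs into the hundreds, matching the point made above.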