Consumers in industrialized countries are nowadays much more interested in information about the production methods and components of the food products that they eat than they were 50 years ago. Some production methods are perceived as less “natural” (e.g. conventional agriculture), while some food components are seen as “unhealthy” and “unfamiliar” (e.g. artificial additives). This phenomenon, often referred to as the “clean label” trend, has driven the food industry to communicate that a certain ingredient or additive is absent, or that the food has been produced using a more “natural” production method (e.g. organic agriculture). However, so far there is no common and objective definition of clean label. This review paper aims to fill this gap via three main objectives: a) to develop and suggest a definition that integrates the various understandings of clean label into a single definition; b) to identify the factors that drive consumers' choices, through a review of recent studies on consumer perception of food categories understood as clean label, with a focus on organic, natural and ‘free from’ artificial additives/ingredients food products; and c) to discuss the implications of consumer demand for clean label food products for food manufacturers as well as policy makers. We suggest defining clean label both in a broad sense, where consumers evaluate the cleanliness of a product by assumption and through inference from the front-of-pack label, and in a strict sense, where consumers evaluate the cleanliness of a product by inspection and through inference from the back-of-pack label. Results show that while ‘health’ is a major consumer motive, a broad diversity of drivers influences the clean label trend, with particular relevance of intrinsic and extrinsic product characteristics and socio-cultural factors. However, ‘free from’ artificial additives/ingredients food products tend to differ from organic and natural products.
Food manufacturers should take the diversity of these drivers into account when developing new products and the accompanying communication. For policy makers, it is important to work towards a more homogeneous understanding and application of the term clean label, to identify a uniform definition or regulation for ‘free from’ artificial additives/ingredients food products, and to work towards reducing consumer misconceptions. Finally, multiple avenues for future research are discussed.
•We defined clean label in a broad (front-of-pack) and a strict (back-of-pack) sense.
•We focused on organic, natural and ‘free from’ artificial additives/ingredients food products.
•Intrinsic, extrinsic and socio-cultural factors affect consumers' preferences for clean-labeled foods.
•Implications for food manufacturers and policy makers were discussed.
•Future research directions were suggested.
In applied spectroscopy, the purpose of multivariate calibration is almost exclusively to relate analyte concentrations to spectroscopic measurements. The multivariate calibration model provides estimates of analyte concentrations based on the spectroscopic measurements. Predictive performance is often evaluated with a mean squared error. While this average measure can be used for model selection, it is not satisfactory for evaluating the uncertainty of individual predictions: for a calibration, the uncertainties are sample specific. This is especially true for multivariate calibration, where interfering compounds may be present. Consider in-line spectroscopic measurements during a chemical reaction, a production process, etc. Here, reference values are not necessarily available, so one needs to know the uncertainty of a given prediction in order to use it to assess the state of the chemical reaction, adjust the process, and so on. In this paper, we discuss the influence of variance and bias on sample-specific prediction errors in multivariate calibration. We compare theoretical formulae with results obtained on experimental data. The results indicate that the bias contribution cannot necessarily be neglected when assessing sample-specific prediction ability in practice.
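The variance and bias contributions discussed in the abstract above can be illustrated numerically. The sketch below is a hypothetical univariate toy case, not the paper's multivariate setup: a deliberately misspecified calibration is refitted on many simulated calibration sets, and the mean squared error of prediction for one new sample splits exactly into a variance term and a squared-bias term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation (assumption, not the paper's data): a univariate
# linear calibration is refitted on many noisy calibration sets so that the
# sample-specific prediction error can be split empirically into a variance
# and a squared-bias contribution.
true_slope, noise_sd = 2.0, 0.3
x_cal = np.linspace(0.0, 1.0, 20)
x_new = 0.9                          # the sample whose prediction we assess
y_true_new = true_slope * x_new

preds = []
for _ in range(2000):
    y_cal = true_slope * x_cal + rng.normal(0.0, noise_sd, x_cal.size)
    # deliberately misspecified through-origin fit on a shifted regressor,
    # to induce a systematic (bias) component in the prediction error
    slope_hat = np.sum((x_cal + 0.05) * y_cal) / np.sum((x_cal + 0.05) ** 2)
    preds.append(slope_hat * x_new)
preds = np.asarray(preds)

variance = preds.var()
bias_sq = (preds.mean() - y_true_new) ** 2
msep = np.mean((preds - y_true_new) ** 2)
# MSEP decomposes exactly as variance + bias^2 for this Monte Carlo sample
print(variance, bias_sq, msep)
```

The decomposition shows why an average error measure hides the bias part: for this sample the squared bias is a non-negligible fraction of the total error.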
The aim of the present work is to extend the Sequentially Orthogonalized Partial Least Squares (SO-PLS) regression method, usually used for continuous output, to situations where classification is the main purpose. To this end, SO-PLS discriminant analysis is compared with other commonly used techniques such as Partial Least Squares Discriminant Analysis (PLS-DA) and Multiblock Partial Least Squares Discriminant Analysis (MB-PLS-DA). In particular, we focus on how multiblock strategies can give better discrimination than analysis of the individual blocks. We also show that SO-PLS discriminant analysis yields valuable interpretation tools that give additional insight into the data, and we introduce some new ways of representing the information that take both interpretation and predictive aspects into account.
•Novel multiblock classification method coupling SO-PLS and LDA is proposed.
•SO-PLS-LDA outperforms standard MB-PLS-LDA.
•SO-PLS-LDA provides better interpretation and easier visualization of the results.
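As a toy illustration of the sequential-orthogonalization idea behind SO-PLS-DA, the following sketch extracts scores from a first block, deflates the second block by projecting out those scores, and extracts scores from the orthogonalized residual. This is an assumption-laden simplification, not the authors' implementation: one component per block, SVD-based PLS weights, and a nearest-centroid rule standing in for the LDA step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated two-block, two-class data (an illustrative assumption)
n = 60
classes = np.repeat([0, 1], n // 2)
Y = np.eye(2)[classes]                        # dummy-coded class matrix
X1 = rng.normal(size=(n, 5)) + classes[:, None] * 1.5
X2 = rng.normal(size=(n, 4)) + classes[:, None] * 1.0

def pls_scores(X, Y, a=1):
    """First `a` PLS score vectors via SVD of the cross-covariance X'Y."""
    Xc = X - X.mean(0)
    W = np.linalg.svd(Xc.T @ (Y - Y.mean(0)))[0][:, :a]
    return Xc @ W

T1 = pls_scores(X1, Y)
# sequential orthogonalization: remove from X2 what T1 already explains
X2_orth = X2 - T1 @ np.linalg.lstsq(T1, X2, rcond=None)[0]
T2 = pls_scores(X2_orth, Y)

# nearest-centroid classification on the concatenated scores
# (a stand-in for the LDA step of SO-PLS-LDA)
T = np.hstack([T1, T2])
centroids = np.array([T[classes == k].mean(0) for k in (0, 1)])
pred = np.argmin(((T[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("training accuracy:", (pred == classes).mean())
```

Because T2 is computed from the orthogonalized block, it carries only the class information not already captured by the first block, which is what makes the per-block interpretation possible.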
•Interactions of sensory and extrinsic food attributes affect consumers’ preferences.
•Methods combining sensory and extrinsic food attributes are discussed.
•Descriptions, objectives, advantages, drawbacks and applications of each method are examined.
•Industrial challenges and future research needs are discussed.
Understanding the interaction of sensory and extrinsic product attributes in consumer preferences has been identified as one of the key pillars for raising the likelihood of food products’ success in the market. Over the course of the last decade there has been increased attention on research emphasizing a combination of these food-choice driving parameters. This paper discusses progress made in the field, focusing on three groups of methods: (i) conjoint hedonic methods, (ii) “classic” hedonic testing and (iii) alternative descriptive approaches. For each method, the methodology in question is described and its objectives, advantages, drawbacks and applications are examined. Industrial challenges and future research needs are discussed.
Multi-way data arrays are becoming more common in several fields of science. For instance, analytical instruments can sometimes collect signals in different modes simultaneously, as in e.g. fluorescence and LC/GC-MS. Higher-order data can also arise in sensory science, where product scores can be reported as a function of sample, judge and attribute. Another example is process monitoring, where several process variables can be measured over time for several batches. In addition, so-called multi-block data sets, where several blocks of data describe the same set of samples, are becoming more common. Several methods exist for analyzing either multi-way or multi-block data, but little attention has been paid to methods that combine these two data properties. A common procedure is to “unfold” multi-way arrays in order to obtain two-way data tables on which classical multi-block methods can be applied. However, it is well known that unfolding can lead to overfitted models due to the increased flexibility in parameter estimation. In this paper we present a novel multi-block regression method that can handle multi-way data blocks. The method combines a multi-block method, Sequential and Orthogonalized PLS (SO-PLS), with the multi-way version of PLS, N-PLS, and is therefore called SO-N-PLS. We have compared the method to Multi-block PLS (MB-PLS) and SO-PLS on unfolded data. We investigate the hypotheses that SO-N-PLS performs better on small and noisy data sets and that SO-N-PLS models are easier to interpret. The hypotheses are investigated by a simulation study and two real data examples, one dealing with regression and one with classification. The simulation study shows that SO-N-PLS predicts better than the unfolded methods when the sample size is small and the data are noisy, because it filters out the noise better than MB-PLS and SO-PLS. For the real data examples, the differences in prediction are small, but the multi-way method allows easier interpretation.
•Extension of SO-PLS to multi-way arrays: SO-N-PLS does not require unfolding.
•SO-N-PLS filters out the noise better than SO-PLS and MB-PLS.
•In simulations, SO-N-PLS outperforms SO-PLS and MB-PLS in handling small and noisy data.
•Interpretation tools take into account the three-way nature of the data.
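The unfolding step that SO-N-PLS avoids can be shown in two lines of numpy. This minimal illustration (not the authors' code) matricizes an I x J x K block into an I x (J*K) table, which is exactly what discards the explicit three-way structure and inflates the number of free parameters in the subsequent two-way model.

```python
import numpy as np

# Toy three-way block: I samples, J variables, K conditions (sizes assumed)
I, J, K = 6, 4, 3
X = np.arange(I * J * K, dtype=float).reshape(I, J, K)

# "Unfolding": keep the sample mode, merge the two remaining modes.
# Each row of X_unfolded is sample i's J x K slab flattened in C order.
X_unfolded = X.reshape(I, J * K)
print(X_unfolded.shape)   # (6, 12)
```

A two-way multi-block method such as MB-PLS would then operate on `X_unfolded`, fitting J*K loadings per component instead of the J + K weights a multi-way model like N-PLS estimates.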
•A trained panel evaluated yoghurt samples varying in texture and flavour intensity.
•We compared TDS, TCATA, TDS by modality in sensory description and panel performance.
•TDS by modality and TCATA provided more details about flavour perception.
•TCATA provided additional information about the interaction between attributes.
•TDS by modality and TCATA were better in discrimination and agreement abilities.
For describing the evolution of sensory properties during eating, dynamic sensory methods are still being developed and optimised. Temporal Dominance of Sensations (TDS) and Temporal Check-All-That-Apply (TCATA) are currently the most used and discussed. The aim of this study was to compare TDS, TCATA and a variant of TDS performed by modality (M-TDS) with regard to the outcome of the dynamic sensory description. The methods were applied by the same trained panel (n = 10) to evaluate the dynamic properties of yoghurt samples of identical base composition, varying only in textural properties and flavour intensity. Based on an experimental design, the yoghurts varied in viscosity (thin/thick), size of added cereal particles (flour/flakes) and flavour intensity (low dose/optimised dose, obtained by adding artificial sweetener and vanilla).
The TDS curves revealed that the variation in viscosity and particle size led to differences in perception mainly at the beginning of the eating process (Thin/Thick and Gritty/Sandy). Additionally, all samples were perceived as Bitter at the end of the eating process. TCATA and M-TDS results were generally in agreement with TDS, but they unveiled more details of the samples’ dynamic profiles at all stages of the eating process, showing the effect of Vanilla and Sweet for the samples with optimised flavour, and the masked perception of Bitter.
The duration of the eating process was standardized and split into three time intervals (T0-T40, T41-T80, T81-T100). Panelists’ responses were summarized as frequency values in each time interval. Principal Component Analysis was used to visualize sample trajectories over time in the sensory space; up to the third dimension had to be examined to understand the trajectories properly. ANOVA models were used to identify the attributes that differed significantly among products. Panel performance was assessed with MANOVA models for the three methods. The results indicated that TCATA was more discriminative and that panelists were more in agreement. TCATA also described samples in more detail than TDS in terms of the number of discriminating attributes. The discussion also centers on the different aspects of perception captured by the three compared methods, which may answer different research questions.
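The trajectory analysis outlined above can be sketched in a few lines. The sketch below uses simulated frequencies (an assumption; the study's real citation data are not reproduced here): per-attribute frequencies are tabulated for each sample in each of the three standardized time intervals, and PCA of the stacked table yields a per-sample trajectory through the sensory space.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated citation-frequency tables: one row per (sample, interval) pair
samples, intervals, attributes = 4, 3, 6
F = rng.random((samples, intervals, attributes))
Ftab = F.reshape(samples * intervals, attributes)

# PCA via SVD of the column-centered table
Fc = Ftab - Ftab.mean(0)
U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
scores = U * s
explained = s**2 / np.sum(s**2)     # variance explained per component

# trajectory of the first sample across T0-T40, T41-T80, T81-T100
# in the first two principal components
trajectory = scores[:intervals, :2]
print(trajectory.shape)
```

Plotting each sample's three interval points joined by arrows in the PC1-PC2 (and, as the text notes, PC3) plane reproduces the kind of trajectory map the study describes.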
•Effects of evoked meal context were found on intrinsic and extrinsic ratings.
•The effect of evoked meal context was most evident for the extrinsic ratings.
•The largest sample discrimination was found for the traditional meal.
To achieve product success in the market, it is important to understand the interplay between sensory and non-sensory product attributes, since both dimensions must be optimised during the product development process. Contextual factors have been shown to affect the outcome of acceptance studies, and it is important that consumer responses to food products are studied in an appropriate eating context. The main objective of this study was to explore how evoked meal contexts affect consumer responses to a set of products in relation to intrinsic and extrinsic cues. Six types of dry-cured ham were described by means of sensory profiling and presented to 120 consumers in a central location test, first in a blind condition (intrinsic rating) and then in an informed condition (extrinsic rating). The measured responses were acceptance and probability of buying. The extrinsic product attributes presented were origin, ageing time and price. Moreover, two meal contexts were presented during both intrinsic and extrinsic rating: a traditional meal and a novel meal. The meals were introduced to consumers by means of written texts and pictures. The evoked meal contexts affected both the intrinsic and the extrinsic rating, with the strongest effect being observed for the extrinsic rating. Moreover, consumers were somewhat more discriminating when evoking a traditional meal than when evoking a novel meal. Accordingly, it is important to develop existing consumer testing procedures further, and to incorporate instruments allowing for possible effects of the consumption context.
Recent technological advances enable us to collect huge amounts of data from multiple sources. Jointly analyzing such multi-relational data from different sources, i.e., data fusion (also called multi-block, multi-view or multi-set data analysis), often enhances knowledge discovery. For instance, in metabolomics, biological fluids are measured using a variety of analytical techniques such as Liquid Chromatography–Mass Spectrometry and Nuclear Magnetic Resonance Spectroscopy. Data measured using different analytical methods may be complementary and their fusion may help in the identification of chemicals related to certain diseases. Data fusion has proved useful in many fields including social network analysis, collaborative filtering, neuroscience and bioinformatics.
In this paper, unlike many studies demonstrating the success of data fusion, we explore the limitations as well as the advantages of data fusion. We formulate data fusion as a coupled matrix and tensor factorization (CMTF) problem, which jointly factorizes multiple data sets in the form of higher-order tensors and matrices by extracting a common latent structure from the shared mode. Using numerical experiments on simulated and real data sets, we assess the performance of coupled analysis compared to the analysis of a single data set in terms of missing data estimation and demonstrate cases where coupled analysis outperforms analysis of a single data set and vice versa.
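The joint-factorization idea can be sketched with a simplified matrix-matrix stand-in for CMTF (an assumption: the paper couples tensors and matrices, while the toy below couples two matrices sharing their sample mode). Alternating least squares minimizes the sum of the two reconstruction errors, with the shared factor updated from both data sets jointly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated coupled data: X (I x J) and Y (I x K) share the factor A
# on their first mode; R is the (assumed known) number of components.
I, J, K, R = 20, 8, 6, 3
A0 = rng.normal(size=(I, R))
X = A0 @ rng.normal(size=(R, J))
Y = A0 @ rng.normal(size=(R, K))

# ALS for min ||X - A B'||^2 + ||Y - A C'||^2
A = rng.normal(size=(I, R))
for _ in range(50):
    B = np.linalg.lstsq(A, X, rcond=None)[0].T        # J x R
    C = np.linalg.lstsq(A, Y, rcond=None)[0].T        # K x R
    # the shared-mode update stacks both data sets: this is the "coupling"
    A = np.linalg.lstsq(np.vstack([B, C]),
                        np.hstack([X, Y]).T, rcond=None)[0].T

fit = 1 - (np.linalg.norm(X - A @ B.T)**2 + np.linalg.norm(Y - A @ C.T)**2) \
        / (np.linalg.norm(X)**2 + np.linalg.norm(Y)**2)
print(round(fit, 4))
```

In the missing-data experiments the paper describes, the gain (or loss) from coupling shows up in how well such a jointly estimated A reconstructs held-out entries compared with factorizing one data set alone.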
•Component methods are useful for exploration of data.
•Component methods are useful for interpretation of large data sets.
•Component methods can be generalized to and used for many different purposes.
•Component methods can be used to confirm hypotheses.
•There is a strong link between many different component methods.
This paper discusses the advantages of using so-called component-based methods in sensory science. For instance, principal component analysis (PCA) and partial least squares (PLS) regression are used widely in the field; we will here discuss these and other methods for handling one block of data, as well as several blocks of data. Component-based methods all share a common feature: they define linear combinations of the variables to achieve data compression, interpretation, and prediction. The common properties of the component-based methods are listed and their advantages illustrated by examples. The paper equips practitioners with a list of solid and concrete arguments for using this methodology.
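The common feature named above, that component-based methods define linear combinations of the variables, can be made concrete with PCA via the SVD. The sketch below uses simulated data (an assumption): the loadings W give the weights of the linear combinations, the scores T = XW are the combinations themselves, and a low-rank reconstruction shows the compression.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated data standing in for a sensory table: 30 samples, 10 variables
X = rng.normal(size=(30, 10))
Xc = X - X.mean(0)

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2].T            # loadings: weights of two linear combinations
T = Xc @ W              # scores: the linear combinations (data compression)
X_hat = T @ W.T         # rank-2 reconstruction used for interpretation

rel_err = np.linalg.norm(Xc - X_hat) / np.linalg.norm(Xc)
print(T.shape, round(rel_err, 3))
```

PLS follows the same template, except that the weights W are chosen to maximize covariance with a response rather than the variance of X, which is exactly the link between the methods that the paper emphasizes.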
External preference mapping is widely used in marketing and R&D divisions to understand consumer behaviour. The most common preference map is obtained through a two-step procedure that combines principal component analysis and least squares regression. The standard approach exploits classical regression and therefore focuses on the conditional mean. This paper proposes the use of quantile regression to enrich the preference map by looking at the whole distribution of consumer preference. The enriched maps highlight possibly different consumer behaviour with respect to the least and most preferred products. This is pursued by exploring the variability of liking along the principal components as well as focusing on the direction of preference. The use of different aesthetics (colours, shapes, size, arrows) equips the standard preference map with additional information and does not force users to change the standard tool they are used to. The proposed methodology is shown in action on a case study concerning yogurt preferences.
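The quantile-regression enrichment can be sketched as follows. This is a hedged toy version, not the paper's implementation: liking is simulated against a stand-in for the first principal component, and conditional quantiles are fitted by minimizing the pinball (check) loss, so that the spread between the low and high quantile lines reveals where consumer liking is most variable.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulated preference data (illustrative assumption): liking regressed on a
# stand-in for the first PCA component, with heteroscedastic noise so that
# the conditional quantiles diverge from the conditional mean.
pc1 = rng.uniform(-2, 2, 200)
liking = 5 + 1.2 * pc1 + rng.normal(0, 1 + 0.5 * np.abs(pc1), 200)

def pinball_fit(q):
    """Linear quantile regression via the pinball loss (hypothetical helper)."""
    def loss(beta):
        r = liking - (beta[0] + beta[1] * pc1)
        return np.mean(np.maximum(q * r, (q - 1) * r))
    return minimize(loss, x0=[np.median(liking), 0.0],
                    method="Nelder-Mead").x

b10, b90 = pinball_fit(0.10), pinball_fit(0.90)
# with heteroscedastic liking, the 0.9 line sits above the 0.1 line,
# and their gap widens where preferences are most dispersed
print(b10.round(2), b90.round(2))
```

Drawing the fitted quantile lines (or their coefficients as arrows) onto the PCA map is the kind of aesthetic enrichment of the standard preference map that the paper proposes.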