Sensory evaluation techniques

"This new edition of a bestseller covers all phases of performing sensory evaluation studies, from listing the steps involved in a sensory evaluation project to presenting advanced statistical methods. Like its predecessors, Sensory Evaluation Techniques, Fifth Edition gives a clear and concise...

Bibliographic Details
Main Authors: Meilgaard, Morten C.; Civille, Gail Vance; Carr, B. Thomas
Format: Electronic eBook
Language: English
Published: Boca Raton : CRC Press, Taylor & Francis Group, [2016]
Edition: Fifth edition.
ISBN: 9781482216912; 1482216914; 9781523107506; 1523107502; 9781482216905; 1482216906
Physical Description: 1 online resource (xxix, 600 pages)

Table of Contents:
  • 1.1. Introduction
  • 1.2. Development of Sensory Testing
  • 1.3. Human Subjects as Instruments
  • 1.3.1. Chain of Sensory Perception
  • 1.4. Conducting a Sensory Study
  • References
  • 2.1. Introduction
  • 2.2. Sensory Attributes
  • 2.2.1. Appearance
  • 2.2.2. Odor/Aroma/Fragrance
  • 2.2.3. Consistency and Texture
  • 2.2.4. Flavor
  • 2.2.5. Noise
  • 2.3. Human Senses
  • 2.3.1. Sense of Vision
  • 2.3.2. Sense of Touch
  • 2.3.3. Olfactory Sense
  • 2.3.3.1. General
  • 2.3.3.2. Retronasal Odor
  • 2.3.3.3. Odor Memory
  • 2.3.4. Chemical/Trigeminal Sense
  • 2.3.5. Sense of Gustation/Taste
  • 2.3.6. Sense of Hearing
  • 2.4. Perception at Threshold and Above
  • References
  • 3.1. Introduction
  • 3.2. Test Controls
  • 3.2.1. Development of Test-Room Design
  • 3.2.2. Location
  • 3.2.3. Test-Room Design
  • 3.2.3.1. Booth
  • 3.2.3.2. Descriptive Evaluation and Training Area
  • 3.2.3.3. Preparation Area
  • 3.2.3.4. Office Facilities
  • 3.2.3.5. Entrance and Exit Areas
  • 3.2.3.6. Storage
  • 3.2.4. General Design Factors
  • 3.2.4.1. Color and Lighting
  • 3.2.4.2. Air Circulation, Temperature, and Humidity
  • 3.2.4.3. Construction Materials
  • 3.3. Product Controls
  • 3.3.1. General Equipment
  • 3.3.2. Sample Preparation
  • 3.3.2.1. Supplies and Equipment
  • 3.3.2.2. Materials
  • 3.3.2.3. Preparation Procedures
  • 3.3.3. Sample Presentation
  • 3.3.3.1. Container, Sample Size, and Other Particulars
  • 3.3.3.2. Order, Coding, and Number of Samples
  • 3.3.4. Product Sampling
  • 3.4. Panelist Controls
  • 3.4.1. Panel Training or Orientation
  • 3.4.2. Product/Time of Day
  • 3.4.3. Panelists/Environment
  • References
  • 4.1. Introduction
  • 4.2. Physiological Factors
  • 4.2.1. Adaptation
  • 4.2.2. Enhancement or Suppression
  • 4.3. Psychological Factors
  • 4.3.1. Expectation Error
  • 4.3.2. Error of Habituation
  • 4.3.3. Stimulus Error
  • 4.3.4. Logical Error
  • 4.3.5. Halo Effect
  • 4.3.6. Order of Presentation of Samples
  • 4.3.7. Mutual Suggestion
  • 4.3.8. Lack of Motivation
  • 4.3.9. Capriciousness versus Timidity
  • 4.4. Poor Physical Condition
  • References
  • 5.1. Introduction
  • 5.2. Psychophysical Theory
  • 5.2.1. Fechner's Law
  • 5.2.2. Stevens' Law
  • 5.2.3. Beidler Model
  • 5.3. Classification
  • 5.4. Grading
  • 5.5. Ranking
  • 5.6. Scaling
  • 5.6.1. Category Scaling
  • 5.6.2. Line Scales
  • 5.6.3. Magnitude Estimation Scaling
  • 5.6.3.1. Magnitude Estimation versus Category Scaling
  • 5.6.3.2. Magnitude Matching (Cross-Modality Matching)
  • 5.6.4. Labeled Magnitude Scales (LMS)
  • References
  • 6.1. Introduction
  • 6.2. Define the Project Objective
  • 6.3. Define the Test Objective
  • 6.4. Review Project Objective and Test Objectives: Revise Test Design
  • Reference
  • 7.1. Introduction
  • 7.2. Unified Approach to Difference and Similarity Testing
  • 7.3. Triangle Test
  • 7.3.1. Scope and Application
  • 7.3.2. Principle of the Test
  • 7.3.3. Test Subjects
  • 7.3.4. Test Procedure
  • 7.3.5. Analysis and Interpretation of Results
  • 7.4. Duo-Trio Test
  • 7.4.1. Scope and Application
  • 7.4.2. Principle of the Test
  • 7.4.3. Test Subjects
  • 7.4.4. Test Procedure
  • 7.5. Two-out-of-Five Test
  • 7.5.1. Scope and Application
  • 7.5.2. Principle of the Test
  • 7.5.3. Test Subjects
  • 7.5.4. Test Procedure
  • 7.6. Same/Different Test (or Simple Difference Test)
  • 7.6.1. Scope and Application
  • 7.6.2. Principle of the Test
  • 7.6.3. Test Subjects
  • 7.6.4. Test Procedure
  • 7.6.5. Analysis and Interpretation of Results
  • 7.7."A"-"Not A" Test
  • 7.7.1. Scope and Application
  • 7.7.2. Principle of the Test
  • 7.7.3. Test Subjects
  • 7.7.4. Test Procedure
  • 7.7.5. Analysis and Interpretation of Results
  • 7.8. Difference-from-Control Test
  • 7.8.1. Scope and Application
  • 7.8.2. Principle of the Test
  • 7.8.3. Test Subjects
  • 7.8.4. Test Procedure
  • 7.8.5. Analysis and Interpretation of Results
  • 7.9. Sequential Tests
  • 7.9.1. Scope and Application
  • 7.9.2. Principle of the Test
  • 7.9.3. Analysis and Interpretation of Results: Parameters of the Test
  • References
  • 8.1. Introduction: Paired Comparison Designs
  • 8.2. Directional Difference Test: Comparing Two Samples
  • 8.2.1. Scope and Application
  • 8.2.2. Principle
  • 8.2.3. Test Subjects
  • 8.2.4. Test Procedure
  • 8.3. Specified Method of Tetrads: Comparing Two Samples on a Specified Attribute Using the Method of Tetrads
  • 8.3.1. Scope and Application
  • 8.3.2. Principle of the Test
  • 8.3.3. Test Assessors
  • 8.3.4. Test Procedure
  • 8.4. Pairwise Ranking Test: Friedman Analysis-Comparing Several Samples in All Possible Pairs
  • 8.4.1. Scope and Application
  • 8.4.2. Principle of the Test
  • 8.4.3. Test Subjects
  • 8.4.4. Test Procedure
  • 8.5. Introduction: Multisample Difference Tests-Block Designs
  • 8.5.1. Complete Block Designs
  • 8.5.2. Balanced Incomplete Block (BIB) Designs
  • 8.6. Simple Ranking Test: Friedman Analysis-Randomized (Complete) Block Design
  • 8.6.1. Scope and Application
  • 8.6.2. Principle of the Test
  • 8.6.3. Test Subjects
  • 8.6.4. Test Procedure
  • 8.6.5. Analysis and Interpretation of Results
  • 8.7. Multisample Difference Test: Rating Approach-Evaluation by Analysis of Variance (ANOVA)
  • 8.7.1. Scope and Application
  • 8.7.2. Principle of the Test
  • 8.7.3. Test Subjects
  • 8.7.4. Test Procedure
  • 8.7.5. Analysis and Interpretation of Results
  • 8.8. Multisample Difference Test: BIB Ranking Test (Balanced Incomplete Block Design)-Friedman Analysis
  • 8.8.1. Scope and Application
  • 8.8.2. Principle of the Test
  • 8.8.3. Test Subjects
  • 8.8.4. Test Procedure
  • 8.9. Multisample Difference Test: BIB Rating Test-Evaluation by Analysis of Variance
  • 8.9.1. Scope and Application
  • 8.9.2. Principle of the Test
  • 8.9.3. Test Subjects
  • 8.9.4. Test Procedure
  • 8.9.5. Analysis and Interpretation of Results
  • References
  • 9.1. Introduction
  • 9.2. Definitions
  • 9.3. Applications of Threshold Determinations
  • References
  • 10.1. Introduction
  • 10.2. Panel Development
  • 10.2.1. Personnel
  • 10.2.1.1. Special Considerations for a Quality Control/Quality Assurance (QC/QA) Panel
  • 10.2.2. Facilities
  • 10.2.3. Data Collection and Handling
  • 10.2.4. Projected Costs
  • 10.3. Selection and Training for Difference Tests
  • 10.3.1. Selection
  • 10.3.1.1. Matching Tests
  • 10.3.1.2. Detection/Discrimination Tests
  • 10.3.1.3. Ranking/Rating Tests for Intensity
  • 10.3.1.4. Interpretation of Results of Screening Tests
  • 10.3.2. Training
  • 10.4. Selection and Training of Panelists for Descriptive Testing
  • 10.4.1. Recruiting Descriptive Panelists
  • 10.4.2. Selection for Descriptive Testing
  • 10.4.2.1. Prescreening Questionnaires
  • 10.4.2.2. Acuity Tests
  • 10.4.2.3. Ranking/Rating Screening Tests for Descriptive Analysis
  • 10.4.2.4. Personal Interview
  • 10.4.2.5. Mock Panel
  • 10.4.3. Training for Descriptive Testing
  • 10.4.3.1. Terminology Development
  • 10.4.3.2. Introduction to Descriptive Scaling
  • 10.4.3.3. Initial Practice
  • 10.4.3.4. Small Product Differences
  • 10.4.3.5. Final Practice
  • 10.5. Panel Performance and Motivation
  • 10.5.1. Performance
  • 10.5.2. Panelist Maintenance, Feedback, Rewards, and Motivation
  • Appendix 10.1 Prescreening Questionnaires
  • Appendix 10.2 Panel Leadership Advice
  • References
  • 11.1. Definition
  • 11.2. Field of Application
  • 11.3. Components of Descriptive Analysis
  • 11.3.1. Characteristics: The Qualitative Aspect
  • 11.3.2. Intensity: The Quantitative Aspect
  • 11.3.3. Order of Appearance: The Time Aspect
  • 11.3.4. Overall Impression: The Integrated Aspect
  • 11.4. Commonly Used Descriptive Test Methods with Trained Panels
  • 11.4.1. Flavor Profile Method
  • 11.4.2. Texture Profile Method
  • 11.4.3. Quantitative Descriptive Analysis (QDA®) Method
  • 11.4.4. Spectrum™ Descriptive Analysis Method
  • 11.4.5. Time-Intensity Descriptive Analysis
  • 11.4.5.1. Fixed-Time-Point Methods
  • 11.4.5.2. Continuous Measurement Methods
  • 11.5. Commonly Used Descriptive Test Methods with Untrained Panels
  • 11.5.1. Free-Choice Profiling
  • 11.5.2. Flash Profiling
  • 11.5.3. Projective Mapping (Napping)
  • 11.5.4. Sorting
  • 11.6. Application of Descriptive Analysis Panel Data
  • References
  • 12.1. Designing a Descriptive Method
  • 12.2. Myths about the Spectrum Descriptive Analysis Method
  • 12.2.1. Myth 1: All Descriptive Methods Are the Same
  • 12.2.2. Myth 2: Concept Development Is Unnecessary in Training a Spectrum Panel
  • 12.2.3. Myth 3: All Spectrum Training and Panel Leaders Are the Same; Anyone Can Do It
  • 12.2.4. Myth 4: Consumer Terms Are Better than Technical Terms
  • 12.2.5. Myth 5: Spectrum Panelists Are Forced to Use Canned Lexicons
  • 12.2.6. Myth 6: Spectrum Panelists Are Coerced into Intensity Calibration
  • 12.2.7. Myth 7: The Universal Scale Cannot Show Small Differences
  • 12.2.8. Myth 8: Published References and Terms Are the Equivalent of a Training Manual
  • 12.2.9. Myth 9: Product Users Make the Best Panelists and Hedonics Influence Panel Ratings
  • 12.2.10. Myth 10: Panelists Cannot Be Trained for an Array of Products
  • 12.2.11. Myth 11: Training for the Spectrum Method Is Too Time-Intensive
  • 12.2.12. Myth 12: The Spectrum Method Is Consensus Only
  • 12.2.13. Myth 13: Consensus Profiling Prevents Statistical Analysis of Panel Data
  • 12.2.14. Myth 14: Difficult-to-Find References Prevent Universality of the Spectrum Scale
  • 12.3. Terminology and Lexicon Development
  • 12.4. Intensity
  • 12.5. Combining the Spectrum Descriptive Analysis Method with Other Measures
  • 12.5.1. Using the Spectrum Method Simultaneously with Other Methods
  • 12.5.2. Combining the Spectrum Method with Other Sources of Sensory Data
  • 12.6. Spectrum Descriptive Procedures for Quality Assurance, Shelf-Life Studies, and So On
  • References
  • Appendix 12.1 Spectrum Terminology for Descriptive Analysis
  • Appendix 12.2 Spectrum Intensity Scales for Descriptive Analysis
  • Appendix 12.3 Streamlined Approach to Spectrum References
  • Appendix 12.4 Spectrum Descriptive Analysis: Product Lexicons
  • Appendix 12.5 Spectrum Descriptive Analysis: Examples of Full Product Descriptions
  • Appendix 12.6 Spectrum Descriptive Analysis Training Exercises
  • 13.1. Purpose and Applications
  • 13.1.1. Product Maintenance
  • 13.1.2. Product Improvement/Optimization
  • 13.1.3. Development of New Products
  • 13.1.4. Assessment of Market Potential
  • 13.1.5. Category Review/Benchmarking
  • 13.1.6. Support for Advertising Claims
  • 13.1.7. Uncovering Consumer Needs
  • 13.2. Subjects/Consumers in Affective Tests
  • 13.2.1. Sampling and Demographics
  • 13.2.1.1. User Group
  • 13.2.1.2. Age
  • 13.2.1.3. Gender
  • 13.2.1.4. Income
  • 13.2.1.5. Geographic Location
  • 13.2.2. Source of Test Subjects
  • 13.2.2.1. Employees
  • 13.2.2.2. Local Area Residents
  • 13.2.2.3. General Population
  • 13.3. Choice of Test Location
  • 13.3.1. Laboratory Tests
  • 13.3.2. Central Location Tests
  • 13.3.3. Home Use Tests
  • 13.4. Affective Methods: Qualitative
  • 13.4.1. Applications
  • 13.4.2. Qualitative Screener Development
  • 13.4.3. Types of Qualitative Affective Tests
  • 13.4.3.1. Focus Groups
  • 13.4.3.2. Focus Panels
  • 13.4.3.3. Mini Groups, Dyads, Triads
  • 13.4.3.4. One-on-One Interviews
  • 13.5. Affective Methods: Quantitative
  • 13.5.1. Applications
  • 13.5.2. Design of Quantitative Affective Tests
  • 13.5.2.1. Quantitative Screener Development
  • 13.5.2.2. Questionnaire Design
  • 13.5.2.3. Protocol Design
  • 13.5.3. Types of Quantitative Affective Tests
  • 13.5.3.1. Preference Tests
  • 13.5.3.2. Acceptance Tests
  • 13.5.4. Assessment of Individual Attributes (Attribute Diagnostics)
  • 13.5.5. Other Information
  • 13.6. Internet Research
  • 13.6.1. Introduction
  • 13.6.2. Applications
  • 13.6.3. Design of Internet Research
  • 13.6.4. Internet Research Considerations
  • 13.6.4.1. Benefits and Pitfalls of Using the Internet for Research
  • 13.6.4.2. Platform
  • 13.6.4.3. Recommendations and Checks & Balances
  • Case Study: Internet Research
  • 13.7. Using Other Sensory Methods to Uncover Insights
  • 13.7.1. Relating Affective and Descriptive Data
  • Case Study: Relating Consumer Qualitative Information with Descriptive Analysis Data
  • 13.7.2. Using Affective Data to Define Shelf-Life or Quality Limits
  • 13.7.3. Rapid Prototype Development
  • Appendix 13.1 Screeners for Consumer Studies-Focus Group, CLT, and Home Use Test (HUT)
  • Appendix 13.2 Discussion Guide-Group or One-on-One Interviews
  • Appendix 13.3 Questionnaires for Consumer Studies
  • 14.4.4. Transitioning from Percent-Distinguisher Model to the Thurstonian Model for Planning Discrimination Tests
  • 14.5. Statistical Design of Sensory Panel Studies
  • 14.5.1. Sampling: Replication versus Multiple Observations
  • 14.5.2. Blocking an Experimental Design
  • 14.5.2.1. Completely Randomized Designs
  • 14.5.3. Randomized (Complete) Block Designs
  • 14.5.3.1. Randomized Block Analysis of Ratings
  • 14.5.3.2. Randomized Block Analysis of Rank Data
  • 14.5.4. Balanced Incomplete Block Designs
  • 14.5.4.1. BIB Analysis of Ratings
  • 14.5.4.2. BIB Analysis of Rank Data
  • 14.5.5. Latin-Square Designs
  • 14.5.6. Split-Plot Designs
  • 14.5.6.1. Split-Plot Analysis of Ratings
  • 14.5.7. A Simultaneous Multiple Comparison Procedure
  • Appendix 14.1 Probability
  • References
  • 15.1. Introduction
  • 15.2. Data Relationships
  • 15.2.1. All Independent Variables
  • 15.2.1.1. Correlation Analysis
  • 15.2.1.2. Principal Components Analysis
  • 15.2.1.3. Multidimensional Scaling
  • 15.2.1.4. Cluster Analysis
  • 15.2.2. Dependent and Independent Variables
  • 15.2.2.1. Regression Analysis
  • 15.2.2.2. Principal Component Regression
  • 15.2.2.3. Partial Least-Squares Regression
  • 15.2.2.4. Discriminant Analysis
  • 15.3. Preference Mapping
  • 15.3.1. Internal Preference Mapping
  • 15.3.2. External Preference Mapping
  • 15.3.2.1. Constructing the Perceptual Map of the Product Space
  • 15.3.2.2. Identifying Preference Segments
  • 15.3.2.3. From Perceptual Map to Preference Map
  • 15.3.2.4. Reverse Engineering the Profile of the Target Product
  • 15.3.2.5. External Preference Mapping of Individual Respondents
  • 15.3.3. Partial Least-Squares Mapping
  • 15.4. Treatment Structure of an Experimental Design
  • 15.4.1. Factorial Treatment Structures
  • 15.4.2. Fractional Factorials and Screening Studies
  • 15.4.2.1. Constructing Fractional Factorials
  • 15.4.2.2. Plackett-Burman Experiments
  • 15.4.2.3. Computer-Aided Optimal Fractional Designs
  • 15.4.2.4. Analysis of Screening Studies
  • 15.4.3. Conjoint Analysis
  • 15.4.4. Response Surface Methodology
  • References
  • 16.1. Introduction
  • 16.1.1. Rationale
  • 16.1.2. Qualities of a Good Report
  • 16.2. Anatomy of the Report
  • 16.2.1. Part 1: Summary or Abstract
  • 16.2.2. Part 2: Objectives and Introduction
  • 16.2.3. Part 3: Materials and Methods
  • 16.2.4. Part 4: Results and Discussion
  • 16.3. Graphical Presentation of Data
  • 16.3.1. Introduction
  • 16.3.2. General Guidelines for Graphing Data
  • 16.3.3. Appropriateness of Graphs
  • 16.3.4. Common Graphs and Examples
  • 16.4. Example Reports
  • References
  • 17.1. Introduction
  • 17.2. Attribute Descriptive Methods
  • 17.2.1. Establishing Sensory Specifications
  • 17.2.1.1. Initial Sample Screening
  • 17.2.1.2. Sensory Descriptive Evaluations and Sample Selection for Consumer Testing
  • 17.2.1.3. Consumer Testing Production Samples
  • 17.2.1.4. Establishing the Sensory Specifications
  • 17.2.2. Implementing the In-Plant QC/Sensory Function
  • 17.2.3. Product Sampling, Data Analysis, and Reporting
  • 17.3. Difference-from-Control Methods
  • 17.3.1. Establishing Sensory Specifications
  • 17.3.2. Implementing the In-Plant QC/Sensory Function
  • 17.3.3. Product Sampling, Data Analysis, and Reporting
  • 17.4. In-Out Method
  • 17.4.1. Establishing Sensory Specifications
  • 17.4.2. Implementing the In-Plant QC/Sensory Function
  • 17.4.3. Product Sampling, Data Analysis, and Reporting
  • References
  • 18.1. Introduction
  • 18.2. Front End of Innovation
  • 18.2.1. Definition, Purpose, Outcome
  • 18.2.2. Applications
  • 18.2.3. Tools and Techniques
  • 18.2.4. Design of Front-End Innovation Research
  • 18.2.5. Data Analysis and Mining
  • 18.2.5.1. Case Study: Understanding Consumer Perception of Crispy and Crunchy
  • 18.3. Sequence Mapping
  • 18.4. Capturing the Iconic Experience
  • 18.4.1. Definition and Purpose or Scope
  • 18.4.2. Applications
  • 18.4.3. Design of Research
  • 18.4.4. Tools and Techniques
  • 18.4.5. Data Analysis and Mining/Conclusions
  • 18.5. Consumer Cocreation
  • 18.6. Qualitative Use of Kano Methodology
  • 18.7. Benefit Perception beyond Liking: Functional, Emotional, and Health and Wellness Benefits
  • 18.7.1. Definition and Purpose or Scope
  • 18.7.2. Tools and Techniques
  • 18.7.3. Applications
  • 18.7.4. Design of Research
  • 18.7.5. Conclusions
  • 18.8. Behavioral Economics
  • 18.9. Category Appraisals, Key Drivers Studies, and Sensory Segmentation
  • 18.9.1. Definition and Purpose or Scope
  • 18.9.2. Design and Benefits of the Research
  • 18.9.2.1. Phase I: Defining the Limits of the Category
  • 18.9.2.2. Phase II: Documentation of Product Characteristics, Competitive Intelligence, and Selection of Products for Consumer Testing
  • 18.9.2.3. Phase III: Determining Consumer Acceptance and Perception of the Products in the Category
  • 18.9.2.4. Phase IV: Identifying Key Drivers, Drivers of Benefit Perception, and Strategic Product Guidance
  • 18.9.3. Conclusion
  • 18.10. Ad Claims
  • 18.10.1. Introduction
  • 18.10.2. Types of Claims
  • 18.10.3. Types of Claims Testing
  • 18.10.4. Building the Case
  • 18.10.5. Cautions and Things to Consider
  • Additional Resources
  • References
  • Scenario 1
  • Scenario 2
  • Scenario 3
  • Scenario 4
  • Scenario 5
  • References
  • Additional Qualitative References.