Evaluates and audits choice-based conjoint experiments using systematic social science standards and statistical best practices.
A comprehensive diagnostic tool for social scientists and data analysts performing conjoint analysis. It provides a structured framework to audit experimental designs, verify estimation scripts, assess measurement error through Intra-Respondent Reliability (IRR), and validate the interpretation of Average Marginal Component Effects (AMCEs) and Marginal Means (MMs). Whether reviewing a manuscript or auditing a codebase, this skill ensures that identifying assumptions—such as profile-order, carryover, and fatigue—are empirically tested and that subgroup analyses are statistically sound.
Key Features
1. Detection of measurement error and swapping bias using projoint methods
2. Systematic checklist for design, estimation, and external validity
3. Verification of identifying assumptions (profile-order, carryover, fatigue)
4. Code-level auditing for clustering, estimands, and IRR computation
5. Guidance on subgroup analysis using Marginal Means vs. conditional AMCEs
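The clustering audit in the feature list checks that standard errors for AMCEs are clustered at the respondent level rather than the profile level. A minimal sketch of what a correct specification looks like, using simulated data and a hand-rolled CR0 sandwich estimator (all names and the simulated 0.10 effect are illustrative assumptions, not output of the skill itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n_resp, per_resp = 200, 10                 # 5 tasks x 2 profiles per respondent
resp_id = np.repeat(np.arange(n_resp), per_resp)
x = rng.integers(0, 2, n_resp * per_resp).astype(float)   # binary attribute level
y = (rng.random(x.size) < 0.45 + 0.10 * x).astype(float)  # choice indicator

# Linear probability model y = a + b*x via OLS; b estimates the AMCE
X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Cluster-robust (CR0) variance: sandwich with per-respondent score blocks,
# which is the clustering level the audit checklist requires
meat = np.zeros((2, 2))
for g in np.unique(resp_id):
    Xg, ug = X[resp_id == g], resid[resp_id == g]
    s = Xg.T @ ug
    meat += np.outer(s, s)
V = XtX_inv @ meat @ XtX_inv

amce, se = beta[1], np.sqrt(V[1, 1])
print(f"AMCE = {amce:.3f} (respondent-clustered SE = {se:.3f})")
```

In an actual R or Python script this would typically be delegated to a library (e.g. cluster-robust covariance options in a regression package); the point the audit verifies is that the `groups` passed to the covariance estimator are respondent IDs, not task or profile IDs.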
Use Cases
1. Auditing R or Python scripts to ensure correct clustering and estimand specifications
2. Assessing the external validity and behavioral benchmarks of experimental designs
3. Reviewing academic manuscripts or working papers involving conjoint analysis
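The measurement-error checks above rest on Intra-Respondent Reliability: the share of respondents who make the same choice when one conjoint task is repeated. A minimal sketch under assumed parameters (the 15% response-error rate and stable-preference setup are illustrative, not part of the skill):

```python
import numpy as np

rng = np.random.default_rng(1)
n_resp = 500
true_pref = rng.random(n_resp) < 0.7   # stable underlying preference for profile A
err_first = rng.random(n_resp) < 0.15  # 15% chance of a response error, first ask
err_repeat = rng.random(n_resp) < 0.15 # 15% chance of a response error, repeat ask

# Observed choices flip the true preference whenever a response error occurs
first = np.where(err_first, ~true_pref, true_pref)
repeat = np.where(err_repeat, ~true_pref, true_pref)

# Raw IRR: agreement rate between the original and repeated task
irr = np.mean(first == repeat)
print(f"IRR = {irr:.3f}")
```

With a 15% error rate the expected agreement is 0.85² + 0.15² ≈ 0.745, which illustrates why raw agreement understates true reliability; projoint-style corrections use this quantity to adjust AMCEs and MMs for measurement error.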