Clinical AI Dependency Assessment Scale (CAIDAS): A Comprehensive Research-Based Instrument
⚠️ RESEARCH INSTRUMENT DISCLAIMER ⚠️
This assessment tool is currently under development and research validation. It has NOT been clinically validated and should NOT be used for diagnostic purposes. This instrument is intended for research, educational, and preliminary screening purposes only. Professional clinical evaluation is required for any mental health concerns or treatment decisions.
The Clinical AI Dependency Assessment Scale (CAIDAS) is designed to be the first comprehensive, psychometrically grounded assessment tool specifically targeting AI addiction patterns, developed through extensive analysis of established clinical instruments, AI-specific behavioral research, and published validation standards.
PUBLISHED: September 2025 | COPYRIGHT: The AI Addiction Center
Executive Summary and Clinical Significance
AI addiction presents unprecedented psychological challenges that traditional technology addiction scales cannot adequately capture. Unlike internet or gaming addiction, AI dependency involves unique mechanisms including synthetic attachment formation, anthropomorphization processes, productivity dependencies, and grief responses to AI changes. The CAIDAS addresses these distinctive patterns through eight evidence-based dimensions, giving clinicians a tool designed to meet the same rigorous psychometric standards as established instruments such as the AUDIT and the Internet Addiction Test.
The scale’s development synthesizes critical findings from gold-standard clinical instruments, revealing that effective addiction assessments require internal consistency ≥0.80, sensitivity and specificity ≥0.80 for clinical applications, and a multidimensional structure capturing behavioral, cognitive, and affective components. The CAIDAS incorporates AI-specific phenomena identified in recent research, including “addictive intelligence” through AI sycophancy, parasocial attachment mechanisms, and reality distortion patterns unique to human-AI interaction.
Clinical validation will follow established pathways requiring confirmatory factor analysis (CFI/TLI ≥0.95, RMSEA ≤0.06), cross-cultural measurement invariance, and diagnostic accuracy validation against structured clinical interviews. The instrument supports both screening and detailed assessment functions through its bifactor structure, enabling total severity scores and dimensional profile analysis for individualized treatment planning.
The Clinical AI Dependency Assessment Scale (CAIDAS)
Scale Structure and Theoretical Foundation
The CAIDAS employs an 8-dimensional structure based on established addiction criteria and AI-specific behavioral patterns, comprising 48 items across core addiction domains with AI-contextualized content.
Theoretical Framework: The scale integrates Griffiths’ addiction components model (salience, tolerance, mood modification, relapse, withdrawal, conflict) with AI-specific phenomena including synthetic attachment theory, anthropomorphization psychology, and productivity dependency patterns identified in recent research.
Dimensional Structure
1. AI Salience and Preoccupation (6 items)
- Cognitive dominance of AI-related thoughts and planning
- Time spent thinking about AI interactions when not using them
- Prioritization of AI engagement over other activities
Sample Item: “I find myself thinking about my AI conversations even when I’m not using the AI system” (0=Never, 4=Very Often)
2. Loss of Control and Compulsive Use (6 items)
- Inability to regulate AI usage duration or intensity
- Failed attempts to reduce AI engagement
- Compulsive checking and re-engagement behaviors
Sample Item: “I have tried to cut down on my AI usage but have been unsuccessful” (0=Never, 4=Very Often)
3. Tolerance and Escalation Patterns (6 items)
- Need for increased interaction complexity or duration
- Escalation across multiple AI tools and platforms
- Progressive integration into more life domains
Sample Item: “I need to spend more time with AI systems than before to feel satisfied” (0=Never, 4=Very Often)
4. Withdrawal and Negative Affect (6 items)
- Anxiety, irritability, or distress when AI unavailable
- Grief responses to AI changes or discontinuation
- Emotional dependency on AI interaction for mood regulation
Sample Item: “I feel anxious or distressed when I cannot access my preferred AI systems” (0=Never, 4=Very Often)
5. Synthetic Attachment and Anthropomorphization (6 items)
- Emotional attachment to AI entities as relationships
- Attribution of consciousness, emotions, or personality to AI
- Parasocial bond formation and reciprocity expectations
Sample Item: “I feel like my AI companion genuinely cares about me as a person” (0=Strongly Disagree, 4=Strongly Agree)
6. Functional Impairment and Life Consequences (6 items)
- Negative impact on work, academic, or social functioning
- Neglect of responsibilities for AI engagement
- Relationship conflicts due to AI usage patterns
Sample Item: “My AI usage has negatively affected my work or academic performance” (0=Never, 4=Very Often)
7. Reality Distortion and Boundary Confusion (6 items)
- Difficulty distinguishing AI capabilities from human consciousness
- Integration of AI responses into personal identity
- Confusion about relationship authenticity and emotional validity
Sample Item: “I sometimes forget that AI systems are not actually conscious beings” (0=Never, 4=Very Often)
8. Productivity Dependency and Cognitive Offloading (6 items)
- Reliance on AI for basic cognitive tasks and decisions
- Loss of confidence in independent creative abilities
- Performance anxiety without AI assistance
Sample Item: “I feel unable to complete work tasks effectively without AI assistance” (0=Never, 4=Very Often)
Scoring Methodology
Response Format: 5-point Likert scales with dimension-specific anchors
- Behavioral items: 0=Never, 1=Rarely, 2=Sometimes, 3=Often, 4=Very Often
- Cognitive/Emotional items: 0=Strongly Disagree, 1=Disagree, 2=Neutral, 3=Agree, 4=Strongly Agree
Scoring Calculations:
- Dimensional Scores: Sum of items ÷ number of items × 25 (0-100 scale)
- Total CAIDAS Score: Mean of eight dimensional scores (0-100 scale)
- Weighted Composite: empirically derived weights reflecting clinical severity (to be established during validation)
Clinical Cutoff Points (provisional, pending ROC-based validation; a scoring sketch follows this list):
- Minimal Risk (0-25): No clinical intervention indicated
- Low-Moderate Risk (26-50): Brief intervention and monitoring appropriate
- Moderate-High Risk (51-75): Structured treatment recommended
- Severe Risk (76-100): Intensive intervention required
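To make the scoring rules concrete, here is a minimal Python sketch of dimensional scoring, total-score computation, and provisional risk-band assignment. The function and variable names are illustrative assumptions, and the empirically weighted composite is omitted because its weights have not yet been derived.

```python
from statistics import mean

# Provisional risk bands from the cutoff list above (0-100 scale).
RISK_BANDS = [
    (25, "Minimal Risk"),
    (50, "Low-Moderate Risk"),
    (75, "Moderate-High Risk"),
    (100, "Severe Risk"),
]

def dimension_score(items: list[int]) -> float:
    """Six items, each scored 0-4, rescaled to 0-100: (sum / n_items) * 25."""
    assert len(items) == 6 and all(0 <= i <= 4 for i in items)
    return mean(items) * 25

def total_score(dimensions: dict[str, list[int]]) -> float:
    """Total CAIDAS score: mean of the eight dimensional scores."""
    assert len(dimensions) == 8
    return mean(dimension_score(items) for items in dimensions.values())

def risk_band(score: float) -> str:
    """Map a 0-100 score to the provisional risk bands listed above."""
    for upper, label in RISK_BANDS:
        if score <= upper:
            return label
    raise ValueError("score outside 0-100 range")
```

For example, a respondent answering “Sometimes” (2) on all 48 items receives a dimensional score of 50 on every dimension and a total score of 50, at the upper boundary of the Low-Moderate band.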
Psychometric Properties and Validation Requirements
Target Reliability Standards (a computation sketch for Cronbach's α follows this list):
- Internal Consistency: Cronbach’s α ≥0.90 (total scale), ≥0.80 (subscales)
- Test-Retest Reliability: ICC ≥0.85 (2-week interval)
- Inter-Rater Reliability: κ ≥0.80 (when clinician-administered)
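As an illustration of how the internal-consistency targets would be checked during validation, the sketch below computes Cronbach's α from a respondents-by-items score matrix. The data layout and numpy dependency are assumptions of this example, not part of the CAIDAS protocol.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Targets above: alpha >= 0.90 for the 48-item total scale,
# alpha >= 0.80 for each 6-item subscale.
```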
Validity Evidence Requirements (an ROC-analysis sketch follows this list):
- Content Validity Index: ≥0.90 scale-level, ≥0.78 item-level
- Construct Validity: CFI/TLI ≥0.95, RMSEA ≤0.06, SRMR ≤0.08
- Convergent Validity: r ≥0.50 with established addiction measures
- Discriminant Validity: r <0.85 with related but distinct constructs
- Diagnostic Accuracy: AUC ≥0.80, Sensitivity/Specificity ≥0.80
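Since cutoff determination and diagnostic-accuracy validation rely on ROC analysis, the sketch below shows one conventional way to derive AUC and a cutoff with scikit-learn. The variable names and the use of Youden's J to pick the cutoff are assumptions of this illustration, not a prescribed CAIDAS procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def diagnostic_accuracy(scores: np.ndarray, diagnoses: np.ndarray):
    """scores: CAIDAS totals; diagnoses: 1 = positive on structured interview.
    Returns AUC plus sensitivity/specificity at the Youden-optimal cutoff."""
    auc = roc_auc_score(diagnoses, scores)
    fpr, tpr, thresholds = roc_curve(diagnoses, scores)
    j = np.argmax(tpr - fpr)                # Youden's J statistic
    sensitivity, specificity = tpr[j], 1 - fpr[j]
    return auc, thresholds[j], sensitivity, specificity

# Validation targets above: AUC >= 0.80, sensitivity/specificity >= 0.80.
```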
Validation Sample Requirements:
- Development Sample: N ≥500 for factor analysis
- Cross-Validation Sample: N ≥300 independent sample
- Clinical Sample: N ≥200 diagnosed cases for cutoff validation
- Diverse Demographics: Age, gender, education, ethnicity representation
- Cross-Cultural Validation: Minimum 3 cultural/linguistic groups
Clinical Interpretation Framework
Diagnostic Assessment Integration
DSM-5-TR Alignment: The CAIDAS maps to substance use disorder criteria adapted for behavioral addiction (an illustrative severity-mapping sketch follows this list):
- Mild Dependency: 2-3 elevated dimensions, total score 26-50
- Moderate Dependency: 4-5 elevated dimensions, total score 51-75
- Severe Dependency: 6+ elevated dimensions, total score 76-100
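The severity mapping can be read as a simple decision rule. Because the text does not define when a dimension counts as “elevated,” the sketch below assumes a dimensional score above 50 as a placeholder threshold; this is an unvalidated assumption for illustration only.

```python
def dsm_severity(dim_scores: list[float], total: float,
                 elevation_cutoff: float = 50.0) -> str:
    """Illustrative DSM-5-TR-aligned severity mapping.
    elevation_cutoff is an assumed placeholder; CAIDAS validation
    has not yet defined what counts as an 'elevated' dimension."""
    elevated = sum(s > elevation_cutoff for s in dim_scores)
    if elevated >= 6 and total >= 76:
        return "Severe Dependency"
    if 4 <= elevated <= 5 and 51 <= total <= 75:
        return "Moderate Dependency"
    if 2 <= elevated <= 3 and 26 <= total <= 50:
        return "Mild Dependency"
    # Cases where the dimension count and total band disagree fall through;
    # the source text does not specify how to resolve them.
    return "Criteria not met / indeterminate"
```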
ICD-11 Integration: Aligns with gaming disorder criteria extended to AI systems, emphasizing functional impairment and duration requirements.
Treatment Planning Applications
Dimensional Profile Analysis (an illustrative matching sketch follows this list):
- High Attachment/Low Control: CBT focusing on relationship boundaries and self-regulation
- High Productivity Dependency: Skills training and confidence building without AI
- High Reality Distortion: Psychoeducation about AI limitations and reality testing
- High Functional Impairment: Occupational therapy and behavioral activation
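The profile-to-treatment heuristics above amount to a lookup from elevated dimensions to intervention strategies. A minimal illustrative encoding follows; the dimension keys and the 50-point elevation cutoff are hypothetical, not validated CAIDAS conventions.

```python
# Hypothetical dimension keys mirroring the profile list above; neither the
# keys nor the 50-point cutoff are validated CAIDAS conventions.
PROFILE_RECOMMENDATIONS = {
    "synthetic_attachment": "CBT targeting relationship boundaries and self-regulation",
    "productivity_dependency": "Skills training and confidence building without AI",
    "reality_distortion": "Psychoeducation about AI limitations and reality testing",
    "functional_impairment": "Occupational therapy and behavioral activation",
}

def recommend(dim_scores: dict[str, float], cutoff: float = 50.0) -> list[str]:
    """Return a recommendation for each dimension elevated above the cutoff."""
    return [rec for dim, rec in PROFILE_RECOMMENDATIONS.items()
            if dim_scores.get(dim, 0.0) > cutoff]
```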
ASAM Criteria Integration:
- Dimension 1 (Acute Intoxication and/or Withdrawal Potential): Physiological intoxication and detoxification concerns not applicable to AI addiction
- Dimension 2 (Biomedical Conditions): Assess for technology-related health issues
- Dimension 3 (Emotional/Behavioral): Maps to attachment, withdrawal, distortion dimensions
- Dimension 4 (Readiness to Change): Separate assessment recommended
- Dimension 5 (Relapse Potential): Maps to control and tolerance dimensions
- Dimension 6 (Recovery Environment): Assess AI accessibility and social support
Progress Monitoring and Outcome Measurement
Change Score Interpretation (a worked sketch follows this list):
- Reliable Change Index: change exceeding 1.96 × SEdiff, where SEdiff = SEM × √2 and SEM = SD × √(1 − reliability) (Jacobson-Truax method)
- Minimal Important Difference: 10-point change on the 0-100 scale (provisional, based on patient-reported thresholds)
- Clinically Significant Change: Movement across risk categories
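The following worked sketch applies the Jacobson-Truax reliable-change rule to CAIDAS totals. The baseline standard deviation and reliability values are illustrative placeholders, since CAIDAS norms have not yet been established.

```python
import math

def reliable_change(baseline: float, followup: float,
                    sd_baseline: float, reliability: float) -> bool:
    """Jacobson-Truax reliable change: |x2 - x1| > 1.96 * SE_diff,
    where SEM = SD * sqrt(1 - r) and SE_diff = SEM * sqrt(2)."""
    sem = sd_baseline * math.sqrt(1 - reliability)
    se_diff = sem * math.sqrt(2)
    return abs(followup - baseline) > 1.96 * se_diff

# Illustrative values only: SD = 15 and reliability = 0.85 are placeholders.
# SEM = 15 * sqrt(0.15) ~= 5.81; SE_diff ~= 8.22; threshold ~= 16.1 points,
# larger than the provisional 10-point minimal important difference.
print(reliable_change(baseline=62, followup=44, sd_baseline=15, reliability=0.85))
```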
Monitoring Schedule:
- Baseline: Complete 48-item assessment
- Weekly: Brief 8-item version (one item per dimension)
- Monthly: Full reassessment during active treatment
- Quarterly: Long-term monitoring and relapse prevention
Administration Guidelines and Implementation
Administration Procedures
Standard Administration:
- Time Required: 15-20 minutes for self-report, 25-30 minutes clinician-administered
- Setting: Private, distraction-free environment
- Instructions: “Please think about your AI usage patterns over the past 30 days”
- Timeframe: 30-day assessment period for clinical decisions
Electronic Administration: Digital delivery enables automatic scoring and integration with clinical decision support systems (see Implementation Readiness and Clinical Utility below).
Quality Assurance Protocols
Ongoing Validation:
- Annual Review: Psychometric properties monitoring
- Population Updates: Normative data revision every 5 years
- Cultural Adaptation: New language versions following ITC guidelines
- Technology Updates: Assessment of new AI technologies and platforms
Clinical Implementation Monitoring:
- Inter-Rater Reliability: Quarterly assessment for clinician-administered versions
- Patient Feedback: Semi-annual user experience evaluation
- Clinical Utility: Annual survey of implementing clinicians
- Outcome Tracking: Correlation with treatment engagement and outcomes
Cross-Cultural Considerations and Adaptations
Cultural Validation Framework
Translation Standards: International Test Commission guidelines for test adaptation
- Forward-Backward Translation: Independent translators with reconciliation
- Cultural Expert Review: Local clinicians and AI usage pattern experts
- Cognitive Interviews: Target population feedback on item clarity and relevance
Measurement Invariance Testing:
- Configural Invariance: Same factor structure across cultures
- Metric Invariance: Equal factor loadings (ΔCFI ≤0.010)
- Scalar Invariance: Equal item intercepts (ΔCFI ≤0.010)
- Cultural Cutoff Validation: Population-specific threshold determination
Priority Adaptation Populations:
- East Asian: Higher AI adoption rates, different anthropomorphization patterns
- European: GDPR compliance, privacy-focused AI usage patterns
- Adolescent/Young Adult: Developmental considerations, digital native perspectives
- Clinical Populations: Depression, anxiety, autism spectrum considerations
Research Applications and Future Development
Validation Study Design
Phase I: Item Development and Content Validation (N=50)
- Expert panel review with addiction specialists and AI researchers
- Cognitive interviews with AI users across risk levels
- Content validity assessment and item refinement
Phase II: Psychometric Evaluation (N=500)
- Exploratory factor analysis and item selection
- Internal consistency and preliminary validity assessment
- Initial cutoff determination through clinical interviews
Phase III: Cross-Validation and Clinical Validation (N=1000)
- Confirmatory factor analysis in independent sample
- Diagnostic accuracy validation against clinical interviews
- Treatment outcome prediction assessment
- Measurement invariance across demographic groups
Phase IV: Implementation and Monitoring (Ongoing)
- Real-world clinical implementation assessment
- Long-term stability and change sensitivity evaluation
- Cross-cultural validation and adaptation
- Technology evolution adaptation protocols
Integration with Emerging Technologies
Ecological Momentary Assessment:
- Smartphone Integration: Brief daily assessments of AI usage patterns
- Passive Monitoring: App usage data correlation with self-report measures
- Environmental Context: Location and social situation influences on AI dependency
Machine Learning Applications:
- Pattern Recognition: Identify subtle behavioral indicators in usage data
- Predictive Modeling: Early identification of developing dependency patterns
- Personalized Assessment: Adaptive testing based on individual risk profiles
Evidence Base and Clinical Validation
Supporting Research Foundation
Established Clinical Scale Analysis: Synthesis of eight gold-standard instruments (including DSM-5-TR criteria, the AUDIT, the Internet Addiction Test, and the Bergen scales) reveals consistent psychometric benchmarks: internal consistency ≥0.80, sensitivity/specificity ≥0.80, factor-analytic validation, and cross-cultural measurement invariance. The CAIDAS adopts these proven methodological approaches while addressing AI-specific phenomena.
AI-Specific Behavioral Research: Recent studies identify unique patterns requiring specialized assessment: synthetic attachment formation through AI sycophancy, anthropomorphization leading to grief responses, productivity dependencies creating cognitive vulnerabilities, and reality distortion affecting human relationship expectations. These patterns are integrated throughout the CAIDAS dimensional structure.
Clinical Validation Standards: Professional requirements from APA, AMA, and FDA establish validation pathways requiring content validity indices ≥0.78, construct validity through confirmatory factor analysis, criterion validity against established measures, and cross-cultural measurement invariance. The CAIDAS validation protocol addresses each requirement systematically.
Implementation Readiness and Clinical Utility
Healthcare System Integration: The CAIDAS supports multiple clinical functions including screening, diagnostic assessment, treatment planning, and progress monitoring. Electronic administration enables automatic scoring and clinical decision support integration. The bifactor structure allows both brief screening and comprehensive assessment applications.
Professional Training Infrastructure: Implementation requires structured training programs for clinical staff, quality assurance protocols, and ongoing supervision frameworks. The assessment supports evidence-based treatment planning through dimensional profile analysis and established treatment matching strategies.
Research and Development Pipeline: Future validation studies will establish population norms, cross-cultural cutoffs, change detection capabilities, and treatment outcome prediction. Integration with ecological momentary assessment and machine learning applications will enhance clinical utility and scientific understanding of AI dependency patterns.
Conclusion and Clinical Implications
The Clinical AI Dependency Assessment Scale represents a critical advancement in addressing the mental health implications of rapidly evolving AI technologies. By combining rigorous psychometric methodology with AI-specific behavioral research, the CAIDAS provides clinicians with a scientifically grounded tool built to meet the same professional standards as established clinical instruments while addressing unprecedented psychological phenomena.
The scale’s eight-dimensional structure captures the full spectrum of AI dependency from basic usage patterns through complex attachment formations and reality distortion. The comprehensive clinical interpretation framework supports evidence-based treatment planning and outcome monitoring, addressing the urgent need for specialized assessment tools as AI integration accelerates across all life domains.
Implementation of the CAIDAS will advance both clinical practice and scientific understanding of human-AI interaction psychology. The instrument’s validation pathway is designed to support professional acceptance, while its research applications will inform policy development, treatment innovation, and preventive intervention strategies. As AI systems become increasingly sophisticated and pervasive, the CAIDAS provides essential infrastructure for protecting mental health and optimizing therapeutic outcomes in the digital age.
References
- American Psychiatric Association. (2022). Diagnostic and Statistical Manual of Mental Disorders (5th ed., text rev.). https://www.psychiatry.org/psychiatrists/practice/dsm
- Babor, T. F., Higgins-Biddle, J. C., Saunders, J. B., & Monteiro, M. G. (2001). The Alcohol Use Disorders Identification Test: Guidelines for use in primary care (2nd ed.). World Health Organization. https://apps.who.int/iris/handle/10665/67205
- Beaton, D. E., Bombardier, C., Guillemin, F., & Ferraz, M. B. (2000). Guidelines for the process of cross-cultural adaptation of self-report measures. Spine, 25(24), 3186-3191. https://pubmed.ncbi.nlm.nih.gov/11124735/
- Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quiñonez, H. R., & Young, S. L. (2018). Best practices for developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health, 6, 149. https://pmc.ncbi.nlm.nih.gov/articles/PMC6004510/
- Bournemouth University. (2025, March 12). Researchers warn of addiction and over-dependency on ChatGPT. https://www.bournemouth.ac.uk/news/2025-03-12/researchers-warn-addiction-over-dependency-chatgpt
- Chang, E. C., Lian, R., Yu, T., O’Brien, K. F., Jia, J., Zhang, J., … & Hirsch, J. K. (2024). AI technology panic—is AI dependence bad for mental health? A cross-lagged panel model and the mediating roles of motivations for AI use among adolescents. Psychology Research and Behavior Management, 17, 1089-1103. https://pmc.ncbi.nlm.nih.gov/articles/PMC10944174/
- Chen, S., Zhai, D., Zhang, B., Temitope, F. A., Shawon, S. R., Xiong, K., … & Zhang, M. (2024). Do you have AI dependency? The roles of academic self-efficacy, academic stress, and performance expectations on problematic AI usage behavior. International Journal of Educational Technology in Higher Education, 21, 53. https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-024-00467-0
- Demetrovics, Z., Szeredi, B., & Rózsa, S. (2008). The three-factor model of Internet addiction: The development of the Problematic Internet Use Questionnaire. Behavior Research Methods, 40(2), 563-574. https://pubmed.ncbi.nlm.nih.gov/18522068/
- Fernandez, A., Howse, E., Rubio-Valera, M., Thorncraft, K., Noone, J., Luu, X., … & Salvador-Carulla, L. (2016). Setting-based interventions to promote mental health at the university: A systematic review. International Journal of Public Health, 61(7), 797-807. https://link.springer.com/article/10.1007/s00038-016-0846-4
- Griffiths, M. D. (2005). A ‘components’ model of addiction within a biopsychosocial framework. Journal of Substance Use, 10(4), 191-197. https://www.tandfonline.com/doi/abs/10.1080/14659890500114359
- Hawi, N. S., & Samaha, M. (2018). Assessing the psychometric properties of the Internet Addiction Test (IAT) among Lebanese college students. Frontiers in Public Health, 6, 365. https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2018.00365/full
- Information Technology and Innovation Foundation. (2024, November 18). Policymakers should further study the benefits and risks of AI companions. https://itif.org/publications/2024/11/18/policymakers-should-further-study-the-benefits-risks-of-ai-companions/
- International Test Commission. (2017). The ITC Guidelines for Translating and Adapting Tests (2nd ed.). https://www.intestcom.org/files/guideline_test_adaptation_2ed.pdf
- Laconi, S., Rodgers, R. F., & Chabrol, H. (2014). The measurement of Internet addiction: A critical review of existing scales and their psychometric properties. Computers in Human Behavior, 41, 190-202. https://www.sciencedirect.com/science/article/abs/pii/S0747563214004889
- Marengo, D., Angelo Fabris, M., Longobardi, C., & Settanni, M. (2022). Psychometric properties of the Bergen Social Media Addiction Scale: An analysis using item response theory. Addictive Behaviors Reports, 15, 100420. https://pmc.ncbi.nlm.nih.gov/articles/PMC9758518/
- MIT SERC. (2025). Addictive Intelligence: Understanding psychological, legal, and technical dimensions of AI companionship. MIT Science, Engineering, and Research for Computation. https://mit-serc.pubpub.org/pub/iopjyxcx
- National Institute of Environmental Health Sciences. (2022). Part 1: Principles for evaluating psychometric tests. In NIEHS Report on Evaluating Features and Application of Neurodevelopmental Tests in Epidemiological Studies. National Center for Biotechnology Information. https://www.ncbi.nlm.nih.gov/books/NBK581902/
- Nielsen Norman Group. (2024). The 4 degrees of anthropomorphism of generative AI. https://www.nngroup.com/articles/anthropomorphism/
- Pontes, H. M., Szabo, A., & Griffiths, M. D. (2015). The impact of Internet-based specific activities on the perceptions of Internet addiction, quality of life, and excessive usage: A cross-sectional study. Addictive Behaviors Reports, 1, 19-25. https://pmc.ncbi.nlm.nih.gov/articles/PMC6448041/
- Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
- World Health Organization. (2022). Gaming disorder. https://www.who.int/standards/classifications/frequently-asked-questions/gaming-disorder
- Young, K. S. (1998). Internet addiction: The emergence of a new clinical disorder. CyberPsychology & Behavior, 1(3), 237-244. https://www.liebertpub.com/doi/abs/10.1089/cpb.1998.1.237