Supporting Psychometric Instrument Usage Through the POEM Ontology
Abstract
Psychometrics is the field concerned with the measurement of concepts within psychology, particularly the assessment of various social and psychological dimensions in humans. The relationships between psychometric entities are critical to finding an appropriate assessment instrument, especially in clinical psychology and mental healthcare, where providing the best care based on empirical evidence is crucial. We aim to model these entities, which include psychometric questionnaires and their component elements, the subject and respondent, and the latent variables being assessed. The current standard for questionnaire-based assessment relies on text-based distribution of instruments; a structured representation is therefore necessary to capture these relationships, enhance the accessibility and use of existing measures, encourage reuse of questionnaires and their component elements, and enable sophisticated reasoning over assessment instruments and results through increased interoperability. We present the design process and architecture of such a domain ontology, the Psychometric Ontology of Experiences and Measures, situate it within the context of related ontologies, and demonstrate its practical utility through evaluation against a series of competency questions concerning the creation, use, and reuse of psychometric questionnaires in clinical, research, and development settings.
Keywords and phrases: ontology, ontology development, psychometric assessment, psychometric ontology
Category: Resource
Copyright and License: Bruce F. Chorpita; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification: Computing methodologies → Ontology engineering; Theory of computation → Semantics and reasoning; Information systems → Ontologies
Funding: This work is partially funded by the National Institute of Mental Health “Support for the RCADS Data Collection Measure” project, grant number 75N95022C00018-0-9999-1.
DOI: 10.4230/TGDK.3.3.3
Received: 2025-01-21
Accepted: 2025-10-27
Published: 2025-12-10
Part Of: TGDK, Volume 3, Issue 3
Journal and Publisher: Transactions on Graph Data and Knowledge, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
The field of psychometrics involves the design and evaluation of assessment instruments and associated models to capture complex aspects of mental health and human functioning [20]. A central challenge to the discipline involves the selection and construction of instruments and models that connect unobservable, latent variables, such as mental states and processes, to observable phenomena. While evidence-based medicine has had increasing applications in the improvement of mental health treatment, there is still a lack of attention to what might constitute evidence-based assessment (EBA) [25]. Mental health clinicians are expected to stay informed about accepted standards in psychometric assessment; however, according to Hunsley et al. [26], clinical research and implementation science have focused more on evidence-based intervention, leaving a gap in the area of assessment. While assessment is an integral part of training in graduate psychology programs, relevant guidelines tend to be underdeveloped relative to treatment guidelines; this often results in researchers taking the psychometric properties of their measures for granted [27].
Psychometric instruments and their accompanying handbooks, although informative, lack a standardized representation, hindering querying, comparison, and reasoning over assessments based on crucial attributes such as length, reliability, validity, and the underlying constructs assessed. The selection of appropriate and effective test instruments is pivotal to their successful application in research and service contexts, emphasizing the need for a standardized approach. A vast number of measures exist, but without a knowledge infrastructure to evaluate, organize, and select among them, it is difficult to determine whether a measure is suitable for a specific purpose and context. Many claims in psychometrics research are neither scientifically valid nor clinically relevant [45]. More scales are defined each year, and the absence of standardized measurement makes it difficult to combine data; the mental health field is held back by the continuous generation of an inconsistent and underused body of knowledge [17]. The ability to harmonize measures for aggregation and comparison would therefore be a powerful asset.
Some existing resources attempt to provide a standard for assessing and distributing testing instruments, such as the Buros Center’s Mental Measurements Yearbook, published every three years [8]; however, this resource is expensive to access in PDF or print format. In efforts to make evidence-based assessment more accessible, Becker-Haimes et al. [5] and Beidas et al. [6] have published papers evaluating the empirical literature in order to compile lists of assessments that are brief, free, and easily accessible. Although these papers are freely accessible and valuable forms of scholarship, the instruments they review are presented as unstructured data and lack comprehensive information about individual questions, derivations, scales, or scoring methods. Databases compiled for accessibility and comparison, such as American Psychological Association (APA) PsycTests [42], also fall short: although PsycTests aims to offer comprehensive information on psychological assessments, each instrument is primarily characterized through text descriptions that lack standardized structure and omit details pertaining to individual items or scales. Clearly, there is an abundance of psychometric resources, but no coordinating infrastructure for their selection, use, or reuse exists; it is therefore very difficult to choose among instruments and to determine whether a specific instrument fits the user’s context and aims.
Given the wide variety of instruments and the minimal knowledge infrastructure for their use, ontologies emerge as a promising solution to the challenges in this domain. Controlled vocabulary systems and taxonomies for mental health are well developed within psychiatric nosology and should be utilized towards this end; however, existing ontologies for assessment lack sufficient support for connecting basic questionnaire elements to the concepts they are intended to measure. We propose that an ontology can be leveraged to formalize questionnaire structure, explicate relationships with psychological constructs, track the provenance of questionnaire elements, and assist in aggregating evidence for instrument selection and use. In turn, such an ontology may assist clinical psychologists in the essential task of matching their purpose of testing to the appropriate assessment, increasing understanding of an assessment’s construction, administration, scoring, and interpretation.
We introduce the Psychometric Ontology of Experiences and Measures (POEM), which aims to represent crucial aspects of psychometric assessment that may be overlooked by the traditional text-based architecture currently supporting this field. Ontology is a natural progression from the taxonomy and nosology already utilized in the area of mental health, but allows for data management and sophisticated reasoning grounded in prior information. POEM integrates ontological support for psychometric assessment, psychology and mental health domain information, and description of associated entities including questionnaire items, scales, subjects, and respondents. In particular, POEM supports the ability to encode the underlying constructs of measures; that is, information about a questionnaire that is not apparent by examining a plain-text copy of the questionnaire itself. Even more importantly, POEM supports the preservation and propagation of knowledge about the questionnaires as asserted by questionnaire developers. Further, POEM knowledge can be leveraged by future users of questionnaires, especially when questionnaire developers are, for any reason, unavailable to support others in the use of their questionnaires. For example, some questionnaires are used in many countries, and in thousands of service contexts, making human support a large barrier to successfully scaling the application of an otherwise excellent assessment instrument. Similarly, POEM aims to support semantic search of psychometric questionnaires by modeling the underlying constructs and relationships for each entity, moving beyond simple metadata and allowing for complex reasoning.
To the best of our knowledge, POEM is currently the only ontology that tackles the particular issue of modeling questionnaires for clinical assessment, towards the end of deep semantic search, sharing, and reuse at the item, scale, and questionnaire levels. POEM emphasizes the tracking of provenance, enabling and encouraging documentation of the origin and reliability of questionnaire entities that are created from original research, derived from previous entities, or translated from one language to another.
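As a sketch of what such semantic search could look like, the hypothetical SPARQL query below retrieves questionnaires annotated as measuring a given construct, together with any recorded derivation provenance. The property `poem:measures` and the construct label are illustrative assumptions, not necessarily POEM's actual vocabulary.

```sparql
# Hypothetical sketch: poem:measures and the construct label are
# assumed names for illustration; POEM's vocabulary may differ.
PREFIX poem: <http://purl.org/twc/POEM#>
PREFIX prov: <http://www.w3.org/ns/prov#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?questionnaire ?label ?origin
WHERE {
  ?questionnaire a poem:PsychometricQuestionnaire ;
                 rdfs:label ?label ;
                 poem:measures ?construct .
  ?construct rdfs:label "generalized anxiety"@en .
  # Surface provenance when a questionnaire is derived from another
  OPTIONAL { ?questionnaire prov:wasDerivedFrom ?origin . }
}
```

A query of this shape goes beyond metadata matching: it selects instruments by the latent construct they target, not by keywords in their titles.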
In the following section, we present an overview of related work in the areas of psychological taxonomy and ontology, as well as ontologies that address questionnaires with some consideration of underlying constructs. In Section 3, we introduce the Psychometric Ontology of Experiences and Measures (POEM), including competency questions and the resulting ontology construction process, followed by a description of its architecture in Section 4. In Section 4.5, we validate the ontology by way of a use case scenario, and conclude with a discussion of our work so far along with plans for future work.
2 Related Work
2.1 Ontologies
Ontologies provide a formally structured way to represent knowledge by defining a shared vocabulary and the relationships between concepts within a particular domain [23] [32]. They enable both humans and machines to consistently interpret and reason over data. Ontologies are often expressed using standardized formats like the Web Ontology Language (OWL) [2], and are commonly used to integrate heterogeneous data sources, support interoperability, and enable automated reasoning. Ontologies include definitions of basic concepts in the domain as a finite set of unambiguously identifiable classes and relationships [28]. While taxonomies and ontologies should be distinguished, a very lightweight ontology might also be classified as a taxonomy, and many ontologies contain hierarchies of classes that are based on existing taxonomies of concepts [44].
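As a minimal illustration of these ideas, the OWL snippet below (in Turtle syntax, under a hypothetical `ex:` namespace) defines two classes and one relationship between them; the names are invented for illustration only.

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/onto#> .   # hypothetical namespace

ex:Questionnaire a owl:Class ;
    rdfs:label "Questionnaire" .

ex:Item a owl:Class ;
    rdfs:label "Item" .

# An unambiguous, machine-interpretable relationship between the classes
ex:hasItem a owl:ObjectProperty ;
    rdfs:domain ex:Questionnaire ;
    rdfs:range  ex:Item .
```

Given such axioms, a reasoner can infer, for instance, that anything appearing as the object of `ex:hasItem` is an `ex:Item`, even when that type is not asserted explicitly.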
2.2 Neuropsychological taxonomy and ontology
A number of widely used, comprehensive ontologies and taxonomies exist to give structure to psychological concepts such as diseases, syndromes, signs, symptoms, and assessments; some target the domains of psychology or neuroscience specifically, while others contain neuropsychological concepts within a wider medical context. These ontologies do not represent the internal structure of instruments and questionnaires, but in some cases model the constructs that certain tasks measure. The Systematized Nomenclature of Medicine (SNOMED) is the largest and most widely-used of these; it was created in 1966 as a logic-based healthcare terminology, and has evolved into its current form, SNOMED Clinical Terms (SNOMED CT), which is the most comprehensive healthcare terminology internationally [11]. SNOMED CT contains many psychological concepts, including those subsumed under clinical findings, disorders, and attributes, and acts as an interoperability standard for healthcare information exchange in the United States. Although it is primarily a polyhierarchical taxonomy, SNOMED CT has an ontological foundation and is freely available as an ontology. SNOMED CT can be utilized to handle queries over medical systems in which it is used, due to its support of a simple description logic.
The structure of psychiatric conditions has been formalized for clinical purposes, the most prominent iterations of which are the International Classification of Diseases (ICD), currently on its 11th revision, and the Diagnostic and Statistical Manual of Mental Disorders (DSM), 5th revision [1]. The ICD-11 [33] is a medical taxonomy published by the World Health Organization (WHO); its content encompasses a wide range of health conditions, with a chapter regarding mental, behavioral, and neurodevelopmental disorders. This standard is used internationally, for statistical comparison of morbidity as well as clinical purposes. Currently mappings from ICD-11 codes to the equivalent SNOMED CT concepts are not available, presumably because the ICD-11 is a recent revision and not completely integrated in all settings, but such a map for the ICD-10 is made available by SNOMED International [21]. The DSM-5 [1] is similar in taxonomy to the ICD with key differences: the DSM contains psychiatric diagnoses only, and is based on mental health data and standards from the United States only, limiting its applicability and reach, but it is commonly used in the US for psychiatric diagnoses and treatment recommendations. Although neither is formalized as an ontology, each is an important source of structured mental health information to consider due to their ubiquity in clinical and research settings.
The Cognitive Atlas (CogAt) [36] is a knowledge base for cognitive neuroscience that aims to map mental processes onto brain function, while addressing issues with ambiguous terminology in the field. Additionally, CogAt adopts Wikipedia’s approach to collaborative knowledge building, capturing disagreement by allowing public contributions and discussion. CogAt uses basic ontological relations as included in the OBO Relation Ontology, as well as relations between processes and tasks, and “descended from” relations between tasks. The basic classes subsuming all entities in CogAt are Concepts, Tasks, Phenotypes, and specialized Collections of tasks and theories. Although CogAt’s hierarchy of Disorders is derived from the Disease Ontology (DO) database [40] and matches most other standard nosologies, CogAt does not aim to be as comprehensive in its inclusion of symptoms.
2.3 Semantic assessment frameworks
There are several existing standards for data exchange in clinical settings. For example, Logical Observation Identifiers Names and Codes (LOINC) [18] provides a common terminology for laboratory and clinical observations in clinical care and management. It has a rich catalog of clinical documents and standardized survey instruments, including psychometric assessments, and is available for public use. Similarly, Health Level 7 (HL7) maintains many different standards supporting the interoperability of medical data, one of which is Fast Healthcare Interoperability Resources (FHIR), which provides detailed specifications for document structure, including for questionnaires [14].
Several ontologies exist to capture the semantics of measurement instruments; here, we consider those that are close to the goals of POEM either in domain (measurement of constructs in behavioral science, psychology), or in the depth of semantics covered. Beginning in 2010, Batrancourt et al. [3] [4] developed an ontology of Mental State Assessment (ONL-MSA) as an extension of the OntoNeuroLOG ontology [43], a common semantic model of neuroimaging data and tools. ONL-MSA’s fundamental contribution is a core ontology providing a general model of mental state assessments alongside a taxonomy of behavioral, neuropsychological, and neuroclinical instrument types. Instrument properties captured include decomposition into sub-instruments, definitions of associated variables, and the domains and qualities measured by instruments. Instruments are associated with a set of measuring actions that link resulting scores to measured variables, and numeric scoring scales are expressed with a set of codes.
In 2012, Cox et al. developed a pair of complementary ontologies to extend the Ontology for General Medical Sciences (OGMS): the Neurological Disease ontology (ND), which aims to be a comprehensive representation of all facets of neuropsychology, and the Neuropsychological Testing ontology (NPT) [12] [13], a set of classes that represent cognitive functioning assays, including tests, cognitive functions measured, scores, and the associated scale results. NPT and ND also utilize the Mental Disease (MD) and Mental Function (MF) ontologies created by Hastings et al. [24]. Primarily, NPT aims to annotate neuropsychological tests and testing data in order to integrate results over a range of assessments in overlapping domains.
Bensmann et al. have created an ontology with the aim of facilitating data search and reuse for social science survey items [7]. Social surveys, which comprise items usually about attitudes, behaviors, and factual information, are currently findable based on survey-level metadata, but question-level and variable-level searches have previously not been available. Bensmann et al.’s ontology aims to remedy this through annotation of question features including the nature of the problem or task given to the respondent, the tone and complexity of language used, and the nature of the objects in question, among others.
3 Ontology Development
The initial motivation for POEM stemmed from the need for semi-automated answering of community questions, to reduce the time spent by its administrators in addressing recurring queries. The development of POEM followed a collaborative, bottom-up approach undertaken by a multidisciplinary team. This team is composed of subject matter experts (SMEs) who are researchers in psychological clinical assessment, led by two doctoral-level researchers working with four additional doctoral-level researchers who specialize in psychological measurement and/or psychometric theory, and four doctoral-level specialists in semantic technologies. This collaborative model allowed domain knowledge and technical expertise to inform the ontology’s design in tandem, ensuring both semantic quality and clinical relevance. POEM was developed iteratively through continuously updated terminology, use case definitions, competency question development, and feedback between SMEs and ontology engineers in weekly meetings. As SMEs gained familiarity with ontology engineering practices, they also contributed refinements to ensure the ontology captured real-world clinical assessment needs.
We conducted a review of previous literature and resources in psychometric representation to create our initial modeling and identify ontologies that could be reused. This involved a search of key academic databases, including Google Scholar and Scopus, and established ontology repositories such as the EMBL-EBI Ontology Lookup Service and NCBO BioPortal. The search strategy utilized a set of key phrases including, but not limited to, “mental health ontology”, “psychometric ontology”, and “mental health data representation”. Sources were deemed relevant if they described a formal ontology or data model that could be applied to psychometric assessments, or to mental or behavioral health. Next, we established a set of use cases and competency questions, which are elaborated on in Section 3.1. The development process then progressed in three main phases: (1) modeling core questionnaire structure, (2) introducing psychometric constructs and their relations to measurement scales, and (3) incorporating evidence modeling, including provenance tracking to support claims about instrument quality attributes. We followed the Ontology Development 101 [32] methodology where practical, leveraging its iterative nature and best practices.
Throughout this process, the Revised Child Anxiety and Depression Scale (RCADS) instrument collection served as an anchor for evaluating ontology design and ensuring full use case coverage. Both the original 47-item (RCADS-47) and shortened 25-item (RCADS-25) versions were used. Several contributors to this work are also authors and maintainers of instruments within the RCADS collection, giving increased insight into use-case development and ontology construction. The RCADS instrument family has been widely adopted in the clinical assessment of child and adolescent mental health, and both the RCADS-25 and RCADS-47 have been evaluated in multiple studies. The RCADS-25 was derived from the RCADS-47 using exploratory bifactor analysis to reduce administration time, allowing it to be administered in schools or as part of longer test batteries while maintaining psychometric integrity [16].
Both RCADS versions estimate elevations on multiple clinical dimensions, with scales for depression and anxiety as well as subscales for specific anxiety disorders. Additionally, they are available in multiple versions for child self-report and caregiver report, and have been translated into 31 languages as of the writing of this paper. These factors make the RCADS collection a good candidate for evaluating POEM’s ability to support cross-version modeling of questionnaire structure, constructs, and evidence.
3.1 Use Cases and Competency Questions
We have generated a set of use cases, along with competency questions that would be encountered in a broad range of interactions with psychometric assessments, accompanied by example answers based on the full 47-item Revised Child Anxiety and Depression Scale (RCADS-47) [10] and the shortened 25-item version (RCADS-25) [16]. These were generated through regular working sessions that included scenario walkthroughs and expert review, informed by the pragmatic context of supporting the RCADS user support service, taking the most frequently received questions as a starting point. The potential use cases identified are as follows:
-
Clinical service: assessment – finding the most appropriate assessment for a specific patient and context based on conditions measured, available languages and norms, provenance, and metrics showing primary utility and exposing strength of evidence; elucidating instrument usage instructions, target subject and respondent, and what an instrument measures
-
Clinical service: monitoring – finding the most appropriate assessment for a specific patient and context, which may include reuse of initial questionnaires and scales, or establishing confidence in a shorter assessment for ease of repeated use; elucidating instrument usage instructions, target subject and respondent, and what an instrument measures
-
Research: production – determining whether appropriate measures already exist, identifying where reliability and validity may be improved by iterating on an assessment or creating a new one, and supporting the semantic representation of new measures
-
Research: synthesis – supporting the process of combining the results of multiple studies by providing standardized representation of both external and internal questionnaire structure, metadata, and variables
-
Development: translation and derivation – determining whether specific language translations of questionnaires exist, as well as their reliability and validity (which may be very different from the original language iteration), determining the minimum design elements needed to create a valid derivative measure, and directing developers towards the provenance and meanings of constructs measured, which should support the accurate translation or adaptation of questionnaires in both linguistic and cultural contexts
Questionnaire-level competency questions can be seen in Table 1, with indication of the particular use cases they are each useful to. Competency questions relevant to specific questionnaire items and scales are relevant across use cases, and can be seen in Table 2.
| | | Clinical service: assessment | Clinical service: monitoring | Research: production | Research: synthesis | Development: translation |
|---|---|---|---|---|---|---|
| CQ1 | What conditions does (questionnaire) measure? | ✓ | ✓ | ✓ | ✓ | ✓ |
| CQ2 | How do I score (questionnaire)? | ✓ | ✓ | ✓ | | |
| CQ3 | How many/what scales does (questionnaire) have? | ✓ | ✓ | ✓ | | |
| CQ4 | What languages are available? | ✓ | ✓ | ✓ | ✓ | |
| CQ5 | Does (questionnaire) require norms? | ✓ | ✓ | ✓ | ✓ | |
| CQ6 | Does (questionnaire) have relevant norms? | ✓ | ✓ | ✓ | | |
| CQ7 | What do the scores actually mean? | ✓ | ✓ | ✓ | | |
| CQ8 | Where did the items in (questionnaire) come from? | ✓ | ✓ | | | |
| CQ9 | Who can fill out (questionnaire)? | ✓ | ✓ | ✓ | ✓ | |
| CQ10 | Are there shorter versions of (questionnaire) available? | ✓ | | | | |
| CQ11 | Does the short version of (questionnaire) relate to the long version? | ✓ | | | | |
| CQ13 | How does (questionnaire) relate to other measures in research? | ✓ | | | | |
| CQ14 | What measures of reliability and validity exist for the RCADS-47? | ✓ | | | | |
| CQ15 | Where can I find out more about the scale and item meanings? | ✓ | | | | |
| CQ16 | What concept does (item) represent? |
|---|---|
| CQ17 | If (item) corresponds to a symptom, what syndrome or other corresponding concern does the symptom correspond to? |
| CQ18 | Is the symptom captured by this (item) also captured by other instruments in the world? |
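To illustrate how item-level competency questions could translate into machine-answerable queries, CQ16 might be posed as the hypothetical SPARQL sketch below. The property `poem:hasItemStemConcept` and the item IRI are assumptions introduced for illustration; POEM's actual vocabulary may differ.

```sparql
# CQ16: What concept does (item) represent?
# poem:hasItemStemConcept and the item IRI are illustrative assumptions.
PREFIX poem: <http://purl.org/twc/POEM#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?concept ?label
WHERE {
  <http://example.org/poem-demo#RCADS-47-Item-1>
      poem:hasItemStemConcept ?concept .
  OPTIONAL { ?concept rdfs:label ?label . }
}
```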
Based on our competency questions and continual dialogue between the clinical psychology researchers and semantic researchers on our team, we have established a terminology list containing the concepts and definitions that have emerged as important. This list is maintained concurrently with a UML diagram (Figure 1) representing the structure of POEM as well as its alignment with the foundational ontologies we have chosen to use. Additionally, we developed prototype applications that exposed whether the modeled entities and relationships were sufficient to enable tools serviced by user-relevant data structures, continually iterating between POEM revisions and the design of these applications; in particular, tools associated with the RCADS website for accessing information on the RCADS assessment instruments.
3.2 Implementation
The main goal of POEM is to provide a logical and systematic description of measures in clinical psychology. Based on this goal, as well as terminology and use case documentation, we constructed the POEM ontology focusing first on basic questionnaire structure, then underlying constructs, and finally evidence modeling, with continuous iteration based on feedback from domain experts. Throughout this process we have used the RCADS-25 and RCADS-47 as examples with which to evaluate our progress, while ensuring that the structure defined by the terms and relations in POEM generalizes to other measures in clinical psychology.
POEM is implemented in OWL/RDF [29], with the help of the open-source ontology editor Protégé [30]. We maintain POEM using GitHub at https://github.com/tetherless-world/POEM, and documentation including use case deliverables, terminology, a static demo, and publications at https://tetherless-world.github.io/POEM/.
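As a sketch of what POEM instance data for the RCADS collection might look like in Turtle, consider the following; the instance IRIs under `ex:`, the use of `prov:wasDerivedFrom`, and the exact class local names are illustrative assumptions based on the class labels in Table 4.

```turtle
@prefix poem: <http://purl.org/twc/POEM#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/poem-demo#> .   # hypothetical instance namespace

ex:RCADS-47 a poem:PsychometricQuestionnaire .

# The shortened version records its derivation from the original,
# preserving provenance for future users.
ex:RCADS-25 a poem:PsychometricQuestionnaire ;
    prov:wasDerivedFrom ex:RCADS-47 .

# Closely associated instruments grouped as a family
ex:RCADS-Family a poem:InstrumentFamily .
```

Encoding the derivation relationship explicitly is what allows competency questions such as "Are there shorter versions of (questionnaire) available?" to be answered by query rather than by consulting a handbook.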
4 POEM
The classes most central to POEM can be seen in Table 4, along with their source ontologies, definitions, and parent classes.
4.1 Terminology Reuse
POEM utilizes several well-established ontologies at different domain levels in order to build effectively on previous work and provide inherent modularity in its usage. The prefixes used for modules of the ontology are shown in Table 3.
| prefix | ontology | IRI |
|---|---|---|
| hasco | Human-Aware Science Ontology | https://hadatac.org/description/ont/hasco |
| vstoi | Virtual Solar-Terrestrial Observatory – Instruments | https://hadatac.org/description/ont/vstoi |
| sio | Semanticscience Integrated Ontology | https://semanticscience.org/ontology/sio.owl |
| eco | Evidence and Conclusion Ontology | http://purl.obolibrary.org/obo/eco.owl |
| stato | The Statistics Ontology | http://purl.obolibrary.org/obo/stato.owl |
| sco | Study Cohort Ontology | https://purl.org/heals/sco/ |
| poem | Psychometric Ontology of Experiences and Measures | http://purl.org/twc/POEM |
4.1.1 Semanticscience Integrated Ontology (SIO)
The POEM ontology is built around the hierarchy provided by the Semanticscience Integrated Ontology (SIO) [15], an upper-level ontology that supports the description of objects, processes, and attributes needed to facilitate biomedical data discovery. SIO defines objects (sio:Object) as entities that have spatial components and identifiably persistent characteristics, while processes (sio:Process) are entities with a temporal element. Attributes (sio:Attribute) are qualities, capabilities, or roles that can describe some other entity. On top of this simple structure sits a large hierarchy with particular focus on the biomedical domain, along with relations that can describe entities in terms of spatial organization, process flow, and referential relations. SIO offers a practical applied vocabulary that facilitates the integration of scientific data, with simple design patterns that suit the needs of POEM for the description of informational and biomedical entities.
4.1.2 Human-Aware Science Ontology (HAScO)
The Human-Aware Science Ontology (HAScO) [34] is designed to describe scientific data and its related activities, such as data acquisition and scientific studies, in a way that applies to a diverse range of domains. HAScO encompasses three particular areas: scientific activities for data acquisition, data schemas, and instruments, with reliance on VSTO-I (Section 4.1.3). Through the use of semantic variables (hasco:SemanticVariable) [35], POEM gains the capability of semantically describing scientific variables, enabling the annotation of datasets that contain data acquired through the application of questionnaires.
4.1.3 Virtual Solar-Terrestrial Observatory – Instruments (VSTO-I)
The Virtual Solar-Terrestrial Observatory (VSTO) [19] is a semantic data framework built on formal representations of physical quantities. Originally intended to support observatory projects across various physics subfields, VSTO has been expanded to more generically support scientific instruments (vstoi:Instrument) with VSTO-I, the instrumentation portion of VSTO. VSTO-I adds support for questionnaires (vstoi:Questionnaire) and items (vstoi:Item) on top of its most basic components:
-
vstoi:Detector: A device that detects measurements; items (vstoi:Item) are detectors of signals representing constructs, recorded as human responses to some prompt
-
vstoi:Instrument: A device that receives attribute measurements from detectors, processing them into one or more useful values; questionnaires (vstoi:Questionnaire) are instruments that translate a set of responses to items into one or more scores
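The detector/instrument pattern above might be instantiated as in the following Turtle sketch; the linking property `ex:hasDetector` and the instance IRIs are illustrative assumptions rather than VSTO-I's own vocabulary.

```turtle
@prefix vstoi: <https://hadatac.org/description/ont/vstoi#> .
@prefix ex:    <http://example.org/poem-demo#> .   # hypothetical namespace

# An individual item acts as a detector of a single signal...
ex:Item1 a vstoi:Item .

# ...while the questionnaire is the instrument that aggregates the
# detected responses into one or more scores.
ex:Questionnaire1 a vstoi:Questionnaire ;
    ex:hasDetector ex:Item1 .
```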
4.1.4 Evidence and Conclusion Ontology (ECO)
The Evidence and Conclusion Ontology (ECO) [31] is intended to capture annotations and evidence to support biomedical assertions. The main root class, Evidence (eco:evidence), is defined as the output of some planned evidence-gathering process, or assay, that produces some conclusion based on data. ECO also defines automatic and manual assertion methods, which, alongside evidence classes, allow complex statements about the evidence-gathering process. ECO was created by the founders of the Gene Ontology (GO) and so contains precise evidence types of molecular, cellular, and biological natures; however, its comprehensive hierarchy of evidence types, including computational, experimental, inferential, and similarity evidence, generalizes readily to psychometric usage.
4.1.5 Statistics Ontology (STATO)
The Statistics Ontology (STATO) [22] is a general-purpose ontology based on the Basic Formal Ontology (BFO) [41], an upper-level ontology for scientific research. STATO covers processes involved in statistical analysis, including statistical tests, the input and output information of these tests, and aspects of experimental design; additionally, these statistical analysis concepts are related to different study designs. STATO supports the application of statistical tests, the generation of results and reports, and the communication of scientific results. Most notably for its usage in POEM, STATO defines study group populations and cohorts, as well as statistical data items that are commonly used to assess the reliability and validity of psychometric tools.
4.1.6 Study Cohort Ontology (SCO)
The Study Cohort Ontology (SCO) [9] addresses challenges faced when matching patient populations to study cohort characteristics during the generation of treatment recommendations within Clinical Practice Guidelines (CPGs). SCO primarily focuses on clinical trials with study and control arms, but it can be generalized to other types of cohort studies, including the observational studies used to evaluate psychometric assessments. At its core, SCO encodes the vocabulary needed to describe study populations, including study subjects, subject characteristics, and accompanying statistical measures. SCO supports a workflow that allows practitioners to perform population analysis, visualize cohort similarities, and derive clinically relevant inferences.
| Source | Class | Definition | Parent |
| --- | --- | --- | --- |
| POEM | Construct | A hypothetical or theoretical entity that cannot be directly observed | sio:Entity |
| | Psychometric Questionnaire | A questionnaire used to measure an individual's mental capabilities, behaviors, or psychological traits | vstoi:Questionnaire |
| | Questionnaire Scale | A collection of indicators designed to be related to a shared construct | sio:Object |
| | Composite Scale | A scale containing two or more subscales | poem:QuestionnaireScale |
| | Experience | An informant's observation or encounter in a situation or event | sio:Quality |
| | Item Stem Concept | The experience or construct that the question or statement used to prompt an individual intends to capture | sio:Object |
| | Instrument Family | A set of measurement instruments that have been derived from each other, are published by the same organization, or have some other reason for close association | sio:Collection |
| HASCO | Semantic Variable | A variable specification that includes the target entities and attributes, but not the population property | owl:Thing |
| VSTO-I | Detector | A device which detects measurements | sio:Device |
| | Instrument | A device or mechanism that is used to acquire attribute values of entities of interest | sio:Device |
| | Questionnaire | An instrument used to acquire data reported from human subjects | vstoi:Instrument |
| | Item | An item stem and its response options within an assessment | vstoi:Detector |
| | Item Stem | A question or statement used to prompt the individual to provide information regarding a latent variable | sio:Object |
| | Scale | A collection of indicators designed to be related to a shared construct | vstoi:Instrument |
| | Codebook | A document used to outline the content, format, and coding scheme of a dataset | sio:Entity |
| | Response Option | A possible answer choice provided for a question or statement | sio:Object |
| | Informant | An individual or respondent who provides information based on an observation about themselves or someone else | sio:Person |
We now describe the structure of the POEM ontology, detailing how elements of the ontologies summarized in Section 4.1 are utilized; references to classes in the POEM ontology are italicized for clarity.
4.2 Questionnaire Structure
The core concept of POEM is the psychometric questionnaire, which inherits the majority of its structural concepts from questionnaire (vstoi:Questionnaire). The psychometric questionnaire concept inherits provenance attributes such as authorship, licensing, intended subject and respondent, and derivation, as well as structural attributes such as instructions. Additionally, the questionnaire item (vstoi:Item) encapsulates several entities and other pieces of information included in an assessment question: an item stem, which is the text presented to prompt a response regarding some latent construct; the item stem concept, which is the specific phenomenon an item is intended to capture; and a codebook (vstoi:Codebook) representing a range of possible respondent experiences. Items have instrument membership attributes inherited from the HAScO specification, giving items membership and position within any number of questionnaires.
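As an illustrative sketch, the structure described above might be instantiated in Turtle as follows. The instance IRIs and the ex: namespace are invented for illustration; the membership and source properties follow the sio: usage shown in Appendix A, while the vstoi: namespace, the codebook-linking property, and the POEM class IRIs are assumptions rather than normative axioms:

```turtle
@prefix poem:  <http://purl.org/poem#> .
@prefix vstoi: <http://hadatac.org/ont/vstoi#> .        # assumed VSTO-I namespace
@prefix sio:   <http://semanticscience.org/resource/> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:    <http://example.org/poem-demo#> .

# A psychometric questionnaire containing one item
ex:DemoQuestionnaire a poem:PsychometricQuestionnaire ;
    sio:hasMember ex:Item1 .

# The item bundles a stem and a codebook of possible responses
ex:Item1 a vstoi:Item ;
    sio:hasSource ex:ItemStem1 ;        # the prompt text shown to the respondent
    sio:hasAttribute ex:Codebook1 .     # assumed property linking item to codebook

ex:ItemStem1 a poem:ItemStem ;
    rdfs:label "I worry about things"@en ;
    sio:isAbout ex:WorryConcept .       # the item stem concept the stem expresses

ex:WorryConcept a poem:ItemStemConcept .
ex:Codebook1 a vstoi:Codebook .
```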
4.3 Underlying Constructs
The primary focus of POEM lies in the representation of the underlying semantics of psychometric questionnaires. This modeling extends from the foundational structure established in VSTO-I, representing the constructs (poem:Construct) that questionnaires assess. A psychometric construct is a phenomenon whose signal is meant to be detected by an item or questionnaire scale.
An item is an indicator that targets a signal representing a particular construct, using the recorded human response. More precisely, an item stem is associated with an item stem concept, which is an expression of the specific construct or experience intended to be captured. Further, questionnaire scales (poem:QuestionnaireScale) comprise a collection of one or more item stem concepts designed to measure a shared construct. A scale whose set of item stem concepts has been shown to be reliable and valid is said to accurately estimate the presence of its construct.
To support a consistent view of variables for data unification, HAScO uses the notion of a semantic variable [35]. Formalized variable specifications support the alignment and combination of variables in processes that occur across multiple studies. In particular, the semantic variable includes entity and attribute properties, but no population property. Accordingly, two variables share a common semantic variable when the only distinction between them is their population properties. HAScO specifies that semantic variables are measured by detectors, and are also associated with attributes such as temporal and spatial information, and measurement unit. In POEM, the semantic variable detected by an item has, for its attribute, some symptom, and for its entity, a human of some demographic. In this way, questionnaire data can be aligned within and across cohorts.
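A minimal Turtle sketch of this pattern follows. The property IRIs (hasco:hasEntity, hasco:hasAttribute, hasco:hasSemanticVariable) and the hasco: namespace are assumptions standing in for whatever HAScO actually defines, offered only to make the entity/attribute/population split concrete:

```turtle
@prefix hasco: <http://hadatac.org/ont/hasco#> .   # assumed HAScO namespace
@prefix ex:    <http://example.org/poem-demo#> .

# A semantic variable specifies an entity and an attribute,
# but deliberately omits any population property.
ex:WorryInYouth a hasco:SemanticVariable ;
    hasco:hasEntity    ex:Youth ;         # entity: a human of some demographic
    hasco:hasAttribute ex:WorrySymptom .  # attribute: the symptom being measured

# Two study-specific variables that differ only in their populations
# share this semantic variable, so their data can be aligned across cohorts.
ex:StudyAWorryVariable hasco:hasSemanticVariable ex:WorryInYouth .
ex:StudyBWorryVariable hasco:hasSemanticVariable ex:WorryInYouth .
```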
POEM introduces the experience class. Each codebook corresponds to an experience: the degree to which the subject personally experiences the construct measured by a questionnaire item, for example, its frequency or intensity.
We maintain that POEM should remain neutral with respect to frameworks such as the ICD or DSM, and recognize that these and related frameworks are not uncontroversial. Further, while POEM provides the scaffolding for linking questionnaire scales to clinical constructs, it does not limit these constructs to those in a particular nosology. The axiom <poem:QuestionnaireScale isAbout poem:Construct> is intentionally generalizable, allowing integration with any framework, provided that any object of this axiom is reasonably subsumed by the construct concept. In our evaluation, we use SNOMED entities to represent the concepts measured, because of its wide use and coverage, and its mappings to other terminologies.
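This scale-to-construct axiom can be instantiated against any nosology. The Turtle sketch below illustrates the pattern; the instance IRIs are invented and the SNOMED CT identifier is a placeholder rather than a verified code:

```turtle
@prefix poem:   <http://purl.org/poem#> .
@prefix sio:    <http://semanticscience.org/resource/> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix snomed: <http://snomed.info/id/> .
@prefix ex:     <http://example.org/poem-demo#> .

# A questionnaire scale is about a construct; the construct may be drawn
# from SNOMED, the DSM, the ICD, or any other framework, as long as it is
# subsumed by poem:Construct.
ex:SocialPhobiaScale a poem:QuestionnaireScale ;
    sio:isAbout snomed:0000000 .      # placeholder, not a real SNOMED CT concept ID

snomed:0000000 a poem:Construct ;
    rdfs:label "Social phobia"@en .
```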
4.4 Evidence Modeling
Determining whether psychometric tests adequately assess psychological constructs is crucial to successful clinical assessment. POEM supports these processes by modeling population-based studies (SIO:001041) that researchers conduct to determine whether features of questionnaires measure the constructs they are intended to, including metrics such as reliability and validity. The study group population that meets some criteria for inclusion in a study is referred to as a cohort (hasco:cohort). Knowing the context in which a measure is applied can be important to evaluating its relevance. We use SCO, which connects the cohort class to demographic information, effectively connecting studies to the contexts in which they were conducted.
Observational studies involve the administration of created measures to a cohort one or more times depending on the target metric; for example, measuring test-retest reliability requires comparison between scores within the same cohort over time. Questionnaires such as the Caregiver version of the RCADS-47 require the inclusion of two cohorts, since the respondent (caregiver) and subject (youth) are different people. Each metric produced as the result of a study constitutes a piece of evidence (ECO:Evidence), which supports some assertion about an item or scale of the instrument in question. For example, in an initial study of the RCADS-47 [10], a Cronbach's Alpha (α) coefficient was calculated for the Social Phobia scale. The reported value of α is generally considered to be good; so, the Cronbach's Alpha coefficient for the Social Phobia scale is a piece of observational study evidence supporting the assertion that the Social Phobia scale has good internal consistency for the cohort represented. The modeling described can be seen in Figure 2. This formalization of studies and evidence supports the aggregation of metrics about a questionnaire and its features, allowing clinicians to choose questionnaires that adequately fit their use case.
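The evidence pattern just described might look as follows in Turtle. The instance names, the participation and value properties, and the numeric coefficient are illustrative assumptions (not the published RCADS-47 results); the class IRIs follow the numeric-ID conventions of SIO and ECO:

```turtle
@prefix sio:   <http://semanticscience.org/resource/> .
@prefix obo:   <http://purl.obolibrary.org/obo/> .
@prefix hasco: <http://hadatac.org/ont/hasco#> .   # assumed HAScO namespace
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:    <http://example.org/poem-demo#> .

# A population-based study administers the questionnaire to a cohort
ex:Study1 a sio:SIO_001041 ;            # population-based study
    sio:hasParticipant ex:Cohort1 .     # assumed participation property
ex:Cohort1 a hasco:Cohort .

# A reliability coefficient computed in the study is a piece of evidence
# about a particular scale of the instrument.
ex:AlphaFinding1 a obo:ECO_0000000 ;    # ECO root class: evidence
    sio:isAbout ex:SocialPhobiaScale ;
    sio:hasValue "0.85"^^xsd:decimal .  # illustrative coefficient, not the published value
```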
We avoid making claims about what standards constitute "good evidence"; for example, while certain values of α are conventionally considered good, there are no consistent criteria for determining assessment quality, and there are ways in which numerical indicators can be misleading [38].
4.5 Use Case Scenario
We demonstrate POEM's utility in psychometrics via the competency questions posed early in the development process (Tables 1 and 2), using the RCADS-47 and RCADS-25 as subjects. We revisit the use cases outlined in Section 3.1 along with the applicable competency questions, based on knowledge graphs representing the 47- and 25-item versions of the RCADS. For each competency question, we use SPARQL to query the complete knowledge graphs.
Table 5 revisits the competency questions posed in Tables 1 and 2, along with specific answers based on the RCADS-47 and RCADS-25, and shows how the current version of POEM can answer a subset of our original competency questions through SPARQL queries whose results align with SME-generated answers. The SMEs reviewed the ontology's structure and, through demonstrations by the ontology experts of queries that dynamically answer the generated competency questions, its ability to provide meaningful responses. Overall, SMEs expressed a high degree of satisfaction with the ontology's coverage and utility.
5 Future Work
The primary focus of our work has been to formalize assessment concepts to support the measurement of disorders in a clinical setting. Ongoing work focuses on enhancing POEM's utility through generalization to broader types of psychometric questionnaires. Key extensions of POEM include the modeling of scoring instructions and continued work on evidence modeling, including the formal representation of norms: sets of answers from test-takers within specific groups. Norms may be used during scoring to determine the relative standing of a subject within a group sharing their attributes, generating a percentile or other normative score. The ability to integrate different norms for the same test across various demographics can support applications such as an automatic scoring tool that leverages ontology linkages, currently at the advanced prototyping stage. We also plan to model additional details of provenance, such as linking questionnaires and their components to publications for further detail and evidentiary support, and providing authorship details.
POEM is being used to support the Semantic Instrument Repository (SIR), a software infrastructure for the management and distribution of knowledge graphs about data acquisition instruments, particularly for mental health screening. SIR is an ongoing, collaborative project that is open source and freely available to the public, with a Drupal module client component and a Java API server component with access to an Apache Jena Fuseki triple-store repository. The usage of POEM enables powerful semantic search and retrieval of published instruments and their elements, as well as tracking of the evolution and reuse of questionnaires. Users at the appropriate level of authorization will be able to draft, publish, edit, and deprecate questionnaires.
Additionally, SIR uses POEM to enable instrument rendering and sharing in formats such as XML/OWL, Turtle, and JSON, and uses POEM knowledge graphs to map canonical instrument descriptions into tools and exchange formats such as REDCap and FHIR. SIR also utilizes semantic data dictionaries [37], a standard for the semantic representation of data, to formally represent acquired data, and supports the rendering of POEM questionnaires into human-readable formats such as PDF. SIR is currently under development and not yet publicly deployed.
6 Conclusion
To address the lack of knowledge infrastructure currently supporting psychometric assessment, we have designed the POEM ontology, aligning it with SIO and other high-level ontologies. We have described how POEM formalizes the structure of questionnaires and the constructs they measure, aligning with terminology used in psychology and psychometrics, and shown how, given a knowledge graph of an assessment, POEM can answer a range of queries about content and provenance within clinical and research use cases. Given the difficulties that arise from the lack of formal guidance in the proper selection and usage of instruments, we see value in the continued development of POEM as research drives an increasingly diverse proliferation of assessment instruments.
References
- [1] American Psychiatric Association. Diagnostic and statistical manual of mental disorders: DSM-5, volume 5. American Psychiatric Association, Washington, DC, 2013. doi:10.1176/appi.books.9780890425596.
- [2] Grigoris Antoniou and Frank van Harmelen. Web ontology language: Owl. Handbook on ontologies, pages 91–110, 2009. doi:10.1007/978-3-540-92673-3_4.
- [3] Bénédicte Batrancourt, Michel Dojat, Bernard Gibaud, and Gilles Kassel. A core ontology of instruments used for neurological, behavioral and cognitive assessments. In FOIS, pages 185–198, 2010. doi:10.3233/978-1-60750-535-8-185.
- [4] Bénédicte Batrancourt, Michel Dojat, Bernard Gibaud, and Gilles Kassel. A multilayer ontology of instruments for neurological, behavioral and cognitive assessments. Neuroinformatics, 13:93–110, 2015. doi:10.1007/s12021-014-9244-3.
- [5] Emily M Becker-Haimes, Alexandra R Tabachnick, Briana S Last, Rebecca E Stewart, Anisa Hasan-Granier, and Rinad S Beidas. Evidence base update for brief, free, and accessible youth mental health measures. Journal of Clinical Child & Adolescent Psychology, 49(1):1–17, 2020.
- [6] Rinad S Beidas, Rebecca E Stewart, Lucia Walsh, Steven Lucas, Margaret Mary Downey, Kamilah Jackson, Tara Fernandez, and David S Mandell. Free, brief, and validated: Standardized instruments for low-resource mental health settings. Cognitive and behavioral practice, 22(1):5–19, 2015. doi:10.1016/j.cbpra.2014.02.002.
- [7] Felix Bensmann, Andrea Papenmeier, Dagmar Kern, Benjamin Zapilko, and Stefan Dietze. Semantic annotation, representation and linking of survey data. In Semantic Systems. In the Era of Knowledge Graphs: 16th International Conference on Semantic Systems, SEMANTiCS 2020, Amsterdam, The Netherlands, September 7–10, 2020, Proceedings 16, pages 53–69. Springer International Publishing, 2020. doi:10.1007/978-3-030-59833-4_4.
- [8] Janet F Carlson, Kurt F Geisinger, Jessica L Jonson, and Nancy A Anderson. The twenty-first mental measurements yearbook. 2021.
- [9] Shruthi Chari, Miao Qi, Nkechinyere N Agu, Oshani Seneviratne, Jamie P McCusker, Kristin P Bennett, Amar K Das, and Deborah L McGuinness. Making study populations visible through knowledge graphs. In International Semantic Web Conference, pages 53–68. Springer, 2019. doi:10.1007/978-3-030-30796-7_4.
- [10] Bruce F Chorpita, Letitia Yim, Catherine Moffitt, Lori A Umemoto, and Sarah E Francis. Assessment of symptoms of dsm-iv anxiety and depression in children: A revised child anxiety and depression scale. Behaviour research and therapy, 38(8):835–855, 2000. doi:10.1016/S0005-7967(99)00130-8.
- [11] Ronald Cornet and Nicolette de Keizer. Forty years of snomed: a literature review. BMC medical informatics and decision making, 8(1):1–6, 2008. doi:10.1186/1472-6947-8-S1-S2.
- [12] Alexander P Cox, Mark Jensen, William Duncan, Bianca Weinstock-Guttman, Kinga Szigeti, Alan Ruttenberg, Barry Smith, and Alexander D Diehl. Ontologies for the study of neurological disease. Third International Conference on Biomedical Ontology, 2012.
- [13] Alexander P Cox, Mark Jensen, Alan Ruttenberg, Kinga Szigeti, and Alexander D Diehl. Measuring cognitive functions: Hurdles in the development of the neuropsychological testing ontology. In ICBO, pages 78–83. Citeseer, 2013. URL: https://ceur-ws.org/Vol-1060/icbo2013_submission_46.pdf.
- [14] Robert H Dolin, Liora Alschuler, Sandy Boyer, Calvin Beebe, Fred M Behlen, Paul V Biron, and Amnon Shabo. Hl7 clinical document architecture, release 2. Journal of the American Medical Informatics Association, 13(1):30–39, 2006. doi:10.1197/jamia.M1888.
- [15] Michel Dumontier, Christopher JO Baker, Joachim Baran, Alison Callahan, Leonid Chepelev, José Cruz-Toledo, Nicholas R Del Rio, Geraint Duck, Laura I Furlong, Nichealla Keath, et al. The semanticscience integrated ontology (sio) for biomedical research and knowledge discovery. Journal of biomedical semantics, 5:1–11, 2014. doi:10.1186/2041-1480-5-14.
- [16] Chad Ebesutani, Steven P Reise, Bruce F Chorpita, Chelsea Ale, Jennifer Regan, John Young, Charmaine Higa-McMillan, and John R Weisz. The revised child anxiety and depression scale-short version: scale reduction via exploratory bifactor modeling of the broad anxiety factor. Psychological assessment, 24(4):833, 2012.
- [17] Gregory K Farber, Suzanne Gage, Danielle Kemmer, and Rory White. Common measures in mental health: a joint initiative by funders and journals. The Lancet Psychiatry, 10(6):465–470, 2023. doi:10.1016/S2215-0366(23)00139-6.
- [18] Arden W Forrey, Clement J Mcdonald, Georges DeMoor, Stanley M Huff, Dennis Leavelle, Diane Leland, Tom Fiers, Linda Charles, Brian Griffin, Frank Stalling, et al. Logical observation identifier names and codes (loinc) database: a public use set of codes and names for electronic reporting of clinical laboratory test results. Clinical chemistry, 42(1):81–90, 1996. doi:10.1093/clinchem/42.1.81.
- [19] Peter Fox, Deborah L McGuinness, Luca Cinquini, Patrick West, Jose Garcia, James L Benedict, and Don Middleton. Ontology-supported scientific data frameworks: The virtual solar-terrestrial observatory experience. Computers & Geosciences, 35(4):724–738, 2009. doi:10.1016/j.cageo.2007.12.019.
- [20] R Michael Furr. Psychometrics: an introduction. SAGE publications, 2021.
- [21] Kathy L Giannangelo and Jane Millar. Mapping snomed ct to icd-10. In MIE, pages 83–87, 2012. doi:10.3233/978-1-61499-101-4-83.
- [22] Alejandra Gonzalez-Beltran. Statistics ontology. URL: http://purl.obolibrary.org/obo/stato.owl.
- [23] Thomas R Gruber. A translation approach to portable ontology specifications. Knowledge acquisition, 5(2):199–220, 1993. doi:10.1006/knac.1993.1008.
- [24] Janna Hastings, Werner Ceusters, Mark Jensen, Kevin Mulligan, and Barry Smith. Representing mental functioning: Ontologies for mental health and disease. Third International Conference on Biomedical Ontology, 2012.
- [25] John Hunsley and Eric J Mash. Introduction to the special section on developing guidelines for the evidence-based assessment (eba) of adult disorders. Psychological assessment, 17(3):251, 2005.
- [26] John Hunsley and Eric J Mash. Evidence-based assessment. Annu. Rev. Clin. Psychol., 3:29–51, 2007. doi:10.1093/oxfordhb/9780199328710.013.019.
- [27] Scott O Lilienfeld and Adele N Strother. Psychological measurement and the replication crisis: Four sacred cows. Canadian Psychology/Psychologie Canadienne, 61(4):281, 2020. doi:10.1037/cap0000236.
- [28] Deborah L McGuinness. Ontologies come of age. In Spinning the semantic web, pages 171–194, 2003. doi:10.7551/mitpress/6412.003.0008.
- [29] Deborah L McGuinness, Frank Van Harmelen, et al. Owl web ontology language overview. W3C recommendation, 10(10):2004, 2004.
- [30] Mark A Musen. The protégé project: a look back and a look forward. AI matters, 1(4):4–12, 2015. doi:10.1145/2757001.2757003.
- [31] Suvarna Nadendla, Rebecca Jackson, James Munro, Federica Quaglia, Bálint Mészáros, Dustin Olley, Elizabeth T Hobbs, Stephen M Goralski, Marcus Chibucos, Christopher John Mungall, et al. Eco: the evidence and conclusion ontology, an update for 2022. Nucleic acids research, 50(D1):D1515–D1521, 2022. doi:10.1093/nar/gkab1025.
- [32] Natalya F Noy, Deborah L McGuinness, et al. Ontology development 101: A guide to creating your first ontology, 2001.
- [33] World Health Organization. International statistical classification of diseases and related health problems, volume 11. World Health Organization, 2019.
- [34] Paulo Pinheiro, Marcello Peixoto Bax, Henrique Santos, Sabbir Rashid, Zhicheng Liang, Yue Liu, James Mccusker, Deborah Mcguinness, and Yarden Ne’eman. Annotating diverse scientific data with hasco. In Seminar on Ontology Research in Brazil. Universidade Federal de Minas Gerais, 2018.
- [35] Paulo Pinheiro, Henrique Santos, Miao Qi, Kristin P Bennett, and Deborah L McGuinness. Towards machine-assisted biomedical data preparation: A use case on disparity in access to health care. 6th International Workshop on Semantic Web solutions for large-scale biomedical data analytics, 2023.
- [36] Russell A Poldrack, Aniket Kittur, Donald Kalar, Eric Miller, Christian Seppa, Yolanda Gil, D Stott Parker, Fred W Sabb, and Robert M Bilder. The cognitive atlas: toward a knowledge foundation for cognitive neuroscience. Frontiers in neuroinformatics, 5:17, 2011. doi:10.3389/fninf.2011.00017.
- [37] Sabbir M Rashid, James P McCusker, Paulo Pinheiro, Marcello P Bax, Henrique O Santos, Jeanette A Stingone, Amar K Das, and Deborah L McGuinness. The semantic data dictionary–an approach for describing and annotating data. Data intelligence, 2(4):443–486, 2020. doi:10.1162/dint_a_00058.
- [38] Andres De Los Reyes and David A Langer. Assessment and the journal of clinical child and adolescent psychology’s evidence base updates series: Evaluating the tools for gathering evidence. Journal of Clinical Child & Adolescent Psychology, 47(3):357–365, 2018. doi:10.1080/15374416.2018.1458314.
- [39] Kelsey Rook, Henrique Santos, Deborah L. McGuinness, Manuel S. Sprung, Paulo Pinheiro, and Bruce F. Chorpita. POEM Ontology. Model (visited on 2025-12-08). URL: https://github.com/tetherless-world/POEM, doi:10.4230/artifacts.25228.
- [40] Lynn M Schriml, Elvira Mitraka, James Munro, Becky Tauber, Mike Schor, Lance Nickle, Victor Felix, Linda Jeng, Cynthia Bearer, Richard Lichenstein, et al. Human disease ontology 2018 update: classification, content and workflow expansion. Nucleic acids research, 47(D1):D955–D962, 2019. doi:10.1093/nar/gky1032.
- [41] Barry Smith, Anand Kumar, and Thomas Bittner. Basic formal ontology for bioinformatics. IFOMIS reports, 2005.
- [42] Susan E Swogger. Psyctests. Journal of the Medical Library Association: JMLA, 101(3):234, 2013. doi:10.3163/1536-5050.101.3.021.
- [43] Lynda Temal, Michel Dojat, Gilles Kassel, and Bernard Gibaud. Towards an ontology for sharing medical images and regions of interest in neuroimaging. Journal of Biomedical Informatics, 41(5):766–778, 2008. doi:10.1016/j.jbi.2008.03.002.
- [44] Reinout Van Rees. Clarity in the usage of the terms ontology, taxonomy and classification. Cib Report, 284(432):1–8, 2003.
- [45] Eric A Youngstrom, Sophia Choukas-Bradley, Casey D Calhoun, and Amanda Jensen-Doss. Clinical guide to the evidence-based assessment approach to diagnosis and treatment. Cognitive and Behavioral Practice, 22(1):20–35, 2015. doi:10.1016/j.cbpra.2013.12.005.
Appendix A Competency Questions and SPARQL Queries
Ontology prefixes used in the queries below (the sio: and rdf: namespaces are standard; the vstoi: and eco: namespaces are assumed from the HAScO and OBO conventions):

```sparql
PREFIX poem:  <http://purl.org/poem#>
PREFIX rcads: <http://purl.org/poem/individuals#>
PREFIX rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs:  <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dc:    <http://purl.org/dc/terms/>
PREFIX sio:   <http://semanticscience.org/resource/>
PREFIX vstoi: <http://hadatac.org/ont/vstoi#>
PREFIX eco:   <http://purl.obolibrary.org/obo/ECO_>
```
CQ1. What conditions does the RCADS-47 measure?
Answer: separation anxiety disorder, social phobia, generalized anxiety disorder, panic disorder, obsessive compulsive disorder, major depressive disorder

```sparql
SELECT ?construct (STR(?lab) AS ?label)
WHERE {
  rcads:RCADS47Questionnaire sio:hasMember ?scale .
  ?scale rdf:type poem:QuestionnaireScale .
  ?scale sio:isAbout ?construct .
  ?construct rdfs:label ?lab .
}
```

CQ3. How many/what scales does the RCADS-47 have?
Answer: 8 scales: separation anxiety disorder, social phobia, generalized anxiety disorder, panic disorder, obsessive compulsive disorder, major depressive disorder, total anxiety, total anxiety and depression

```sparql
SELECT ?scale ?subscaleLabel
WHERE {
  rcads:RCADS47Questionnaire sio:hasMember ?scale .
  ?scale rdf:type poem:QuestionnaireScale .
  ?scale rdfs:label ?subscaleLabel .
}
```

CQ4. What languages are available for the RCADS-47?
Answer: US English, Chinese, Danish, Dutch, Finnish, French, German, Greek, Icelandic, Japanese, Korean, Norwegian, Persian, Polish, Slovene, Portuguese, Spanish, Swedish, Turkish, Urdu

```sparql
SELECT DISTINCT ?languageCode
WHERE {
  rcads:RCADS47Questionnaire sio:hasMember ?item .
  ?item rdf:type vstoi:Item .
  ?item sio:hasSource ?itemStem .
  ?itemStem dc:language ?languageCode .
}
```

CQ9. Who can fill out the RCADS-47?
Answer: Youth (8–18 years), caregiver of youth (8–18 years)

```sparql
SELECT ?informantLabel
WHERE {
  rcads:RCADS47Questionnaire sio:hasAttribute ?informant .
  ?informant rdf:type poem:Informant .
  ?informant rdfs:label ?informantLabel .
}
```

CQ10. Are there shorter versions of the RCADS-47 available?
Answer: Yes; the RCADS-25

```sparql
SELECT ?questionnaireLabel ?itemCount
WHERE {
  { SELECT ?questionnaire (COUNT(?item) AS ?itemCount)
    WHERE {
      ?questionnaire sio:hasMember ?item .
      ?item rdf:type vstoi:Item .
    }
    GROUP BY ?questionnaire }
  { SELECT (COUNT(?rcads47item) AS ?rcads47itemCount)
    WHERE {
      rcads:RCADS47Questionnaire sio:hasMember ?rcads47item .
      ?rcads47item rdf:type vstoi:Item .
    } }
  ?questionnaire rdfs:label ?questionnaireLabel .
  FILTER (?itemCount < ?rcads47itemCount)
}
ORDER BY ?itemCount
```

CQ11. Does the RCADS-25 relate to the RCADS-47?
Answer: Yes; the RCADS-25 is an abbreviated version of the RCADS-47, and contains 25 items also in the RCADS-47.

```sparql
SELECT (COUNT(?item) AS ?overlapCount)
WHERE {
  rcads:RCADS47Questionnaire sio:hasMember ?item .
  rcads:RCADS25Questionnaire sio:hasMember ?item .
  ?item rdf:type vstoi:Item .
}
```

CQ14. What measures of reliability and validity exist for the RCADS-47?
Answer: Several studies have found the RCADS-47 to be a reliable and valid measure of children's anxiety and depression (for example, Chorpita et al. 2000 [10]; Ebesutani et al. 2012 [16]).

```sparql
SELECT ?scale ?evidence ?evidenceType
WHERE {
  rcads:RCADS47Questionnaire sio:hasMember ?scale .
  ?scale a poem:QuestionnaireScale .
  ?evidence a eco:0000000 .   # eco:0000000 = Evidence
  ?evidence sio:isAbout ?scale .
  ?evidence a ?evidenceType .
  FILTER (?evidenceType IN (poem:ReliabilityFinding, poem:ValidityFinding))
}
```

CQ16. What concept does Item 1 ("I worry about things") of the RCADS-47 represent?
Answer: worry (generalized)

```sparql
SELECT ?conceptLabel
WHERE {
  rcads:item\/1 sio:hasSource ?itemStem .
  ?itemStem sio:isAbout ?itemStemConcept .
  ?itemStemConcept rdfs:label ?conceptLabel .
  ?itemStemConcept sio:isAbout ?construct .
  ?construct a poem:Construct .
}
```

CQ17. If RCADS-47 Item 1 corresponds to a symptom, what condition does the symptom correspond to?
Answer: generalized anxiety disorder

```sparql
SELECT ?condition
WHERE {
  rcads:itemStemConcept\/1 sio:isAbout ?symptom .
  ?symptom a poem:Construct .
  ?scale sio:hasMember rcads:itemStemConcept\/1 .
  ?scale a poem:QuestionnaireScale .
  ?scale sio:isAbout ?condition .
}
```
