
Measuring Personalized Learning Experiences: Development and Validation of the Learner-Centered Sensor


Listen and learn: Research paper
Lecture presentation

Research papers are a pairing of two 20-minute presentations followed by a 5-minute Q&A.
This is presentation 1 of 2; scroll down to see more details.

Other presentations in this group:

Dr. Ling Zhang  
Dr. James Basham  
Dr. Richard Carter  

Learn about a study on the development and validation of a student self-report instrument designed to measure students’ personalized learning experiences in environments built on Universal Design for Learning. The session covers the development, validation, and use of the online instrument.

Audience: Professional developers, Teachers, Teacher education/higher ed faculty
Attendee devices: Devices useful
Attendee device specification: Smartphone: Android, iOS, Windows
Laptop: Chromebook, Mac, PC
Tablet: Android, iOS, Windows
Topic: Personalized learning
Grade level: 6-12
Subject area: Language arts, Special education
ISTE Standards: For Educators:
Designer
  • Explore and apply instructional design principles to create innovative digital learning environments that engage and support learning.
For Students:
Empowered Learner
  • Students build networks and customize their learning environments in ways that support the learning process.
Additional detail: ISTE author presentation

Proposal summary

Framework

To support learner variability, Universal Design for Learning (UDL) has emerged as a foundational framework for designing flexible, inclusive, and personalized learning (PL) environments (Zhang et al., 2020). For example, the latest U.S. federal education law, the Every Student Succeeds Act (ESSA; 2015), endorsed UDL, supported through data use and technology integration, as an instructional framework for designing and implementing personalized learning. Additionally, research investigating how the characteristics of PL environments align with UDL has provided preliminary evidence of the framework’s potential to guide PL designs (e.g., Abawi, 2015; Basham et al., 2016).

The development of the online instrument was framed within UDL. The UDL framework was developed by drawing upon decades of research on how brain networks (i.e., affective, recognition, and strategic networks) function when learning occurs (CAST, 2018). Providing insights into learner variability in how these networks work, UDL guides the intentional design of flexible learning environments to reduce barriers that exist in, and emerge from, learner-environment interactions (Meyer et al., 2014). To support learner needs emerging from these interactions, it is critical to make informed design decisions based on timely feedback on the learning environment. One way to conduct timely assessments is to embed “sensors” in the learning environment that measure changes in student learning in response to changes in the environment’s design. We posit that learners who are immersed in a learning environment and have first-hand learning experiences can serve as human “sensors” within that environment, collecting data on personalized learning needs. In addition, incorporating student perceptions into learning environment design and assessment provides an avenue for students to voice their learning needs. Thus, having students evaluate environmental designs would potentially increase opportunities for students to develop agency over decision making, provided the needs reflected in their evaluations are addressed and supported in the follow-up design process (Mäkelä et al., 2018).

Methods

Research Design:
This study was designed to evaluate the Learner-Centered Sensor, which consists of five constructs and 25 items, by generating evidence on its content validity. Content validity refers to the extent to which an instrument consists of an appropriate sample of items for the underlying constructs it is intended to measure (Polit & Beck, 2006). Establishing content validity evidence includes quantifying the degree of agreement among experts’ ratings of the extent to which items are clear or relevant to the definitions of the constructs being measured (Gajewski et al., 2012). Expert ratings of item clarity or relevance can generate two types of content validity index (CVI): item-level CVI (I-CVI) and scale-level CVI (S-CVI; Polit & Beck, 2006). The I-CVI represents experts’ ratings of the clarity or relevance of an individual item to the underlying construct; the S-CVI represents the proportion of items on an instrument that were rated quite or highly relevant (Polit & Beck, 2006). In addition, qualitative data on experts’ feedback on the comprehensiveness of the items measuring the underlying constructs were collected during the validation process.
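
As a worked illustration of these indices (a sketch using hypothetical numbers, assuming a seven-member panel and the averaging approach to the S-CVI described by Polit & Beck, 2006):

```latex
% Item-level CVI for item i: proportion of experts rating the item 3 or 4 on the 4-point scale
\[
  \text{I-CVI}_i \;=\; \frac{\#\{\text{experts rating item } i \text{ a 3 or 4}\}}{N_{\text{experts}}}
\]
% Scale-level CVI (averaging method): mean of the I-CVIs across the k items of the scale
\[
  \text{S-CVI/Ave} \;=\; \frac{1}{k}\sum_{i=1}^{k}\text{I-CVI}_i
\]
```

Under these definitions, an item rated 3 or 4 by six of seven experts would have an I-CVI of 6/7 ≈ .86.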

Participants and Data sources:
Purposive sampling was used to recruit UDL experts from CAST and the UDL-IRN, two organizations that help lead the growth and development of UDL. A panel of seven experts (three females and four males; one person of color and six Caucasians) responded and participated in the study. An online CVI survey was created in Qualtrics to collect both quantitative and qualitative data on the experts’ evaluations of the clarity, relevance, and comprehensiveness of the instrument.

Data Analysis:
We computed I-CVI and S-CVI scores based on the experts’ ratings of clarity and relevance. The I-CVI was calculated by dividing the number of experts who rated an item either 3 or 4 on the 4-point scale by the total number of experts; the S-CVI was calculated by averaging the I-CVIs across items (Polit et al., 2007). Moreover, reliability, or interrater agreement (i.e., the extent to which the participating experts were consistent in their ratings), was calculated for item relevance and clarity as a supplement to the CVI (Rubio et al., 2003). Specifically, a kappa coefficient of agreement (i.e., a measure of agreement that adjusts for chance agreement) was computed to evaluate interrater agreement on the relevance and clarity of the Sensor items. Qualitative data from the experts’ comments on individual items, coupled with the I-CVIs and kappa coefficients for clarity and relevance, were analyzed to determine whether item revisions were needed. In addition, the experts’ open-ended responses on the comprehensiveness of each construct (i.e., the extent to which the items adequately represent the content domain) were analyzed to identify commonalities in the experts’ feedback and to determine whether items needed to be added or deleted.
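
The following is a minimal computational sketch of these indices, assuming ratings are stored as one list of 4-point scores per item; the function names and the ratings matrix are illustrative, and the modified kappa follows the chance-agreement adjustment described by Polit et al. (2007).

```python
from math import comb

def i_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4 on the 4-point scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(item_ratings):
    """Scale-level CVI (averaging method): mean of the I-CVIs across all items."""
    return sum(i_cvi(item) for item in item_ratings) / len(item_ratings)

def kappa_star(ratings):
    """Modified kappa: I-CVI adjusted for the probability of chance agreement."""
    n = len(ratings)
    a = sum(1 for r in ratings if r >= 3)
    p_chance = comb(n, a) * 0.5 ** n  # binomial probability that exactly a of n experts agree by chance
    return (i_cvi(ratings) - p_chance) / (1 - p_chance)

# Hypothetical ratings: 7 experts x 3 items on the 4-point relevance scale
relevance = [
    [4, 4, 3, 4, 4, 3, 4],  # item 1
    [4, 3, 4, 4, 2, 4, 4],  # item 2
    [3, 4, 4, 4, 4, 4, 4],  # item 3
]
for i, item in enumerate(relevance, start=1):
    print(f"Item {i}: I-CVI = {i_cvi(item):.2f}, kappa* = {kappa_star(item):.2f}")
print(f"S-CVI/Ave = {s_cvi_ave(relevance):.2f}")
```

In practice, the experts’ survey exports would replace the hypothetical ratings shown here.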

Results

The content validation results yielded excellent CVIs for item relevance and good-to-excellent CVIs for item clarity of the Sensor. On average, all constructs had an excellent level of CVI for both item relevance and clarity. High interrater reliability scores across all items demonstrated excellent levels of expert agreement on relevance and clarity after adjusting for chance agreement. Minor changes were made to seven items with relatively low CVIs for clarity, based on the experts’ suggestions for improving the language. Additionally, the instrument has a readability level of grade five. In the following section, we acknowledge limitations of the study, followed by a discussion of implications for future research and practice.

Importance

To date, there is no instrument specifically created to measure PL experiences within a UDL-based learning environment from student perspectives. The instrument developed in this study can support students in evaluating whether the design features of a UDL-based learning environment meet their learning needs. Teachers can use those perception data to identify barriers to student learning and areas for improvement in instructional design across various learning environments.

References

Abawi, L. A. (2015). Inclusion “from the gate in”: Wrapping students with personalised learning support. International Journal of Pedagogies and Learning, 10, 47–61. https://doi.org/10.1080/22040552.2015.1084676

Basham, J. D., Hall, T. E., Carter, R. A., & Stahl, W. M. (2016). An operationalized understanding of personalized learning. Journal of Special Education Technology, 31, 126–136. https://doi.org/10.1177/0162643416660835

CAST. (2018). The UDL Guidelines Version 2.0. http://udlguidelines.cast.org/

Gajewski, B. J., Price, L. R., Coffland, V., Boyle, D. K., & Bott, M. J. (2011). Integrated analysis of content and construct validity of psychometric instruments. Quality & Quantity, 47, 57–78. https://doi.org/10.1007/s11135-011-9503-4

Mäkelä, T., Helfenstein, S., Lerkkanen, M. K., & Poikkeus, A. M. (2018). Student participation in learning environment improvement: Analysis of a co-design project in a Finnish upper secondary school. Learning Environments Research, 21, 19-41. https://doi.org/10.1007/s10984-017-9242-0

Meyer, A., Rose, D. H., & Gordon, D. (2014). Universal design for learning: Theory and practice. CAST Professional Publishing.

Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what’s being reported? Research in Nursing & Health, 29, 489–497.

Rubio, D. M., Berg-Weger, M., Tebb, S. S., Lee, E. S., & Rauch, S. (2003). Objectifying content validity: Conducting a content validity study in social work research. Social Work Research, 27(2), 94–104. https://doi.org/10.1093/swr/27.2.94


Presenters

Dr. Ling Zhang, UNC at Chapel Hill
Dr. James Basham, University of Kansas
Dr. Richard Carter, University of Wyoming
