ERIC Identifier: ED482269
Publication Date: 2003-09-00
Author: Chapman, Elaine
Source: ERIC Clearinghouse on Assessment and Evaluation
Assessing Student Engagement Rates. ERIC Digest.
Given the emphasis placed on levels of academic achievement in schools,
the way in
which students acquire knowledge through the learning process has become
a primary concern. Several studies have highlighted the significant role
that affective factors can play in learning (e.g., Mathewson, 1994; Wigfield,
1997), placing particular emphasis on student engagement. This Digest defines
student engagement and describes various methods used to measure it, both
in empirical research studies and at the classroom level.
"WHAT IS STUDENT ENGAGEMENT?"
Early studies of student engagement often focused on time-on-task behaviors
(e.g., Fisher et al., 1980; Brophy, 1983). More recently, however, other
definitions have appeared in the literature. Student engagement has been
used to depict students' willingness to participate in routine school
activities, such as attending classes, submitting required work, and
following teachers' directions in class.
Natriello (1984) defined student engagement as "participating in the
activities offered as part of the school program" (p.14). Negative indicators
of engagement in this study included unexcused absences from classes, cheating
on tests, and damaging school property.
Another definition focuses on more subtle cognitive, behavioral, and
affective indicators of student engagement in specific learning tasks.
This orientation is reflected well in the definition offered by Skinner
& Belmont (1993):
Children who are engaged show sustained behavioral involvement in learning
activities accompanied by a positive emotional tone. They select tasks
at the border of their competencies, initiate action when given the opportunity,
and exert intense effort and concentration in the implementation of learning
tasks; they show generally positive emotions during ongoing action, including
enthusiasm, optimism, curiosity, and interest.
The opposite of engagement is disaffection. Disaffected children are
passive, do not try hard, and give up easily in the face of challenges; [they
can] be bored, depressed, anxious, or even angry about their presence in the
classroom; they can be withdrawn from learning opportunities or even
rebellious towards teachers and classmates.
From a different perspective, Pintrich and De Groot (1990) associated
engagement levels with students' use of cognitive, meta-cognitive and self-regulatory
strategies to monitor and guide their learning processes. In this view,
student engagement is viewed as motivated behavior apparent from the kinds
of cognitive strategies students choose to use (e.g., simple or "surface"
processing strategies such as rehearsal versus "deeper" processing strategies
such as elaboration), and by their willingness to persist with difficult
tasks by regulating their own learning behavior.
Use of cognitive and meta-cognitive strategies (e.g., "I went back over
things I didn't understand" and "I tried to figure out how today's work fit
with what I had learned before") may be taken to indicate active task
engagement, while use of more superficial strategies (e.g., "I skipped the
hard parts") may be taken to indicate lower engagement (Meece, Blumenfeld,
& Hoyle, 1988).
"HOW IS STUDENT ENGAGEMENT MEASURED?"
The most common way to measure student engagement is through information
reported by the students themselves. Other methods include checklists and
rating scales completed by teachers, direct observations, work sample
analyses, and focused case studies. Each of these methods is described
briefly below.
"Self-Reports." Students may be asked to complete surveys or questionnaires
regarding their level of task engagement. Items relating to the cognitive
aspects of engagement often ask students to report on factors such as their
attention versus distraction during class, the mental effort they expend
on these tasks (e.g., to integrate new concepts with previous knowledge),
and task persistence (e.g., their reaction to perceived failure to comprehend
the course material). Students can also be asked to report on their response
levels during class time (e.g., making verbal responses within group discussions,
looking for distractions, and engaging in non-academic social interaction)
as an index of behavioral task engagement. Affective engagement questions
typically ask students to rate their interest in and emotional reactions
to learning tasks on indices such as choice of activities (e.g., selection
of more versus less challenging tasks), the desire to know more about particular
topics, and feelings of stimulation or excitement in beginning new projects.
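The three-part structure described above can be sketched in code. This is a hypothetical illustration only: the item names, the subscale groupings, and the 1-5 rating scale are assumptions for the example, not features of any published instrument.

```python
# Hypothetical sketch: scoring a self-report engagement survey by
# averaging 1-5 ratings within cognitive, behavioral, and affective
# subscales. Item names and groupings are illustrative assumptions.

def subscale_means(responses, subscales):
    """Average each student's item ratings within each engagement subscale."""
    means = {}
    for name, items in subscales.items():
        scores = [responses[item] for item in items]
        means[name] = sum(scores) / len(scores)
    return means

subscales = {
    "cognitive":  ["attention", "mental_effort", "persistence"],
    "behavioral": ["verbal_responses", "stays_on_task"],
    "affective":  ["interest", "excitement"],
}

responses = {"attention": 4, "mental_effort": 3, "persistence": 5,
             "verbal_responses": 2, "stays_on_task": 4,
             "interest": 5, "excitement": 4}

print(subscale_means(responses, subscales))
# cognitive (4+3+5)/3 = 4.0, behavioral (2+4)/2 = 3.0, affective (5+4)/2 = 4.5
```

Reporting the three subscales separately, rather than a single total, preserves the distinction between cognitive, behavioral, and affective engagement drawn throughout this Digest.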
In addition to addressing the question of whether students are engaged in
learning tasks, self-report measures can provide some indication of why this
is the case. Research into achievement goal orientations, for example, has
indicated relationships between task or mastery goals, which reflect a desire for knowledge
or skill acquisition, and students' use of effective learning strategies
(e.g., Covington, 2000). Studies have also demonstrated positive relationships
between students' perceived learning control and adaptive learning processes
(e.g., Strickland, 1989; Thompson et al., 1998).
"Checklists and Rating Scales." In addition to student self-report measures,
a few studies have used summative rating scales to measure student engagement
levels. For example, the teacher report scales used by Skinner & Belmont
(1993) asked teachers to assess their students' willingness to participate
in school tasks (i.e., effort, attention, and persistence during the initiation
and execution of learning activities, such as "When faced with a difficult
problem, this student doesn't try"), as well as their emotional reactions
to these tasks (i.e., interest versus boredom, happiness versus sadness,
anxiety and anger, such as "When in class, this student seems happy").
The Teacher Questionnaire on Student Motivation to Read developed by Sweet,
Guthrie, & Ng (1996) asks teachers to report on factors relating to
student engagement rates, such as activities (e.g., enjoys reading about
favorite activities), autonomy (e.g., knows how to choose a book he or
she would want to read), and individual factors (e.g., is easily distracted).
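Note that some rating-scale items, such as Skinner & Belmont's "When faced with a difficult problem, this student doesn't try," are negatively worded and must be reverse-scored before being combined with positively worded items. The sketch below illustrates this step; the 1-4 scale and the item set are assumptions for the example.

```python
# Illustrative sketch: reverse-scoring negatively worded teacher-report
# items (e.g., "this student doesn't try") before summing an engagement
# total. The 1-4 rating scale and item names are assumed for the example.

SCALE_MAX = 4  # ratings run from 1 to 4

def engagement_score(ratings, reverse_items):
    """Sum item ratings, flipping negatively worded items first."""
    total = 0
    for item, rating in ratings.items():
        if item in reverse_items:
            rating = SCALE_MAX + 1 - rating  # maps 1<->4 and 2<->3
        total += rating
    return total

ratings = {"tries_hard": 3, "seems_happy": 4, "doesnt_try": 1, "seems_bored": 2}
print(engagement_score(ratings, reverse_items={"doesnt_try", "seems_bored"}))
# 3 + 4 + (5-1) + (5-2) = 14
```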
"Direct Observations." Although self-report scales are widely used,
the validity of the data yielded by these measures will vary considerably
with students' abilities to accurately assess their own cognitions, behaviors,
and affective responses (Assor & Connell, 1992). Direct observations
are often used to confirm students' reported levels of engagement in learning
tasks. A number of established protocols are available in this area (e.g.,
Ellett & Chauvin, 1991). Most of these observational studies have used
some form of momentary time sampling system. In these methods, the observer
records whether a behavior is present or absent either at the moment a time
interval ends or at any point within a specified interval.
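The momentary time sampling procedure reduces to a simple calculation: the engagement rate is the proportion of sampled moments at which the target behavior was present. The sketch below assumes a 15-second interval and invented sample data for illustration.

```python
# Minimal sketch of momentary time sampling: at the end of each interval
# the observer records whether the target behavior is present (1) or
# absent (0), and the engagement rate is the proportion of sampled
# moments at which it was present. Interval length and data are invented.

def engagement_rate(samples):
    """Proportion of sampled moments at which the student was on task."""
    return sum(samples) / len(samples)

# One record per 15-second interval over a 5-minute block (20 samples).
samples = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(f"on task at {engagement_rate(samples):.0%} of sampled moments")
# 15 of 20 samples on task -> 75%
```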
In classwide observations, approximately 5 minutes of observational
data can generally be collected on each target student per lesson. Thus,
a 30-minute observation period would allow observations of approximately
5 target students, with 6 to 7 sessions being required to observe a full
class. In addition, to obtain a representative sample of students' behavior
over the full course of a lesson, observations are generally rotated across
students so that each student is observed continuously for only one minute
at a time.
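The rotation described above amounts to a round-robin schedule of one-minute slots. In the idealized sketch below, each of 5 targets accumulates 6 one-minute blocks over a 30-minute period; the text's "approximately 5 minutes" per student allows for transition time between targets, which this simplified schedule ignores.

```python
# Sketch of rotating observations across target students so that each is
# watched for only one minute at a time. The 30-minute period and 5
# targets follow the figures in the text; the round-robin scheduling
# itself is an assumption, and transition time between targets is ignored.

def rotation_schedule(students, total_minutes):
    """Assign each 1-minute slot to a target student in round-robin order."""
    return [students[minute % len(students)] for minute in range(total_minutes)]

targets = ["S1", "S2", "S3", "S4", "S5"]
schedule = rotation_schedule(targets, total_minutes=30)
print(schedule[:5])          # first rotation: each target observed once
print(schedule.count("S1"))  # each target accumulates 6 one-minute blocks
```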
"Work Sample Analyses." Evidence of higher-order problem-solving and
metacognitive learning strategies can be gathered from sources such as
student projects, portfolios, performances, exhibitions, and learning journals
or logs (e.g., Royer, Cisero, & Carlo, 1993; Wolf, et al., 1990). The
efficacy of these methods hinges on the use of suitably structured tasks
and scoring rubrics. For example, a rubric to assess the application of
higher-order thinking skills in a student portfolio might include criteria
for evidence of problem-solving, planning, and self-evaluation in the work.
A number of formal and informal protocols for assessing students' self-regulated
learning strategies also incorporate components that focus on metacognitive
skills (e.g., Pintrich & DeGroot, 1990; Ward & Traweek, 1993).
The Metacognitive Knowledge Monitoring Assessment and the Assessment of
Cognitive Monitoring Effectiveness are more targeted measures suitable
for use in classroom situations and with demonstrated sound psychometric
properties in empirical evaluations (Osborne, 2001).
"Focused Case Studies." When the focus of an investigation is restricted
to a small
group of target students, it is often more useful to collect detailed
descriptive accounts of engagement rates. Case studies allow researchers
to address questions of student engagement inductively by recording details
about students in interaction with other people and objects within classrooms.
These accounts should describe both students' behaviors and the classroom
contexts in which they occur. This might include, for example, the behavior
of peers, direct antecedents to the target student's behaviors (e.g., teacher
directions), as well as the student's response and the observed consequences
of that response (e.g., reactions from teachers or peers). Case studies
generally attempt to place observations of engagement within the total
context of the classroom and/or school, and are concerned as much with
the processes associated with engagement as with depicting engagement
levels themselves.
Teachers interested in assessing student engagement in the classroom
should consider using separate measures to get at the cognitive, affective,
and behavioral aspects of task engagement. Within each of these domain
areas, using a range of methods can also strengthen the validity of findings
and provide alternative perspectives on the results. Teachers may wish
to include measures that address the question of why students do, or do
not, engage with particular types of tasks. Clearly, however, final decisions
on protocol components must also take into account any practical constraints
within the given context.
"REFERENCES"
Assor, A., & Connell, J.P. (1992). The validity of students' self-reports
as measures of performance-affecting self-appraisals. In D.H. Schunk &
J. Meece (Eds.), Student Perceptions in the Classroom (pp. 25-46). Hillsdale,
NJ: Lawrence Erlbaum.
Brophy, J. (1983). Conceptualizing student motivation. Educational Psychologist.
Covington, M. (2000). Goal theory, motivation, and school achievement:
an integrative review. Annual Review of Psychology, 51, 171-200.
Ellett, C.D., & Chauvin, E. (1991). Development, validity, and reliability
of a new generation of assessments of effective teaching and learning:
Future directions for the study of learning environments. Journal of
Classroom Interaction, 26(2).
Fisher, C., Berliner, D., Filby, N., Marliave, R., Cahen, L., &
Dishaw, M. (1980).
Teaching behaviors, academic learning time, and student achievement:
An overview. In C. Denham & A. Lieberman (Eds.), Time to Learn. Washington,
D.C.: National Institute of Education.
Mathewson, G.C. (1994). Model of attitude influence upon reading and
learning to read. In R.B. Ruddell & H. Singer (Eds.), Theoretical Models
and Processes of Reading, 3rd ed. (pp. 1131-1161). Newark, DE: International
Reading Association.
Meece, J.L., Blumenfeld, P.C., & Hoyle, R.H. (1988). Students'
goal orientations and cognitive engagement in classroom activities. Journal
of Educational Psychology, 80 (4): 514-523.
Natriello, G. (1984). Problems in the evaluation of students and student
disengagement from secondary schools. Journal of Research and Development
in Education, 17, 14-24.
Osborne, J. (2001). Assessing metacognition in the classroom: The Assessment
of Cognitive Monitoring Effectiveness. Unpublished manuscript, Department of
Educational Psychology, University of Oklahoma.
Pintrich, P.R., & De Groot, E.V. (1990). Motivational and self-regulated
components of classroom academic performance. Journal of Educational
Psychology, 82(1): 33-40.
Royer, J.M., Cisero, C.A., & Carlo, M.S. (1993). Techniques and procedures
for assessing cognitive skills. Review of Educational Research, 63(2).
Skinner, E.A., & Belmont, M.J. (1993). Motivation in the classroom:
Reciprocal effects of teacher behavior and student engagement across the
school year. Journal of Educational Psychology, 85(4): 571-581.
Strickland, B.R. (1989). Internal-external control expectancies: From
contingency to creativity. American Psychologist, 44(1): 1-12.
Sweet, A.P., Guthrie, J.T., & Ng, M. (1996). Teacher Perceptions of Student
Motivation To Read (Reading Research Report No. 69). Athens, GA: National
Reading Research Center.
Thompson, M., Kaslow, N.J., Weiss, B., & Nolen-Hoeksema, S. (1998).
Children's Attributional Style Questionnaire revised: Psychometric examination.
Psychological Assessment, 10(2): 166-170.
Ward, L., & Traweek, D. (1993). Application of a metacognitive strategy
to assessment, intervention, and consultation: A think-aloud technique.
Journal of School Psychology, 31, 469-485.
Wigfield, A. (1997). Reading engagement: A rationale for theory and
teaching. In J.T. Guthrie & A. Wigfield (Eds.), Reading Engagement:
Motivating Readers Through Integrated Instruction. Newark, DE: International
Reading Association.
Wolf, D., Bixby, J., Glenn, J., & Gardner, H. (1990). To use their minds
well: Investigating new forms of student assessment. Review of Research in
Education, 17, 31-74.