
2.5 COMPUTER SCIENCE, PSYCHOLOGY, AND EDUCATION



underlying theory of learning. However, current models of learning are incomplete, and

it is unreasonable to put off building these systems until a complete model is available.

Thus, researchers in the field simultaneously pursue major advances in all three

areas: learning models, human information processing, and computational systems

for teaching. Because computational models must first explore and evaluate alternative theories about learning, a computational model of teaching could provide a first

step toward a cognitively correct theory of learning. Such a model could also serve as a

starting point for empirical studies of teaching and for modifying existing theories of

learning. The technological goal of building better intelligent tutors would accept a

computational model that produces results, and the cognitive goal would accept any

model of human information processing verified by empirical results.

Cognitive science is concerned with understanding human activity during the

performance of tasks such as learning. Cognitive modeling in the area of learning has

contributed pedagogical and subject-matter theories, theories of learning, instructional design, and enhanced instructional delivery (Anderson et al., 1995). Cognitive

science results, including empirical methods, provide a deeper understanding of

human cognition, thus enabling the tracking of human learning and supporting flexible learning.

Cognitive scientists often view human reasoning as reflecting an information processing system, and they identify initial and final states of learners and the rules

required to go from one state to another. A typical cognitive science study might

assess the depth of learning for alternative teaching methods under controlled conditions (Corbett and Anderson, 1995), study eye movements (Salvucci and Anderson,

2001), or measure the time to learn and error rate (accuracy) of responses made by

people with differing abilities and skills (Koedinger and MacLaren, 1997).

Artificial intelligence (AI) is a subfield of computer science concerned with

acquiring and manipulating data and knowledge to reproduce intelligent behavior

(Shapiro, 1992). AI is concerned with creating computational models of cognitive

activities (speaking, learning, walking, and playing) and replicating commonsense tasks

(understanding language, recognizing visual scenes, and summarizing text). AI techniques have been used to perform expert tasks (diagnose diseases), predict events based

on past events, plan complex actions, and reason about uncertain events. Teaching

systems use inference rules to provide sophisticated feedback, customize a curriculum,

or refine remediation. These responses are possible because the inference rules explicitly represent tutoring, student knowledge, and pedagogy, allowing a system to reason

about a domain and student knowledge before providing a response. Nonetheless,

deep issues remain about AI design and implementation, beginning with the lack of

authoring tools (shells and frameworks) similar to those used to build expert systems.

Cognitive science and AI are two sides of the same coin; each strives to understand the nature of intelligent action in whatever form it may take (Shapiro, 1992).

Cognitive science investigates how intelligent entities, whether human or computer,

interact with their environment, acquire knowledge, remember, and use knowledge

to make decisions and solve problems. This definition is closely related to that for AI,

which is concerned with designing systems that exhibit intelligent characteristics,

such as learning, reasoning, solving problems, and understanding language.






Education is concerned with understanding and supporting teaching primarily in

schools. It focuses on how people teach and how learning is impacted by communication, course and curriculum design, assessment, and motivation. One long-term

goal of education is to produce accessible, affordable, efficient, and effective teaching. Numerous learning theories (behaviorism, constructivism, multiple intelligence)

suggest ways that people learn. Within each learning theory, concepts such as memory and learning strategies are addressed differently. Specific theories are often developed for specific domains, such as science education. Education methods include

ways to enhance the acquisition, manipulation, and utilization of knowledge and the

conditions under which learning occurs. Educators might evaluate characteristics

of knowledge retention using cycles of design and testing. They often generate an

intervention—a circumstance or environment to support teaching—and then test

whether it has a lasting learning effect.



2.6 BUILDING INTELLIGENT TUTORS

When humans teach, they use vast amounts of knowledge. Master teachers know the

domain to be taught and use various teaching strategies to work opportunistically

with students who have differing abilities and learning styles. To be successful, intelligent tutors also require vast amounts of encoded knowledge. They must have knowledge about the domain, the student, and teaching, along with knowledge about how to capitalize on the computer’s strengths and compensate for its inherent weaknesses.

These types of knowledge are artificially separated, as a conceptual convenience, into

phases of computational processing. Most intelligent tutors move from one learning

module to the next, an integration process that may happen several times before the

tutor’s response is produced. Despite this integration, each component of an intelligent tutor will be discussed separately in this book (see Chapters 3 through 5).

Components that represent domain, student, tutoring, and communication knowledge are outlined below.

Domain knowledge represents expert knowledge, or how experts perform in

the domain. It might include definitions, processes, or skills needed to multiply

numbers (AnimalWatch), generate algebra equations (PAT), or administer medications for an arrhythmia (Cardiac Tutor).

Student knowledge represents students’ mastery of the domain and describes

how to reason about their knowledge. It contains both stereotypic student

knowledge of the domain (typical student skills) and information about the

current student (e.g., possible misconceptions, time spent on problems, hints

requested, correct answers, and preferred learning style).

Tutoring knowledge represents teaching strategies (e.g., examples and analogies) and includes methods for encoding reasoning about feedback. It might be derived from empirical observation of teachers, informed by learning theories, or enabled by technology and thus only weakly related to a human analogue (e.g., simulations, animated characters).






Communication knowledge represents methods for communicating between

students and computers (graphical interfaces, animated agents, or dialogue

mechanisms). It includes managing communication, discussing student reasoning, sketching graphics to illustrate a point, showing or detecting emotion, and

explaining how conclusions were reached.

Some combination of these components is used in intelligent tutors. For those

tutors that do contain all four components, a teaching cycle might first search

through the domain module for topics about which to generate customized problems and then reason about the student’s activities stored in the student module.

Finally, the system selects appropriate hints or help from the tutoring module and

chooses a style of presentation from options in the communication module.

Information flows both top-down and bottom-up. The domain module might recommend a specific topic, while the student model rejects that topic, sending information back to identify a new topic for presentation. The categorization of these

knowledge components is not exact; some knowledge falls into more than one category. For example, specification of teaching knowledge is necessarily based on identifying and defining student characteristics, so relevant knowledge might lie in both

the student and tutoring modules.
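To make the teaching cycle above concrete, the following is a minimal, hypothetical sketch in Python. The dictionaries standing in for the domain, student, tutoring, and communication modules, and the simple selection rules, are invented for illustration only; they are not drawn from any actual tutor described in this book.

```python
# Hypothetical illustration of one teaching cycle across the four knowledge modules.
# All data and rules here are invented stand-ins, not an actual tutor's design.

domain = {                       # domain module: topics and problems per topic
    "fractions": ["1/2 + 1/4 = ?", "3/4 - 1/2 = ?"],
    "decimals":  ["0.5 + 0.25 = ?"],
}
student = {"mastered": {"decimals"}, "hints_requested": 3}    # student module
tutoring = {"hint_policy": "show a worked example after two hint requests"}
communication = {"style": "text"}                             # vs. graphics or dialogue

def teaching_cycle():
    # 1. Search the domain module for topics the student has not yet mastered.
    open_topics = [t for t in domain if t not in student["mastered"]]
    topic = open_topics[0]
    problem = domain[topic][0]
    # 2. Reason about the student's recorded activity (here, hint requests).
    if student["hints_requested"] >= 2:
        feedback = tutoring["hint_policy"]
    else:
        feedback = "offer a brief hint only on request"
    # 3. Choose a presentation style from the communication module.
    return f"[{communication['style']}] {problem}  ({feedback})"

print(teaching_cycle())
```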



SUMMARY

This chapter described seven features of intelligent tutors. Three of these features—

generativity, student modeling, and mixed-initiative—help tutors to individualize

instruction and target responses to each student’s strengths and weaknesses. These

capabilities also distinguish tutors from more traditional CAI teaching systems. This

chapter described three examples of intelligent tutors: (1) AnimalWatch, for teaching

grade school mathematics; (2) PAT, for algebra; and (3) the Cardiac Tutor, for medical

personnel to learn to manage cardiac arrest. These tutors customize feedback to students, maximizing both student learning and teacher instruction.

A brief theoretical framework for developing teaching environments was presented, along with a description of the vast amount of knowledge required to build a

tutor. Also described were the three academic disciplines (computer science, psychology, and education) that contribute to developing intelligent tutors and the knowledge

domains that help tutors customize actions and responses for individual students.






PART II

Representation, Reasoning, and Assessment






CHAPTER 3

Student Knowledge



Human teachers support student learning in many ways, e.g., by patiently repeating material, recognizing misunderstandings, and adapting feedback. Learning is

enhanced through social interaction (Vygotsky, 1978; see Section 4.3.6), particularly

one-to-one instruction of young learners by an older child, a parent, teacher, or other

more experienced mentor (Greenfield et al., 1982; Lepper et al., 1993). Similarly, novices are believed to construct deep knowledge about a discipline by interacting with

a more knowledgeable expert (Brown et al., 1994; Graesser et al., 1995). Although

students’ general knowledge might be determined quickly from quiz results, their

learning style, attitudes, and emotions are less easily determined and need to be

inferred from long-term observations.

Similarly, a student model in an intelligent tutor observes student behavior and

creates a qualitative representation of her cognitive and affective knowledge. This

model partially accounts for student performance (time on task, observed errors)

and reasons about adjusting feedback. By itself, the student model achieves very little;

its purpose is to provide knowledge that is used to determine the conditions for

adjusting feedback. It supplies data to other tutor modules, particularly the teaching

module. The long-term goal of the field of AI and education is to support learning for

students with a range of abilities, disabilities, interests, backgrounds, and other characteristics (Shute, 2006).

The terms student module and student model are conceptually distinct and yet

refer to similar objects. A module of a tutor is a component of code that holds knowledge about the domain, student, teaching, or communication. On the other hand, a

model refers to a representation of knowledge, in this case, the data structure of that

module corresponding to the interpretation used to summarize the data for purposes

of description or prediction. For example, most student modules generate models that

are used as patterns for other components (the teaching module) or as input to subsequent phases of the tutor.

This chapter describes student models and indicates how knowledge is represented, updated, and used to improve tutor performance. The first two sections provide a rationale for building student models and define their common components.

The next sections describe how to represent, update, and improve student model






knowledge and provide examples of student models, including the three outlined

in Chapter 2 (PAT, AnimalWatch, and Cardiac Tutor) and several new ones (Affective

Learning Companions, Wayang Outpost, and Andes). The last two sections detail cognitive science and artificial intelligence techniques used to update student models

and identify future research issues.



3.1 RATIONALE FOR BUILDING A STUDENT MODEL

Human teachers learn about student knowledge through years of experience with

students. Master teachers often use secondary learning features, e.g., a student’s facial

expressions, body language, and tone of voice to augment their understanding of

affective characteristics. They may adjust their strategies and customize responses to

an individual’s learning needs. Interactions between students and human teachers

provide critical data about student goals, skills, motivation, and interests.

Intelligent tutors make inferences about presumed student knowledge and store it

in the student model. A primary reason to build a student model is to ensure that the

system has principled knowledge about each student so it can respond effectively,

engage students’ interest, and promote learning. The implication for intelligent tutors

is that customized feedback is pivotal to producing learning. Instruction tailored to

students’ preferred learning style increases their interest in learning and enhances

learning, in part, because tutors can support weak students’ knowledge and develop

strong students’ strengths. Master human teachers are particularly astute at adapting

material to students’ cognitive and motivational characteristics. In mathematics, for

example, using more effective supplemental material strongly affects learning at the

critical transition from arithmetic to algebra and achievement of traditionally underperforming students (Beal, 1994). Students show a surprising variety of preferred

media; given a choice, they select many approaches to learning (Yacci, 1994). Certain

personal characteristics (gender and spatial ability) are known to correlate with

learning indicators such as mathematics achievement (Arroyo et al., 2004) and learning methods (Burleson, 2006). Characteristics such as proficiency with abstract reasoning also predict responses to different interventions. Thus, adding more detailed

student models of cognitive characteristics may greatly increase tutor effectiveness.



3.2 BASIC CONCEPTS OF STUDENT MODELS

Before discussing student models, we describe several foundational concepts common to all student models. Intelligent tutors are grounded in methods that infer and

respond to student cognition and affect. Thus, the more a tutor knows about a student, the more accurate the student model will be. This section describes features

such as how tutors reason about a discipline (domain models), common forms of

student and misconception models (overlay models and bug libraries, respectively), information available from students (bandwidth), and how to support students in evaluating their own learning (open student models).






3.2.1 Domain Models

A domain usually refers to an area of study (introductory physics or high school

geometry), and the goal of most intelligent tutors is to teach a portion of the domain.

Building a domain model is often the first step in representing student knowledge,

and the resulting student model might represent the same knowledge as the domain model and solve the

same problems. Domain models are qualitative representations of expert knowledge

in a specific domain. They might represent the facts, procedures, or methods that

experts use to accomplish tasks or solve problems. Student knowledge is then represented as annotated versions of that domain knowledge. In AnimalWatch, the domain

model was a network of arithmetic skills and prerequisite relationships, and in the

Cardiac Tutor, it was a set of protocols and plans.
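As a small illustration of what such a network might look like, the sketch below represents a handful of arithmetic skills and prerequisite links as a Python dictionary and asks which skills are ready to teach. The skill names and links are invented for the example and are not AnimalWatch's actual curriculum.

```python
# Hypothetical domain model: a network of skills and prerequisite relationships
# (in the spirit of AnimalWatch, but with invented skills and links).

prerequisites = {
    "add_whole_numbers": [],
    "subtract_whole_numbers": ["add_whole_numbers"],
    "multiply_whole_numbers": ["add_whole_numbers"],
    "add_fractions": ["multiply_whole_numbers"],
}

def ready_to_learn(skill, mastered):
    """A skill is teachable once all of its prerequisites have been mastered."""
    return all(p in mastered for p in prerequisites[skill])

mastered = {"add_whole_numbers"}
print([s for s in prerequisites if s not in mastered and ready_to_learn(s, mastered)])
# -> ['subtract_whole_numbers', 'multiply_whole_numbers']
```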

Domains differ in their complexity, moving from simple and clearly defined to highly connected and complex. The earliest tutors were built in well-defined domains (geometry, algebra, and system maintenance), and fewer were built in less well-structured

domains (law, design, architecture, music composition) (Lynch et al., 2006). If knowledge domains are considered within an orthogonal set of axes that progress from

well-structured to ill-structured on one axis and from simple to complex on the

other, they fall into three categories (Lynch et al., 2006):





• Problem solving domains (e.g., mathematics problems, Newtonian mechanics) live at the simple and most well-structured end of the two axes. Some simple diagnostic cases with explicit, correct answers also exist here (e.g., identify a fault in an electrical board).

• Analytic and unverifiable domains (e.g., ethics and law) live in the middle of these two axes, along with newly defined fields (e.g., astrophysics). These domains do not contain absolute measurement or right/wrong answers, and empirical verification is often untenable.

• Design domains (e.g., architecture and music composition) live at the most complex and ill-structured end of the axes. In these domains, the goals are novelty and creativity, not solving problems.



For domains in the simple, well-defined end of the continuum, the typical teaching strategy is to present a battery of training problems or tests (Lynch et al., 2006).

However, domains in the complex and ill-structured end of the continuum have no

formal theory for verification. Students’ work is not checked for correctness. Teaching

strategies in these domains follow different approaches, including case studies (see

Section 8.2) or expert review, in which students submit results to an expert for comment. Graduate courses in art, architecture, and law typically provide intense formal

reviews and critiques (e.g., moot court in law and juried sessions in architecture).

Even some simple domains (e.g., computer programming and basic music theory)

cannot be specified in terms of rules and plans. Enumerating all student misconceptions and errors in programming is difficult, if not impossible, even considering only

the most common ones (Sison and Shimura, 1998). In such domains it is also

impossible to have a complete bug library (discussed later) of well-understood errors.






Even if such a library were possible, different populations of students (e.g., those

with weak backgrounds, disabled students) might need different bug libraries (Payne

and Squibb, 1990). The ability to automatically extend, let alone construct, a bug

library is found in few systems, but background knowledge has been automatically

extended in some, such as PIXIE (Hoppe, 1994; Sleeman et al., 1990), ASSERT (Baffes

and Mooney, 1996), and MEDD (Sison et al., 1998).



3.2.2 Overlay Models

A student model is often built as an overlay or proper subset of a domain model

(Carr and Goldstein, 1977). Such models show the difference between novice and

expert reasoning, perhaps by indicating how students rate on mastery of each topic,

missing knowledge, and which curriculum elements need more work. Expert knowledge may be represented in various ways, including using rules or plans. Overlay

models are fairly easy to implement, once domain/expert knowledge has been enumerated by task analysis (identifying the procedures an expert performs to solve

a problem). Domain knowledge might be represented (e.g., using rules) and annotated by

assigning weights to each expert step. Modern overlay models might show students

their own knowledge through an open user model (Kay, 1997; see Section 3.2.5 for further discussion).
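A minimal sketch of the overlay idea follows, assuming each domain skill carries a numeric mastery estimate and an arbitrary mastery threshold; the skills, numbers, and threshold are illustrative assumptions rather than a prescribed design.

```python
# Hypothetical overlay student model: the domain's skills, each annotated with
# an estimated degree of mastery (0.0 = no evidence, 1.0 = expert-level).

domain_skills = ["column_addition", "borrowing", "borrowing_across_zero"]

overlay = {"column_addition": 0.9, "borrowing": 0.55, "borrowing_across_zero": 0.1}

MASTERY_THRESHOLD = 0.8   # assumed cutoff; real tutors choose their own criterion

def needs_work(model):
    """Skills the expert model contains that the student has not yet mastered."""
    return [s for s in domain_skills if model.get(s, 0.0) < MASTERY_THRESHOLD]

print(needs_work(overlay))   # -> ['borrowing', 'borrowing_across_zero']
```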

An obvious shortcoming of overlay models is that students often have knowledge

that is not a part of an expert’s knowledge (Chi et al., 1981) and thus is not represented by the student model. Misconceptions are not easily represented, except as

additions to the overlay model. Similarly unavailable are alternative representations

for a single topic (students’ growing knowledge or increasingly sophisticated mental

models).



3.2.3 Bug Libraries

A bug library is a mechanism that adds misconceptions from a predefined library

to a student model; a bug parts library contains dynamically assembled bugs to fit

a student’s behavior. Mal-rules might be hand coded or generated based on a deep

cognitive model. The difficulty of using bug libraries has been demonstrated in the

relatively simple domain of double-column subtraction (Brown and VanLehn, 1980).

Many observable student errors were stored in a bug library, which began with an

expert model and added a predefined list of misconceptions and missing knowledge.

Hand analysis of several thousands of subtraction tests yielded a library of 104 bugs

(Burton, 1982b; VanLehn, 1982). Place-value subtraction was represented as a procedural network (recursive decomposition of a skill into subskills or subprocedures).

Basing a student model on such a network required background knowledge that

contained all necessary subskills for the general skill, as well as all possible incorrect

variants of each subskill. The student model then replaced one or more subskills in

the procedural network by one of their respective incorrect variants, to reproduce

a student’s incorrect behavior. This early “Buggy” system (Burton and Brown, 1978)

was extended in a later diagnostic system called “Debuggy” (Burton, 1982a).
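The diagnostic idea behind such systems can be sketched as follows: run the correct procedure and each buggy variant on the same problem and keep whichever reproduces the student's answer. The single buggy rule below ("subtract the smaller digit from the larger in each column") is one of the well-known subtraction bugs, but the code itself is an invented illustration, not Buggy's or Debuggy's implementation, and a real library would hold many more variants.

```python
# Hypothetical sketch of bug-library diagnosis for column subtraction: the
# procedure whose output matches the student's answer is the diagnosis.

def correct(a, b):
    return a - b

def smaller_from_larger(a, b):
    """Buggy variant: in each column, subtract the smaller digit from the larger."""
    result, place = 0, 1
    while a or b:
        da, db = a % 10, b % 10
        result += abs(da - db) * place
        a, b, place = a // 10, b // 10, place * 10
    return result

procedures = {"correct": correct, "smaller-from-larger": smaller_from_larger}

def diagnose(a, b, student_answer):
    return [name for name, proc in procedures.items() if proc(a, b) == student_answer]

print(diagnose(52, 38, 26))   # the student wrote 26 -> ['smaller-from-larger']
```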






When students were confronted with subtraction problems that involved borrowing across a zero, they frequently made mistakes, invented a variety of incorrect rules

to explain their actions, and often consistently applied their own buggy knowledge

(Burton, 1982b). These misconceptions enabled researchers to build richer models

of student knowledge. Additional subtraction bugs, including bugs that students

never experienced, were found by applying repair theory (VanLehn, 1982). When

these theoretically predicted bugs were added to the bug library and student model,

reanalysis showed that some student test answers were better matched by the new

bugs (VanLehn, 1983).

Bug library approaches have several limitations. They can only be used in procedural and fairly simple domains. The effort needed to compile all likely bugs is

substantial because students typically display a wide range of errors within a given

domain, and the library needs to be as complete as possible. If a single unidentified

bug (misconception) is manifested by a student’s action, the tutor might incorrectly

diagnose the behavior and attribute it to a different bug or use a combination of

existing bugs to define the problem (VanLehn, 1988a). Compiling bugs by hand is

not productive, particularly without knowing if human students make the errors or

whether the system can remediate them. Many bugs identified in Buggy were never

used by human students, and thus the tutor never remediated them.

Self (1988) advised that student misconceptions should not be diagnosed if they

could not be addressed. However, diagnostic information can be compiled and later

analyzed. Student errors can be automatically tabulated by machine learning techniques to create classifications or prediction rules about domain and student knowledge (see Section 7.3.1). Such compilations might be based on observing student

behavior and on information about buggy rules from student mistakes. A bug parts

library could then be dynamically constructed using machine learning, as students

interact with the tutor, which then generates new plausible bugs to explain student

actions.



3.2.4 Bandwidth

Bandwidth describes the amount and quality of information available to the

student model. Some tutors record only a single input word or task from students.

For example, the programming tutor PROUST (Johnson and Soloway, 1984) accepted

only a final and complete program from students, from which it diagnosed each student’s knowledge and provided feedback, without access to the student’s scratch

work or incomplete programs. The LISP programming tutor (Reiser et al., 1985) analyzed each line of code and compared it to a detailed cognitive model proposed to

underlie programming skills. Step-wise tutors, such as PAT and Andes, asked students

to identify all their steps before submission of the final answer. These tutors traced

each step of a student’s solution and compared it to a cognitive model of an expert’s

solution.
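A rough sketch of such step-wise tracing appears below, using an invented equation-solving example; real model-tracing tutors such as PAT match student steps against productions in a cognitive model rather than against a literal list of strings.

```python
# Hypothetical sketch of step-wise tracing (high-bandwidth input): each student
# step is compared with the step the expert model expects at that point.

expert_steps = ["2x + 3 = 11", "2x = 8", "x = 4"]   # one expert solution path

def trace(student_steps):
    feedback = []
    for i, step in enumerate(student_steps):
        if i < len(expert_steps) and step == expert_steps[i]:
            feedback.append((step, "ok"))
        else:
            expected = expert_steps[i] if i < len(expert_steps) else "(already done)"
            feedback.append((step, f"flagged: expert model expected {expected}"))
    return feedback

for step, note in trace(["2x + 3 = 11", "2x = 14", "x = 7"]):
    print(step, "->", note)
```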

The Cardiac Tutor evaluated each step of a student’s actions while treating a simulated patient (Eliot and Woolf, 1996). In all these tutors, student actions (e.g., “begin


