The “Core” Version of the Formal Model
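In the model, each stimulus is represented as a point in a multidimensional psychological space, and the distance between test item i and stored exemplar j is assumed to take the standard weighted Minkowski form used throughout the GCM/EBRW framework,

$d_{ij} = \left[ \sum_{k=1}^{K} w_k \,\lvert x_{ik} - x_{jk}\rvert^{\,r} \right]^{1/r},$   (1)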


where $x_{ik}$ is the value of exemplar i on psychological dimension k; K is the number of dimensions that define the space; r defines the distance metric of the space; and $w_k$ ($0 < w_k$, $\sum_k w_k = 1$) is the weight given to dimension k in computing distance. In situations involving the recognition of holistic or integral-dimension stimuli (Garner, 1974), which will be the main focus of the present work, r is set equal to 2, which yields the familiar Euclidean distance metric. The dimension weights, $w_k$, are free parameters that reflect the degree of “attention” that subjects give to each dimension in making their recognition judgments. In situations in which some dimensions are more relevant than others in allowing subjects to discriminate between old versus new items, the attention-weight parameters may play a significant role (e.g., Nosofsky, 1991). In the experimental situations considered in the present work, however, all dimensions tend to be relevant and the attention weights will turn out to play a minor role.

The similarity of test item i to exemplar j is an exponentially decreasing function of their psychological distance (Shepard, 1987),

$s_{ij} = \exp(-c_j \, d_{ij}),$   (2)

where $c_j$ is the sensitivity associated with exemplar j. The sensitivity governs

the rate at which similarity declines with distance in the space. When

sensitivity is high, the similarity gradient is steep, so even objects that are

close together in the space may be highly discriminable. By contrast, when

sensitivity is low, the similarity gradient is shallow, and objects are hard to

discriminate. In most previous tests of the EBRW model, a single global

level of sensitivity was assumed that applied to all exemplar traces stored in

long-term memory. In application to the present short-term recognition

paradigms, however, allowance is made for forms of exemplar-specific

sensitivity. For example, in situations involving high-similarity stimuli, an

observer’s ability to discriminate between test item i and exemplar-trace j

will almost certainly depend on the recency with which exemplar j was

presented: Discrimination is presumably much easier if an exemplar was just

presented, rather than if it was presented earlier on the study list (due to

factors such as interference and decay).
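To make Eqs. (1) and (2) concrete, here is a minimal Python sketch, with made-up MDS coordinates and parameter values (nothing here is estimated from the chapter's data), of the weighted-Euclidean distance and the exponential similarity with an exemplar-specific sensitivity:

```python
import numpy as np

def distance(x_i, x_j, w, r=2.0):
    """Weighted Minkowski distance (Eq. 1); r = 2 gives the Euclidean metric."""
    return np.sum(w * np.abs(x_i - x_j) ** r) ** (1.0 / r)

def similarity(x_i, x_j, w, c_j, r=2.0):
    """Exponential similarity (Eq. 2) with exemplar-specific sensitivity c_j."""
    return np.exp(-c_j * distance(x_i, x_j, w, r))

# Hypothetical 3-dimensional MDS coordinates and parameter values
w = np.array([0.4, 0.3, 0.3])            # attention weights (sum to 1)
probe    = np.array([0.20, 0.50, 0.10])  # coordinates of test item i
exemplar = np.array([0.30, 0.40, 0.20])  # coordinates of stored exemplar j

print(similarity(probe, exemplar, w, c_j=3.0))  # recently presented: steep similarity gradient
print(similarity(probe, exemplar, w, c_j=1.5))  # presented earlier: shallow similarity gradient
```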

Each exemplar j from the memory set is stored in memory with

memory strength $m_j$. As is the case for the sensitivities, the memory strengths

are exemplar specific (with the detailed assumptions stated later). Almost

certainly, for example, exemplars presented more recently will have greater

strengths.

When applied to old-new recognition, the EBRW model presumes

that abstract elements termed criterion elements are part of the cognitive






processing system. The strength of the criterion elements, which we

hypothesize is at least partially under the control of the observer, helps

guide the decision about whether to respond “old” or “new.” In particular,

as will be explained below, the strength setting of the criterion elements

influences the direction and rate of drift of the EBRW process. Other

well-known sequential-sampling models include analogous criterion-related parameters for generating drift rates, although the conceptual

underpinnings of the models are different from those in the EBRW model

(e.g., Ratcliff, 1985, pp. 215–216; Ratcliff, Van Zandt, & McKoon, 1999,

p. 289).

Presentation of a test item causes the old exemplars and the criterion

elements to be activated. The degree of activation for exemplar j, given

presentation of test item i, is given by

$a_{ij} = m_j \, s_{ij}.$   (3)



Thus the exemplars that are most strongly activated are those with high

memory strengths and that are highly similar to test item i. The degree of

activation of the criterion elements (C) is independent of the test item

that is presented. Instead, criterion-element activation functions as a fixed

standard against which exemplar-based activation can be evaluated. As discussed later in this chapter, however, criterion-element activation may be

influenced by factors such as the size and structure of the memory set,

because observers may adjust their criterion settings when such factors are

varied.

Upon presentation of the test item, the activated stored exemplars and

criterion elements race to be retrieved (Logan, 1988). The greater the degree

of activation, the faster the rate at which the individual races take place. On

each step, the exemplar (or criterion element) that wins the race is retrieved.

Whereas in Logan’s (1988) model, the response is based on only the first

retrieved exemplar, in the EBRW model the retrieved exemplars drive a

random-walk process. First, there is a random-walk counter with initial

setting zero. The observer establishes response thresholds, Rold and Rnew,

that determine the amount of evidence needed for making each decision.

On each step of the process, if an old exemplar is retrieved, then the

random-walk counter is incremented by unit value toward the Rold

threshold; whereas if a criterion element is retrieved, the counter is decremented by unit value toward the Rnew threshold. If either threshold is

reached, then the appropriate recognition response is made. Otherwise a

new race is initiated, another exemplar or criterion element is retrieved






(possibly the same one as on the previous step) and the process continues.

The recognition decision time is determined by the total number of steps

required to complete the random walk. It should be noted that the concept

of a “criterion” appears in two different locations in the model. First, as

explained above, the strength setting of the criterion elements influences

the direction and rate of drift of the random walk. Second, the magnitude

of the Rold and Rnew thresholds determines how much evidence is needed

before an old or a new response is made. Again, other well-known sequential-sampling models include analogous criterion-related parameters at these

same two locations (for extensive discussion, see, e.g., Ratcliff, 1985).

Given the detailed assumptions in the EBRW model regarding the race

process (see Nosofsky & Palmeri, 1997, p. 268), it turns out that, on each

step of the random walk, the probability (p) that the counter is incremented

toward the Rold threshold is given by

$p_i = A_i / (A_i + C),$   (4)



where $A_i$ is the summed activation of all of the old exemplars (given presentation of item i), and C is the summed activation of the criterion

elements. (The probability that the random walk steps toward the Rnew

threshold is given by $q_i = 1 - p_i$.) In general, therefore, test items that match

recently presented exemplars (with high memory strengths) will cause high

exemplar-based activations, leading the random walk to march quickly to

the Rold threshold and resulting in fast OLD RTs. By contrast, test items that

are highly dissimilar to the memory-set items will not activate the stored

exemplars, so only criterion elements will be retrieved. In this case, the

random walk will march quickly to the Rnew threshold, leading to fast NEW

RTs. Through experience in the task, the observer is presumed to learn an

appropriate setting of criterion-element activation (C) such that summed

activation ($A_i$) tends to exceed C when the test probe is old, but tends to be

less than C when the test probe is new. In this way, the random walk will

tend to drift to the appropriate response thresholds for old versus new lists. In

most applications, for simplicity, I assume the criterion-element activation is

linearly related to memory-set size. (Because summed activation of exemplars, $A_i$, tends to increase with memory-set size, the observer needs to adopt

a stricter criterion as memory-set size increases.)
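The following sketch, again with purely illustrative parameter values (including the assumed linear set-size rule for criterion-element activation), simulates a single EBRW trial: exemplar activations are computed from Eq. (3), the step probability from Eq. (4), and the walk runs until one of the two response thresholds is reached.

```python
import numpy as np

rng = np.random.default_rng(0)

def ebrw_trial(similarities, strengths, c0, c1, r_old, r_new):
    """Simulate one EBRW random walk; returns (response, number of steps).

    similarities : s_ij of the test probe to each memory-set exemplar (Eq. 2)
    strengths    : memory strengths m_j (assumed greater for recent items)
    c0, c1       : criterion-element activation assumed linear in set size
    r_old, r_new : evidence thresholds for the "old" and "new" responses
    """
    activations = strengths * similarities        # a_ij = m_j * s_ij          (Eq. 3)
    A = activations.sum()                         # summed exemplar activation A_i
    C = c0 + c1 * len(similarities)               # criterion-element activation
    p = A / (A + C)                               # prob. of a step toward Rold (Eq. 4)

    counter, steps = 0, 0
    while -r_new < counter < r_old:               # random walk between the two thresholds
        counter += 1 if rng.random() < p else -1  # exemplar vs. criterion element wins the race
        steps += 1                                # decision time grows with the number of steps
    return ("old" if counter >= r_old else "new"), steps

# Hypothetical 3-item list in which the probe matches the most recent exemplar
response, n_steps = ebrw_trial(
    similarities=np.array([0.05, 0.20, 0.95]),
    strengths=np.array([0.60, 0.80, 1.00]),
    c0=0.5, c1=0.4, r_old=4, r_new=4)
print(response, n_steps)
```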

Given these processing assumptions and the computed values of $p_i$ (Eq.

(4)), it is then straightforward to derive analytic predictions of recognition

choice probabilities and mean RTs for any given test probe and memory

set. The relevant equations are summarized by Nosofsky and Palmeri






(1997, pp. 269–270, 291–292). Simulation methods are used when the

model is applied to predict fine-grained RT distribution data.
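Although those expressions are not reproduced here, they are of the standard biased-random-walk (gambler's-ruin) type. For example, for a walk that starts at zero, steps toward $R_{old}$ with probability $p_i$ (Eq. (4)) and toward $R_{new}$ with probability $q_i = 1 - p_i$, the classical result gives the probability of eventually responding "old" (for $p_i \neq 1/2$) as

$P_i(\mathrm{old}) = \dfrac{1 - (q_i/p_i)^{R_{new}}}{1 - (q_i/p_i)^{R_{old}+R_{new}}},$

with an analogous closed form for the expected number of steps; readers should consult Nosofsky and Palmeri (1997) for the exact expressions used in the fits reported here.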

In sum, having outlined the general form of the model, I now review specific applications of the model to predicting RTs and accuracies in different

variants of the short-term probe-recognition paradigm.



3. SHORT-TERM PROBE RECOGNITION IN A

CONTINUOUS-DIMENSION SIMILARITY SPACE

In Nosofsky et al.’s (2011) initial experiment for testing the model,

the stimuli were a set of 27 Munsell colors that varied along the dimensions

of hue, brightness, and saturation. Similarity-scaling procedures were used

to derive a precise multidimensional-scaling (MDS) solution for the colors

(Shepard, 1980). The MDS solution provides the $x_{ik}$ coordinate values for

the exemplars (Eq. (1)) and is used in combination with the EBRW model

to predict the results from the probe-recognition experiment (cf.

Nosofsky, 1992).

The design of the experiment involved a broad sampling of different list

structures to provide a comprehensive test of the model. There were 360

lists in total. The size of the memory set on each trial was one, two, three,

or four items, with an equal number of lists at each set size. For each set size,

half the test probes were old and half were new. In the case of old probes,

the matching item from the memory set occupied each serial position

equally often. To create the lists, items were randomly sampled from the

full set of stimuli, subject to the constraints described above. Thus a highly

diverse set of lists was constructed, varying not only in set size, old/new

status of the probe, and serial position of old probes, but also in the similarity structure of the lists.

Because the goal was to predict performance at the individual-subject

level, three subjects were each tested for approximately 20 1-h sessions,

with each of the 360 lists presented once per session. As it turned out,

each subject showed extremely similar patterns of performance, and the

fits of the EBRW model yielded similar parameter estimates for the three

subjects. Therefore, for simplicity, and to reduce noise in the data, I report

the results from the analysis of the averaged subject data.

In the top panels of Fig. 2, I report summary results from the experiment.

The top-right panel reports the observed correct mean RTs plotted as a

function of (1) set size, (2) whether the probe was old or new (i.e., a lure),

and (3) the lag with which old probes appeared in the memory set. (Lag is






Figure 2 Summary data from the short-term memory experiment of Nosofsky et al.

(2011). (Top) Observed error rates and mean response times (RTs). (Bottom) Predictions

from the exemplar-based random walk model. Reprinted from Nosofsky, R.M., Little, D.R.,

Donkin, C., & Fific, M. (2011). Short-term memory scanning viewed as exemplar-based

categorization. Psychological Review, 118, 288. Copyright 2011 by APA. Reprinted with

permission.



counted backward from the end of the list.) For old probes, there was a big

effect of lag: In general, the more recently a probe appeared on the study list,

the shorter was the mean RT. Indeed once one takes lag into account, there

is little remaining effect of set size on the RTs for the old probes. That is, as

can be seen, the different set-size functions are nearly overlapping (cf.

McElree & Dosher, 1989; Monsell, 1978). The main exception is a persistent primacy effect, in which the mean RT for the item at the longest lag

for each set size is “pulled down.” (The item at the longest lag occupies

the first serial position of the list.) By contrast, for the lures, there is a big

effect of set size, with longer mean RTs as set size increases. The mean

proportions of errors for the different types of lists, shown in the top-left

panel of Fig. 2, mirror the mean RT data just described.

The goal of the EBRW modeling, however, was not simply to account

for these summary trends. Instead, the goal was to predict the choice probabilities and mean RTs observed for each of the individual lists. Because






there were 360 unique lists in the experiment, this goal entailed simultaneously predicting 360 choice probabilities and 360 mean RTs. The results

of that model-fitting goal are shown in the top and bottom panels of Fig. 3.

The top panel plots, for each individual list, the observed probability that the

subjects judged the probe to be “old” against the predicted probability from

the model. The bottom panel does the same for the mean RTs. Although

there are a few outliers in the plots, overall the model achieves a good fit

to both data sets, accounting for 96.5% of the variance in the choice probabilities and for 83.4% of the variance in the mean RTs.

The summary-trend predictions that result from these global fits are

shown in the bottom panels of Fig. 2. It is evident from inspection that

the EBRW does a good job of capturing these summary results. For the

old probes, it predicts the big effect of lag on the mean RTs and the nearly

overlapping set-size functions. Likewise it predicts with good quantitative

accuracy the big effect of set size on the lure RTs. The error-proportion

data (left panels of Fig. 2) are generally also well predicted.

The explanation of these results in terms of the EBRW model is straightforward. According to the best-fitting parameters from the model (see

Nosofsky et al., 2011), more recently presented exemplars had greater memory strengths and sensitivities than did less recently presented exemplars.

From a psychological perspective, this pattern seems highly plausible. For

example, presumably, the more recently an exemplar was presented, the

greater should be its strength in memory. Thus if an old test probe matches

the recently presented exemplar, it will give rise to greater overall activation,

leading to shorter mean old RTs. In the case of a lure, as set size increases, the

overall summed activation yielded by the lure will also tend to increase. This

pattern arises both because a greater number of exemplars will contribute to

the sum, and because the greater the set size, the higher is the probability that

at least one exemplar from the memory set will be highly similar to the

lure. As summed activation yielded by the lures increases, the probability

that the random walk takes correct steps toward the Rnew threshold decreases,

and so mean RTs for the lures get longer.
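As a purely illustrative demonstration of this mechanism, the hypothetical ebrw_trial sketch given earlier can be rerun for lures at increasing set sizes (again with made-up similarity and strength values); the simulated walks take more steps on average as set size grows, mirroring the longer lure RTs:

```python
# Reuses the hypothetical ebrw_trial() and the numpy import from the earlier sketch.
for set_size in (1, 2, 3, 4):
    sims = np.full(set_size, 0.15)               # lure weakly similar to every list item
    strengths = np.linspace(0.6, 1.0, set_size)  # more recent items assumed stronger
    steps = [ebrw_trial(sims, strengths, c0=0.5, c1=0.4, r_old=4, r_new=4)[1]
             for _ in range(2000)]
    print(set_size, round(float(np.mean(steps)), 1))
```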

Beyond accounting well for these summary trends, inspection of the

detailed scatterplots in Fig. 3 reveals that the model accounts for fine-grained

changes in choice probabilities and mean RTs depending on the fine-grained similarity structure of the lists. For example, consider the choice-probability plot (Fig. 3, top panel) and the Lure-Size-4 items (open

diamonds). Whereas performance for those items is summarized by a single

point on the summary-trend figure (Fig. 2), the full scatterplot reveals






Figure 3 Scatterplots of observed and exemplar-based random walk-predicted choice

probabilities and mean response times (RTs) associated with individual lists from the

short-term memory experiment of Nosofsky et al. (2011). Reprinted from Nosofsky,

R.M., Little, D.R., Donkin, C., & Fific, M. (2011). Short-term memory scanning viewed as

exemplar-based categorization. Psychological Review, 118, 286–287. Copyright 2011 by

APA. Reprinted with permission.






extreme variability in results across different tokens of the Lure-Size-4 lists.

In some cases the false-alarm rates associated with these lists are very low, in

other cases moderate, and in still other cases the false-alarm rates exceed the

hit rates associated with old lists. The EBRW captures well this variability in

false-alarm rates. In some cases, the lure might not be similar to any of the

memory-set items, resulting in a low false-alarm rate; whereas in other cases

the lure might be highly similar to some of the memory-set items, resulting

in a high false-alarm rate.



4. SHORT-TERM PROBE RECOGNITION OF DISCRETE

STIMULI

The application in the previous section involved short-term probe

recognition in a continuous-dimension similarity space. A natural question,

however, is how the EBRW model might fare in a more standard version of

the paradigm, in which discrete alphanumeric characters are used. To the

extent that things work out in a simple, natural fashion, the applications

of the EBRW model to the standard paradigm should be essentially the

same as in the just-presented application, except they would involve a highly

simplified model of similarity. That is, instead of incorporating detailed

assumptions about similarity relations in a continuous multidimensional

space, we apply a simplified version of the EBRW that is appropriate for

highly discriminable, discrete stimuli.

Specifically, in the simplified model, I assume that the similarity between an item and itself is equal to one; whereas the similarity between

two distinct items is equal to a free parameter s (0 < s < 1). Presumably

the best-fitting value of s will be small, because the discrete alphanumeric

characters used in the standard paradigm are not highly confusable with

one another. Note that the simplified model makes no use of the dimensional attention-weight parameters or the lag-dependent sensitivity parameters. All other aspects of the model were the same, so we estimated the lag-dependent memory strengths, random-walk thresholds, and criterion-element parameters.
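A minimal sketch of this simplified similarity assumption (hypothetical letters and an arbitrary value of s; only item identity matters, not MDS coordinates). The resulting similarities would then feed into Eqs. (3) and (4) exactly as before:

```python
import numpy as np

def discrete_similarities(probe, memory_set, s=0.05):
    """Simplified similarity for discrete stimuli: 1.0 for a matching item,
    a small free parameter s (0 < s < 1) for any mismatching item."""
    return np.array([1.0 if item == probe else s for item in memory_set])

print(discrete_similarities("K", ["B", "M", "K"]))  # old probe:  [0.05 0.05 1.  ]
print(discrete_similarities("T", ["B", "M", "K"]))  # lure probe: [0.05 0.05 0.05]
```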

Here I illustrate an application of the simplified EBRW model to a well-known data set collected by Monsell (1978; Experiment 1, immediate condition). In brief, Monsell (1978) tested eight subjects for an extended period

in the probe-recognition paradigm, using visually presented consonants as

stimuli. The design was basically the same as the one described in the

previous section of this chapter, except that the similarity structure of the lists






was not varied. A key aspect of Monsell’s procedure was that individual

stimulus presentations were fairly rapid, and the test probe was presented

either immediately or with a brief delay. Critically, the purpose of this procedure was to discourage subjects from rehearsing the individual consonants of

the memory set. If rehearsal takes place, then the psychological recency of

the individual memory-set items is unknown, because it will vary depending

on each subject’s rehearsal strategy. By discouraging rehearsal, the psychological recency of each memory-set item should be a systematic function

of its lag. (Another important aspect of Monsell’s design, which I consider

later in this review, is that he varied whether or not lures were presented

on recent lists. The present applications are to data that are collapsed across

this variable.)

The mean RTs and error rates observed by Monsell (1978) in the immediate condition are reproduced in the top panel of Fig. 4. (The results

obtained in the brief-delay condition showed a similar pattern.) Inspection

of Monsell’s RT data reveals a pattern that is very similar to the one we

observed in the previous section after averaging across the individual tokens

of the main types of lists (i.e., compare to the observed-RT panel of Fig. 2). In

particular, the mean old RTs vary systematically as a function of lag, with

shorter RTs associated with more recently presented probes. Once lag is

taken into account, there is little if any remaining influence of memory-set size on old-item RTs. For new items, however, there is a big effect of

memory-set size on mean RT, with longer RTs associated with larger set

sizes. Because of the nonconfusable nature of the consonant stimuli, error

rates are very low; however, what errors there are tend to mirror the

RTs. Another perspective on the observed data is provided in Fig. 5, which

plots mean RTs for old and new items as a function of memory-set size, with

the old RTs averaged across the differing lags. This plot shows roughly linear

increases in mean RTs as a function of memory-set size, with the positive

and negative functions being roughly parallel to one another. (The main

exception to that overall pattern is the fast mean RT associated with positive

probes to 1-item lists.) This overall pattern shown in Fig. 5 is, of course,

extremely commonly observed in the probe-recognition memory-scanning

paradigm.

Nosofsky et al. (2011) fitted the EBRW model to the Fig. 4 data by using

a weighted least-squares criterion (see the original article for details). The

predicted mean RTs and error probabilities from the model are shown

graphically in the bottom panel of Fig. 4. Comparison of the top and bottom

panels of the figure reveals that the EBRW model does an excellent job of






Figure 4 Observed (top panel) and exemplar-based random walk-predicted data

(bottom panel) for Monsell (1978, Experiment 1). Mean response times (RTs) and error

rates plotted as a function of lag, memory-set size, and type of probe. Observed data

are estimates from Monsell’s (1978) Figs. 3 and 4. Reprinted from Nosofsky, R.M., Little,

D.R., Donkin, C., & Fific, M. (2011). Short-term memory scanning viewed as exemplar-based

categorization. Psychological Review, 118, 290. Copyright 2011 by APA. Reprinted with

permission.



capturing the performance patterns in Monsell’s (1978) study. Mean RTs for

old probes get systematically longer with increasing lag, and there is little

further effect of memory-set size once lag is taken into account. Mean

RTs for lures are predicted correctly to get longer with increases in memory-set size. (The model is also in the right ballpark for the error proportions,

although in most conditions the errors are near floor.) Fig. 5 shows the

EBRW model’s predictions of mean RTs for both old and new probes as






Figure 5 Observed and exemplar-based random walk-predicted set-size functions,

averaged across different lags, for Monsell (1978, Experiment 1). Observed data are

based on estimates from Monsell’s (1978) Figs. 3 and 4. Reprinted from Nosofsky,

R.M., Little, D.R., Donkin, C., & Fific, M. (2011). Short-term memory scanning viewed as

exemplar-based categorization. Psychological Review, 118, 291. Copyright 2011 by APA.

Reprinted with permission.



a function of memory-set size (averaged across differing lags), and the model

captures the data from this perspective as well. Beyond accounting for the

major qualitative trends in performance, the EBRW model provides an

excellent quantitative fit to the complete set of data.

The best-fitting parameters from the model (see Nosofsky et al., 2011)

were highly systematic and easy to interpret. As expected, the memory-strength parameters decreased systematically with lag, reproducing the

pattern seen in the fits to the data from the previous section. The best-fitting

value of the similarity-mismatch parameter (s = 0.050) reflected the low

confusability of the consonant stimuli from Monsell’s experiment. The conceptual explanation of the model’s predictions is essentially the same as

already provided in the previous section.

In sum, without embellishment, the EBRW model appears to provide a

natural account of the major patterns of performance in the standard version

of the probe-recognition paradigm in which discrete alphanumeric characters are used, at least in cases in which the procedure discourages rehearsal

and where item recency exerts a major impact. In addition, I should note

that although the present chapter focuses on predictions and results at the

level of mean RTs, the exemplar model has also been shown to provide successful quantitative accounts of probe-recognition performance at the level

of complete RT distributions. Examples of such applications are provided by


