1 Introduction: Consensus Summary of Dopamine’s Actions in the Circuitry of the Basal Ganglia



D. Bullock

DA neurons—a good example is the retina—but these neurons are not a large proportion of the total, and function as interneurons, with no projections beyond the

area. Recently, Fuxe and colleagues (2010) reviewed the huge literature that has

developed since the A8–A14 clusters were mapped. They reprised impressive evidence that (1) a highly similar mapping applies across a wide range of mammalian

species and (2) DA often works via volume transmission, which utilizes diffusion

well beyond release sites (Rice and Cragg 2008; but see Ishikawa et al. 2013), hence

does not require that the DA release sites be immediately adjacent to the receptors

at which DA acts. Of course, all systemically delivered neuroactive drugs also work

via volume transmission, after crossing the blood–brain barrier. Consistent with this

mode of operation, single DA neurons exhibit remarkably widespread branching,

with multiple axonal bushes, in target areas such as the striatum (e.g., Matsuda et al. 2009). Thus, DA is typically regarded as a nonspecific, “broadcast” signal, highly

distinct from the specific, topographically organized projections found in other neural systems, e.g., at successive stages of processing within a sensory modality, or in

the motor output pathways.

Although DA signals play diverse roles in the neural symphony, one prototypical

and vital role is as a primary mediator of the ancient learning process by which

animals explore novel environments and thereby learn both to choose actions that

are expected to lead to more rewarding outcomes, and to suppress actions expected

to lead to less rewarding or aversive outcomes. Dopamine strongly affects such

learning via its systematic effects on LTD and LTP of glutamatergic synapses

between afferents to striatum and the medium spiny neurons (MSPNs) that project

from striatum to other BG nuclei. However, DA also has strong effects on performance, both motor and cognitive. Its influence on performance is powerfully attested by the tight link between striatal DA loss and

Parkinsonian akinesia, but it is also revealed in much subtler ways, such as a higher

velocity of eye movements to rewarded than to equidistant but non-rewarded targets

(Hong and Hikosaka 2011), and altered reaction time distributions following sleep

deprivation, which have been reproduced in a computational model that includes

dopamine–adenosine interactions in striatum (Bullock and St. Hilaire 2014).

Action selection based on expected outcomes is enabled by mammalian forebrain circuits, among which the striatum and other constituents of the BG (see

Fig. 5.1) have a preeminent status (Swanson 2005; Gurney et al. 2015). Although

DA innervation is densest in striatum, it also reaches many other parts of the brain,

especially parts of the BG, thalamus, and cerebral cortex. Moreover, the innervation

of cerebral cortex is significantly more elaborated in primates than in rodents (Smith

et al. 2014). Because operation of the BG is so critically dependent on dense innervation from DA neurons of cluster A10 (much of which falls in the VTA), A9

(mostly in the SNc), and A8 (mostly in the retrorubral area = RRA), these pools are

regarded as an integral part of the BG system in this chapter. Thus, the BG system

spans cells found in both the subcortical forebrain and the midbrain.

5 Dopamine and Its Actions in the Basal Ganglia System

Fig. 5.1 Basic connectivity of the basal ganglia. Arrowheads indicate glutamatergic links; all others are GABAergic, but MSPNs co-release ENK or SP. STN subthalamic nucleus, FSIN fast-spiking interneuron, MSPN medium spiny projection neuron, D2 dopamine D2 receptor, ENK enkephalin, D1 dopamine D1 receptor, SP substance P, GPe globus pallidus externus, GPi globus pallidus internus, Ret. Nuc. thalamus reticular nucleus of the thalamus, Vb, III, and Va are layers of cerebral cortex. Adapted from Bullock et al. (2009)

DA acts differentially in striatum by facilitating a “direct”, action-promoting pathway, and by simultaneously dis-facilitating an “indirect”, action-opposing pathway (see Fig. 5.1). The same DA signal can have such opponent effects because

DA-recipient cells express either D1-type DA receptors (namely D1 or D5 receptors), which facilitate neural activation, or D2-type receptors (namely D2, D3, or D4

receptors), which dis-facilitate neural activation. The striatal cells of origin of the

direct (GO) and indirect (NO-GO) pathways are variously called medium spiny

neurons (MSNs or MSPNs), or Medium densely Spiny Projection Neurons

(MdSNs). The D1-M4-SP-DYN-GABA-MSPNs of the direct pathway express both

dopamine D1 receptors (D1Rs) and muscarinic m4 receptors (M4Rs), and co-release GABA, substance P (SP), and dynorphin (DYN). The D2-M1-ENK-GABA-MSPNs of the indirect, “NOGO” or “STOP,” pathway express dopamine D2

receptors (D2Rs) and muscarinic m1 receptors (M1Rs), and co-release GABA and

enkephalin (ENK).

As one might expect, the simple D1-MSPN vs. D2-MSPN scheme for striatum,

proposed in seminal works such as Gerfen et al. (1990), does not capture the entire

story of MSPN types and their projections to targets outside striatum (e.g., Surmeier

et al. 1996; Sonomura et al. 2007). Nevertheless, it remains a valid and key starting

point for understanding the system’s fundamental organization (Gerfen and

Surmeier 2011). The differential action of DA on these two opponent pathways,

which is well established for the striatum in primates and rodents and schematized

in Fig. 5.2, appears to be extremely ancient in the animal kingdom. Such opponent

pathways are ubiquitous across the vertebrates (Reiner 2009), including even jawless fish (Grillner and Robertson 2015), and recent reports have argued for a systematic homology between the core vertebrate and arthropod neural circuits for

DA-guided behavior control (Strausfeld and Hirth 2013) and reinforcement learning (Waddell 2013).



Fig. 5.2 How tonically active neurons (TAN) mediate part of the DAergic regulation of medium

spiny neurons (MSPN) in striatum. Acetylcholine (ACh) released by a TAN inhibits MSPN

expressing the dopamine D1 receptor (D1R) via the muscarinic 4 receptor (M4R) and stimulates

MSPN expressing the dopamine D2 receptor (D2R) via the muscarinic 1 receptor (M1R).

Dopamine (DA) released by the substantia nigra pars compacta (SNc) or the ventral tegmental area

(VTA) stimulates MSPN expressing the D1R receptor and inhibits MSPN expressing the D2R

receptor. Dopamine also inhibits TAN via the dopamine D2 receptor. GPe globus pallidus externus,

GPi globus pallidus internus, SNr substantia nigra, pars reticulata


The Dopamine-Acetylcholine Cascade in Striatum

It can be expected that such an ancient neural feature as learned behavior guided by

rewards and punishments would be robustly supported by multiple, partly redundant,

mechanisms in modern brains. Indeed, Fig. 5.2 (adapted from Tan and Bullock

2008a) highlights the fact that in mammals, there is a well-established dopamine-acetylcholine cascade within the striatum. In addition to its direct action on MSPNs,

DA acts via D2Rs to inhibit large ACh-releasing striatal interneurons, which are

alternatively called TANs (tonically active neurons) or ChINs (cholinergic interneurons). A close study of Fig. 5.2 reveals that the actions of DA and ACh are synergistic.

A DA burst will induce TAN pausing, and both the DA increment and the ACh

decrement favor the direct pathway’s D1-MSPNs over the indirect pathway’s

D2-MSPNs; conversely, a DA dip will disinhibit TANs, and both the DA decrement

and the ACh increment favor the indirect over the direct pathway MSPNs. These

opposing synergistic actions are possible because both DA neurons and TANs are

tonically active (“pacemaker”) neurons that can exhibit antiphase bursts and pauses



(Morris et al. 2004), and because DA has opposite actions via D1Rs and D2Rs,

whereas ACh has a reversed set of opposite actions via M1Rs and M4Rs (Kaneko

et al. 2000; Hoebel et al. 2007). A human watchmaker of the old school would

admire the beauty of this machine.
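The opposing synergistic actions described above can be caricatured in a toy rate model. All names, gains, and numerical values below are hypothetical choices made only to mirror the qualitative wiring of Fig. 5.2; only the signs of the interactions come from the text:

```python
# Toy sketch of the striatal DA-ACh cascade of Fig. 5.2 (hypothetical
# gains; only the signs of the interactions follow the text).

def tan_rate(da, baseline=1.0):
    """TANs are tonically active; DA inhibits them via D2Rs, so a DA
    burst produces a TAN pause and a DA dip disinhibits TANs."""
    return max(0.0, baseline - da)

def mspn_drive(da, glut=1.0):
    """Net drive to direct (D1/M4) and indirect (D2/M1) MSPNs.
    DA: + via D1R, - via D2R.  ACh: - via M4R on D1-MSPNs,
    + via M1R on D2-MSPNs."""
    ach = tan_rate(da)
    d1 = glut + da - ach  # facilitated by DA, dis-facilitated by ACh
    d2 = glut - da + ach  # dis-facilitated by DA, facilitated by ACh
    return d1, d2

burst_d1, burst_d2 = mspn_drive(da=2.0)  # DA burst, hence TAN pause
dip_d1, dip_d2 = mspn_drive(da=0.0)      # DA dip, hence TAN disinhibition

assert burst_d1 > burst_d2  # both effects favor the direct (GO) pathway
assert dip_d2 > dip_d1      # both effects favor the indirect (NO-GO) pathway
```

The point of the sketch is that the DA term and the ACh term always push the D1-versus-D2 balance in the same direction, which is the synergy described above.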

The robustness-promoting redundancy probably has several further components.

For example, the DAergic projection from the ventral tegmental area (VTA) to the

nucleus accumbens (NAcc) is complemented by a GABAergic projection, and

Cohen et al. (2012) presented data indicating that all VTA GABA neurons (presumably including those projecting to NAcc) showed sustained increases in activity

during an interval between onset of a reward-predicting odor-cue and actual reward

delivery. Since the VTA GABAergic projection to NAcc synapses preferentially on

TANs (Brown et al. 2013), this projection’s effect in striatum is synergistic with the

effect of the DAergic projection: it promotes the direct pathway while opposing the

indirect pathway.

The Fig. 5.2 circuit helps to explain a wide range of effects. For example, both

DA agonists and acetylcholine (ACh) antagonists can help normalize function in a

striatum suffering from DA depletion, e.g., in the striatum of patients with Parkinson’s

Disease (PD). Early findings of a critical role for striatal DA loss in PD (Hornykiewicz

1973) have been abundantly supported (e.g., Iversen and Iversen 2007), and it has

been verified that some human DA cell populations that project to striatum, such as

those in the ventral tier of the substantia nigra, pars compacta (SNc), are usually lost much

earlier in the disease process than other DA cell populations, such as those in the

VTA (Damier et al. 1999) or (in the primate MPTP model of PD) in the periaqueductal gray (PAG) (Shaw et al. 2010). The DA-ACh cascade in Fig. 5.2 has also been

strongly implicated in dystonia. Recent research (e.g., Sciamanna et al. 2014;

Jaunarajs et al. 2015) indicates that DYT1-type dystonia depends on a genetic mutation that flips the sign of action of DA in the striatal DA-ACh cascade: the mutation

makes D2R activation excitatory to striatal TANs, not inhibitory. This affects not just

performance but also learning, because some DA- and D2R-dependent learning

effects, once attributed solely to direct DA action on D2-MSPNs, are mediated by

D2Rs on TANs (Wang et al. 2006). The reader is referred to Chap. 7 in this volume

for further discussion on the possible role of the basal ganglia in dystonia.

However, the Fig. 5.2 circuit is not the whole story, even for striatum, and DA

loss in other parts of BG also contributes to motor disorders (Rommelfanger and

Wichmann 2010). More broadly, there are clinically important differences between

primates and rodents in DAergic innervation beyond the BG (Smith et al. 2014).

Notably, there is much greater DAergic innervation of motor cortex from SNc in

primates than in rodents (Berger et al. 1991; Williams and Goldman-Rakic 1998).

In consequence, DA loss in humans may have dramatic motor effects beyond the

striatum and other BG nuclei. A further caveat is that DA cell loss is often accompanied by cell loss in other monoaminergic nuclei of the midbrain/brainstem

(Surmeier and Sulzer 2013), and some animal models involve a PD-like syndrome

with cell loss restricted to such nuclei, e.g., the locus coeruleus (Delaville et al.

2011). More generally, many effects of DA loss on motor and cognitive performance can be partly mimicked by loss of other neuromodulators.




Multiple Components Found in Dopamine Neuron Signals


DA neurons operate in several modes. They are spontaneously active pacemakers,

and the associated tonic release of DA is vital for normal performance of actions

mediated by BG circuits. Rapid progress in understanding the learning effects of

DA was catalyzed by the discovery that the DA signal in SNc/VTA also has distinct

phasic components, which are responsive to learning. In addition to the tonic component associated with pacemaker firing, Schultz and colleagues (e.g., Schultz

1998) observed burst and dip components that reflect positive and negative reward

prediction errors (R-PEs). Fiorillo et al. (2003) later discovered an uncertainty component (of the DA signal in SNc and VTA) that is maximal when the odds of a favorable vs. unfavorable outcome are even (p = 0.5 for either). The same component is

often called a risk signal.
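The shape of this uncertainty component is what one would expect of a signal tracking outcome variance: for a binary reward of fixed magnitude r delivered with probability p, the variance r²p(1 − p) is maximal at p = 0.5. A one-function sketch (values are illustrative, not data):

```python
# Variance of a binary reward of size r delivered with probability p.
# It peaks at p = 0.5, the condition under which Fiorillo et al. (2003)
# observed the maximal uncertainty ("risk") component of the DA signal.

def outcome_variance(p, r=1.0):
    return (r ** 2) * p * (1.0 - p)

ps = [i / 10 for i in range(11)]  # p = 0.0, 0.1, ..., 1.0
assert max(outcome_variance(p) for p in ps) == outcome_variance(0.5)
```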


Dopamine as an Internal Reinforcement Signal

A consensus has emerged that the phasic components of the DA signal—bursts and

dips—have all the characteristics of an internal reinforcement signal, i.e., an internal signal that always shows appropriate properties when events that constitute

positive or negative reinforcers occur. Event types that constitute positive or negative reinforcers have been established in behavioral studies of reinforcement learning in both classical (Pavlovian) and operant conditioning paradigms. Rewards that

are not completely predictable in timing and magnitude elicit a DA burst response

in SNc and VTA (Schultz 1998, 2013; Bermudez and Schultz 2014), whereas onset

of an aversive input elicits a DA pause response (Tan et al. 2012; Mileykovskiy and

Morales 2011; Fiorillo 2013; Fiorillo et al. 2013). Also, the offset of an aversive

stimulus—a strong negative reinforcer of learned avoidance responses—induces

rebound DA release (Budygin et al. 2012; Navratilova et al. 2012; Fiorillo et al.

2013). It has been shown that bored animals will work to earn presentations of

novel, non-aversive stimuli (they are positive reinforcers), and such stimuli elicit

DA bursts (e.g., Bromberg-Martin et al. 2010) until their novelty wears off (Lloyd

et al. 2014). Similarly, both the burst responses of DA neurons and the reinforcing

power of a primary reward wane with satiation for that reward (Cone et al. 2014;

Ostlund et al. 2011).

Moreover, it has been shown, mostly through classical conditioning paradigms,

that when a cue, cue-A, reliably predicts a subsequent reward, cue-A by itself can serve as

a (conditioned) reinforcer. Such reward-predicting cues also elicit DA bursts. After

such training with cue-A, the introduction of a redundant cue-B, coincident with

cue-A, does not lead to any new learning about cue-B, a phenomenon known as

blocking. Notably, cue-B does not become a conditioned reinforcer. This suggests

that after cue-A is established as a reliable predictor of reward, and cue-B coincident with cue-A is followed by that reward, that reward is no longer a reinforcer in



the context of cue-A. Indeed, once cue-A is established as a reliable predictor of

reward, the reward itself no longer elicits a DA burst (Schultz 1998, 2013). This

effect is graded: to the extent that cue-A is a less than perfectly reliable predictor (because the exact timing, magnitude, or probability of reward is not certain), a

second cue-B can be learned. Correspondingly, such uncertainty leads to less than

complete suppression of the DA cells’ burst responses to reward, and the residual

burst response to reward appears to depend more on probability than reward size (cf.

Tan et al. 2008). Finally, if a conditioned reinforcer cue-A is ever not followed by

the expected reward, it begins to extinguish as a conditioned reinforcer. This suggests the existence of an internal signal of opposite sign, and indeed, every such

presentation of cue-A followed by omission of the expected reward induces a DA

dip (Schultz 1998, 2013). From such correspondences, and the mediation of positive reinforcement learning by D1 and D2 receptors (e.g., Steinberg et al. 2014), it

appears that the phasic components of the DA signal observed in SNc and VTA, and

in striatal zones that receive the signal in the form of increments or decrements of

DA release, are suitable to guide reinforcement learning of the type seen in behavioral studies with many species of animals.
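The blocking effect, its graded release when cue-A is an imperfect predictor, and the burst/dip bookkeeping described above are exactly the behavior of error-driven learning rules such as Rescorla-Wagner. The following minimal sketch is standard textbook material rather than this chapter's own model; the learning rate and trial counts are illustrative assumptions, and the per-trial prediction error plays the role the text assigns to phasic DA:

```python
# Minimal Rescorla-Wagner sketch of blocking.  The per-trial prediction
# error (reward minus summed cue predictions) is large early in cue-A
# training, shrinks as cue-A becomes a reliable predictor, and so leaves
# nothing for a redundant cue-B to learn.

def train(weights, cues, reward, lr=0.2, trials=100):
    for _ in range(trials):
        prediction = sum(weights[c] for c in cues)
        pe = reward - prediction      # reward prediction error
        for c in cues:
            weights[c] += lr * pe     # error-gated weight change
    return weights

w = {"A": 0.0, "B": 0.0}
train(w, cues=["A"], reward=1.0)       # phase 1: A alone predicts reward
train(w, cues=["A", "B"], reward=1.0)  # phase 2: redundant B added

assert w["A"] > 0.9        # A is a strong predictor
assert abs(w["B"]) < 0.05  # B is blocked: PE was already near zero
```

If phase 1 left cue-A an unreliable predictor (e.g., reward delivered on only some trials), the residual prediction error would let cue-B acquire weight, matching the graded unblocking described above.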

Associative learning has been shown to depend on more than the dopaminergic

reward prediction error signal. Notably, it also depends on an arousal or attentional

signal that is high when surprising outcomes occur (cf. Song and Fellous 2014).

Recently, evidence has begun to accumulate that these arousal signals are present in

the basolateral amygdala (BLA), which projects strongly to the ventral striatum.

Moreover, the BLA arousal signal itself depends on DAergic R-PE signals sent to

BLA (Esber et al. 2012). Thus, DAergic R-PE signals can affect striatum via the

direct projections from VTA/SNc as well as indirectly via the BLA.


Reward Prediction Errors, Punishment Prediction Errors, or Both?

Because of the burst and dip components of DA neurons, the hypothesis was

advanced that the phasic components of the DA signal constitute a reward prediction error signal: a burst occurs whenever an outcome is better than expected, and a

dip whenever an outcome is worse than expected. As already noted, an unexpected

aversive event causes a dip in DA neuron firing. Suppose that a cue-C is followed

reliably by an aversive event. Will that cue-C come to elicit a DA firing dip, and will

the aversive event itself no longer cause a DA dip on trials when cue-C is presented

as predictor of the aversive event? If the answer to these questions were yes, for

at least some DA neurons that also show R-PE signals to rewarding cues and events,

then it could be claimed that such DA cells signal a full range of value prediction

errors, whether the events involved are aversive or rewarding. This question is still

unsettled. Fiorillo (2013) showed that many DA neurons in dorsal SNc do not code

prediction errors for aversive stimuli. Though they do show dips in response to

aversive stimuli, they do not stop responding to cue-signaled aversive stimuli once



the animal has learned the predictive status of the cue. From these studies, Fiorillo

concluded that the prediction error processing systems for reward must be separate

from that for aversive/punishing events: there are two dimensions, rather than a

single dimension with both negative and positive regions. Below, this “separate

dimensions” conclusion is endorsed, but with the caveat that separable DA cell clusters probably mediate the separate A-PE (aversive prediction error) signaling.

Indeed, Fiorillo’s exclusion of DA cells from the latter system has been challenged

(Morrens 2014) on grounds that Fiorillo (2013) recorded very few cells in VTA,

which in some other studies (e.g., Matsumoto and Hikosaka 2009; Matsumoto and

Takada 2013) has been shown to have a higher percentage of DA neurons that

respond to both rewards and aversive events.

Although both Fiorillo (2013) and Morrens (2014) state that no one has identified A-PE cells, striatal A-PE signals have been reported (e.g., Delgado et al. 2008),

and others report that A-PE cells, as such, have been identified, but remain understudied relative to DA neurons in VTA and SNc. Johansen et al. (2010) and McNally

et al. (2011) summarized rodent data indicating that an A-PE is computed in the

vlPAG (ventrolateral periaqueductal grey). In this system, the learned, cue-dependent expectation of an aversive outcome appears to be mediated in part by

release of an endogenous opioid, which is capable of canceling the effect on vlPAG

neurons of an ascending pain signal (Cole and McNally 2007; Krasne et al. 2011).

Roy et al. (2014) reported analyses of human functional magnetic resonance imaging (fMRI) data that supported the hypothesis regarding PAG (fMRI resolution was

insufficient to isolate vlPAG), while also ruling out several other candidate areas,

such as the ventral striatum, as sites that compute A-PEs.

Whereas in the fear conditioning model of Krasne et al. (2011), which is based

mostly on rodent data, the source of learned expectations sent to PAG is the CeA

(central nucleus of the amygdala), the human fMRI study of Roy et al. (2014)

implicated the putamen and vmPFC. However, there may be no cross-species

discrepancy because the CeA, a key part of the EA (extended amygdala; Zahm

et al. 2011), borders the putamen, and like putamen, can be classified as a striatal

territory (Swanson 2000), in which the dominant type of cells are MSPNs that

receive a convergence of glutamatergic inputs (from cortex and pyramid-rich amygdalar nuclei, notably BLA) and ascending DAergic inputs from the midbrain.

Indeed, the lateral CeA, lCeA, which is a key site of fear conditioning and is medial

to and continuous with the putamen, contains GABAergic and somatostatin-positive long-range projection neurons that directly inhibit PAG neurons (Penzo et al.

2014; Penzo et al. 2015). Finally, although McHugh et al. (2014) report blood

oxygenation level-dependent (BOLD) and local field potential (LFP) responses (but not

single unit responses) in basolateral amygdala (BLA) that reflect A-PEs, this is

consistent with the proposal that the primary A-PE computation occurs in PAG. The

multiple pathways by which PAG output affects BLA, another major site of fear

learning, remain to be established, but one via mid and intralaminar thalamus is a

good candidate, because it has been implicated in mediation of the PE-dependent

blocking effect in fear conditioning (Sengupta and McNally 2014).



One caveat noted by McNally et al. (2011) is that whereas the A-PE cells of

vlPAG exhibit robust positive prediction errors, they have not been shown to exhibit

responses (e.g., pauses) that are indicative of negative prediction errors. However,

Berg et al. (2014) have recently reported that neurons in the adjacent dorsal raphe

nucleus (DRN) do exhibit robust responses to negative A-PEs. They further showed

that lesions of DRN did not impair fear acquisition on deterministic schedules, but

did impair learning during fear extinction and during adaptation to Pavlovian fear

conditioning that used probabilistic CS-US contingencies. This selectivity is just

what is expected if DRN mediates negative but not positive A-PE signals.

Furthermore, the DRN innervates both BLA and CeA sectors of the amygdala.

Such data immediately raise the question of whether DA neurons are critically

involved in the PAG/DRN system for computing A-PEs and projecting PE signals

to learning sites in the EA. In fact, there is a continuous vein of DA neurons within

the vlPAG and adjacent retrorubral area that is known as dcA10 (Hasue and

Shammah-Lagnado 2002; Yetnikoff et al. 2014), i.e., the dorso-caudal compartment of A10 (whereas the main compartment of the DA neuron population known

as A10 is in the VTA). Three classes of DA cells are known to exist in vlPAG, and

its DA cells have been implicated as mediators of PAG’s role in opioid reward and

reduction of nociception (Flores et al. 2006; Dougalis et al. 2012; see also Messanvi

et al. 2013, which has implicated an additional DAergic projection from A13 in

opioid effects). Moreover, Hasue and Shammah-Lagnado (2002) reported that

nearly half of the tyrosine hydroxylase-labeled fibers in CeA originated in the

vlPAG. Such tyrosine hydroxylase fibers are usually indicative of neurons that

release DA, and Poulin et al. (2014) reported that their DA neuron subtype DA2D

was localized in PAG and DRN and projected to two territories, the striatum-like

lateral central amygdala (lCeA) and the pallidum-like oval portion of the bed

nucleus of the stria terminalis (oBST), but not to other striatal or pallidal territories.

Because of this specificity of projection, DAergic A-PEs could have appropriately

different effects than DAergic R-PEs arising in SNc or the main part of VTA. Although

definitive research appears to be lacking, an otherwise puzzling observation consistent with this possibility is the finding (Flores et al. 2006) that D2R blockade in

vPAG (and adjacent DAergic RLi) dose-dependently opposed the rewarding effects

of opioids. If this effect were assumed to be mediated by D2Rs acting as inhibitory

autoreceptors on DA cells that signal R-PEs, it is very puzzling. If instead these DA

cells signal A-PEs, the result is as expected: D2R blockade would lead to greater

DA release in lCeA that would oppose opioid reward by promoting learned aversion. Such direct competition between the processing of rewarding and aversive

stimuli has been demonstrated in recent studies (Choi et al. 2014; Namburi et al.

2015). If verified, the hypothesis of A-PE-mediating DA cells, in vPAG/DRN, that

project uniquely to both lCeA and oBST is of great interest. Both areas are strongly

implicated in conditioned fear and anxiety (Day et al. 2005, 2008; Haubensak et al.

2010; Fox et al. 2015).

Although direct activation of identified DA cells in vlPAG by aversive cue onsets

has not yet been reported, there have been such reports for some other A10 subpopulations, e.g., a subset of VTA dopamine neurons (Gore et al. 2014; Brischoux



et al. 2009) that are important for normal fear conditioning (Zweifel et al. 2011).

Relatedly, increments of DA release to aversive cue onsets have been observed in

the shell of NAcc (Badrinarayan et al. 2012). Finally, Poulin et al. (2014) noted that

their Vip-expressing DA2D pool in PAG/DRN did not project to cortex, and Flores

et al. (2006) noted three total (non-NE) TH-labeled neuron types in the vPAG/

DRN. One that is DAergic has projections to PFC and has been implicated in

ascending arousal and control of waking (Lu et al. 2006). It has also been suggested

(Misu et al. 1996) that some of the TH-labeled neurons of dcA10 are DOPAergic

but not DAergic; they release DA’s endogenous precursor, L-DOPA, instead of

DA. This is of interest because L-DOPA as such has been shown to act as a transmitter (Misu et al. 2002; Porras et al. 2014). In striatum, it can act via D2 receptors on

TANs (see Fig. 5.2) to reduce ACh release.

Figure 5.3 summarizes the emerging picture regarding prediction error (PE)

computations involving DA neurons in SNc and VTA (left column), and vlPAG

(middle column), corresponding respectively to the Poulin et al. (2014) types DA1A

(ventral tier SNc), DA1B (dorsal tier SNc), DA2A and DA2B (in VTA), and DA2D (in

PAG/DRN). The rightmost column in Fig. 5.3 makes the point that PE computation

is not exclusive to DA neurons. As exemplified here, it is also performed by non-DAergic neurons in the olivary nuclei, another ancient subcortical region. In all, the three columns in Fig. 5.3 cover four sites for computing PEs in “Pavlovian”

(CS-US) learning paradigms. In each case, a neural stage compares a learned centrifugal inhibitory expectation with an unlearned centripetal excitation to compute a

PE that serves as a “teaching signal.” The comparisons respectively involve: convergence of CS-induced inhibitory dorsal or ventral striatal output and rewarding-US-induced excitatory inputs to DAergic R-PE cells of the SNc/VTA; convergence of

inhibitory CeA output and excitatory (nociceptive) US inputs to proposed DAergic

A-PE cells of the vlPAG; and convergence of inhibitory deep-cerebellar (DNC)

output and excitatory US input to glutamatergic PE neurons of the olivary nuclei,

which are the source of the climbing fiber signals that gate learning in the cerebellar cortex (Medina et al. 2002). There is growing evidence that similar “neural

comparators” enable PE computations in cerebral cortex (Berteau et al. 2013).
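Each of the four comparator sites just listed performs the same elementary operation: an unlearned, centripetal excitation minus a learned, centrifugal inhibitory expectation. A generic sketch follows; the names and the tonic baseline value are hypothetical, not taken from any of the cited circuits:

```python
# Generic "neural comparator": teaching signal = unlearned excitation
# minus learned inhibitory expectation.  Output is rectified around a
# tonic baseline, so negative PEs appear as pauses (dips) relative to
# baseline rather than as negative firing rates.

def comparator(us_excitation, learned_inhibition, baseline=0.2):
    pe = us_excitation - learned_inhibition  # prediction error
    return max(0.0, baseline + pe)           # rate-coded output

assert comparator(1.0, 0.0) > 0.2   # unexpected US: burst above baseline
assert comparator(1.0, 1.0) == 0.2  # fully predicted US: baseline only
assert comparator(0.0, 1.0) == 0.0  # omitted expected US: pause (dip)
```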

Further evidence that the two DAergic circuits in Fig. 5.3 mediate reward vs.

aversion learning comes from studies showing that the NAcc-VTA system and the

CeA-PAG system have opponent properties (Namburi et al. 2015; Nasser and

McNally 2013). Nevertheless, it is vital to remember that the amygdala system, as a

whole, mediates the assignment of salience to a full range of motivationally relevant

cues, not only those that predict punishment. Notably, much research (e.g., Esber

et al. 2015) has implicated a projection from CeA via SNc to the dorsolateral striatum (DLS) both in reward-guided learning of conditioned orienting responses and

in the enhanced attention accorded to surprising omissions of expected stimuli.

Altered DA release in DLS by fibers from SNc is a common factor in these learning

and performance effects.

Fig. 5.3 Comparisons of inhibitory expectation signals with excitatory stimulus-induced signals are mediated by dopamine neurons of VTA or SNc (left), dopamine neurons of the ventrolateral periaqueductal grey (middle; vlPAG), and by glutamate-releasing neurons of the olivary nuclei (right; IO and DAO). MSPN medium spiny neuron, DA dopamine, GLU glutamate, DNC deep cerebellar nucleus, CBM cerebellum, PE prediction error

In summary, for many years mammalian research implicated DA in R-PE computations and appetitive learning. Recent data suggest an equally pivotal role for DA in A-PE computations and aversion learning. For arthropods (e.g., Drosophila),

research proceeded in the opposite order. Early studies implicated DA in aversion

learning, but recent research shows an equally vital role in appetitive learning

(Waddell 2013).


Dopamine Cell Firing Rate Is Only One Factor Controlling Dopamine Release Amounts

Charting the relationship between the behavior of DA neurons and actual release of

DA from fiber terminals in striatum or other brain areas has proven to be surprisingly complex. This is because several distinct factors act on DA fiber terminals to

modulate or gate release (Zhang and Sulzer 2012; Cachope and Cheer 2014). For

example, Howland et al. (2002) and Jones et al. (2010) have reported evidence that

activation of glutamatergic fibers projecting from BLA to NAcc caused release of

DA in NAcc, even when the VTA was inactivated with lidocaine. In contrast,

Taepavarapruk et al. (2008) reported that activation of glutamatergic fibers from

hippocampus to NAcc enhanced DA release in NAcc only if the VTA was



coincidently activated. Threlfell and colleagues (2011, 2012) have reported that

ACh release from TANs strongly affects striatal DA release, and does so differently

in ventral vs. dorsal striatum. Brimblecombe and Cragg (2015) presented evidence

from mice that striatal DA release is partly controlled by striatal SP, in a way that

varies across three chemically defined striatal compartments (Graybiel and Ragsdale

1978; Faull et al. 1989). Notably, SP promoted DA release in striosome centers,

opposed DA release in striosome-matrix border zones, and had no effect on DA

release in the striatal matrix. This suggests that SP-sensitive neurokinin receptors

are expressed in DA neurons projecting to striosomes, but not in those projecting to

matrix. This aligns well with the finding (Gerfen et al. 1987) that the midbrain DA

neurons projecting to striosomes (aka striatal patches) are segregated from those

projecting to the matrix. In particular, a large proportion of striosome-projecting DA

neurons were found in the ventral tier of the SN, which is also the locus of the DA

neurons that are most vulnerable in human PD (Damier et al. 1999). Finally, it

should be noted that once released, DA acts for shorter or longer intervals, and at sites nearer to or farther from terminal release sites, depending on site-specific factors such as local diffusion rates and dopamine transporter (DAT) levels. Across

the ventromedial to dorsolateral axis of the striatum, there is sufficient covariation

of terminal density (hence number of release sites) and DAT expression to imply

significantly different signal dynamics, and, presumably, related effects on synaptic

learning processes that are gated by DA (Wickens et al. 2007; Patrick et al. 2014).


Does the Magnitude of Dopamine Release Indicate the Subjective Utility of an Option?

After training with reward-predicting cues (Fiorillo et al. 2003; Tobler et al. 2005),

the magnitude of DA single neuron and DA population burst responses to cues

scales with the expected value, i.e., the product of reward size and the conditional

probability of reward given the cue, p(reward|cue). Such results suggest, but do not

entail, that DA might serve as the “common currency” used to weight options prior

to decision-making. However, there appear to be limitations of ventral striatal DA

release as a predictor of action selection when response costs are significant (e.g.,

Hollon et al. 2014). Moreover, there is abundant evidence that there are both

DAergic and non-DAergic evaluation systems in the brain (e.g., Dranias et al. 2008;

Brooks et al. 2010).
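The expected-value scaling reported by Fiorillo et al. (2003) and Tobler et al. (2005) is simply magnitude times conditional probability. The magnitudes and probabilities below are made-up illustrations, not experimental values:

```python
# Expected value of a reward-predicting cue: reward magnitude times the
# conditional probability of reward given the cue, p(reward|cue).
# Numerical values here are illustrative only.

def expected_value(magnitude, p_reward_given_cue):
    return magnitude * p_reward_given_cue

cue_small_certain = expected_value(0.5, 1.0)  # small, certain reward
cue_large_rare = expected_value(2.0, 0.25)    # larger, unlikely reward

# Equal expected values predict comparable cue-evoked burst magnitudes,
# even though magnitude and probability differ between the cues.
assert cue_small_certain == cue_large_rare == 0.5
```

Note that this captures only the benefit side of the computation; as discussed next, actual action preference also depends on response costs.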

A well-known result from the operant conditioning literature is that an animal

will switch its preference from an option A, which gives a larger reward that is

earned by more responses, to an option B, which gives a smaller reward for fewer

responses, if the difference in the response costs is large enough. In short, action

preference depends on a cost–benefit analysis, not solely on the expected benefit.

Evidence suggests that DA release is important to motivate choices that entail
