11.5 Compatibility, Convexity, and Coarse-Graining


11 Joint Measurability

how far away an incompatible pair of effects E, F are from a suitable “nearby” pair

of compatible effects.

While the compatibility characterisation of Proposition 11.5 is operational (in

the sense just described) and efficiently decidable via a semidefinite programme, it

does nothing to elucidate the relationship—presumably of a trade-off—between the

degree of noncommutativity of a pair of compatible effects E, F and their degrees

of unsharpness without which they could not be compatible. This trade-off will be

investigated for pairs of specific observables in later chapters, and it will be found,

for example, in the case of qubit observables, that in addition to unsharpness and

the degree of noncommutativity, other features are relevant for the compatibility of such pairs.

To conclude this chapter, we present a simple scheme of turning a set of effects or

observables into a jointly measurable set. First we observe that in line with Remark

11.3, a set of effects {E 1 , . . . , E n } can be turned into a compatible set simply by

“scaling down” each effect with a suitable factor κ ∈ (0, 1). A trivial way of ensuring

the joint measurability of {κE 1 , . . . , κE n } is by choosing κ small enough such that

κE 1 + κE 2 + · · · + κE n ≤ I . However, it is not necessary for compatibility to force

the sum to be bounded by the identity operator; in fact, for two effects E, F and the

choice κ = 1 − μ = λ₀⁻¹, one will not have, in general, κE + κF ≤ I (see Exercise

13 of Chap. 14).
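The "scaling down" scheme can be sketched numerically. The following is a minimal illustration (assuming NumPy; the random effects and the explicit (n+1)-outcome joint observable are illustrative constructions, not taken from the text): choosing κ so that κ(E₁ + · · · + Eₙ) ≤ I lets one exhibit a single observable containing every κEᵢ in its range.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def random_effect(dim):
    """Generate a random effect: a positive operator E with 0 <= E <= I."""
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    P = A @ A.conj().T                              # positive semidefinite
    return P / (np.linalg.eigvalsh(P)[-1] + 1e-9)   # rescale so that E <= I

dim, n = 2, 4
effects = [random_effect(dim) for _ in range(n)]

# Choose kappa so that kappa*(E_1 + ... + E_n) <= I.
S = sum(effects)
kappa = 1.0 / np.linalg.eigvalsh(S)[-1]

# {kappa*E_1, ..., kappa*E_n, I - kappa*S} is then an (n+1)-outcome
# observable containing every kappa*E_i in its range, so the scaled
# effects are jointly measurable.
povm = [kappa * E for E in effects] + [np.eye(dim) - kappa * S]

assert np.allclose(sum(povm), np.eye(dim))                    # normalisation
assert all(np.linalg.eigvalsh(G)[0] >= -1e-9 for G in povm)   # positivity
```

As the text notes, this sufficient condition is far from necessary for compatibility.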

Proposition 11.6 Let E and F be two observables on a finite outcome set, and let
0 ≤ μ ≤ 1. Then μE + (1 − μ)T and (1 − μ)F + μT′ are compatible for any choice
of trivial observables T and T′ on the same outcome set.

Proof First, let p and p′ denote the probability distributions associated with T and
T′. We define an observable G by the formula

Gρ(j, k) = μEρ(j) p′(k) + (1 − μ) p(j) Fρ(k).    (11.5)

For a fixed ρ, the right hand side is clearly a probability distribution. Moreover,
ρ ↦ Gρ defines an affine mapping on S(H); therefore G is an observable. The
marginal observables are

Σₖ G(j, k) = μE(j) + (1 − μ)p(j)I,    Σⱼ G(j, k) = (1 − μ)F(k) + μp′(k)I.


The physical idea behind this construction is the following. For each run of the
measurement a coin is flipped and, depending on the result, one measures either E
or F in the input state ρ. In this way one obtains a measurement outcome for either
E or F. In addition to this, one makes a random selection (according to either the
distribution p′ or p) of an outcome for the other observable. In this way one obtains
an outcome for both observables E^(μ) = μE + (1 − μ)T and F^(1−μ) = (1 − μ)F + μT′
simultaneously. The overall joint observable is the one given in formula (11.5).
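The construction in the proof of Proposition 11.6 is easy to check numerically. The sketch below (assuming NumPy; the choice of sharp qubit observables built from the Pauli matrices, and of uniform trivial observables, is an illustrative assumption) builds the joint observable G and verifies the marginal relations:

```python
import numpy as np

# Two incompatible binary qubit observables: sharp sigma_x and sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
E = [(I2 + sx) / 2, (I2 - sx) / 2]      # observable E, outcomes j = 0, 1
F = [(I2 + sz) / 2, (I2 - sz) / 2]      # observable F, outcomes k = 0, 1

mu = 0.5
p  = [0.5, 0.5]     # distribution of the trivial observable T
pp = [0.5, 0.5]     # distribution of the trivial observable T'

# Joint observable of Proposition 11.6:
#   G(j, k) = mu * E(j) * p'(k) + (1 - mu) * p(j) * F(k)
G = {(j, k): mu * E[j] * pp[k] + (1 - mu) * p[j] * F[k]
     for j in range(2) for k in range(2)}

# Marginals: sum_k G(j, k) = mu*E(j) + (1-mu)*p(j)*I  and
#            sum_j G(j, k) = (1-mu)*F(k) + mu*p'(k)*I
for j in range(2):
    marg = sum(G[(j, k)] for k in range(2))
    assert np.allclose(marg, mu * E[j] + (1 - mu) * p[j] * I2)
for k in range(2):
    marg = sum(G[(j, k)] for j in range(2))
    assert np.allclose(marg, (1 - mu) * F[k] + mu * pp[k] * I2)
assert np.allclose(sum(G.values()), I2)
```

Each G(j, k) is a sum of positive multiples of effects, hence itself an effect, so G is indeed an observable.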



Remark 11.4 The method of turning incompatible pairs of observables into compatible ones by adding random noise can be used to define a measure of the degree
of incompatibility. We illustrate this for binary observables, or simply for their generating effects E, F. The degree of incompatibility of E, F, denoted inc(E, F), is
1 − λ∗, where λ∗ is the supremum of the numbers λ ∈ [0, 1] such that

E^(λ) = λE + (1 − λ)½I,    F^(λ) = λF + (1 − λ)½I

are compatible. It can be shown that λ∗ is in fact the maximum of this set of λ
(Exercise). Then E, F are compatible if and only if inc(E, F) = 0.

Using the joint measurability condition for a pair of unbiased qubit effects reported
here as Proposition 14.1, it has been shown by Banik et al. [9] that any pair of effects
E, F in a finite dimensional Hilbert space is turned into a compatible pair E^(λ), F^(λ)
for some λ ≥ 1/√2. This limiting value is necessary: if, say, E, F are spectral
projections of the Pauli operators σ1, σ2, then E^(λ), F^(λ) are compatible if and only
if λ ≤ 1/√2.
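The Pauli threshold can be checked against the joint measurability criterion for unbiased qubit effects. The sketch below assumes the criterion in the form commonly quoted in the literature (for E = ½(I + a·σ), F = ½(I + b·σ), compatibility holds iff ‖a + b‖ + ‖a − b‖ ≤ 2); NumPy and the grid search are illustrative choices:

```python
import numpy as np

def compatible_unbiased(a, b):
    """Joint measurability test for unbiased qubit effects
    E = (I + a.sigma)/2, F = (I + b.sigma)/2 (criterion of the form of
    Proposition 14.1): compatible iff |a + b| + |a - b| <= 2."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.linalg.norm(a + b) + np.linalg.norm(a - b) <= 2 + 1e-12

# Spectral effects of sigma_1 and sigma_2 have Bloch vectors e1, e2;
# E^(lam), F^(lam) then have Bloch vectors lam*e1, lam*e2.
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

lams = np.linspace(0, 1, 2001)
compat = [lam for lam in lams if compatible_unbiased(lam * e1, lam * e2)]
lam_star = max(compat)
print(lam_star)   # close to 1/sqrt(2) = 0.7071...
```

Indeed λ(‖e1 + e2‖ + ‖e1 − e2‖) = 2√2 λ ≤ 2 exactly when λ ≤ 1/√2.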

Refinements of the above definition can be used to compare the degree of incompatibility inherent in quantum mechanics with that of a set of more general probabilistic theories [9–11].

Remark 11.5 It is important to note that a joint measurement of a set of appropriately
rescaled effects {κE_i} does not constitute a joint measurement of the original set of
effects {E_i}, despite the fact that the probabilities of the E_i can be computed once
one has determined those of the κE_i. These sets of effects represent physically different collections of experimental events. The former set can occur as possible outcomes
within a common measurement; the latter cannot if the E_i are incompatible.

11.6 Exercises

1. A collection of effects is called compatible if they occur in the range of a common

observable. Show that effects E, F are compatible if and only if there are effects

G, H , such that G ≤ E ≤ H, G ≤ F ≤ H, G + H = E + F, or, equivalently,

if and only if there is an effect G such that E + F − I ≤ G ≤ E, F.

2. Prove Proposition 11.4.

3. Prove that two effects in a Hilbert space H are compatible if and only if they are

compatible as effects in the closed subspace spanned by the union of their ranges.

4. Let H = ℂ³ and let {|1⟩, |2⟩, |3⟩} be its orthonormal basis. Define orthonormal
unit vectors ψ1 = (|1⟩ + |2⟩ + |3⟩)/√3, ψ2 = (|1⟩ + α|2⟩ + α²|3⟩)/√3 and
ψ3 = (|1⟩ + α²|2⟩ + α|3⟩)/√3, where α = exp(2πi/3). Show that the set of
effects {A1, . . . , A6} = {½|1⟩⟨1|, ½|2⟩⟨2|, ½|3⟩⟨3|, ½|ψ1⟩⟨ψ1|, ½|ψ2⟩⟨ψ2|,
½|ψ3⟩⟨ψ3|} constitutes a 6-outcome (rank-1) observable A. Define a (rank-2)
observable {B1, B2, B3} = {½|2⟩⟨2| + ½|3⟩⟨3|, ½|1⟩⟨1| + ½|3⟩⟨3|, ½|1⟩⟨1| +
½|2⟩⟨2|}. Since the ranges of A and B belong to the range of A, the observables
A and B are coexistent. Show that B is not a smearing of A. Hence, A and B are
not jointly measurable. This example is from [1].

5. Let E, F be two effects. Show that the set

{λ ∈ [0, 1] | λE + (1 − λ)½I and λF + (1 − λ)½I are compatible}

is closed.
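The normalisation claims in Exercise 4 are easy to verify numerically; a sketch (assuming NumPy; variable names are ad hoc). The six rank-1 effects sum to the identity because {|1⟩, |2⟩, |3⟩} and {ψ1, ψ2, ψ3} are both orthonormal bases, and each B_i is a sum of two of the A_k:

```python
import numpy as np

# Orthonormal basis of C^3 and the Fourier-type vectors of Exercise 4.
alpha = np.exp(2j * np.pi / 3)
e = np.eye(3, dtype=complex)
psi = [(e[0] + a1 * e[1] + a2 * e[2]) / np.sqrt(3)
       for a1, a2 in [(1, 1), (alpha, alpha**2), (alpha**2, alpha)]]

# The psi_j are orthonormal (a discrete Fourier basis).
gram = np.array([[np.vdot(u, v) for v in psi] for u in psi])
assert np.allclose(gram, np.eye(3))

proj = lambda v: np.outer(v, v.conj())
A = [proj(e[k]) / 2 for k in range(3)] + [proj(v) / 2 for v in psi]

# The six rank-1 effects form an observable: they sum to the identity.
assert np.allclose(sum(A), np.eye(3))

# The rank-2 effects B_i are sums of pairs of the A_k and also sum to I.
B = [A[1] + A[2], A[0] + A[2], A[0] + A[1]]
assert np.allclose(sum(B), np.eye(3))
```

The non-trivial part of the exercise, that B is nevertheless not a smearing of A, requires the argument of [1] and is not checked here.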


References

1. Pellonpää, J.-P.: On coexistence and joint measurability of rank-1 quantum observables. J.
Phys. A 47(5), 052002 (2014)

2. Reeb, D., Reitzner, D., Wolf, M.M.: Coexistence does not imply joint measurability. J. Phys.

A 46, 462002 (2013)

3. Lahti, P., Pulmannová, S.: Coexistent observables and effects in quantum mechanics. Rep.

Math. Phys. 39(3), 339–351 (1997)

4. Lahti, P., Pulmannová, S.: Coexistence versus functional coexistence of quantum observables.

Rep. Math. Phys. 47(2), 199–212 (2001)

5. von Neumann, J.: Mathematische Grundlagen der Quantenmechanik. Die Grundlehren der
mathematischen Wissenschaften, Band 38. Springer, Berlin (1968, 1996). Reprint of
the 1932 original. English translation: Mathematical Foundations of Quantum Mechanics.
Princeton University Press, Princeton (1955, 1996)

6. Ylinen, K.: On a theorem of Gudder on joint distributions of observables. In: Symposium on

the Foundations of Modern Physics (Joensuu, 1985), pp. 691–694. World Scientific Publishing,

Singapore (1985)

7. Wolf, M.M., Perez-Garcia, D., Fernandez, C.: Measurements incompatible in quantum theory
cannot be measured jointly in any other no-signaling theory. Phys. Rev. Lett. 103, 230402 (2009)


8. Beneduci, R., Busch, P.: Incompatibility of effects and Bell inequality violations in general

probabilistic models. Unpublished note (2012)

9. Banik, M., Gazi, M.R., Ghosh, S., Kar, G.: Degree of complementarity determines the nonlocality in quantum mechanics. Phys. Rev. A 87(5), 052125 (2013)

10. Busch, P., Heinosaari, T., Schultz, J., Stevens, N.: Comparing the degrees of incompatibility

inherent in probabilistic physical theories. EPL (Europhys. Lett.) 103(1), 10002 (2013)

11. Stevens, N., Busch, P.: Steering, incompatibility, and Bell-inequality violations in a class of

probabilistic theories. Phys. Rev. A 89(2), 022123 (2014)

Chapter 12

Preparation Uncertainty

The probabilistic structure of quantum mechanics is a reflection of the fact that observations on quantum physical objects typically yield uncertain outcomes. Formally

this uncertainty is encoded in the probability distribution of an observable in the state

of the physical system. It is a fundamental feature of quantum mechanics that there

are pairs of observables for which the degrees of uncertainty in the associated probability distributions cannot both be arbitrarily small in the same states. This feature

constitutes one aspect of the uncertainty principle and is expressed in the form of

trade-off inequalities called preparation uncertainty relations.

Another aspect of the uncertainty principle manifests itself as a limitation of

the accuracy with which incompatible observables can be jointly measured. Such

limitations are expressed in the form of trade-off relations for measurement errors

and are called measurement uncertainty relations or error-disturbance relations. In

subsequent chapters we will study a number of examples illustrating that measurement uncertainty is a necessary consequence of preparation uncertainty in quantum mechanics.


The statement of preparation and measurement uncertainty relations requires the

definition of appropriate measures of uncertainty, measurement error and disturbance. The present chapter presents measures of uncertainty based on the probability

distributions of observables and gives examples of preparation uncertainty relations

that are found to be fundamental for associated measurement uncertainty relations.

We will first make precise what it means for the value of an observable to be definite

or indeterminate in a given state, and then proceed to briefly review uncertainty

relations based on uncertainty measures such as standard deviation and overall width.

© Springer International Publishing Switzerland 2016

P. Busch et al., Quantum Measurement, Theoretical and Mathematical Physics,

DOI 10.1007/978-3-319-43389-9_12




12.1 Indeterminate Values of Observables

All measures of uncertainty considered here are adapted to observables with value
spaces in ℝ. Moreover, we consider only observables E whose first moment operators E[1] = ∫ℝ x dE(x) are selfadjoint. We recall that such an observable E is sharp
exactly when its second moment operator is the square of its first moment operator, that is, when the intrinsic noise operator N(E) = E[2] − E[1]² vanishes (Theorem 8.5).
According to Corollary 9.1 the sharp observable A is characterised as the one with
the least variance among all observables E for which E[1] = A.

The values of an observable E, as the possible measurement outcomes, are exactly
the elements of the support supp(E) of E. Thus a real number x ∈ ℝ is a possible value
of E if and only if for any ε > 0 there is a state ρ such that Eρ((x − ε, x + ε)) ≠ 0. If
E is sharp then supp(E) = σ(A), the spectrum of A = E[1]; in this case the condition
that x is a possible value may be expressed as the statement that for any ε > 0 there
exists a state ρ with Eρ((x − ε, x + ε)) = 1.

We say that E is definite or has a (definite) value if one of its values is obtained with
(probabilistic) certainty if measured. An observable E thus has a value x ∈ supp(E)
in a state ρ exactly when Eρ is the point measure δx. This may occur if and only if the
effect E({x}) is nonzero and has eigenvalue 1. If E is sharp, then E = A has a definite
value in a vector state ϕ if and only if ϕ is an eigenstate of A, that is, Aϕ = xϕ.
If ϕ is not an eigenstate, then several different eigenvalues may occur with nonzero
probability in a measurement of A. The values of the observable A are then said to
be indeterminate.¹

Quantum uncertainty thus manifests itself in the randomness of measurement

outcomes. The degree to which an observable is indeterminate can be quantified in

terms of the width of its associated probability distribution in the given state.

12.2 Measures of Uncertainty

Standard Deviation

The most familiar measure of the width of a probability measure μ : B(R) → [0, 1]

is the standard deviation, given by the square root of the variance of μ,

¹We will not dwell here on the distinction between the objective indeterminateness and subjective
uncertainty of the values of an observable, which is explained in detail in [1]. Suffice it to say that in
pure states the values are objectively indeterminate if their probabilities are not 1 or 0. A situation
of subjective uncertainty (or ignorance) can be modelled by a state that is a mixture of eigenstates
associated with at least two distinct eigenvalues, but even then one has to recognise that any mixed
state can be decomposed in uncountably many ways into its pure components (see Theorem 9.2).
A mixed state can arise as a reduced state of an entangled pure state, in which case application of
the ignorance interpretation with respect to any particular decomposition is inconsistent with the
indeterminacy represented by that entangled state.


Δ(μ)² = Var(μ) = ∫ (x − ∫ x′ dμ(x′))² dμ(x) = μ[2] − μ[1]²,

provided that the integrals exist and are finite; otherwise we write Δ(μ) = ∞. One
has Δ(μ) = 0 exactly when μ is the point measure at μ[1].

We will encounter many situations where a given probability distribution is randomised further (or smeared) by convolution with another probability measure. For
probability measures of convolution form, μ ∗ ν, Lemma 8.2 gives

Δ(μ), Δ(ν) ≤ Δ(μ ∗ ν) = √(Δ(μ)² + Δ(ν)²) ≤ Δ(μ) + Δ(ν),

provided the relevant integrals defining all expressions are finite.
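The identity Δ(μ ∗ ν)² = Δ(μ)² + Δ(ν)² and the resulting sandwich can be illustrated with discrete measures, whose convolution is an ordinary sequence convolution (NumPy assumed; the particular pmfs are arbitrary examples):

```python
import numpy as np

def moments(pmf, xs):
    """Mean and variance of a discrete probability measure on points xs."""
    m1 = np.dot(pmf, xs)
    return m1, np.dot(pmf, (xs - m1) ** 2)

# Two discrete measures on integer grids starting at 0.
mu = np.array([0.2, 0.5, 0.3])           # supported on {0, 1, 2}
nu = np.array([0.1, 0.2, 0.3, 0.4])      # supported on {0, 1, 2, 3}

conv = np.convolve(mu, nu)               # mu * nu, supported on {0, ..., 5}

_, var_mu   = moments(mu, np.arange(len(mu)))
_, var_nu   = moments(nu, np.arange(len(nu)))
_, var_conv = moments(conv, np.arange(len(conv)))

# Delta(mu * nu)^2 = Delta(mu)^2 + Delta(nu)^2, hence
# max(Delta(mu), Delta(nu)) <= Delta(mu * nu) <= Delta(mu) + Delta(nu).
assert np.isclose(var_conv, var_mu + var_nu)
d_mu, d_nu, d_conv = map(np.sqrt, (var_mu, var_nu, var_conv))
assert max(d_mu, d_nu) <= d_conv <= d_mu + d_nu
```

Probabilistically this is just the additivity of variance for independent random variables, since μ ∗ ν is the distribution of a sum of independent variables with distributions μ and ν.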

By the standard deviation of an observable E : B(ℝ) → L(H) in a state ρ we
mean the standard deviation of the probability measure Eρ, which again is well-defined provided the first and second moments, Eρ[1] and Eρ[2], exist and are finite.
If ρ = |ϕ⟩⟨ϕ| is a pure state then these conditions simply mean that ϕ belongs to the
domain of the operator E[2]; for a mixed state these conditions are more involved (see
Sect. 9.3). In the following we do not always mention these obvious requirements.
For a vector state ϕ we write Δ(Eϕ) or Δ(E, ϕ). If E is sharp, we will also write
Δ(A, ρ) or Δ(A, ϕ), respectively, where A = E[1].

We have Δ(Eρ) = 0 exactly when the probability measure Eρ is a point measure
at Eρ[1], that is, E has the value Eρ[1] in the state ρ. We recall also that if E is sharp
then for any ε > 0 there is a state ρ such that Δ(Eρ) < ε, that is, inf{Δ(Eρ) | ρ ∈
S(H)} = 0, and the lower bound is reached if and only if A = E[1] has eigenvalues.
More specifically, for any point x in the spectrum of A, there are sequences of vector
states ϕn such that ⟨ϕn | Aϕn⟩ = x and Δ(A, ϕn) → 0 as n → ∞.

Using the intrinsic noise operator N(E) = E[2] − E[1]², the variance Var(Eρ) can
be written as Var(Eρ) = Var(E[1], ρ) + tr[ρ N(E)], see Eq. (9.15). We emphasise
once more that this equation presents a splitting of the variance of Eρ into two terms
that are not accessible through the measurement of E only.

For the special class of observables E that are convolutions of a spectral measure
A and a probability measure μ (with finite variance), one has Var(Eρ) = Var(μ) +
Var(Aρ). In this case the intrinsic noise contribution to the variance of the distribution
Eρ is constant (state independent) and equal to Var(μ).


The standard deviation of a probability measure is a special case of the so-called
α-spreads. Let d be the Euclidean metric on ℝ, d(x, y) = |x − y|. For 1 ≤ α < ∞,
the deviation of order α, or α-deviation, of a probability measure μ from a point
y ∈ ℝ (or equivalently, from the point measure δy at y) is defined as²

²The case α = ∞ can also be included in what follows, but as it would require separate considerations at various points, we omit this possibility and refer the reader to [2] for more detail.





Δα(μ, δy) = (∫ d(x, y)^α dμ(x))^(1/α).    (12.1)

The α-spread, or minimal deviation of order α, of μ is then defined as

Δα(μ) = inf_y Δα(μ, δy) = inf_y (∫ |x − y|^α dμ(x))^(1/α).    (12.2)
For α = 2 we may use both notations Δ2(μ) and Δ(μ). When (12.1) is interpreted
as a distance, (12.2) represents the smallest distance of μ to the set of point measures.
The point y to which a given measure is "closest" depends on α. For the absolute
deviation (α = 1) this is the median; for the standard deviation (α = 2) it is the mean
value. Like the standard deviation, the α-spread scales with the underlying metric:
for the measure μ^(λ), defined via μ^(λ)(X) = μ(λX), X ∈ B(ℝ), and fixed λ > 0, one
has

Δα(μ^(λ)) = λ⁻¹ Δα(μ).
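The dependence of the minimising point on α can be seen numerically; in this sketch (NumPy assumed; a grid search over y and an arbitrary skewed pmf, both illustrative) the α = 1 minimiser sits at the median and the α = 2 minimiser at the mean:

```python
import numpy as np

def alpha_deviation(pmf, xs, y, alpha):
    """Deviation of order alpha of the discrete measure (xs, pmf) from delta_y."""
    return np.dot(pmf, np.abs(xs - y) ** alpha) ** (1.0 / alpha)

# A skewed discrete measure with an outlying atom at 10.
xs  = np.array([0.0, 1.0, 2.0, 10.0])
pmf = np.array([0.3, 0.3, 0.3, 0.1])

ys = np.linspace(-2, 12, 14001)
best = {a: ys[np.argmin([alpha_deviation(pmf, xs, y, a) for y in ys])]
        for a in (1, 2)}

mean   = np.dot(pmf, xs)                             # 1.9
median = xs[np.searchsorted(np.cumsum(pmf), 0.5)]    # 1.0

print(best)   # alpha = 1 minimiser near the median, alpha = 2 near the mean
```

The outlier pulls the mean (and hence the α = 2 minimiser) to the right, while the median is insensitive to it.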


Lemma 12.1 For any α and for any two probability measures μ, ν,

Δα(μ), Δα(ν) ≤ Δα(μ ∗ ν) ≤ Δα(μ) + Δα(ν).    (12.4)


Proof For α = 2 the result was already noted. For a general α we follow [2] to
indicate a proof. Note that Δα(μ, δy) is the standard α-norm ‖·‖μ,α in L^α(ℝ, μ) of
the function x ↦ x − y = f_y(x), that is,

Δα(μ, δy) = ‖f_y‖μ,α.

For the first inequality in (12.4) we use translation invariance and concavity of
Δα by considering ν ∗ μ as a convex combination of translates of μ with weight
ν. For the second inequality in (12.4), using Lemma 8.2 (with its notation) and the
Minkowski inequality we get

Δα(μ ∗ ν, δ_{x′+y′}) = ‖f_{x′+y′}‖μ∗ν,α = ‖f_{x′+y′} ∘ φ‖ν×μ,α
  = (∫ |x + y − (x′ + y′)|^α d(ν × μ)(x, y))^(1/α)
  ≤ (∫ |x − x′|^α d(ν × μ)(x, y))^(1/α) + (∫ |y − y′|^α d(ν × μ)(x, y))^(1/α)
  = Δα(μ, δ_{x′}) + Δα(ν, δ_{y′}).

The desired inequality (12.4) now follows by minimising over x′ and y′.

For an observable F = μ ∗ E one thus has, for any state ρ,

Δα(Eρ) ≤ Δα(μ ∗ Eρ) ≤ Δα(μ) + Δα(Eρ).




Overall Width

Let μ : B(ℝ) → [0, 1] be a probability measure. For any ε ∈ (0, 1) we consider the
set D_ε of all intervals X ⊂ ℝ for which μ(X) ≥ 1 − ε. Since [−n, n] ∈ D_ε for n ∈ ℕ
large enough, the infimum of the lengths |X| of the intervals X ∈ D_ε is defined as a
real number. This number,

Wε(μ) = inf {|X| | μ(X) ≥ 1 − ε},


is the overall width of the probability measure μ. As with the α-deviation, the overall

width scales with the underlying metric of the value space.

Clearly the infimum does not change if we confine our attention to compact (closed

and bounded) intervals. We show that this infimum is actually a minimum.

Lemma 12.2 There is a compact interval X ⊂ ℝ such that μ(X) ≥ 1 − ε and
Wε(μ) = |X|.


Proof There are sequences of real numbers a_n ≤ b_n such that μ([a_n, b_n]) ≥ 1 − ε
and b_n − a_n → Wε(μ). We may assume that b_n − a_n ≤ Wε(μ) + 1. The sequences
(a_n) and (b_n) are bounded, for otherwise we would get a sequence of intervals
[a_n, b_n] eventually disjoint from any fixed [−p, p], contradicting the requirement
μ([a_n, b_n]) ≥ 1 − ε. Passing to suitable subsequences we may therefore assume that
the limits a = lim_n a_n and b = lim_n b_n with a ≤ b exist. Clearly b − a = Wε(μ). We
claim that μ([a, b]) ≥ 1 − ε. Again passing to subsequences if necessary it is easily
shown that we may assume that both sequences (a_n), (b_n) are monotone. If (a_n) is
increasing and (b_n) is decreasing, then μ([a, b]) = lim_n μ([a_n, b_n]) ≥ 1 − ε. If (a_n)
is decreasing and (b_n) is increasing, then μ([a, b]) ≥ μ([a_1, b_1]) ≥ 1 − ε. The two
remaining cases are essentially similar, and we only consider the case where both
sequences are increasing. Then μ([a_n, b]) ≥ μ([a_n, b_n]) ≥ 1 − ε, and so μ([a, b]) =
lim_n μ([a_n, b]) ≥ 1 − ε.

If Wε(μ) = 0, then μ({x}) = λ ≥ 1 − ε for some x ∈ ℝ, meaning that μ has a
discrete part. For a continuous observable E, like position or momentum, one thus
has Wε(Eρ) > 0 for any ε and ρ.
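For a discrete measure the overall width can be computed by scanning intervals whose endpoints lie at the atoms (for such measures an optimal interval can always be taken of this form); a sketch (NumPy assumed; the pmf is an arbitrary example):

```python
import numpy as np

def overall_width(pmf, xs, eps):
    """Smallest length of an interval [xs[i], xs[j]] carrying mass >= 1 - eps,
    for a discrete measure on sorted support points xs."""
    n, best = len(xs), np.inf
    for i in range(n):
        acc = 0.0
        for j in range(i, n):
            acc += pmf[j]
            if acc >= 1 - eps - 1e-12:
                best = min(best, xs[j] - xs[i])
                break
    return best

xs  = np.array([0.0, 1.0, 2.0, 10.0])
pmf = np.array([0.3, 0.3, 0.3, 0.1])

print(overall_width(pmf, xs, 0.15))   # mass 0.9 fits in [0, 2] -> width 2.0
print(overall_width(pmf, xs, 0.05))   # needs the outlier at 10 -> width 10.0
```

The example shows how Wε jumps as ε drops below the mass of an outlying atom.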

For the overall width of a convolution of measures μ, ν it is straightforward to

verify the following bound:

Wε(μ ∗ ν) ≥ max{Wε(μ), Wε(ν)}.    (12.7)


There is a simple connection between the overall width and the α-deviation of a
probability measure μ on ℝ, which arises as an expression of Chebyshev's inequality,
given here in the form

μ({x ∈ ℝ | |x − y| ≥ δ}) ≤ (1/δ^α) ∫ |x − y|^α dμ(x)    (α ≥ 1).




This translates readily into

μ({x ∈ ℝ | |x − y| < δ}) ≥ 1 − Δα(μ, δy)^α / δ^α ≡ 1 − ε,

and (since δ = Δα(μ, δy)/ε^(1/α)) this is equivalent to

Wε(μ) ≤ 2 Δα(μ, δy) / ε^(1/α).

Minimising over y, one then also has

Wε(μ) ≤ 2 Δα(μ) / ε^(1/α).

By Lemma 12.1 this also gives an upper bound for Wε (μ ∗ ν) in (12.7).
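The Chebyshev-type bound Wε(μ) ≤ 2Δα(μ)/ε^(1/α) can be tested numerically; the following sketch (NumPy assumed; a discrete example measure and a grid approximation of the α-spread, both illustrative) checks it for several α and ε:

```python
import numpy as np

def overall_width(pmf, xs, eps):
    # Smallest length of an interval with mass >= 1 - eps (endpoints at atoms).
    best = np.inf
    for i in range(len(xs)):
        acc = 0.0
        for j in range(i, len(xs)):
            acc += pmf[j]
            if acc >= 1 - eps - 1e-12:
                best = min(best, xs[j] - xs[i])
                break
    return best

def alpha_spread(pmf, xs, alpha, grid):
    # Grid approximation of Delta_alpha(mu); >= the true infimum, so using
    # it only weakens the claimed upper bound, never falsifies it.
    return min(np.dot(pmf, np.abs(xs - y) ** alpha) ** (1 / alpha)
               for y in grid)

xs   = np.array([0.0, 1.0, 2.0, 3.0, 8.0])
pmf  = np.array([0.1, 0.3, 0.3, 0.2, 0.1])
grid = np.linspace(0, 8, 8001)

for alpha in (1, 2, 4):
    for eps in (0.05, 0.1, 0.3):
        w  = overall_width(pmf, xs, eps)
        ub = 2 * alpha_spread(pmf, xs, alpha, grid) / eps ** (1 / alpha)
        assert w <= ub + 1e-9
```

As the derivation suggests, the bound is loose for small ε, where the outlying mass dominates the α-deviation.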

12.3 Examples of Preparation Uncertainty Relations

Preparation uncertainty relations express the mutual dependence of the widths of the
measurement outcome distributions of two or more observables; they are usually
given as inequalities limiting from below either the product or the sum of the widths
of the distributions. Here we consider some of the most typical examples of such
relations.

Uncertainty Relations for Products of Standard Deviations

Consider now any two real observables E and F, with A = E[1] and B = F[1].
The standard uncertainty relation follows by application of the Cauchy–Schwarz
inequality to Δ(A, ϕ)Δ(B, ϕ):

Δ(Eϕ) Δ(Fϕ) ≥ Δ(A, ϕ) Δ(B, ϕ) ≥ ½ |⟨ϕ | (AB − BA)ϕ⟩|.    (12.10)

It is well known that a more refined analysis yields a stronger form, which also
involves a covariance term:

Δ(Eϕ)² Δ(Fϕ)² ≥ Δ(A, ϕ)² Δ(B, ϕ)²
  ≥ ¼ |⟨ϕ | (AB − BA)ϕ⟩|² + ¼ (⟨ϕ | (AB + BA)ϕ⟩ − 2⟨ϕ | Aϕ⟩⟨ϕ | Bϕ⟩)².


The above uncertainty relations can be generalised to arbitrary states (where due

care has to be applied in the specification of the domain of states for which the

expectation values are well defined) [3].
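Both inequalities are easy to probe numerically for finite dimensional sharp observables; a sketch (NumPy assumed; random Hermitian matrices and random vector states are illustrative choices, and the two asserted bounds are the Robertson and covariance-strengthened forms above):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def rand_herm(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

def rand_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

for _ in range(100):
    d = 4
    A, B, phi = rand_herm(d), rand_herm(d), rand_state(d)
    ev = lambda X: np.vdot(phi, X @ phi).real
    varA = ev(A @ A) - ev(A) ** 2
    varB = ev(B @ B) - ev(B) ** 2
    comm = np.vdot(phi, (A @ B - B @ A) @ phi)          # purely imaginary
    cov  = ev(A @ B + B @ A) / 2 - ev(A) * ev(B)        # symmetrised covariance
    # Robertson: Var(A) Var(B) >= |<[A, B]>|^2 / 4
    assert varA * varB >= abs(comm) ** 2 / 4 - 1e-8
    # Strengthened form: Var(A) Var(B) >= |<[A, B]>|^2 / 4 + Cov(A, B)^2
    assert varA * varB >= abs(comm) ** 2 / 4 + cov ** 2 - 1e-8
```

Both follow from applying Cauchy–Schwarz to the vectors (A − ⟨A⟩)ϕ and (B − ⟨B⟩)ϕ, which is why the numerical check never fails regardless of the drawn A, B, ϕ.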



There is a sharpening of the inequality (12.10). Let E and F be two real observables
such that their first moment operators E[1] = A and F[1] = B are selfadjoint, and
let A, B be their spectral measures. Assume that E, F are jointly measurable and
let G be an observable on (Ω, A) with functions f1, f2 : Ω → ℝ such that
E(X) = G(f1⁻¹(X)), F(Y) = G(f2⁻¹(Y)), see Theorem 11.1. Let M be a (measurement type) dilation of G
into a spectral measure acting on H ⊗ K such that

⟨ϕ ⊗ φ | M(f1⁻¹(X))ϕ ⊗ φ⟩ = ⟨ϕ | E(X)ϕ⟩,
⟨ϕ ⊗ φ | M(f2⁻¹(Y))ϕ ⊗ φ⟩ = ⟨ϕ | F(Y)ϕ⟩

for all ϕ ∈ H. By the multiplicativity of M one then has

∫ f1(ω) f2(ω) dMϕ⊗φ(ω) = ⟨L(f1, M)ϕ ⊗ φ | L(f2, M)ϕ ⊗ φ⟩,

with the mutually commuting selfadjoint operators Â = L(f1, M) and B̂ = L(f2, M)
being such that

(I ⊗ P[φ])Â(I ⊗ P[φ]) = A ⊗ P[φ] and (I ⊗ P[φ])B̂(I ⊗ P[φ]) = B ⊗ P[φ],

see Theorem 7.10. A direct computation gives:

∫ f1(ω) f2(ω) dMϕ⊗φ(ω) = ⟨Aϕ | Bϕ⟩ + ⟨(Â − A ⊗ I)ϕ ⊗ φ | (B̂ − B ⊗ I)ϕ ⊗ φ⟩
                        = ⟨Bϕ | Aϕ⟩ + ⟨(B̂ − B ⊗ I)ϕ ⊗ φ | (Â − A ⊗ I)ϕ ⊗ φ⟩.

By the Cauchy–Schwarz inequality one finally obtains

|⟨Aϕ | Bϕ⟩ − ⟨Bϕ | Aϕ⟩| ≤ 2 ‖(Â − A ⊗ I)ϕ ⊗ φ‖ ‖(B̂ − B ⊗ I)ϕ ⊗ φ‖.

The squared norms are the expectations of the noise operators N(E) and N(F) in
the vector state ϕ. Hence one has the following proposition (see, e.g. [4, Theorems 2
and 3]).

Proposition 12.1 Let E, F be any two real observables, and assume their first
moment operators A, B are selfadjoint. If E, F are jointly measurable, then

⟨ϕ | N(E)ϕ⟩ ⟨ϕ | N(F)ϕ⟩ ≥ ¼ |⟨Aϕ | Bϕ⟩ − ⟨Bϕ | Aϕ⟩|².

An immediate consequence is the following.

Corollary 12.1 Let E, F be any two real observables, and assume their first moment
operators A, B are selfadjoint. If E, F are jointly measurable, then

Δ(Eϕ) Δ(Fϕ) ≥ ½ |⟨Aϕ | Bϕ⟩ − ⟨Bϕ | Aϕ⟩|.


Proof This is a consequence of (9.16).

It is a fundamental feature of quantum mechanics that there are pairs of sharp
observables A and B such that their uncertainty product Δ(Aρ) Δ(Bρ) has a state-independent, strictly positive lower bound. The best known example is given by
canonically conjugate pairs such as position Q and momentum P, for which one has

Δ(Qρ) Δ(Pρ) ≥ ℏ/2.




This inequality is generalised in Proposition 15.1 into a trade-off inequality for all α-spreads.


The following proposition gives further structural insight into uncertainty relations

with constant bound for the product of 2-spreads.

Proposition 12.2 Let A and B be any two sharp observables for which there is a
positive constant c such that

inf{Δ(Aρ) Δ(Bρ) | ρ ∈ S(H)} ≥ c > 0.

Then:
(a) A and B are totally noncommutative, that is, com(A, B) = {0};
(b) A = A[1] and B = B[1] are unbounded;
(c) no eigenvector of A (if any) is in the domain of B (and vice versa).

Proof Assume that the Hilbert space K = com(A, B) is not trivial (i.e. K ≠ {0})
and let P be the projection from H onto K. Then the truncated spectral measures
A^P : B(ℝ) → L(K), X ↦ A^P(X) = PA(X)P, and B^P = PBP commute, so that
inf{Δ(A^P, ρ) Δ(B^P, ρ) | ρ ∈ S(K)} = 0 (Exercise). Let ϕ ∈ K. Then A^P(X)ϕ =
A(X)ϕ implies Δ(A^P, ϕ) = Δ(A, ϕ). Similarly, Δ(B^P, ϕ) = Δ(B, ϕ). Hence,

0 < c ≤ inf{Δ(Aρ) Δ(Bρ) | ρ ∈ S(H)} ≤ inf{Δ(A^P, ρ) Δ(B^P, ρ) | ρ ∈ S(K)} = 0,

yielding a contradiction and proving (a). The proofs of (b) and (c) are left as Exercises.

This proposition indicates that the expression of an uncertainty relation in terms of
the product of the standard deviations is not always the optimal way of representing
the idea that the two deviations cannot both be small in the same state. Indeed, for
sharp observables A and B, with A or B bounded, the product Δ(Aρ) Δ(Bρ) can be
made arbitrarily small by choosing one of the quantities, for instance Δ(Aρ), to be
sufficiently small, thus yielding no information on the standard deviation of the other
observable. In such a case a lower bound on the sum of the uncertainties could be
more informative.
