way as initial-value problems except that the value of the function and its derivatives are given at two values of the independent variable instead of one. The general form of a second-order (two-point) boundary-value problem is
$$a_2(x)\frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)\,y = f(x),\quad a < x < b,$$
$$k_1 y(a) + k_2\frac{dy}{dx}(a) = \alpha,\qquad h_1 y(b) + h_2\frac{dy}{dx}(b) = \beta, \tag{9.1}$$
where $k_1$, $k_2$, $\alpha$, $h_1$, $h_2$, and $\beta$ are constants and at least one of $k_1$, $k_2$ and at least one of $h_1$, $h_2$ is not zero.
Note that if $\alpha = \beta = 0$, then we say the problem has homogeneous boundary conditions. We also consider boundary-value problems that include a parameter in the differential equation. We solve these problems, called eigenvalue problems, in order to investigate several useful properties associated with their solutions.
EXAMPLE 9.1.1: Solve
$$y'' + y = 0,\quad 0 < x < \pi,\qquad y'(0) = 0,\quad y'(\pi) = 0.$$
SOLUTION: Because the characteristic equation is $k^2 + 1 = 0$ with roots $k_{1,2} = \pm i$, a general solution of $y'' + y = 0$ is $y = c_1\cos x + c_2\sin x$, and it follows that $y' = -c_1\sin x + c_2\cos x$. Applying the boundary conditions, we have $y'(0) = c_2 = 0$. Then, $y = c_1\cos x$. With this solution, we have $y'(\pi) = -c_1\sin\pi = 0$ for any value of $c_1$. Therefore, there are infinitely many solutions, $y = c_1\cos x$, of the boundary-value problem, depending on the choice of $c_1$. In this case, we are able to use DSolve to solve the boundary-value problem.
In[1823]:= sol = DSolve[{y''[x] + y[x] == 0, y'[0] == 0, y'[Pi] == 0}, y[x], x]
Out[1823]= {{y[x] -> C[1] Cos[x]}}
We conﬁrm that the boundary conditions are satisﬁed for any value of
C[1] by graphing several solutions with Plot in Figure 9-1.
Figure 9-1 The boundary-value problem has inﬁnitely many solutions
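Although the text works in Mathematica throughout, the family of solutions is also easy to check numerically. The following Python sketch (the step size `h` and the sample points are arbitrary choices, not from the text) verifies that $y = c_1\cos x$ satisfies the differential equation and both boundary conditions for several values of $c_1$:

```python
import math

def check_solution(c1, h=1e-4):
    """Check that y = c1*cos(x) solves y'' + y = 0 with y'(0) = y'(pi) = 0."""
    y = lambda x: c1 * math.cos(x)
    # central-difference approximations of y' and y''
    dy = lambda x: (y(x + h) - y(x - h)) / (2 * h)
    d2y = lambda x: (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    # largest residual of y'' + y at a few interior points
    residual = max(abs(d2y(x) + y(x)) for x in [0.5, 1.0, 2.0, 3.0])
    return residual, abs(dy(0.0)), abs(dy(math.pi))

for c1 in [-2.0, 0.5, 3.0]:
    res, bc0, bcpi = check_solution(c1)
    print(c1, res < 1e-4, bc0 < 1e-4, bcpi < 1e-4)
```

Every choice of $c_1$ passes all three checks, consistent with the infinitely many solutions found above.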
From the result in the example, we notice a difference between initial-value problems and boundary-value problems: an initial-value problem (that meets the hypotheses of the Existence and Uniqueness Theorem) has a unique solution, while a boundary-value problem may have one solution, more than one solution, or no solution.
EXAMPLE 9.1.2: Solve
$$y'' + y = 0,\quad 0 < x < \pi,\qquad y'(0) = 0,\quad y'(\pi) = 1.$$
SOLUTION: Using the general solution obtained in the previous example, we have $y = c_1\cos x + c_2\sin x$. As before, $y'(0) = c_2 = 0$, so $y = c_1\cos x$. However, because $y'(\pi) = -c_1\sin\pi = 0 \ne 1$, the boundary conditions cannot be satisfied with any choice of $c_1$. Therefore, there is no solution to the boundary-value problem.
As indicated in the general form of a boundary-value problem, the boundary conditions in these problems can involve the function and its derivative. However,
this modiﬁcation to the problem does not affect the method of solution.
EXAMPLE 9.1.3: Solve
$$y'' - y = 0,\quad 0 < x < 1,\qquad y(0) + 3y'(0) = 0,\quad y(1) + y'(1) = 1.$$
(See Chapter 4 and Theorem 2.)
Chapter 9 Eigenvalue Problems and Fourier Series
SOLUTION: The characteristic equation is $k^2 - 1 = 0$ with roots $k_{1,2} = \pm 1$. Hence, a general solution is $y = c_1 e^x + c_2 e^{-x}$ with derivative $y' = c_1 e^x - c_2 e^{-x}$. Applying $y(0) + 3y'(0) = 0$ yields $y(0) + 3y'(0) = c_1 + c_2 + 3(c_1 - c_2) = 4c_1 - 2c_2 = 0$. Because $y(1) + y'(1) = 1$,
$$y(1) + y'(1) = c_1 e^1 + c_2 e^{-1} + c_1 e^1 - c_2 e^{-1} = 2c_1 e = 1,$$
so $c_1 = \frac{1}{2e}$ and $c_2 = 2c_1 = \frac{1}{e}$. Thus, the boundary-value problem has the unique solution $y = \frac{1}{2e}e^x + \frac{1}{e}e^{-x} = \frac{1}{2}e^{x-1} + e^{-(x+1)}$, which we confirm with Mathematica. See Figure 9-2.
In[1826]:= sol = DSolve[{y''[x] - y[x] == 0, y[0] + 3 y'[0] == 0, y[1] + y'[1] == 1}, y[x], x]
Out[1826]= {{y[x] -> 1/2 E^(-1 - x) (2 + E^(2 x))}}

In[1827]:= Plot[y[x] /. sol, {x, 0, 1}, AspectRatio -> Automatic]
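As an independent numerical cross-check of this unique solution (a sketch using finite differences; the step size `h` and test points are arbitrary choices, not from the text):

```python
import math

# candidate solution from the example: y = (1/2)e^(x-1) + e^(-(x+1))
y = lambda x: 0.5 * math.exp(x - 1) + math.exp(-(x + 1))

h = 1e-4
dy = lambda x: (y(x + h) - y(x - h)) / (2 * h)          # central difference for y'
d2y = lambda x: (y(x + h) - 2 * y(x) + y(x - h)) / h**2  # central difference for y''

# ODE residual y'' - y at several interior points
residual = max(abs(d2y(x) - y(x)) for x in [0.2, 0.5, 0.8])
# boundary conditions y(0) + 3y'(0) = 0 and y(1) + y'(1) = 1
bc_left = y(0) + 3 * dy(0)
bc_right = y(1) + dy(1)
print(residual < 1e-6, abs(bc_left) < 1e-6, abs(bc_right - 1) < 1e-6)
```

All three checks pass, in agreement with the DSolve result.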
9.1.2 Eigenvalue Problems
We now consider eigenvalue problems, boundary-value problems that include a parameter. Values of the parameter for which the boundary-value problem has a nontrivial solution are called eigenvalues of the problem. For each eigenvalue, the nontrivial solution that satisfies the problem is called the corresponding eigenfunction. (If a value of the parameter leads only to the trivial solution, then that value is not considered an eigenvalue of the problem.)
Figure 9-2 The boundary-value problem has a unique solution
Because of the importance of eigenvalue problems, we express these problems in the general form
$$a_2(x)y'' + a_1(x)y' + \left(a_0(x) + \lambda\right)y = 0,\quad a < x < b, \tag{9.2}$$
where $a_2(x) \ne 0$ on $[a, b]$, and the boundary conditions at the endpoints $x = a$ and $x = b$ can be written as
$$k_1 y(a) + k_2 y'(a) = 0 \quad\text{and}\quad h_1 y(b) + h_2 y'(b) = 0 \tag{9.3}$$
for the constants $k_1$, $k_2$, $h_1$, and $h_2$, where at least one of $h_1$, $h_2$ and at least one of $k_1$, $k_2$ is not zero. Equation (9.2) can be rewritten by letting
$$p(x) = e^{\int a_1(x)/a_2(x)\,dx},\qquad q(x) = \frac{a_0(x)}{a_2(x)}\,p(x),\qquad\text{and}\qquad s(x) = \frac{p(x)}{a_2(x)}. \tag{9.4}$$
By making this change, equation (9.2) can be rewritten as the equivalent equation
$$\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) + \left(q(x) + \lambda s(x)\right)y = 0, \tag{9.5}$$
which is called a Sturm–Liouville equation and, along with appropriate boundary conditions, is called a Sturm–Liouville problem. This particular form of the equation is known as self-adjoint form, which is of interest because of the relationship between the function $s(x)$ and the solutions of the problem.
EXAMPLE 9.1.7: Place the equation $x^2 y'' + 2xy' + \lambda y = 0$, $x > 0$, in self-adjoint form.

SOLUTION: In this case, $a_2(x) = x^2$, $a_1(x) = 2x$, and $a_0(x) = 0$. Hence,
$$p(x) = e^{\int 2x/x^2\,dx} = e^{2\ln x} = x^2,\qquad q(x) = \frac{a_0(x)}{a_2(x)}\,p(x) = 0,$$
and $s(x) = p(x)/a_2(x) = x^2/x^2 = 1$, so the self-adjoint form of the equation is
$$\frac{d}{dx}\left(x^2\frac{dy}{dx}\right) + \lambda y = 0.$$
We see that our result is correct by differentiating.
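Carrying out that differentiation explicitly (the product rule applied to the first term) recovers the original equation:
$$\frac{d}{dx}\left(x^2\frac{dy}{dx}\right) + \lambda y = x^2\frac{d^2y}{dx^2} + 2x\frac{dy}{dx} + \lambda y = 0.$$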
Solutions of Sturm–Liouville problems have several interesting properties, two of which are included in the following theorem.

Theorem 33 (Linear Independence and Orthogonality of Eigenfunctions). If $y_m(x)$ and $y_n(x)$ are eigenfunctions of the regular Sturm–Liouville problem
$$\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) + \left(q(x) + \lambda s(x)\right)y = 0,\qquad k_1 y(a) + k_2 y'(a) = 0,\quad h_1 y(b) + h_2 y'(b) = 0, \tag{9.6}$$
where $m \ne n$, then $y_m(x)$ and $y_n(x)$ are linearly independent and the orthogonality condition $\int_a^b s(x)\,y_m(x)\,y_n(x)\,dx = 0$ holds.
Because we integrate the product of the eigenfunctions with the function s x in
the orthogonality condition, we call s x the weighting function.
EXAMPLE 9.1.8: Consider the eigenvalue problem $y'' + \lambda y = 0$, $0 < x < p$, subject to $y(0) = 0$ and $y(p) = 0$, that we solved in Example 9.1.4. Verify that the eigenfunctions $y_1 = \sin(\pi x/p)$ and $y_2 = \sin(2\pi x/p)$ are linearly independent. Also, verify the orthogonality condition.
SOLUTION: We can verify that $y_1 = \sin(\pi x/p)$ and $y_2 = \sin(2\pi x/p)$ are linearly independent by computing the Wronskian.

In[1835]:= Clear[x, p]
           caps = {Sin[Pi x/p], Sin[2 Pi x/p]};
           ws = Simplify[Det[{caps, D[caps, x]}]]
Out[1835]= -(2 Pi Sin[Pi x/p]^3)/p

We see that the Wronskian is not the zero function by evaluating it for a particular value of x; we choose x = p/2.

In[1836]:= ws /. x -> p/2
Out[1836]= -((2 Pi)/p)
Because $W(y_1, y_2)$ is not zero for all values of $x$, the two functions are linearly independent. In self-adjoint form, the equation is $y'' + \lambda y = 0$, with $s(x) = 1$. Hence, the orthogonality condition is $\int_0^p y_m(x)\,y_n(x)\,dx = 0$, $m \ne n$, which we verify for $y_1$ and $y_2$.

In[1837]:= Integrate[Sin[Pi x/p] Sin[2 Pi x/p], {x, 0, p}]
Out[1837]= 0

Of course, these two properties hold for any choices of $m$ and $n$, $m \ne n$.
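The same two computations can be repeated numerically outside Mathematica. This sketch fixes a sample value p = 2 (an arbitrary choice, since the symbolic result holds for any p) and uses Simpson's rule for the integral:

```python
import math

p = 2.0  # sample interval length, an arbitrary choice for illustration
y1 = lambda x: math.sin(math.pi * x / p)
y2 = lambda x: math.sin(2 * math.pi * x / p)

# Wronskian W = y1*y2' - y2*y1', with derivatives by central differences
h = 1e-5
d = lambda f, x: (f(x + h) - f(x - h)) / (2 * h)
W = lambda x: y1(x) * d(y2, x) - y2(x) * d(y1, x)
print(abs(W(p / 2) + 2 * math.pi / p) < 1e-6)  # W(p/2) = -2*pi/p, nonzero

# orthogonality: Simpson's rule for the integral of y1*y2 over [0, p]
m = 1000  # even number of subintervals
xs = [i * p / m for i in range(m + 1)]
vals = [y1(x) * y2(x) for x in xs]
integral = (p / m) / 3 * (vals[0] + vals[-1]
                          + 4 * sum(vals[1:-1:2]) + 2 * sum(vals[2:-2:2]))
print(abs(integral) < 1e-8)  # orthogonality: integral is (numerically) zero
```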
9.2 Fourier Sine Series and Cosine Series
9.2.1 Fourier Sine Series
Recall the eigenvalue problem
$$y'' + \lambda y = 0,\qquad y(0) = 0,\quad y(p) = 0$$
that was solved in Example 9.1.4. The eigenvalues of this problem are $\lambda = \lambda_n = (n\pi/p)^2$, $n = 1, 2, \ldots$, with corresponding eigenfunctions $\phi_n(x) = \sin(n\pi x/p)$, $n = 1, 2, \ldots$.
We will see that for some functions $y = f(x)$, we can find coefficients $c_n$ so that
$$f(x) = \sum_{n=1}^{\infty} c_n \sin\frac{n\pi x}{p}. \tag{9.7}$$
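As a concrete illustration of such a series, take $f(x) = x$ on $(0, p)$; integration by parts gives the standard coefficients $c_n = 2p(-1)^{n+1}/(n\pi)$ (stated here without derivation), and the partial sums approach $f(x)$ at interior points. A Python sketch (the choices p = 1, x = 0.5, and N = 2000 terms are arbitrary):

```python
import math

def sine_partial_sum(x, p, N):
    """Partial sum of the Fourier sine series of f(x) = x on (0, p)."""
    total = 0.0
    for n in range(1, N + 1):
        cn = 2 * p * (-1) ** (n + 1) / (n * math.pi)  # coefficients for f(x) = x
        total += cn * math.sin(n * math.pi * x / p)
    return total

p = 1.0
approx = sine_partial_sum(0.5, p, 2000)
print(abs(approx - 0.5) < 1e-3)  # converges (slowly) to f(0.5) = 0.5
```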
A series of this form is called a Fourier sine series. To make use of these series, we must determine the coefficients $c_n$. We accomplish this by taking advantage of the orthogonality properties of eigenfunctions stated in Theorem 33. Because the differential equation $y'' + \lambda y = 0$ is in self-adjoint form, we have that $s(x) = 1$. Therefore, the orthogonality condition is $\int_0^p \sin(n\pi x/p)\sin(m\pi x/p)\,dx = 0$, $m \ne n$. In order to use this condition, multiply both sides of $f(x) = \sum_{n=1}^{\infty} c_n \sin(n\pi x/p)$ by the eigenfunction $\sin(m\pi x/p)$ and $s(x) = 1$. Then, integrate the result from $x = 0$ to $x = p$ (because the boundary conditions of the corresponding eigenvalue problem are given at these two values of $x$). This yields
$$\int_0^p f(x)\sin\frac{m\pi x}{p}\,dx = \int_0^p \sum_{n=1}^{\infty} c_n \sin\frac{m\pi x}{p}\sin\frac{n\pi x}{p}\,dx.$$
Assuming that term-by-term integration is allowed on the right-hand side of the equation, we have
$$\int_0^p f(x)\sin\frac{m\pi x}{p}\,dx = \sum_{n=1}^{\infty} c_n \int_0^p \sin\frac{m\pi x}{p}\sin\frac{n\pi x}{p}\,dx.$$
Recall that the eigenfunctions $\phi_n(x)$, $n = 1, 2, \ldots$, are orthogonal, so $\int_0^p \sin(n\pi x/p)\sin(m\pi x/p)\,dx = 0$ if $m \ne n$. On the other hand, if $m = n$,
$$\int_0^p \sin\frac{m\pi x}{p}\sin\frac{n\pi x}{p}\,dx = \int_0^p \sin^2\frac{n\pi x}{p}\,dx = \frac{1}{2}\int_0^p \left(1 - \cos\frac{2n\pi x}{p}\right)dx = \frac{1}{2}\left[x - \frac{p}{2n\pi}\sin\frac{2n\pi x}{p}\right]_0^p = \frac{p}{2}.$$
In[1838]:= Integrate[Sin[n Pi x/p]^2, {x, 0, p}]
Out[1838]= p/2 - (p Sin[2 n Pi])/(4 n Pi)
Therefore, each term in the sum $\sum_{n=1}^{\infty} c_n \int_0^p \sin(n\pi x/p)\sin(m\pi x/p)\,dx$ equals zero