PG course on
SPECIAL FUNCTIONS AND THEIR SYMMETRIES
Vadim KUZNETSOV
Course Outline
1 Gamma and Beta functions
1.1 Introduction
1.2 Gamma function
1.3 Beta function
1.4 Other beta integrals
1.4.1 Second beta integral
1.4.2 Third (Cauchy's) beta integral
1.4.3 A complex contour for the beta integral
1.4.4 The Euler reflection formula
1.4.5 Double-contour integral
2 Hypergeometric functions
2.1 Introduction
2.2 Definition
2.3 Euler's integral representation
2.4 Two functional relations
2.5 Contour integral representations
2.6 The hypergeometric differential equation
2.7 The Riemann–Papperitz equation
2.8 Barnes' contour integral for F(a,b;c;x)
3 Orthogonal polynomials
3.1 Introduction
3.2 General orthogonal polynomials
3.3 Zeros of orthogonal polynomials
3.4 Gauss quadrature
3.5 Classical orthogonal polynomials
3.6 Hermite polynomials
4 Separation of variables and special functions
4.1 Introduction
4.2 SoV for the heat equation
4.3 SoV for a quantum problem
4.4 SoV and integrability
4.5 Another SoV for the quantum problem
5 Integrable systems and special functions
5.1 Introduction
5.2 Calogero–Sutherland system
5.3 Integral transform
5.4 Separated equation
5.5 Integral representation for Jack polynomials
Index
1 Gamma and Beta functions
1.1 Introduction
This course is about special functions and their properties.
Many well-known functions could be called special. They certainly include
the elementary functions, such as the exponential and, more generally,
the trigonometric and hyperbolic functions and their inverses, logarithms
and polylogarithms, but the class also extends to
transcendental functions like the Lamé and Mathieu functions.
Usually one deals first with a special function of one variable before
studying its multivariable generalisation, which
is not unique and which opens up a link with the theory of
integrable systems.
We will restrict ourselves to hypergeometric functions, which are
usually defined by series representations.
Definition 1
A hypergeometric series is a series ∑_{n=0}^{∞} a_{n} such that a_{n+1}/a_{n} is a rational function of n.
Bessel, Legendre and Jacobi functions, parabolic cylinder functions,
the 3j- and 6j-symbols
arising in quantum mechanics, and many more classical special functions
are special cases of hypergeometric functions. However, the
transcendental functions ``lying in the land beyond Bessel'' fall outside the
hypergeometric class and will therefore not be considered in this course.
Euler, Pfaff and Gauss first introduced and studied hypergeometric series,
paying special attention to the cases when a series can be summed into
an elementary function. This gives one of the motivations for studying
hypergeometric series, i.e. the fact that the elementary functions and
several other important functions in mathematics can be expressed
in terms of hypergeometric functions.
Hypergeometric functions can also be described as solutions of
special differential equations, the hypergeometric differential
equations. Riemann was the first to exploit this idea: he introduced
a special symbol to classify hypergeometric functions by the singularities
and exponents of the differential equations they satisfy. In this way
we arrive at an alternative definition of a hypergeometric
function.
Definition 2
A hypergeometric function is a solution of a Fuchsian differential
equation which has at most three regular singularities.
Notice that the transcendental special functions of the Heun class,
the so-called Heun functions which are ``beyond Bessel'',
are defined as special solutions of a generic linear second-order
Fuchsian differential equation with four regular singularities.
Of course, when we speak of 3 or 4 regular singularities,
the actual number of singularities may be smaller, either
because the equation trivialises or because two regular singularities
merge into an irregular singular point, leading
to the corresponding confluent cases of hypergeometric or Heun functions.
Thus Bessel and parabolic cylinder functions are special cases of the
confluent and double-confluent hypergeometric functions,
while Lamé and Mathieu functions are special cases of the Heun
and confluent Heun functions, respectively. As the maximal allowed
number of singularities of a differential equation grows,
the special function associated with it becomes more transcendental.
A short introduction to the theory of Fuchsian equations
with n regular singularities will be given later in the course.
In the first decade of the XX^{th} century E.W. Barnes introduced
yet another approach to hypergeometric functions, based on contour
integral representations. Such representations are important
because they can be used to derive many relations between
hypergeometric functions and also to study their asymptotics.
The whole class of hypergeometric functions is very distinguished
compared to other special functions, because only for this class
does one have explicit series and integral representations, contiguous
and connection relations, summation and transformation formulas,
and many other beautiful identities relating one hypergeometric
function to another. This is a class of functions for which one can
probably say that every meaningful formula can be written down explicitly,
although it is not always easy to find the one needed. For that
reason too, this is the class of functions with which to start, and to place at
the basis of an introductory course on special functions.
The main reason for the many applications of hypergeometric functions,
and of special functions in general, is their usefulness. Summation
formulas find their way into combinatorics; classical orthogonal
polynomials give explicit bases in several important Hilbert spaces
and lead to constructive harmonic analysis with applications in
quantum physics and chemistry; q-hypergeometric series
are related to elliptic and theta-functions and therefore find
application in the integration of systems of nonlinear
differential equations and in some areas of numerical analysis
and discrete mathematics.
For this part of the course the main reference is the recent book
by G.E. Andrews, R. Askey and R. Roy ``Special Functions'',
Encyclopedia of Mathematics and its Applications 71,
Cambridge University Press, 1999. The book by
N.M. Temme ``Special functions: an introduction to the classical
functions of mathematical physics'', John Wiley & Sons, Inc., 1996,
is also recommended as well as the classical reference:
E.T. Whittaker and G.N. Watson ``A course of modern analysis'',
Cambridge University Press, 1927.
1.2 Gamma function
The Gamma function Γ(x)
was discovered by Euler in the late 1720s
in an attempt to find an analytic continuation
of the factorial function. This function is a
cornerstone of the theory of special functions:
Γ(x) is a meromorphic function equal to (x−1)! when
x is a positive integer. Euler found its representation
as an infinite integral and as a limit of a finite product.
Let us derive the latter representation, following
Euler's generalization of the factorial.
Figure 1: The graph of the absolute value of the Gamma function of a complex
variable z=x+iy, produced with the MuPAD computer algebra system.
Poles are visible at z=0,−1,−2,−3,−4,….
Let x and n be nonnegative integers. For any a ∈ C define
the shifted factorial (a)_{n} by
(a)_{n} = a(a+1)⋯(a+n−1) for n > 0,  (a)_{0} = 1.  (1)
Then, obviously,
x! = (x+n)!/(x+1)_{n} = n!(n+1)_{x}/(x+1)_{n} = (n! n^{x}/(x+1)_{n}) · ((n+1)_{x}/n^{x}).  (2)
Since
lim_{n→∞} (n+1)_{x}/n^{x} = 1,  (3)
we conclude that
x! = lim_{n→∞} n! n^{x}/(x+1)_{n}.  (4)
The limit exists for all x ∈ C with x ≠ −1,−2,−3,…, since
n! n^{x}/(x+1)_{n} = (n/(n+1))^{x} ∏_{j=1}^{n} (1+x/j)^{−1}(1+1/j)^{x}  (5)
and
(1+x/j)^{−1}(1+1/j)^{x} = 1 + x(x−1)/(2j^{2}) + O(1/j^{3}).  (6)
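Before formalizing this as a definition, the limit (4) is easy to test numerically. The sketch below is ours, not part of the original notes; it uses only Python's standard library and approximates x! by the truncated product, working in log-space to avoid overflow:

```python
import math

def factorial_via_limit(x, n=200_000):
    """Approximate x! by n! n^x / (x+1)_n, as in (4), for x > 0."""
    log_val = math.lgamma(n + 1) + x * math.log(n)   # log(n! n^x)
    for j in range(1, n + 1):                        # divide by (x+1)_n
        log_val -= math.log(x + j)
    return math.exp(log_val)

# By (2)-(3) the relative error behaves like (n+1)_x / n^x - 1 = O(1/n),
# so moderate accuracy already requires a fairly large n.
print(factorial_via_limit(3.0))   # close to 3! = 6
print(factorial_via_limit(0.5))   # close to Gamma(3/2)
```

The slow O(1/n) convergence predicted by (6) is visible if one varies n.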
Definition 3
For all x ∈ C, x ≠ 0,−1,−2,…, the gamma function Γ(x) is defined by
Γ(x) = lim_{k→∞} k! k^{x−1}/(x)_{k}.  (7)
Three immediate consequences are
Γ(1)=1, Γ(x+1)=xΓ(x) and Γ(n+1)=n!. 
 (8) 
From the definition it follows that the gamma function has poles
at zero and the negative integers, but 1/Γ(x) is an entire
function with zeros at these points. Every entire function has a
product representation.
Theorem 4
1/Γ(x) = x e^{γx} ∏_{n=1}^{∞} (1+x/n) e^{−x/n},  (9)
where γ is Euler's constant given by
γ = lim_{n→∞} (∑_{k=1}^{n} 1/k − log n).  (10)
PROOF.
1/Γ(x) = lim_{n→∞} x(x+1)⋯(x+n−1)/(n! n^{x−1})
= lim_{n→∞} x (1+x/1)(1+x/2)⋯(1+x/n) e^{−x log n}
= lim_{n→∞} x e^{x(1+1/2+⋯+1/n−log n)} ∏_{k=1}^{n} (1+x/k) e^{−x/k}
= x e^{γx} ∏_{n=1}^{∞} (1+x/n) e^{−x/n}.
The infinite product in (9) exists because
(1+x/n) e^{−x/n} = (1+x/n)(1 − x/n + x^{2}/(2n^{2}) − ⋯) = 1 − x^{2}/(2n^{2}) + O(1/n^{3}).  (11)
□
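The truncated Weierstrass product (9) can be checked against Python's built-in gamma function. This is our sketch (standard library only); truncating the product after N factors leaves, by (11), an error of about x²/(2N) in the logarithm:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant from (10)

def reciprocal_gamma(x, n_terms=100_000):
    """Truncation of the Weierstrass product (9) for 1/Gamma(x)."""
    log_prod = 0.0
    for n in range(1, n_terms + 1):
        log_prod += math.log1p(x / n) - x / n   # log[(1+x/n) e^{-x/n}]
    return x * math.exp(EULER_GAMMA * x + log_prod)

print(reciprocal_gamma(0.5))  # close to 1/Gamma(1/2) = 1/sqrt(pi)
print(reciprocal_gamma(3.0))  # close to 1/Gamma(3) = 1/2
```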
1.3 Beta function
Definition 5
The beta integral is defined for ℜ x > 0, ℜ y > 0 by
B(x,y) = ∫_{0}^{1} t^{x−1}(1−t)^{y−1} dt.  (12)
One may also speak of the beta function B(x,y), which is obtained
from the integral by analytic continuation.
The beta function can be expressed in terms of gamma functions.
Theorem 6
B(x,y) = Γ(x)Γ(y)/Γ(x+y).  (13)
PROOF.
From the definition of the beta integral we
have the following contiguous relation between three functions:
B(x,y+1) = B(x,y) − B(x+1,y), ℜ x > 0, ℜ y > 0.  (14)
However, integration by parts of the integral on the left-hand side gives
B(x,y+1) = (y/x) B(x+1,y).  (15)
Combining the last two we get the functional equation
B(x,y) = ((x+y)/y) B(x,y+1).  (16)
Iterating this equation we obtain
B(x,y) = ((x+y)_{n}/(y)_{n}) B(x,y+n).  (17)
Rewrite this relation as
B(x,y) = ((x+y)_{n}/(n! n^{x+y−1})) · (n! n^{y−1}/(y)_{n}) · ∫_{0}^{n} t^{x−1}(1−t/n)^{n+y−1} dt.  (18)
As n→∞, we have
B(x,y) = (Γ(y)/Γ(x+y)) ∫_{0}^{∞} t^{x−1}e^{−t} dt.  (19)
Set y=1 to arrive at
1/x = ∫_{0}^{1} t^{x−1} dt = B(x,1) = (Γ(1)/Γ(x+1)) ∫_{0}^{∞} t^{x−1}e^{−t} dt.  (20)
Hence
Γ(x) = ∫_{0}^{∞} t^{x−1}e^{−t} dt, ℜ x > 0.  (21)
This is the integral representation of the gamma function, which
appears here as a by-product. Now use it to prove the theorem
for ℜ x > 0 and ℜ y > 0, and then use the standard
argument of analytic continuation to finish the proof.
□
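As a numerical illustration of Theorem 6, one can evaluate the beta integral (12) by Simpson's rule and compare it with the gamma quotient (13). The helper below is our sketch (standard library only) and assumes x, y > 1, so that the integrand vanishes at both endpoints:

```python
import math

def beta_integral(x, y, n=2000):
    """Simpson's rule for B(x,y) = int_0^1 t^{x-1}(1-t)^{y-1} dt, x, y > 1."""
    assert n % 2 == 0
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        f = t ** (x - 1) * (1 - t) ** (y - 1)
        total += f * (1 if i in (0, n) else 4 if i % 2 else 2)
    return total * h / 3

x, y = 2.5, 3.5
approx = beta_integral(x, y)
exact = math.gamma(x) * math.gamma(y) / math.gamma(x + y)
```

For these smooth integrands Simpson's rule agrees with (13) essentially to machine precision.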
An important corollary is an integral representation for the gamma function,
which may be taken as its definition for ℜ x > 0.
Corollary 7
For ℜ x > 0
Γ(x) = ∫_{0}^{∞} t^{x−1}e^{−t} dt.  (22)
Use it to exhibit explicitly the poles and the analytic continuation
of Γ(x):
Γ(x) = ∫_{0}^{1} t^{x−1}e^{−t} dt + ∫_{1}^{∞} t^{x−1}e^{−t} dt
= ∑_{n=0}^{∞} (−1)^{n}/((n+x) n!) + ∫_{1}^{∞} t^{x−1}e^{−t} dt.
The integral is an entire function and the sum gives the poles at x=−n,
n=0,1,2,…, with residues equal to (−1)^{n}/n!.
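The split above doubles as a practical recipe for evaluating Γ(x) at points where the original integral (21) diverges. A sketch (names ours, standard library only; the tail integral is handled by Simpson's rule on [1,40], where the integrand is already negligible at the right endpoint):

```python
import math

def gamma_via_split(x, n_series=40, n_quad=4000, upper=40.0):
    """Gamma(x) = sum_n (-1)^n/((n+x) n!) + int_1^inf t^{x-1} e^{-t} dt,
    valid for any x that is not zero or a negative integer."""
    s = sum((-1) ** n / ((n + x) * math.factorial(n)) for n in range(n_series))
    h = (upper - 1.0) / n_quad
    total = 0.0
    for i in range(n_quad + 1):
        t = 1.0 + i * h
        w = 1 if i in (0, n_quad) else (4 if i % 2 else 2)
        total += w * t ** (x - 1) * math.exp(-t)
    return s + total * h / 3

# works at x = -1/2, where the integral in (21) diverges:
print(gamma_via_split(-0.5))  # close to Gamma(-1/2) = -2 sqrt(pi)
```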
Several other useful forms of the beta integral can be derived
by a change of variables. For example, take t=sin^{2}θ in (12) to get
∫_{0}^{π/2} sin^{2x−1}θ cos^{2y−1}θ dθ = Γ(x)Γ(y)/(2Γ(x+y)).
Put x=y=1/2. The result is Γ(1/2) = √π.
The substitution t=(u−a)/(b−a) gives
∫_{a}^{b} (b−u)^{x−1}(u−a)^{y−1} du = (b−a)^{x+y−1} Γ(x)Γ(y)/Γ(x+y),
which can be rewritten in the alternative form:
∫_{a}^{b} ((b−u)^{x−1}/Γ(x)) ((u−a)^{y−1}/Γ(y)) du = (b−a)^{x+y−1}/Γ(x+y).
1.4 Other beta integrals
There are several kinds of integral representations for the beta function.
All of them can be brought to the following form:
∫_{C} [l_{1}(t)]^{p}[l_{2}(t)]^{q} dt,
where l_{1}(t) and l_{2}(t) are linear functions of t, and C
is an appropriate curve. The representation (12) is called
Euler's first beta integral. For it, the curve consists of a line segment
connecting the two zeros of the l-functions. We now introduce four
more beta integrals. For the second beta integral, the curve is a half-line
joining one zero to infinity, while the other zero is not
on this line. For the third (Cauchy's) beta integral, it is a line
with the zeros on opposite sides. For the last two beta integrals, the curve
is a complex contour. In the first case, it starts and ends at one
zero and encircles the other zero in the positive direction. In the
second case, the curve
is a double loop winding around the two zeros, once in the positive direction
and a second time in the negative direction.
1.4.1 Second beta integral
Set t=s/(s+1) in (12) to obtain the second beta integral,
with integration over a half-line:
∫_{0}^{∞} s^{x−1}/(1+s)^{x+y} ds = Γ(x)Γ(y)/Γ(x+y).  (23)
1.4.2 Third (Cauchy's) beta integral
The beta integral due to Cauchy is defined by
C(x,y) = ∫_{−∞}^{∞} dt/((1+it)^{x}(1−it)^{y}) = π 2^{2−x−y} Γ(x+y−1)/(Γ(x)Γ(y)), ℜ(x+y) > 1.

PROOF.
To prove this, first show that integration by parts gives
y C(x−1,y+1) = (x−1) C(x,y).
Also,
C(x,y) = ∫_{−∞}^{∞} ((−1−it)+2)/((1+it)^{x}(1−it)^{y+1}) dt = 2C(x,y+1) − C(x−1,y+1).
The last two combine to give the functional equation
C(x,y) = (2y/(x+y−1)) C(x,y+1).
Iteration gives
C(x,y) = (2^{2n}(x)_{n}(y)_{n}/(x+y−1)_{2n}) C(x+n,y+n).
Now,
C(x+n,y+n) = ∫_{−∞}^{∞} dt/((1+t^{2})^{n}(1+it)^{x}(1−it)^{y}).
Set t→ t/√n and let n→∞.
□
The substitution t=tan θ leads to the integral
∫_{0}^{π/2} cos^{x+y−2}θ cos((x−y)θ) dθ = π 2^{1−x−y} Γ(x+y−1)/(Γ(x)Γ(y)), ℜ(x+y) > 1.
1.4.3 A complex contour for the beta integral
Consider the integral
I_{x,y} = (1/(2πi)) ∫_{0}^{(1+)} w^{x−1}(w−1)^{y−1} dw,
with ℜ x > 0 and y ∈ C. The contour starts and ends at the origin,
and encircles the point 1 in the positive direction. The phase of w−1 is zero
at real points larger than 1. When ℜ y > 0 we can deform
the contour onto (0,1). Then we obtain I_{x,y} = B(x,y) sin(πy)/π.
It follows that
B(x,y) = (1/(2i sin πy)) ∫_{0}^{(1+)} w^{x−1}(w−1)^{y−1} dw.
The integral is defined for any complex value of y. For y=1,2,…,
the integral vanishes; this is cancelled by the infinite values of the factor
in front of the integral.
There is a similar contour integral representing the gamma function.
Let us first prove Hankel's contour integral for the
reciprocal gamma function, which is one of the most beautiful and
useful representations of this function. It has the following form:
1/Γ(z) = (1/(2πi)) ∫_{L} s^{−z}e^{s} ds, z ∈ C.  (24)
The contour of integration L is the Hankel contour
that runs from −∞, arg s = −π, encircles the origin in the positive
direction and terminates at −∞, now with arg s = +π. For this
we also use the notation ∫_{−∞}^{(0+)}. The multivalued
function s^{−z} is assumed to be real for real values of z and s, s > 0.
A proof of (24) follows immediately from the theory of Laplace
transforms: from the well-known integral
Γ(z)/s^{z} = ∫_{0}^{∞} t^{z−1} e^{−st} dt,
(24) follows as a special case of the inversion formula.
A direct proof follows from a special choice of the contour L:
the negative real axis. When ℜ z < 1 we can pull the contour
onto the negative axis, where we have
(1/(2πi)) (−∫_{∞}^{0} (se^{−iπ})^{−z}e^{−s} ds − ∫_{0}^{∞} (se^{+iπ})^{−z}e^{−s} ds) = (1/π) sin(πz) Γ(1−z).
Using the reflection formula (cf. the next subsection
for a proof),
we see that this is indeed the left-hand
side of (24). In a final step the principle
of analytic continuation is used to show that (24)
holds for all finite complex values of z: both
the left- and the right-hand side of (24) are
entire functions of z.
Another form of (24) is
Γ(z) = (1/(2i sin πz)) ∫_{L} s^{z−1} e^{s} ds.
1.4.4 The Euler reflection formula
The Euler reflection formula
Γ(x)Γ(1−x) = π/sin(πx)  (25)
connects the gamma function
with the sine function. In a sense, it shows that 1/Γ(x) is
`half of the sine function'. To prove the formula (25), set
y=1−x, 0 < x < 1, in (23) to obtain
Γ(x)Γ(1−x) = ∫_{0}^{∞} t^{x−1}/(1+t) dt.
To compute the integral, consider the contour integral
∫_{C} z^{x−1}/(1−z) dz,
where C consists of two circles about the origin of radii
R and ε respectively, joined along
the negative real axis from −R to −ε. Move along
the outer circle in the counterclockwise direction, and along
the inner circle in the clockwise direction. By the residue theorem
∫_{C} z^{x−1}/(1−z) dz = −2πi,
when z^{x−1} has its principal value.
Thus
−2πi = ∫_{−π}^{π} iR^{x}e^{ixθ}/(1−Re^{iθ}) dθ + ∫_{R}^{ε} t^{x−1}e^{ixπ}/(1+t) dt + ∫_{π}^{−π} iε^{x}e^{ixθ}/(1−εe^{iθ}) dθ + ∫_{ε}^{R} t^{x−1}e^{−ixπ}/(1+t) dt.
Let R→∞ and ε→0, so that the first and
third integrals tend to zero and the second and fourth combine
to give (25) for 0 < x < 1. The full result follows by analytic
continuation.
1.4.5 Double-contour integral
We have seen that it is possible to replace the integral for Γ(z)
along a half-line by a contour integral which converges for all values
of z. A similar process can be carried out for the beta integral.
Let P be any point between 0 and 1. We have the following
Pochhammer extension of the beta integral:
∫_{P}^{(1+,0+,1−,0−)} t^{x−1}(1−t)^{y−1} dt = −4π^{2}e^{πi(x+y)} / (Γ(1−x)Γ(1−y)Γ(x+y)).
Here the contour starts at P, encircles the point 1 in the positive
(counterclockwise) direction,
returns to P, then encircles the origin in the positive direction,
and returns to P. The symbols 1−,0− indicate that the path
of integration now runs in the clockwise direction, first
around 1 and then around 0. The formula is proved by the same method
as Hankel's formula. Notice that it is true for any complex
x and y: both sides are entire functions of x and y.
2 Hypergeometric functions
2.1 Introduction
In this lecture we give the definition and main properties of
the Gauss (F=_{2}F_{1}) hypergeometric function and briefly
mention its generalizations, the _{p}F_{q} generalized
and _{p}φ_{q} basic (or q-) hypergeometric functions.
Almost all of the elementary functions of mathematics, and some
not so elementary ones, like the error function erf(x) and the
dilogarithm Li_{2}(x), are special cases of
hypergeometric functions, or can be expressed
as ratios of hypergeometric functions.
We first derive Euler's fractional integral representation
of the Gauss hypergeometric function F, from which
many identities and transformations will follow. Then we
discuss the hypergeometric differential equation, as the general
linear second-order differential equation having three regular
singular points, and derive contiguous relations satisfied by the function F.
Finally, we explain the Barnes approach to hypergeometric functions
and the Barnes–Mellin contour integral representation of the function F.
2.2 Definition
Directly from the definition of a hypergeometric series ∑ c_{n}, on factorizing
the polynomials in n, we obtain
c_{n+1}/c_{n} = (n+a_{1})(n+a_{2})⋯(n+a_{p}) x / ((n+b_{1})(n+b_{2})⋯(n+b_{q})(n+1)).
Hence, we can give a more explicit definition.
Definition 1
The (generalized) hypergeometric series is defined by the
following series representation:
_{p}F_{q}(a_{1},…, a_{p}; b_{1},…, b_{q}; x) = ∑_{n=0}^{∞} ((a_{1})_{n}⋯(a_{p})_{n}/((b_{1})_{n}⋯(b_{q})_{n})) x^{n}/n!.
We will use this one-line notation _{p}F_{q}(a_{1},…, a_{p};b_{1},…,b_{q};x) interchangeably with the two-row symbol.
If we apply the ratio test to
determine the convergence of the series, using
|c_{n+1}/c_{n}| = |x| n^{p−q−1} |(1+a_{1}/n)⋯(1+a_{p}/n)| / |(1+1/n)(1+b_{1}/n)⋯(1+b_{q}/n)|,
then we get the following theorem.
Theorem 2
The series _{p}F_{q}(a_{1},…, a_{p}; b_{1},…,b_{q};x) converges absolutely
for all x if p ≤ q and for |x| < 1 if p=q+1, and it diverges for all
x ≠ 0 if p > q+1 and the series does not terminate.
PROOF.
It is clear that |c_{n+1}/c_{n}|→0 as n→∞ if
p ≤ q. For p=q+1, lim_{n→∞} |c_{n+1}/c_{n}|=|x|,
and for p > q+1, |c_{n+1}/c_{n}|→∞ as n→∞.
This proves the theorem.
□
The case x=1 when p=q+1 is of interest. Here we have the following
conditions for convergence.
Theorem 3
The series _{q+1}F_{q}(a_{1},…, a_{q+1}; b_{1},…,b_{q};x) with x=1
converges absolutely if ℜ(∑b_{i}−∑a_{i}) > 0. The series converges conditionally
if x=e^{iθ} ≠ 1 and 0 ≥ ℜ(∑b_{i}−∑a_{i}) > −1, and the series
diverges if ℜ(∑b_{i}−∑a_{i}) ≤ −1.
PROOF.
Notice that the shifted factorial can be expressed as a ratio
of two gamma functions:
(x)_{n} = Γ(n+x)/Γ(x).
By the definition of the gamma function
lim_{n→∞} (Γ(n+x)/Γ(n+y)) n^{y−x} = (Γ(x)/Γ(y)) lim_{n→∞} ((x)_{n}/(y)_{n}) n^{y−x} = (Γ(x)/Γ(y)) · (Γ(y)/Γ(x)) = 1.

Hence the coefficient of the nth term in _{q+1}F_{q} satisfies
(a_{1})_{n}⋯(a_{q+1})_{n}/((b_{1})_{n}⋯(b_{q})_{n} n!) ∼ C n^{∑a_{i} − ∑b_{i} −1}
as n→∞, for some constant C. The statements about absolute convergence
and divergence follow immediately. The part of the theorem concerning
conditional convergence can be proved by summation by parts.
□
The _{2}F_{1} series was studied
extensively by Euler, Pfaff,
Gauss, Kummer and Riemann.
Examples:
log(1+x) = x _{2}F_{1}(1,1;2;−x);
tan^{−1}x = x _{2}F_{1}(1/2,1;3/2;−x^{2});
sin^{−1}x = x _{2}F_{1}(1/2,1/2;3/2;x^{2});
(1−x)^{−a} = _{1}F_{0}(a;;x);
sin x = x _{0}F_{1}(;3/2;−x^{2}/4);
cos x = _{0}F_{1}(;1/2;−x^{2}/4);
e^{x} = _{0}F_{0}(;;x).
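These expansions are easy to verify numerically from the series definition. The helper below is our sketch (standard library only); it builds the Gauss series directly from the term ratio c_{n+1}/c_{n} = (a+n)(b+n)x/((c+n)(n+1)):

```python
import math

def hyp2f1(a, b, c, x, n_terms=200):
    """Truncated Gauss series for 2F1(a,b;c;x), |x| < 1."""
    term, total = 1.0, 1.0
    for n in range(n_terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
        total += term
    return total

x = 0.3
assert abs(x * hyp2f1(1, 1, 2, -x) - math.log1p(x)) < 1e-12          # log(1+x)
assert abs(x * hyp2f1(0.5, 1, 1.5, -x * x) - math.atan(x)) < 1e-12   # arctan x
assert abs(x * hyp2f1(0.5, 0.5, 1.5, x * x) - math.asin(x)) < 1e-12  # arcsin x
```

For |x| = 0.3 the terms decay geometrically, so 200 terms are far more than double precision needs.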

The next set of examples uses limits:
e^{x} = lim_{b→∞} _{2}F_{1}(a,b;a;x/b);
cosh x = lim_{a,b→∞} _{2}F_{1}(a,b;1/2;x^{2}/(4ab));
_{1}F_{1}(a;c;x) = lim_{b→∞} _{2}F_{1}(a,b;c;x/b);
_{0}F_{1}(;c;x) = lim_{a,b→∞} _{2}F_{1}(a,b;c;x/(ab)).
The example of log(1−x) = −x _{2}F_{1}(1,1;2;x) shows that although the series
converges only for |x| < 1, it has a continuation as a single-valued function in the
complex plane from which a line joining 1 to ∞ is deleted. This describes
the general situation: a _{2}F_{1} function has a continuation to the complex
plane with branch points at 1 and ∞.
Definition 4
The (Gauss) hypergeometric function _{2}F_{1}(a,b;c;x) is defined by the
series
∑_{n=0}^{∞} ((a)_{n}(b)_{n}/((c)_{n} n!)) x^{n}
for |x| < 1, and by continuation elsewhere.
2.3 Euler's integral representation
Theorem 5
If ℜc > ℜb > 0, then
_{2}F_{1}(a,b;c;x) = (Γ(c)/(Γ(b)Γ(c−b))) ∫_{0}^{1} t^{b−1}(1−t)^{c−b−1}(1−xt)^{−a} dt  (26)
in the x plane cut along the real axis from 1 to ∞. Here it is understood
that arg t = arg(1−t) = 0 and (1−xt)^{−a} has its principal value.
PROOF.
Suppose |x| < 1. Expand (1−xt)^{−a} by the binomial theorem; the right-hand side becomes
(Γ(c)/(Γ(b)Γ(c−b))) ∑_{n=0}^{∞} ((a)_{n}/n!) x^{n} ∫_{0}^{1} t^{n+b−1}(1−t)^{c−b−1} dt.
Since for ℜb > 1, ℜ(c−b) > 1 and |x| < 1 the series
∑_{n=0}^{∞} U_{n}(t), U_{n}(t) = x^{n} ((a)_{n}/n!) t^{b+n−1}(1−t)^{c−b−1},
converges uniformly with respect to t ∈ [0,1], we may interchange
the order of integration and summation for these values of b, c and x.
Now use the beta integral to prove the result for |x| < 1.
Since the integral is analytic in the cut plane, the theorem
holds for x in this region as well; we also apply analytic continuation
with respect to b and c in order to arrive at the conditions announced in the
formulation of the theorem.
□
Hence we have obtained the analytic continuation of F, as a function
of x, outside the unit disc, but only when ℜc > ℜb > 0. It is important
to note that we view _{2}F_{1}(a,b;c;x) as a function of four complex
variables a, b, c and x, instead of just x. It is easy to see
that [1/Γ(c)] _{2}F_{1}(a,b;c;x) is an entire function
of a, b, c if x is fixed and |x| < 1, for in this case the
series converges uniformly in every compact domain of the (a,b,c)
space.
Gauss evaluated the series at the point x=1.
Theorem 6
For ℜ(c−a−b) > 0
∑_{n=0}^{∞} (a)_{n}(b)_{n}/((c)_{n} n!) = _{2}F_{1}(a,b;c;1) = Γ(c)Γ(c−a−b)/(Γ(c−a)Γ(c−b)).
PROOF.
Let x→1^{−} in Euler's integral for _{2}F_{1}. Then, when ℜc > ℜb > 0
and ℜ(c−a−b) > 0, we get
(Γ(c)/(Γ(b)Γ(c−b))) ∫_{0}^{1} t^{b−1}(1−t)^{c−a−b−1} dt = Γ(c)Γ(c−a−b)/(Γ(c−a)Γ(c−b)).
The condition ℜc > ℜb > 0 may be removed by continuation.
□
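Theorem 6 can be illustrated numerically by summing the series at x=1. A sketch (ours; with c−a−b = 3.2 the terms decay like n^{−4.2}, so a modest number of terms suffices):

```python
import math

def gauss_2f1_at_1(a, b, c, n_terms=20_000):
    """Partial sum of 2F1(a,b;c;1); converges when Re(c-a-b) > 0."""
    term, total = 1.0, 1.0
    for n in range(n_terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1))
        total += term
    return total

a, b, c = 0.5, 0.3, 4.0
lhs = gauss_2f1_at_1(a, b, c)
rhs = math.gamma(c) * math.gamma(c - a - b) / (math.gamma(c - a) * math.gamma(c - b))
```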
Corollary 7 (Chu–Vandermonde)
_{2}F_{1}(−n,b;c;1) = (c−b)_{n}/(c)_{n}, n=0,1,2,….
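Since the Chu–Vandermonde series terminates, the identity can be checked directly in floating point (sketch, names ours):

```python
import math

def rising(z, n):
    """Shifted factorial (z)_n."""
    return math.prod(z + j for j in range(n))

def chu_vandermonde_lhs(n, b, c):
    """Terminating series 2F1(-n, b; c; 1)."""
    term, total = 1.0, 1.0
    for k in range(n):
        term *= (-n + k) * (b + k) / ((c + k) * (k + 1))
        total += term
    return total

n, b, c = 5, 0.7, 2.3
lhs = chu_vandermonde_lhs(n, b, c)
rhs = rising(c - b, n) / rising(c, n)
```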

2.4 Two functional relations
The hypergeometric function satisfies a great number of relations.
The simplest and most obvious is the symmetry a↔b.
Let us prove two more relations:
_{2}F_{1}(a,b;c;x) = (1−x)^{−a} _{2}F_{1}(a,c−b;c;x/(x−1))  (Pfaff),  (27)
_{2}F_{1}(a,b;c;x) = (1−x)^{c−a−b} _{2}F_{1}(c−a,c−b;c;x)  (Euler).  (28)
The first relation is proved through the change of variable t=1−s in Euler's
integral formula. The second relation follows by using the first relation
twice.
The right-hand series in Pfaff's transformation converges for |x/(x−1)| < 1.
This condition is implied by ℜx < 1/2; so Pfaff's formula gives a continuation
of the series _{2}F_{1}(a,b;c;x) to this region.
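Both transformations are easy to confirm numerically; the parameter values in the sketch below are arbitrary test choices of ours:

```python
def hyp2f1(a, b, c, x, n_terms=300):
    """Truncated Gauss series, |x| < 1."""
    term, total = 1.0, 1.0
    for n in range(n_terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
        total += term
    return total

a, b, c, x = 0.4, 0.7, 1.9, 0.25
lhs = hyp2f1(a, b, c, x)
pfaff = (1 - x) ** (-a) * hyp2f1(a, c - b, c, x / (x - 1))   # (27)
euler = (1 - x) ** (c - a - b) * hyp2f1(c - a, c - b, c, x)  # (28)
```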
Now, let us rewrite Euler's transformation as
(1−x)^{a+b−c} _{2}F_{1}(a,b;c;x) = _{2}F_{1}(c−a,c−b;c;x).
Equate the coefficients of x^{n} on both sides to get
∑_{j=0}^{n} (a)_{j}(b)_{j}(c−a−b)_{n−j}/(j!(c)_{j}(n−j)!) = (c−a)_{n}(c−b)_{n}/(n!(c)_{n}).

Rewrite this as:
Theorem 8 (Pfaff–Saalschütz)
_{3}F_{2}(−n,a,b; c,1+a+b−c−n; 1) = (c−a)_{n}(c−b)_{n}/((c)_{n}(c−a−b)_{n}).
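The series in Theorem 8 terminates, so it can be verified directly; a sketch (names ours; note the balanced second lower parameter d = 1+a+b−c−n):

```python
import math

def rising(z, n):
    """Shifted factorial (z)_n."""
    return math.prod(z + j for j in range(n))

def saalschutz_lhs(n, a, b, c):
    """Terminating balanced series 3F2(-n, a, b; c, 1+a+b-c-n; 1)."""
    d = 1 + a + b - c - n
    term, total = 1.0, 1.0
    for k in range(n):
        term *= (-n + k) * (a + k) * (b + k) / ((c + k) * (d + k) * (k + 1))
        total += term
    return total

n, a, b, c = 4, 0.3, 0.45, 2.0
lhs = saalschutz_lhs(n, a, b, c)
rhs = rising(c - a, n) * rising(c - b, n) / (rising(c, n) * rising(c - a - b, n))
```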

The Pfaff–Saalschütz identity can be written as
(c)_{n}(c+a+b)_{n} _{3}F_{2}(−n,−a,−b; c,1−a−b−c−n; 1) = (c+a)_{n}(c+b)_{n}.

This is a polynomial identity in a, b, c. Dougall (1907) took the view
that both sides of this equation are polynomials of degree n in a.
Therefore, the identity is true if both sides are equal for n+1 distinct values
of a. By the same method he proved a more general identity:
_{7}F_{6}(a, 1+a/2, −b, −c, −d, −e, −n; a/2, 1+a+b, 1+a+c, 1+a+d, 1+a+e, 1+a+n; 1)
= (1+a)_{n}(1+a+b+c)_{n}(1+a+b+d)_{n}(1+a+c+d)_{n} / ((1+a+b)_{n}(1+a+c)_{n}(1+a+d)_{n}(1+a+b+c+d)_{n}),
where 1+2a+b+c+d+e+n=0 and n is a positive integer.
Taking the limit n→∞ we get
_{5}F_{4}(a, 1+a/2, −b, −c, −d; a/2, 1+a+b, 1+a+c, 1+a+d; 1)
= Γ(1+a+b)Γ(1+a+c)Γ(1+a+d)Γ(1+a+b+c+d) / (Γ(1+a)Γ(1+a+b+c)Γ(1+a+b+d)Γ(1+a+c+d))
when ℜ(a+b+c+d+1) > 0. Now, take d=−a/2 to get
Dixon's summation formula:
_{3}F_{2}(a,−b,−c; 1+a+b,1+a+c; 1)
= Γ(1+a/2)Γ(1+a+b)Γ(1+a+c)Γ(1+a/2+b+c) / (Γ(1+a)Γ(1+a/2+b)Γ(1+a/2+c)Γ(1+a+b+c)).
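Taking c to be a positive integer n makes the series terminate, so Dixon's formula can be checked numerically; a sketch (names ours, standard library only):

```python
import math

def dixon_lhs(a, b, n):
    """Terminating series 3F2(a, -b, -n; 1+a+b, 1+a+n; 1)."""
    term, total = 1.0, 1.0
    for k in range(n):
        term *= (a + k) * (-b + k) * (-n + k) / ((1 + a + b + k) * (1 + a + n + k) * (k + 1))
        total += term
    return total

def dixon_rhs(a, b, c):
    g = math.gamma
    return (g(1 + a / 2) * g(1 + a + b) * g(1 + a + c) * g(1 + a / 2 + b + c)) / \
           (g(1 + a) * g(1 + a / 2 + b) * g(1 + a / 2 + c) * g(1 + a + b + c))

a, b = 0.3, 0.45
```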

2.5 Contour integral representations
A more general integral representation of the _{2}F_{1} hypergeometric
function is the loop integral
_{2}F_{1}(a,b;c;x) = (Γ(c)Γ(1+b−c)/(2πiΓ(b))) ∫_{0}^{(1+)} t^{b−1}(t−1)^{c−b−1}(1−xt)^{−a} dt, ℜb > 0.
The contour starts and terminates at t=0 and encircles the point t=1 in the
positive direction. The point 1/x should be outside the contour. The many-valued
functions of the integrand assume their principal branches: arg(1−xt) tends to zero
when x→0, and arg t, arg(t−1) are zero at the point
where the contour cuts the positive real axis (to the right of 1). Observe that
no condition on c is needed, whereas in (26) we need ℜ(c−b) > 0. The proof
of the above representation runs as for (26), with the help of the corresponding
loop integral for the beta function.
An alternative representation involves a contour encircling the point 0:
_{2}F_{1}(a,b;c;x) = (Γ(c)Γ(1−b)/(2πiΓ(c−b))) ∫_{1}^{(0+)} (−t)^{b−1}(1−t)^{c−b−1}(1−xt)^{−a} dt, ℜc > ℜb.

Using the double-loop (or Pochhammer's) contour integral one can derive the following
representation:
(1/Γ(c)) _{2}F_{1}(a,b;c;x) = −(e^{−iπc}/(4Γ(b)Γ(c−b) sin(πb) sin(π(c−b)))) ∫_{P}^{(1+,0+,1−,0−)} t^{b−1}(1−t)^{c−b−1}(1−xt)^{−a} dt.
Here we have the following conditions:
|arg(1−x)| < π, arg t = arg(1−t) = 0 at the starting point P
of the contour, and (1−xt)^{−a}=1 when x=0. Note that there
are no conditions on a, b, or c.
2.6 The hypergeometric differential equation
Let us introduce the differential operator ϑ = x d/dx.
We have
ϑ(ϑ+c−1) x^{n} = n(n+c−1) x^{n}.
Hence
ϑ(ϑ+c−1) _{2}F_{1}(a,b;c;x) = x(ϑ+a)(ϑ+b) _{2}F_{1}(a,b;c;x).
In explicit form this reads
x(1−x)F′′+[c−(a+b+1)x]F′−abF=0,  (29)
F=F(a,b;c;x)=_{2}F_{1}(a,b;c;x).

This is the hypergeometric differential equation,
which was given by Gauss.
It is easy to show that, in addition to F(a,b;c;x), a second
solution of (29) is given by
x^{1−c}F(a−c+1,b−c+1; 2−c;x).
When c=1 this does not give a new solution; in general,
the second solution of (29) has the form
PF(a,b;c;x)+Qx^{1−c}F(a−c+1,b−c+1; 2−c;x),  (30)
where P and Q are independent of x.
Next we observe that, with the help of (29) and (30),
we can express a hypergeometric function with argument
1−x or 1/x in terms of functions with argument x. For example,
when in (29) we introduce a new variable x′=1−x,
we obtain a hypergeometric differential equation, but now with parameters
a, b and a+b−c+1. Hence, besides the solutions in (30),
we have F(a,b;a+b−c+1;1−x) as a solution as well. Any three solutions
must be linearly dependent. Therefore we get
F(a,b;a+b−c+1;1−x) = PF(a,b;c;x)+Qx^{1−c}F(a−c+1,b−c+1; 2−c;x).
To find P and Q we substitute x=0 and x=1.
If we also use Pfaff's and Euler's transformations we
can get the following list of relations:
F(a,b;c;x) = A F(a,b;a+b−c+1;1−x) + B(1−x)^{c−a−b}F(c−a,c−b;c−a−b+1;1−x)  (31)
F(a,b;c;x) = C(−x)^{−a}F(a,1−c+a;1−b+a;1/x) + D(−x)^{−b}F(b,1−c+b;1−a+b;1/x)  (32)
F(a,b;c;x) = C(1−x)^{−a}F(a,c−b;a−b+1;1/(1−x)) + D(1−x)^{−b}F(b,c−a;b−a+1;1/(1−x))  (33)
F(a,b;c;x) = A x^{−a}F(a,a−c+1;a+b−c+1;1−1/x) + B x^{a−c}(1−x)^{c−a−b}F(c−a,1−a;c−a−b+1;1−1/x).  (34)

Here
A = Γ(c)Γ(c−a−b)/(Γ(c−a)Γ(c−b)), B = Γ(c)Γ(a+b−c)/(Γ(a)Γ(b)),
C = Γ(c)Γ(b−a)/(Γ(b)Γ(c−a)), D = Γ(c)Γ(a−b)/(Γ(a)Γ(c−b)).

Since Pfaff's formula (27) continues _{2}F_{1}
from |x| < 1 to ℜx < 1/2, formula (31) gives the continuation to
ℜx > 1/2 cut along the real axis from x=1
to x=∞. The cut comes from the branch points of the factor
(1−x)^{c−a−b}. Analogously, (32) holds when |arg(−x)| < π;
(33) holds when |arg(1−x)| < π; and (34) holds when
|arg(1−x)| < π and |arg x| < π.
2.7 The Riemann–Papperitz equation
The hypergeometric differential equation (29) for the function
_{2}F_{1} has three regular singular points, at 0, 1
and ∞, with exponents 0, 1−c; 0, c−a−b; and a, b,
respectively. Its Riemann symbol has the following form:
F = P{ 0, 1, ∞; 0, 0, a; 1−c, c−a−b, b; x },
where each column lists a singular point with its two exponents.
In fact, this equation is the generic equation having only three
regular singularities.
Theorem 9
Any homogeneous linear differential equation of the second order
with at most three singularities, all of which are regular singular points,
can be transformed into the hypergeometric differential equation
(29).
PROOF.
Let us only sketch the proof. First we consider the equation
d^{2}f/dz^{2} + p(z) df/dz + q(z) f = 0
and assume that it has only three finite regular singular points
ξ, η and ζ, with the exponents (α_{1},α_{2}),
(β_{1},β_{2}) and (γ_{1},γ_{2}). Then one finds that
such an equation can always be brought into the form
f′′ + ((1−α_{1}−α_{2})/(z−ξ) + (1−β_{1}−β_{2})/(z−η) + (1−γ_{1}−γ_{2})/(z−ζ)) f′
− (α_{1}α_{2}/((z−ξ)(η−ζ)) + β_{1}β_{2}/((z−η)(ζ−ξ)) + γ_{1}γ_{2}/((z−ζ)(ξ−η))) · ((ξ−η)(η−ζ)(ζ−ξ)/((z−ξ)(z−η)(z−ζ))) f = 0.  (35)

Next we introduce the fractional linear transformation
x = (ζ−η)(z−ξ)/((ζ−ξ)(z−η)),
and also a `gauge transformation' of the function f:
f(z) = ((z−ξ)/(z−η))^{α_{1}} ((z−ζ)/(z−η))^{γ_{1}} F(x).
This transformation moves the singularities to 0, 1 and ∞.
The exponents at these points are
(0,α_{2}−α_{1}), (0,γ_{2}−γ_{1}), (α_{1}+β_{1}+γ_{1}, α_{1}+β_{2}+γ_{1}).
It is easy to check that we arrive at the hypergeometric
differential equation (29) for the function F(x) with the
following parameters:
a=α_{1}+β_{1}+γ_{1}, b=α_{1}+β_{2}+γ_{1}, c=1+α_{1}−α_{2}.
□
Equation (35) is called the Riemann–Papperitz equation.
2.8 Barnes' contour integral for F(a,b;c;x)
The pair of Mellin transformations (direct and inverse)
is defined by
F(s) = ∫_{0}^{∞} x^{s−1}f(x) dx, f(x) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} x^{−s}F(s) ds.

This holds for a suitable class of functions. For example,
Γ(s) = ∫_{0}^{∞} x^{s−1}e^{−x} dx, e^{−x} = (1/(2πi)) ∫_{c−i∞}^{c+i∞} x^{−s}Γ(s) ds, c > 0.
This can be proved by Cauchy's residue theorem. Take
a rectangular contour L with vertices c±iR,
c−(N+1/2)±iR, where N is a positive integer.
The poles of Γ(s) inside this contour are at
s=0,−1,…,−N, where the residues are (−1)^{j}/j! at s=−j.
Now let R and N tend to ∞.
The Mellin transform of the hypergeometric function
is
∫_{0}^{∞} x^{s−1} _{2}F_{1}(a,b;c;−x) dx = (Γ(c)/(Γ(a)Γ(b))) · Γ(s)Γ(a−s)Γ(b−s)/Γ(c−s).

Theorem 10
(Γ(a)Γ(b)/Γ(c)) _{2}F_{1}(a,b;c;x) = (1/(2πi)) ∫_{−i∞}^{i∞} (Γ(a+s)Γ(b+s)Γ(−s)/Γ(c+s)) (−x)^{s} ds,
|arg(−x)| < π. The path of integration is curved, if necessary,
to separate the poles s=−a−n, s=−b−n from the poles s=n, where
n is an integer ≥ 0. (Such a contour can always be drawn
if a and b are not negative integers.)
3 Orthogonal polynomials
3.1 Introduction
In this lecture we discuss general properties of
orthogonal polynomials and the classical orthogonal
polynomials, which turn out to be hypergeometric
orthogonal polynomials. One way to link the hypergeometric
function to orthogonal polynomials is through a formula
of Jacobi. Multiply the hypergeometric equation
by x^{c−1}(1−x)^{a+b−c} and write it in the following
form

$$\frac{d}{dx}\left[x(1-x)\,x^{c-1}(1-x)^{a+b-c}\,y'\right]=ab\,x^{c-1}(1-x)^{a+b-c}\,y.$$

From
$$\frac{d}{dx}\,{}_2F_1\!\left({a,\,b\atop c};x\right)=\frac{ab}{c}\,{}_2F_1\!\left({a+1,\,b+1\atop c+1};x\right),$$

by induction,
$$\frac{d}{dx}\left[x^{k}(1-x)^{k}M\,y^{(k)}\right]=(a+k-1)(b+k-1)\,x^{k-1}(1-x)^{k-1}M\,y^{(k-1)},$$
where M=x^{c−1}(1−x)^{a+b−c}. Then
$$\frac{d^{k}}{dx^{k}}\left[x^{k}(1-x)^{k}M\,y^{(k)}\right]=(a)_{k}(b)_{k}\,M\,y.$$

Substitute
$$y^{(k)}=\frac{(a)_{k}(b)_{k}}{(c)_{k}}\,{}_2F_1\!\left({a+k,\,b+k\atop c+k};x\right),$$

to get
$$\frac{d^{k}}{dx^{k}}\left[x^{k}(1-x)^{k}M\,{}_2F_1\!\left({a+k,\,b+k\atop c+k};x\right)\right]=(c)_{k}\,M\,{}_2F_1\!\left({a,\,b\atop c};x\right).$$

Put b=−n, k=n; then
$${}_2F_1\!\left({a,\,-n\atop c};x\right)=\frac{x^{1-c}(1-x)^{c+n-a}}{(c)_{n}}\,\frac{d^{n}}{dx^{n}}\left[x^{c+n-1}(1-x)^{a-c}\right].$$
This is Jacobi's formula.
Set x=(1−y)/2, c=α+1, and a=n+α+β+1
to get
$${}_2F_1\!\left({-n,\,n+\alpha+\beta+1\atop \alpha+1};\frac{1-y}{2}\right)=\frac{(1-y)^{-\alpha}(1+y)^{-\beta}}{(\alpha+1)_{n}\,(-2)^{n}}\,\frac{d^{n}}{dy^{n}}\left[(1-y)^{n+\alpha}(1+y)^{n+\beta}\right].\tag{36}$$
Definition 1
The
Jacobi polynomial of degree n is defined by
$$P^{(\alpha,\beta)}_{n}(x):=\frac{(\alpha+1)_{n}}{n!}\,{}_2F_1\!\left({-n,\,n+\alpha+\beta+1\atop \alpha+1};\frac{1-x}{2}\right).\tag{37}$$
Its orthogonality relation is as follows:
$$\int_{-1}^{+1}P_{n}^{(\alpha,\beta)}(x)\,P_{m}^{(\alpha,\beta)}(x)\,(1-x)^{\alpha}(1+x)^{\beta}\,dx=\frac{2^{\alpha+\beta+1}\,\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{(2n+\alpha+\beta+1)\,\Gamma(n+\alpha+\beta+1)\,n!}\,\delta_{mn}.$$
Formula (36) is called the Rodrigues formula for the Jacobi polynomials.
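The orthogonality relation and the norm can be verified numerically; a sketch with SciPy's `eval_jacobi` (the sample values α=0.5, β=1.5 are arbitrary):

```python
import math
from scipy.integrate import quad
from scipy.special import eval_jacobi

alpha, beta = 0.5, 1.5

def inner(n, m):
    # ∫ P_n P_m (1-x)^α (1+x)^β dx over [-1, 1]
    f = lambda x: (eval_jacobi(n, alpha, beta, x)*eval_jacobi(m, alpha, beta, x)
                   *(1 - x)**alpha*(1 + x)**beta)
    return quad(f, -1.0, 1.0)[0]

def h(n):
    # the norm from the orthogonality relation above
    return (2**(alpha + beta + 1)*math.gamma(n + alpha + 1)*math.gamma(n + beta + 1)
            /((2*n + alpha + beta + 1)*math.gamma(n + alpha + beta + 1)
              *math.factorial(n)))

assert abs(inner(2, 3)) < 1e-8       # orthogonality for n ≠ m
assert abs(inner(3, 3) - h(3)) < 1e-8
```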
3.2 General orthogonal polynomials
Consider the linear space P of polynomials of
the real variable x with real coefficients.
A set of orthogonal polynomials is defined by the interval
(a,b) and by the measure dμ(x)=w(x)dx of orthogonality.
The positive function w(x) with the property that
$$\int_{a}^{b}w(x)\,x^{k}\,dx<\infty,\qquad\forall k=0,1,2,\dots,$$
is called the weight function.
Definition 2
We say that a sequence of polynomials {p_{n}(x)}_{0}^{∞},
where p_{n}(x) has exact degree n, is orthogonal with respect
to the weight function w(x) if
$$\int_{a}^{b}p_{n}(x)\,p_{m}(x)\,w(x)\,dx=h_{n}\,\delta_{mn}.$$

Theorem 3
A sequence of orthogonal polynomials {p_{n}(x)} satisfies
the three-term recurrence relation
$$p_{n+1}(x)=(A_{n}x+B_{n})\,p_{n}(x)-C_{n}\,p_{n-1}(x)\qquad\text{for }n\ge 0,$$
where we set p_{−1}(x)=0. Here A_{n}, B_{n}, and C_{n} are real
constants, n=0,1,2,…, and A_{n−1}A_{n}C_{n} > 0, n=1,2,….
If the leading coefficient of p_{n}(x) is k_{n}, then
$$A_{n}=\frac{k_{n+1}}{k_{n}},\qquad C_{n+1}=\frac{A_{n+1}}{A_{n}}\,\frac{h_{n+1}}{h_{n}}.$$
An important consequence of the recurrence relation is
the Christoffel-Darboux formula.
Theorem 4
Suppose that the p_{n}(x) are normalized so that h_{n}=1.
Then
$$\sum_{m=0}^{n}p_{m}(y)\,p_{m}(x)=\frac{k_{n}}{k_{n+1}}\,\frac{p_{n+1}(x)p_{n}(y)-p_{n+1}(y)p_{n}(x)}{x-y}.$$
Corollary 5
$$p_{n+1}'(x)\,p_{n}(x)-p_{n+1}(x)\,p_{n}'(x)>0\qquad\forall x.$$
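The Christoffel-Darboux formula is easy to test numerically, e.g. for the orthonormalized Legendre polynomials (w(x)=1 on [−1,1], h_n=1); a sketch with NumPy:

```python
import math
import numpy as np
from numpy.polynomial import legendre as L

def p(n, x):
    # orthonormal Legendre polynomials: sqrt((2n+1)/2) * P_n(x)
    return math.sqrt((2*n + 1)/2)*L.legval(x, [0]*n + [1])

def k(n):
    # leading coefficient of p_n: sqrt((2n+1)/2) * (2n)! / (2^n (n!)^2)
    return math.sqrt((2*n + 1)/2)*math.factorial(2*n)/(2**n*math.factorial(n)**2)

n, x, y = 5, 0.3, -0.7
lhs = sum(p(m, x)*p(m, y) for m in range(n + 1))
rhs = k(n)/k(n + 1)*(p(n + 1, x)*p(n, y) - p(n + 1, y)*p(n, x))/(x - y)
assert abs(lhs - rhs) < 1e-12
```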
3.3 Zeros of orthogonal polynomials
Theorem 6
Suppose that {p_{n}(x)} is a sequence of orthogonal
polynomials with respect to the weight function w(x)
on the interval [a,b]. Then p_{n}(x) has n simple
zeros in [a,b].
Theorem 7
The zeros of p_{n}(x) and p_{n+1}(x) separate each other.
3.4 Gauss quadrature
Theorem 8
There are positive numbers λ_{1},…,λ_{n}
such that for every polynomial f(x) of degree at most
2n−1
$$\int_{a}^{b}f(x)\,w(x)\,dx=\sum_{j=1}^{n}\lambda_{j}\,f(x_{j}),$$
where x_{j}, j=1,…,n, are the zeros of the polynomial
p_{n}(x) from the set of polynomials orthogonal
with respect to the weight function w(x), and the
λ_{j} have the form
$$\lambda_{j}=\int_{a}^{b}\frac{p_{n}(x)\,w(x)}{p_{n}'(x_{j})\,(x-x_{j})}\,dx.$$

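For the Legendre weight (w(x)=1 on [−1,1]) this quadrature rule is built into NumPy; a short illustration, where n=4 nodes integrate a degree-6 polynomial exactly:

```python
import numpy as np

# Gauss-Legendre nodes x_j and weights λ_j; exact for degree <= 2n-1 = 7
nodes, weights = np.polynomial.legendre.leggauss(4)
f = lambda x: 7*x**6 - 3*x**2 + 1
approx = weights @ f(nodes)
exact = 2.0 - 2.0 + 2.0            # ∫_{-1}^{1} f = 7*(2/7) - 3*(2/3) + 2
assert abs(approx - exact) < 1e-12
assert np.all(weights > 0)         # the λ_j are positive, as the theorem states
```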
3.5 Classical orthogonal polynomials
The orthogonal polynomials associated with the names of Jacobi,
Gegenbauer, Chebyshev, Legendre, Laguerre and Hermite
are called the
classical orthogonal polynomials.
The following properties are characteristic of the classical
orthogonal polynomials:
(i) the family {p_{n}′} is also an orthogonal system;
(ii) p_{n} satisfies a second order linear differential
equation
$$A(x)\,p_{n}''+B(x)\,p_{n}'+\lambda_{n}\,p_{n}=0,$$
where A and B do not depend on n and λ_{n}
does not depend on x;
(iii) there is a
Rodrigues formula of the form
$$p_{n}(x)=\frac{1}{K_{n}\,w(x)}\left(\frac{d}{dx}\right)^{n}\left[w(x)\,X^{n}(x)\right],$$
where X is a polynomial in x with coefficients not depending on n,
and K_{n} does not depend on x.
As was said earlier, all classical orthogonal polynomials are, in fact,
hypergeometric polynomials, in the sense that they can be expressed
in terms of the hypergeometric function. With the Jacobi polynomials
expressed by (37), all the others arise either as special cases
or as limits in which the hypergeometric function degenerates into
the confluent hypergeometric function.
Gegenbauer polynomials:
$$C_{n}^{\gamma}(x)=\frac{(2\gamma)_{n}}{(\gamma+\frac12)_{n}}\,P_{n}^{(\gamma-\frac12,\,\gamma-\frac12)}(x),\tag{38}$$
Chebyshev polynomials:
$$T_{n}(x)=\frac{n!}{(\frac12)_{n}}\,P_{n}^{(-\frac12,\,-\frac12)}(x),\tag{39}$$
Legendre polynomials:
$$P_{n}(x)=P_{n}^{(0,0)}(x),\tag{40}$$
Laguerre polynomials:
$$L^{\alpha}_{n}(x)=\frac{(\alpha+1)_{n}}{n!}\,{}_1F_1(-n;\alpha+1;x);\tag{41}$$
Hermite polynomials:
$$H_{2n}(x)=\frac{(-1)^{n}(2n)!}{n!}\,{}_1F_1\!\left(-n;\tfrac12;x^{2}\right),\tag{42}$$
$$H_{2n+1}(x)=\frac{(-1)^{n}(2n+1)!}{n!}\,2x\,{}_1F_1\!\left(-n;\tfrac32;x^{2}\right).\tag{43}$$

3.6 Hermite polynomials
Hermite polynomials are orthogonal on (−∞,+∞) with
e^{−x²} as the weight function.
This weight function is its own
Fourier transform:
$$e^{-x^{2}}=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}e^{-t^{2}}e^{2ixt}\,dt.\tag{44}$$
Hermite polynomials can be defined by the Rodrigues formula:
$$H_{n}(x)=(-1)^{n}e^{x^{2}}\,\frac{d^{n}e^{-x^{2}}}{dx^{n}}.$$

It is easy to check that H_{n}(x) is a polynomial of degree n.
If we repeatedly differentiate (44) we get
$$\frac{d^{n}e^{-x^{2}}}{dx^{n}}=\frac{(2i)^{n}}{\sqrt{\pi}}\int_{-\infty}^{\infty}e^{-t^{2}}t^{n}e^{2ixt}\,dt.$$
Hence
$$H_{n}(x)=\frac{(-2i)^{n}e^{x^{2}}}{\sqrt{\pi}}\int_{-\infty}^{\infty}e^{-t^{2}}t^{n}e^{2ixt}\,dt.$$

It is easy now to prove the orthogonality property,
$$\int_{-\infty}^{\infty}e^{-x^{2}}H_{n}(x)H_{m}(x)\,dx=2^{n}n!\sqrt{\pi}\,\delta_{mn},$$
using the Rodrigues formula and integrating by parts.
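This orthogonality relation, including the norm 2^n n!√π, can be confirmed with Gauss-Hermite quadrature; a sketch using NumPy's physicists' Hermite module:

```python
import math
import numpy as np
from numpy.polynomial import hermite as H

# Gauss-Hermite nodes/weights absorb the weight e^{-x^2}; 20 nodes are
# exact for polynomial integrands up to degree 39
x, w = H.hermgauss(20)

def h(n, t):
    return H.hermval(t, [0]*n + [1])   # physicists' H_n

for n in range(5):
    for m in range(5):
        val = w @ (h(n, x)*h(m, x))
        expect = 2**n*math.factorial(n)*math.sqrt(math.pi) if n == m else 0.0
        assert abs(val - expect) < 1e-6
```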
The Hermite polynomials have a simple generating
function
$$\sum_{n=0}^{\infty}\frac{H_{n}(x)}{n!}\,r^{n}=e^{2xr-r^{2}}.\tag{45}$$
The recurrence relation has the form
$$H_{n+1}(x)-2xH_{n}(x)+2nH_{n-1}(x)=0.$$

From the integral representation we can derive
the Poisson kernel for the Hermite polynomials
$$\sum_{n=0}^{\infty}\frac{H_{n}(x)H_{n}(y)}{2^{n}n!}\,r^{n}=(1-r^{2})^{-1/2}\,e^{[2xyr-(x^{2}+y^{2})r^{2}]/(1-r^{2})}.$$

The following integral equation for |r| < 1 can be derived
from the Poisson kernel by using orthogonality:
$$\frac{1}{\sqrt{\pi(1-r^{2})}}\int_{-\infty}^{\infty}e^{[2xyr-(x^{2}+y^{2})r^{2}]/(1-r^{2})}\,e^{-y^{2}}H_{n}(y)\,dy=H_{n}(x)\,r^{n}.$$
Let r→ i and we have, at least formally,
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{ixy}\,e^{-y^{2}/2}H_{n}(y)\,dy=i^{n}\,e^{-x^{2}/2}H_{n}(x).$$
Hence, e^{−x²/2}H_{n}(x) is an eigenfunction of the
Fourier transform with eigenvalue i^{n}. This can be proved
by using the Rodrigues formula for H_{n}(x).
4 Separation of variables and special functions
4.1 Introduction
This lecture is about some applications of special
functions. It will also give a partial answer to the question:
where do special functions come from?
Special functions usually appear when solving linear partial differential
equations (PDEs), like the heat equation, or when solving spectral
problems arising in quantum mechanics, like finding eigenfunctions of
a Schrödinger operator. Many equations of this kind, including
many PDEs of mathematical physics, can be solved by
the method of Separation of Variables (SoV). We will give
an introduction to this very powerful method and also see
how it fits into the theory of special functions.
Definition 1
Separation of Variables M is a transformation which
brings a function ψ(x_{1},…,x_{n}) of many variables
into a factorized form
$$M:\ \psi\to\phi_{1}(y_{1})\cdots\phi_{n}(y_{n}).$$
The functions ϕ_{j}(y_{j}) are usually some known
special functions of one variable. The transformation M
could be a change of variables from {x} to {y}, but
could also be an
integral transform. Usually the function
ψ satisfies a simple linear PDE.
4.2 SoV for the heat equation
Let the complex valued function q(x,t) satisfy the
heat equation
$$iq_{t}+q_{xx}=0,\qquad x,t\in[0,\infty),\tag{46}$$
$$q(x,0)=q_{1}(x),\qquad q(0,t)=q_{2}(t),$$
where q_{1}(x) and q_{2}(t) are given functions decaying sufficiently
fast for large x and large t, respectively.
Divide (46) by q(x,t) and rewrite it as
$$iq_{t}=k^{2}q,\qquad q_{xx}=-k^{2}q,$$
which is a separation of variables, since there is a factorized solution
of the last two equations:
$$q_{k}(x,t)=e^{-ikx-ik^{2}t}.$$
Notice that this gives a solution of (46) for every k ∈ C. Because
our equation is linear, the following integral is also a solution of
equation (46):
$$q(x,t)=\int_{L}e^{-ikx-ik^{2}t}\rho(k)\,dk,\tag{47}$$
where L is some contour in the complex k-plane, and
the function ρ(k) (`spectral data') can be expressed
in terms of certain integral transforms of q_{1}(x)
and q_{2}(t), in order to satisfy the initial data.
This is just a simple demonstration of the method of
Separation of Variables, also called the Ehrenpreis principle
when applied to this kind of problem. It is interesting
to note that all solutions of (46) can be given
by (47) with the appropriate choice of the contour
L and the function ρ(k).
4.3 SoV for a quantum problem
Now consider another simple problem that comes
from quantum mechanics, namely the linear spectral
problem for the stationary Schrödinger operator
describing bound states of the 2-dimensional
harmonic oscillator. That is, consider ψ(x_{1},x_{2}) ∈ L^{2}(R^{2}) which is an eigenfunction of the following
differential operator:
$$H=-\left(\frac{\partial^{2}}{\partial x_{1}^{2}}+\frac{\partial^{2}}{\partial x_{2}^{2}}\right)+x_{1}^{2}+x_{2}^{2},\qquad H\psi(x_{1},x_{2})=h\,\psi(x_{1},x_{2}).$$
This problem can be solved by the straightforward application
of the method of SoV without any intermediate transformations.
We get
$$H_{1}\psi=\left(-\frac{\partial^{2}}{\partial x_{1}^{2}}+x_{1}^{2}\right)\psi(x_{1},x_{2})=h_{1}\,\psi(x_{1},x_{2}),$$
$$H_{2}\psi=\left(-\frac{\partial^{2}}{\partial x_{2}^{2}}+x_{2}^{2}\right)\psi(x_{1},x_{2})=h_{2}\,\psi(x_{1},x_{2}).$$
Then
$$\psi(x_{1},x_{2})=\psi(x_{1})\,\psi(x_{2}),$$
where
$$\left(-\frac{\partial^{2}}{\partial x_{i}^{2}}+x_{i}^{2}\right)\psi(x_{i})=h_{i}\,\psi(x_{i}).$$
The square-integrable solutions are expressed in terms of the Hermite
polynomials:
$$\psi(x_{i})\in L^{2}(\mathbb R)\iff\psi(x_{i})=e^{-x_{i}^{2}/2}H_{n_{i}}(x_{i}),\quad h_{i}=2n_{i}+1,\quad n_{i}=0,1,2,\dots.$$
Notice that h=h_{1}+h_{2}=2(n_{1}+n_{2}+1).
Hence, we get the basis in L^{2}(R^{2}) of the form
$$\psi_{n_{1}n_{2}}(x_{1},x_{2})=e^{-(x_{1}^{2}+x_{2}^{2})/2}H_{n_{1}}(x_{1})H_{n_{2}}(x_{2}).$$

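That each factor e^{−x²/2}H_n(x) solves the one-dimensional oscillator equation with eigenvalue 2n+1 can be checked symbolically; a sketch with sympy:

```python
import sympy as sp

x = sp.symbols('x')
# psi_n(x) = e^{-x^2/2} H_n(x) solves (-d^2/dx^2 + x^2) psi = (2n+1) psi
for n in range(4):
    psi = sp.exp(-x**2/2)*sp.hermite(n, x)
    assert sp.simplify(-sp.diff(psi, x, 2) + x**2*psi - (2*n + 1)*psi) == 0
```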
The functions {ψ_{n_{1}n_{2}}} constitute an orthogonal set
of functions in L^{2}(R^{2}):
$$\int_{\mathbb R^{2}}\psi_{n_{1}n_{2}}(x_{1},x_{2})\,\psi_{m_{1}m_{2}}(x_{1},x_{2})\,dx_{1}dx_{2}=a_{n_{1}n_{2}}\,\delta_{n_{1}m_{1}}\delta_{n_{2}m_{2}}.$$

Every function f ∈ L^{2}(R^{2}) can be decomposed into a series
with respect to these basis functions:
$$f(x_{1},x_{2})=\sum_{m,n=0}^{\infty}f_{mn}\,\psi_{mn}(x_{1},x_{2}).$$
4.4 SoV and integrability
As we have seen, SoV can provide a very constructive way of
finding general solutions of some PDEs. Of course, the PDE
in question should possess a special property to allow
application of the above technique. This extra quality
is called integrability. In very rough terms it means the
existence of several commuting operators, like
H_{1} and H_{2} above. This notion will become
clearer in the next example.
Now we can give a slightly more precise definition of SoV,
as applied to spectral problems like the one in the
previous subsection.
Definition 2
SoV is a transformation of a multidimensional spectral
problem into a set of one-dimensional spectral problems.
4.5 Another SoV for the quantum problem
It might be surprising at first, but SoV is not unique.
To demonstrate this, let us construct another solution
of the same oscillator problem.
Consider the function Θ(u):
$$\Theta(u)=\frac{x_{1}^{2}}{u-a}+\frac{x_{2}^{2}}{u-b}-1=-\frac{(u-u_{1})(u-u_{2})}{(u-a)(u-b)},\tag{48}$$
where u,a,b ∈ R are some parameters, and u_{1}, u_{2}
are the zeros of Θ(u).
Taking the residues at u=a and u=b on both sides of (48), we have
$$x_{1}^{2}=\frac{(u_{1}-a)(u_{2}-a)}{b-a},\qquad x_{2}^{2}=\frac{(u_{1}-b)(u_{2}-b)}{a-b}.\tag{49}$$
The variables u_{1} and u_{2} are called elliptic coordinates
in R^{2}, because by definition they satisfy the equation
$$\frac{x_{1}^{2}}{u-a}+\frac{x_{2}^{2}}{u-b}=1\tag{50}$$
with the roots u_{1},u_{2} given by the equations
$$u_{1}+u_{2}=a+b+x_{1}^{2}+x_{2}^{2},\qquad u_{1}u_{2}=ab+bx_{1}^{2}+ax_{2}^{2}.$$
Let all variables satisfy the inequalities
$$a<u_{1}<b<u_{2}<\infty.\tag{51}$$
Now introduce the functions
$$\psi_{\vec\lambda}(x_{1},x_{2}):=x_{1}^{k_{1}}x_{2}^{k_{2}}\,e^{-(x_{1}^{2}+x_{2}^{2})/2}\prod_{i=1}^{n}\Theta(\lambda_{i}),$$
where k_{i}=0,1 and λ_{1},…,λ_{n} ∈ R are indeterminates.
Theorem 3
The functions ψ_{→λ}(x_{1},x_{2})
are eigenfunctions of the operator H iff the {λ_{i}} satisfy
the following algebraic equations:
$$\sum_{j\ne i}\frac{1}{\lambda_{i}-\lambda_{j}}-\frac12+\frac{k_{1}/2+1/4}{\lambda_{i}-a}+\frac{k_{2}/2+1/4}{\lambda_{i}-b}=0,\qquad i=1,\dots,n.\tag{52}$$
The parameters λ_{i} have the
following properties (generalized Stieltjes theorem):
i) they are simple, ii) they lie on the real axis inside
the intervals (51), iii) they are the
critical points of the function G,
$$G(\lambda_{1},\dots,\lambda_{n})=\exp\!\left(-\frac12(\lambda_{1}+\dots+\lambda_{n})\right)\prod_{p=1}^{n}(\lambda_{p}-a)^{k_{1}/2+1/4}(\lambda_{p}-b)^{k_{2}/2+1/4}\prod_{r>p}(\lambda_{r}-\lambda_{p}).\tag{53}$$

This Theorem gives another basis for the oscillator problem.
In this case we have two commuting operators G_{1} and G_{2}:
$$G_{1}=-\frac{\partial^{2}}{\partial x_{1}^{2}}+x_{1}^{2}+\frac{1}{a-b}\left(x_{1}\frac{\partial}{\partial x_{2}}-x_{2}\frac{\partial}{\partial x_{1}}\right)^{2},$$
$$G_{2}=-\frac{\partial^{2}}{\partial x_{2}^{2}}+x_{2}^{2}-\frac{1}{a-b}\left(x_{1}\frac{\partial}{\partial x_{2}}-x_{2}\frac{\partial}{\partial x_{1}}\right)^{2},$$
which are diagonal on this basis. Notice that G_{1}+G_{2}=H.
5 Integrable systems and special functions
5.1 Introduction
Separation of variables (SoV) for linear partial differential
operators of two variables D_{x_{1},x_{2}} can be defined
by the following procedure. Assume that by some
transformation we can bring the operator
D_{x_{1},x_{2}} into the form
$$D_{x_{1},x_{2}}\to D_{y_{1},y_{2}}=\frac{1}{\phi_{1}(y_{1})-\phi_{2}(y_{2})}\left(D^{(1)}_{y_{1}}-D^{(2)}_{y_{2}}\right),\tag{54}$$
where ϕ_{i}(y_{i}) are some functions of one variable and
D^{(i)}_{y_{i}} are some ordinary differential operators.
It could be done by changing variables (a coordinate transform)
{x}→ {y}, but it could also involve an
integral transform.
Then we can introduce another operator G_{y_{1},y_{2}} such that
$$D^{(1)}_{y_{1}}-\phi_{1}(y_{1})\,D_{y_{1},y_{2}}=G_{y_{1},y_{2}}=D^{(2)}_{y_{2}}-\phi_{2}(y_{2})\,D_{y_{1},y_{2}}.$$

Notice that D and G commute.
The operator G that appears in the procedure of SoV
is called the operator of a constant of separation.
The above definition is
easily extended to the case of more variables. The essential
step is to keep bringing the operator into the `separable form'
(54),
which allows one to introduce more and more operators
of constants of separation. If one can break the operator
down to a set of one-variable operators, then the separation of variables
is done. This obviously requires that the number of
operators G be equal to the number of variables
minus 1,
and also that they commute among themselves
and with the operator D. The latter condition defines
an
integrable system. So, we can say that a necessary
condition for an operator to be separable is that it
can be supplemented by a full set of mutually commuting
operators (G); in other words, the operator has
to belong to an integrable family of operators.
As we have seen in the previous Lecture, special functions
of one variable often appear when one separates variables
in linear PDEs in an attempt to find a general solution in
terms of a large set of factorized partial (or separated)
solutions. Usually, the completeness of the set of
separated solutions can also be proved, so that we
can indeed expand any solution of our equation,
which is a multivariable `special function', into
a basis of separated (one-variable) special
functions.
There are two aspects of this procedure.
First, the separated functions of one variable
will satisfy ODEs, so that we can,
in principle, `classify' the initial multivariable
special function by the procedure of separation
of variables and by the type of the resulting
ODEs. It is clear that,
when some regularity conditions are imposed on the
class of transformations allowed in
a SoV, one should expect a good correspondence
between the complexity of the multivariable function
and that of the corresponding one-variable special functions.
In the example of the isotropic harmonic oscillator, we had
a trivial separation of variables first (in Cartesian
coordinates), which gave us a basis as a product
of Hermite polynomials. Hence, we might conclude
that the operator H is, in a sense, a two-dimensional
analogue of the hypergeometric differential operator,
because one of its separated bases is given in terms
of hypergeometric functions (Hermite polynomials).
Curiously, the second separation, in elliptic coordinates,
led to functions of Heun type, which lie beyond
the hypergeometric class. The explanation of this
seeming contradiction is that the operator H is
`degenerate' in the sense that it separates
in many coordinate systems. To avoid this
degeneracy, one could perturb the operator by
adding terms that break
one separation but still allow the other.
Generically, therefore, if an operator can be separated,
it usually separates by a unique transformation, leading
to a unique set of separated special functions
of one variable.
The second aspect of the problem is understanding
the sufficient
conditions for separability. Which integrable operators
can be separated and which cannot? A closely related
question is: what class of transformations should
be allowed when trying to separate an operator?
In order to demonstrate the point about the class
of transformations, take the square of the
Laplace operator:
$$\left(\frac{\partial^{2}}{\partial x_{1}^{2}}+\frac{\partial^{2}}{\partial x_{2}^{2}}\right)^{2}.$$
Of course, this operator is integrable, although
there is a Theorem saying that one cannot
separate this operator by any coordinate
transform {x}→ {y}:
$$y_{1}=y_{1}(x_{1},x_{2}),\qquad y_{2}=y_{2}(x_{1},x_{2}).$$
This means that although one can find some partial
solutions in factorized form, they will never
form a basis. There is no such statement
if one allows integral transforms, which means
that this operator might still be separable
in a more general sense.
It is interesting to note that the operators that are separable
through a change of coordinates, although very important
in applications, constitute a very small subclass of all
separable operators. Correspondingly, the class of special
functions of many variables related to integrable systems
is much larger than its
subclass that is reducible to one-variable special
functions by a coordinate change of variables.
Below we give an example of an integrable system
that cannot be separated by a coordinate change of variables,
but is neatly separable by a special integral transform.
5.2 Calogero-Sutherland system
In this subsection
an integral operator M is constructed which performs a separation
of variables for the 3-particle quantum Calogero-Sutherland (CS) model.
Under the action of M the CS eigenfunctions (Jack polynomials
for the root system A_{2}) are transformed to the factorized form
φ(y_{1})φ(y_{2}), where φ(y) is a trigonometric polynomial
of one variable
expressed in terms of the _{3}F_{2} hypergeometric series. The
inversion of M produces a new integral representation for the
A_{2} Jack polynomials.
The set of commuting differential operators
defining the integrable system called the (3-particle) Calogero-Sutherland
model is generated by the following partial differential operators:
$$-(\partial_{1}\partial_{2}+\partial_{1}\partial_{3}+\partial_{2}\partial_{3})-g(g-1)\left(\sin^{-2}q_{12}+\sin^{-2}q_{13}+\sin^{-2}q_{23}\right),$$
$$i\partial_{1}\partial_{2}\partial_{3}+ig(g-1)\left(\sin^{-2}q_{23}\,\partial_{1}+\sin^{-2}q_{13}\,\partial_{2}+\sin^{-2}q_{12}\,\partial_{3}\right),$$
$$q_{ij}=q_{i}-q_{j},\qquad\partial_{i}=\frac{\partial}{\partial q_{i}},$$
or by the equivalent set, acting on Laurent polynomials in the
variables t_{j}=e^{2iq_{j}}, j=1,2,3:


 

$$-(\partial_{1}\partial_{2}+\partial_{1}\partial_{3}+\partial_{2}\partial_{3})-g\left[\cot q_{12}(\partial_{1}-\partial_{2})+\cot q_{13}(\partial_{1}-\partial_{3})+\cot q_{23}(\partial_{2}-\partial_{3})\right],$$
$$i\partial_{1}\partial_{2}\partial_{3}-ig\left[\cot q_{12}(\partial_{1}-\partial_{2})\partial_{3}+\cot q_{13}(\partial_{1}-\partial_{3})\partial_{2}+\cot q_{23}(\partial_{2}-\partial_{3})\partial_{1}\right]+2ig^{2}\left[(1+\cot q_{12}\cot q_{13})\partial_{1}+(1-\cot q_{12}\cot q_{23})\partial_{2}+(1+\cot q_{13}\cot q_{23})\partial_{3}\right],$$
the vacuum function being
$$\Omega(\vec q\,)=(\sin q_{12}\sin q_{13}\sin q_{23})^{g}.\tag{55}$$
Their eigenvectors Ψ_{→n}, resp. J_{→n},
$$\Psi_{\vec n}(\vec q\,)=\Omega(\vec q\,)\,J_{\vec n}(\vec q\,),\tag{56}$$
are parametrized
by triplets of integers {n_{1} ≤ n_{2} ≤ n_{3}} ∈ Z^{3},
the corresponding eigenvalues being
$$h_{1}=2(m_{1}+m_{2}+m_{3}),\quad h_{2}=4(m_{1}m_{2}+m_{1}m_{3}+m_{2}m_{3}),\quad h_{3}=8m_{1}m_{2}m_{3},\tag{57}$$
where
$$m_{1}=n_{1}-g,\qquad m_{2}=n_{2},\qquad m_{3}=n_{3}+g.\tag{58}$$
5.3 Integral transform
We will denote the separating operator acting on
Ψ_{→n} by K, and the one acting on
the Jack polynomials J_{→n} by M.
To describe both operators, let us introduce
the following notation:
$$x_{1}=q_{1}-q_{3},\qquad x_{2}=q_{2}-q_{3},\qquad Q=q_{3},$$
$$x_{\pm}=x_{1}\pm x_{2},\qquad y_{\pm}=y_{1}\pm y_{2}.$$
We shall study the action of K locally, assuming
that q_{1} > q_{2} > q_{3} and hence x_{+} > x_{−}.
The operator K:Ψ(q_{1},q_{2},q_{3})→~Ψ(y_{1},y_{2};Q) is defined
as an integral operator
$$\tilde\Psi(y_{1},y_{2};Q)=\int_{y_{-}}^{y_{+}}d\xi\,K(y_{1},y_{2};\xi)\,\Psi\!\left(\frac{y_{+}+\xi}{2}+Q,\ \frac{y_{+}-\xi}{2}+Q,\ Q\right)\tag{59}$$
with the kernel
$$K=\kappa\left[\frac{\sin\frac{\xi+y_{-}}{2}\,\sin\frac{\xi-y_{-}}{2}\,\sin\frac{y_{+}+\xi}{2}\,\sin\frac{y_{+}-\xi}{2}}{\sin y_{1}\,\sin y_{2}\,\sin\xi}\right]^{g-1}\tag{60}$$
where κ is a normalization coefficient to be fixed later.
It is assumed in (59) and (60)
that y_{−} < x_{−}=ξ < y_{+}=x_{+}.
The integral converges when g > 0, which will always be assumed
henceforth.
The motivation for such a choice of K takes its origin
from considering the problem in the classical limit
(g→∞), where there exists an effective prescription
for constructing a separation of variables for an integrable system
from the poles of the so-called Baker-Akhiezer function.
Theorem 1
Let H_{k}Ψ_{n_{1}n_{2}n_{3}}=h_{k}Ψ_{n_{1}n_{2}n_{3}}.
Then the function ~Ψ_{→n}=KΨ_{→n}
satisfies the differential equations
$$Q\,\tilde\Psi_{\vec n}=0,\qquad Y_{j}\,\tilde\Psi_{\vec n}=0,\quad j=1,2,\tag{61}$$
where
$$Y_{j}=i\partial_{y_{j}}^{3}+h_{1}\partial_{y_{j}}^{2}-i\left(h_{2}+\frac{3g(g-1)}{\sin^{2}y_{j}}\right)\partial_{y_{j}}-\left(h_{3}+\frac{g(g-1)}{\sin^{2}y_{j}}\,h_{1}+2ig(g-1)(g-2)\,\frac{\cos y_{j}}{\sin^{3}y_{j}}\right).\tag{63}$$

The proof is based on the following proposition.
Proposition 2
The kernel K satisfies the differential equations
$$\left[i\partial^{3}_{y_{j}}+H_{1}^{*}\partial^{2}_{y_{j}}-i\left(H_{2}^{*}+\frac{3g(g-1)}{\sin^{2}y_{j}}\right)\partial_{y_{j}}-\left(H_{3}^{*}+H_{1}^{*}\,\frac{g(g-1)}{\sin^{2}y_{j}}+2ig(g-1)(g-2)\,\frac{\cos y_{j}}{\sin^{3}y_{j}}\right)\right]K=0,$$

where H_{n}^{*} is the Lagrange adjoint of H_{n},
$$\int\phi(q)\,(H\psi)(q)\,dq=\int(H^{*}\phi)(q)\,\psi(q)\,dq,$$
so that
$$H_{2}^{*}=-\partial_{q_{1}}\partial_{q_{2}}-\partial_{q_{1}}\partial_{q_{3}}-\partial_{q_{2}}\partial_{q_{3}}-g(g-1)\left[\sin^{-2}q_{12}+\sin^{-2}q_{13}+\sin^{-2}q_{23}\right],$$
$$H_{3}^{*}=-i\partial_{q_{1}}\partial_{q_{2}}\partial_{q_{3}}-ig(g-1)\left[\sin^{-2}q_{23}\,\partial_{q_{1}}+\sin^{-2}q_{13}\,\partial_{q_{2}}+\sin^{-2}q_{12}\,\partial_{q_{3}}\right].$$

The proof is given by a direct, though tedious, calculation.
To complete the proof of Theorem 5.1, consider
the expressions QKΨ_{→n} and Y_{j}KΨ_{→n},
using the formulas (59) and (60) for K.
The idea is to use
the fact that Ψ_{→n} is an eigenfunction of
H_{k} and to replace h_{k}Ψ_{→n} by H_{k}Ψ_{→n}.
After integration by parts in the variable ξ the operators
H_{k} are replaced by their adjoints H_{k}^{*}, and the result is zero
by virtue of Proposition 5.2.
The following theorem gives the separation of variables.
Theorem 3
The function ~Ψ_{n_{1}n_{2}n_{3}} is factorized:
$$\tilde\Psi_{n_{1}n_{2}n_{3}}(y_{1},y_{2};Q)=e^{ih_{1}Q}\,\psi_{n_{1}n_{2}n_{3}}(y_{1})\,\psi_{n_{1}n_{2}n_{3}}(y_{2}).\tag{64}$$
The factor ψ_{→n}(y) allows further factorization
$$\psi_{\vec n}(y)=(\sin y)^{2g}\,\varphi_{\vec n}(y)\tag{65}$$
where φ_{→n}(y) is a Laurent polynomial in t=e^{2iy}:
$$\varphi_{\vec n}(y)=\sum_{k=n_{1}}^{n_{3}}t^{k}\,c_{k}(\vec n;g).\tag{66}$$
 (66) 
The coefficients c_{k}(→n;g) are rational functions of
k, n_{j} and g. Moreover, φ_{→n}(y)
can be expressed
explicitly in terms of the hypergeometric function _{3}F_{2}
as
$$\varphi_{\vec n}(y)=t^{n_{1}}(1-t)^{1-3g}\,{}_3F_2\!\left({a_{1},\,a_{2},\,a_{3}\atop b_{1},\,b_{2}};t\right)\tag{67}$$
where
$$a_{j}=n_{1}-n_{4-j}+1-(4-j)g,\qquad b_{j}=a_{j}+g.\tag{68}$$
 (68) 
Note that, by virtue of Theorem 5.1, the function
~Ψ_{→n}(y_{1},y_{2};Q) satisfies an ordinary differential
equation in each variable. Since
Qf=0 is a first order differential equation
having a unique, up to a constant factor, solution f(Q)=e^{ih_{1}Q},
the dependence on Q is factorized. However, the differential
equations Y_{j}ψ(y_{j})=0 are of third order and have
three linearly independent solutions. To prove Theorem
5.3 one thus needs to study the ordinary
differential equation



$$\left[i\partial_{y}^{3}+h_{1}\partial_{y}^{2}-i\left(h_{2}+\frac{3g(g-1)}{\sin^{2}y}\right)\partial_{y}-\left(h_{3}+\frac{g(g-1)}{\sin^{2}y}\,h_{1}+2ig(g-1)(g-2)\,\frac{\cos y}{\sin^{3}y}\right)\right]\psi=0\tag{69}$$

and to select its special solution corresponding to ~Ψ.
The proof will take several steps. First, let us eliminate from Ψ and
~Ψ the vacuum factors Ω, see (56),
and ω, respectively:
$$\tilde\Psi(y_{1},y_{2};Q)=\omega(y_{1})\,\omega(y_{2})\,\tilde J(y_{1},y_{2};Q),\qquad\omega(y)=\sin^{2g}y.\tag{70}$$
Conjugating the operator K with the vacuum factors,
$$M=\omega_{1}^{-1}\omega_{2}^{-1}\,K\,\Omega:\ J\to\tilde J,\tag{71}$$
we obtain the integral operator
$$\tilde J(y_{1},y_{2};Q)=\int_{y_{-}}^{y_{+}}d\xi\,M(y_{1},y_{2};\xi)\,J\!\left(\frac{y_{+}+\xi}{2}+Q,\ \frac{y_{+}-\xi}{2}+Q,\ Q\right)\tag{72}$$
with the kernel
$$M(y_{1},y_{2};\xi)=K(y_{1},y_{2};\xi)\,\frac{\Omega\!\left(\frac{y_{+}+\xi}{2}+Q,\ \frac{y_{+}-\xi}{2}+Q,\ Q\right)}{\omega(y_{1})\,\omega(y_{2})}=\kappa\,\sin\xi\,\frac{\left[\sin\frac{\xi+y_{-}}{2}\,\sin\frac{\xi-y_{-}}{2}\right]^{g-1}\left[\sin\frac{y_{+}+\xi}{2}\,\sin\frac{y_{+}-\xi}{2}\right]^{2g-1}}{\left[\sin y_{1}\,\sin y_{2}\right]^{3g-1}}.\tag{73}$$

Proposition 4
Let S be a trigonometric
polynomial in the q_{j}, i.e. a Laurent polynomial in t_{j}=e^{2iq_{j}},
which is symmetric w.r.t. the transposition q_{1}↔ q_{2}.
Then ~S=MS is a trigonometric polynomial symmetric w.r.t.
y_{1}↔ y_{2}.
5.4 Separated equation
To complete the proof of Theorem 5.3
we need to learn more about the separated equation (69).
Eliminating from ψ the vacuum factor ω(y)=sin^{2g}y
via the substitution ψ(y)=φ(y)ω(y), one obtains
$$\left[i\partial_{y}^{3}+(h_{1}+6ig\cot y)\partial_{y}^{2}+\left(-i(h_{2}+12g^{2})+4gh_{1}\cot y+\frac{3ig(3g-1)}{\sin^{2}y}\right)\partial_{y}-(h_{3}+4g^{2}h_{1})-2ig(h_{2}+4g^{2})\cot y+\frac{g(3g-1)h_{1}}{\sin^{2}y}\right]\varphi=0.\tag{74}$$

The change of variable t=e^{2iy} brings the last equation to
the Fuchsian form
$$[\partial_{t}^{3}+w_{1}\partial_{t}^{2}+w_{2}\partial_{t}+w_{3}]\varphi=0\tag{75}$$
where, in particular,
$$w_{2}=\frac{(3g^{2}-3g+1)+\frac12(2g-1)h_{1}+\frac14h_{2}}{t^{2}}+\frac{3g(3g-1)}{(t-1)^{2}}-\frac{g\bigl(9(g-1)+2h_{1}\bigr)}{t(t-1)},$$
$$w_{3}=-\frac{g^{3}+\frac12g^{2}h_{1}+\frac14gh_{2}+\frac18h_{3}}{t^{3}}+\frac12\,\frac{g\bigl((h_{2}+4g^{2})(t-1)-(3g-1)h_{1}\bigr)}{t^{2}(t-1)^{2}}.$$

The points t=0,1,∞ are regular singularities; at t=0
the exponents are
$$\rho\in\{n_{1},\ n_{2}+g,\ n_{3}+2g\},$$
and at t=∞ they are σ, with
$$-\sigma\in\{n_{1}-2g,\ n_{2}-g,\ n_{3}\}.$$

The equation (75) is reduced
by the substitution φ(t)=t^{n_{1}}(1−t)^{1−3g}f(t)
to the standard _{3}F_{2} hypergeometric form
$$[t\partial_{t}(t\partial_{t}+b_{1}-1)(t\partial_{t}+b_{2}-1)-t(t\partial_{t}+a_{1})(t\partial_{t}+a_{2})(t\partial_{t}+a_{3})]f=0,\tag{76}$$
the parameters a_{1}, a_{2}, a_{3}, b_{1}, b_{2} being given by the formulas
(68), which read
$$a_{1}=n_{1}-n_{3}+1-3g,\qquad a_{2}=n_{1}-n_{2}+1-2g,\qquad a_{3}=1-g,$$
$$b_{1}=n_{1}-n_{3}+1-2g,\qquad b_{2}=n_{1}-n_{2}+1-g.$$

Proposition 5
Let the parameters h_{k} be given by (57),
(58) for a
triplet of integers {n_{1} ≤ n_{2} ≤ n_{3}} and g ≠ 1,0,−1,−2,…. Then the equation (75) has
a unique, up to a constant factor, Laurent-polynomial solution
$$\varphi(t)=\sum_{k=n_{1}}^{n_{3}}t^{k}\,c_{k}(\vec n;g),\tag{77}$$
the coefficients c_{k}(→n;g) being rational functions of
k, n_{j} and g.
Proof. Consider first the hypergeometric series
for _{3}F_{2}, which converges for |t| < 1.
Using for a_{j} and b_{j} the
expressions (68), one notes that
a_{j+1}=b_{j}+n_{4−j}−n_{3−j} and therefore
$$\frac{(a_{j+1})_{k}}{(b_{j})_{k}}=\frac{(b_{j}+k)_{n_{4-j}-n_{3-j}}}{(b_{j})_{n_{4-j}-n_{3-j}}}.$$
The expression
$$\frac{(a_{2})_{k}(a_{3})_{k}}{(b_{1})_{k}(b_{2})_{k}}=\frac{(b_{1}+k)_{n_{3}-n_{2}}\,(b_{2}+k)_{n_{2}-n_{1}}}{(b_{1})_{n_{3}-n_{2}}\,(b_{2})_{n_{2}-n_{1}}}=P_{n_{3}-n_{1}}(k)$$
is thus a polynomial in k of degree n_{3}−n_{1}.
So we have
$${}_3F_2(a_{1},a_{2},a_{3};b_{1},b_{2};t)=\sum_{k=0}^{\infty}\frac{(a_{1})_{k}}{k!}\,P_{n_{3}-n_{1}}(k)\,t^{k},$$
from which it follows that
$${}_3F_2(a_{1},a_{2},a_{3};b_{1},b_{2};t)=\tilde P_{n_{3}-n_{1}}(t)\,(1-t)^{3g-1},$$
where ~P_{n_{3}−n_{1}}(t) is a polynomial of degree n_{3}−n_{1}
in t.
To prove Proposition 5.5 it is now sufficient to
notice that the hypergeometric series _{3}F_{2}(a_{1},a_{2},a_{3};b_{1},b_{2};t)
satisfies the same equation (76) as f(t), and
therefore the Laurent polynomial φ_{→n}(t) constructed
above satisfies the equation (75).
The uniqueness follows from the fact that all the linearly
independent solutions to (75) are non-polynomial, which
is seen from the characteristic exponents.
Now everything is ready to finish the proof of Theorem 5.3.
Since the function ~J_{n_{1}n_{2}n_{3}}(y_{1},y_{2};Q) satisfies
(74) in the variables y_{1,2} and is a Laurent polynomial,
it inevitably has the factorized form
$$\tilde J_{n_{1}n_{2}n_{3}}(y_{1},y_{2};Q)=e^{ih_{1}Q}\,\varphi_{n_{1}n_{2}n_{3}}(y_{1})\,\varphi_{n_{1}n_{2}n_{3}}(y_{2})\tag{78}$$
by virtue of Proposition 5.5.
5.5 Integral representation for Jack polynomials
The formula (78) presents an interesting
opportunity to construct a new integral representation of the
Jack polynomial J_{→n} in terms of the _{3}F_{2}
hypergeometric polynomials φ_{→n}(y) constructed above.
To achieve this goal, it is necessary to invert explicitly the
operator M:J→~J.
It is possible to show that this problem (after changing variables)
is equivalent to
the problem of finding an inverse of the following integral transform:

$$\tilde s(\eta_{-})=\int_{\eta_{-}}^{\eta_{+}}d\xi_{-}\,\frac{(\xi_{-}-\eta_{-})^{g-1}}{\Gamma(g)}\,s(\xi_{-}),\tag{79}$$
which is known as the Riemann-Liouville integral of fractional order
g. Its inversion is formally given by changing the sign
of g,
$$s(\xi_{-})=\int_{\xi_{-}}^{\xi_{+}}d\eta_{-}\,\frac{(\eta_{-}-\xi_{-})^{-g-1}}{\Gamma(-g)}\,\tilde s(\eta_{-}),\tag{80}$$
and is called the fractional differentiation operator.
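The action of a Riemann-Liouville integral of this right-sided type can be illustrated numerically on a power function, for which a Beta-integral evaluation gives a closed form; a sketch with SciPy (the test function and parameter values are arbitrary choices):

```python
from math import gamma
from scipy.integrate import quad

# Right-sided RL integral of order g with fixed upper limit b, applied to
# s(xi) = (b-xi)^p; the Beta integral gives
# I_g s(x) = Gamma(p+1)/Gamma(p+g+1) * (b-x)^(p+g)
g, p, b = 0.5, 2.0, 1.0
x0 = 0.2
s = lambda xi: (b - xi)**p
num, _ = quad(lambda xi: (xi - x0)**(g - 1)/gamma(g)*s(xi), x0, b)
exact = gamma(p + 1)/gamma(p + g + 1)*(b - x0)**(p + g)
assert abs(num - exact) < 1e-6
```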
We will not give the details of this calculation, only the
final result. The formula for M^{−1}:~J→ J is
$$J(x_{+},x_{-};Q)=\int_{x_{-}}^{x_{+}}dy_{-}\,\check M(x_{+},x_{-};y_{-})\,\tilde J(x_{+},y_{-};Q),\tag{81}$$
$$\check M=\check\kappa\,\frac{\sin y_{-}\left[\sin\frac{x_{+}+y_{-}}{2}\,\sin\frac{x_{+}-y_{-}}{2}\right]^{3g-1}}{\left[\sin\frac{y_{-}+x_{-}}{2}\,\sin\frac{y_{-}-x_{-}}{2}\right]^{g+1}\left[\sin x_{1}\,\sin x_{2}\right]^{2g-1}},\tag{82}$$
where
$$\check\kappa=\frac{\Gamma(2g)}{2\,\Gamma(-g)\,\Gamma(3g)}.\tag{83}$$
The operators M (and M^{−1}) are normalized by M: 1→ 1.
For the kernel of K^{−1} we have, respectively,
$$\check K=\check\kappa\,\frac{\sin^{g}x_{-}\,\sin y_{-}\left[\sin\frac{x_{+}+y_{-}}{2}\,\sin\frac{x_{+}-y_{-}}{2}\right]^{g-1}}{\left[\sin\frac{y_{-}+x_{-}}{2}\,\sin\frac{y_{-}-x_{-}}{2}\right]^{g+1}\left[\sin x_{1}\,\sin x_{2}\right]^{g-1}}.\tag{84}$$
The formulas (78), (81), (82)
provide a new integral representation for the Jack polynomial
J_{→n} of three variables in terms of the _{3}F_{2}
hypergeometric polynomials φ_{→n}(y).
It is remarkable that for positive integer g the operators
K^{−1}, M^{−1} become differential operators
of order g. In particular, for g=1 we have K^{−1}=∂/∂y_{−}.
Index (showing section)
 (Gauss) hypergeometric function, 2.2
 (generalized) hypergeometric series, 2.2
 Chebyshev polynomials, 3.5
 Christoffel-Darboux formula, 3.2
 classical orthogonal polynomials, 3.5
 fractional linear transformation, 2.7
 gamma function, 1.2
 Gauss quadrature, 3.4
 Gegenbauer polynomials, 3.5
 Hermite polynomials, 3.5
 hypergeometric differential equation, 2.6
 hypergeometric function, 1.1
 hypergeometric series, 1.1
 integrable system, 5.1
 integral transform, 4.1, 5.1
 Jacobi polynomial, 3.1
 Laguerre polynomials, 3.5
 Legendre polynomials, 3.5
 Rodrigues formula, 3.5
 Separation of Variables, 4.1
 SoV, 4.4
