1 From Quantum Theory to Computational Chemistry. A Brief Account of Developments

Abstract: Quantum chemical calculations rely on a few fortunate circumstances, such as the usually small relativistic and negligible quantum electrodynamic (QED) corrections, and the large nuclei-to-electron mass ratio. Unprecedented progress in computer technology has revolutionized quantum chemistry, making it a valuable tool for experimenters. It is important for computational chemistry to elaborate methods that look at molecules in a multiscale way, provide their global and synthetic description, and compare this description with those of other molecules. Only such a picture can free researchers from seeing molecules as a series of case-by-case studies. Chemistry is a science of analogies and similarities, and computational chemistry should provide the tools for seeing this.

Introduction – Exceptional Status of Chemistry

Contemporary science fails to explain the largest-scale phenomena taking place in the universe, such as the accelerating recession of the galaxies (supposedly due to the still-undefined "dark energy") and the nature of the lion's share of the universe's matter (the equally unknown "dark matter").

Quantum chemistry is in a far better position, which may be regarded as even exceptional among the sciences. Chemical phenomena are explainable by current theories down to individual molecules (which represent the subject of quantum chemistry). It turned out, by comparing theory and experiment, that the solution of the Schrödinger equation (Schrödinger 1926a, b, c, d) offers in most cases a quantitatively correct picture. Only molecules with very heavy atoms, for which relativistic effects become important, need to be treated in a special way based on the Dirac theory (Dirac 1928a, b). This involves an approximate Hamiltonian in the form of a sum of Dirac Hamiltonians for the individual electrons, with the electron–electron interactions in the form of the (non-relativistic) Coulomb terms, a common and computationally successful practice that ignores, however, the resulting resonance character of all the eigenvalues (Brown and Ravenhall 1951; Pestka et al.). When, very rarely, higher accuracy is needed, one may possibly include the quantum electrodynamics (QED) corrections, a procedure currently far from routine application, but still feasible for very small systems (Łach et al.).

This success of computational quantum chemistry is based on a few quite fortunate circumstances (for references see, e.g., Piela 2007):

• Atoms and molecules are built of only two kinds of particles: nuclei and electrons.

• Although nuclei have a non-zero size (electrons are regarded as point-like particles), the size is so small that its influence is below chemical accuracy (Łach et al.). Therefore, all the constituents of atoms and molecules are routinely treated as point charges.

• The QED corrections are much smaller than the energy changes in chemical phenomena and may be safely neglected in most applications (Łach et al.).

• The nuclei are thousands of times heavier than electrons and therefore, except in some special situations, they move thousands of times more slowly than electrons. This makes it possible to solve the Schrödinger equation for the electrons assuming that the nuclei do not move, i.e., that their positions are fixed in space ("clamped nuclei"). This concept is usually presented within the so-called adiabatic approximation. In this approximation the motion of the nuclei is considered in the next step, in which the electronic energy (precalculated for every position of the nuclei), together with a usually small diagonal correction for the coupling of the nuclear and electronic motions, plays the role of the potential energy surface (PES). The total wave function is assumed to be a product of the electronic wave function and a function describing the motion of the nuclei. The commonly used Born–Oppenheimer (B-O) approximation (Born and Oppenheimer 1927) is less accurate than the adiabatic one, because it neglects the above-mentioned diagonal correction, making the PES independent of the nuclear masses. Using the PES concept one may introduce the crucial idea of the spatial structure of a molecule, defined as those positions of the nuclei that correspond to a minimum of the PES. This concept may be traced back to Hund (1927a, b, c). Moreover, this structure corresponds to a certain ground-state electron density distribution that exhibits atomic cores, atom–atom bonds, and atomic lone pairs.

It is generally believed that an exact analytical solution of the Schrödinger equation for any atom (except the hydrogen-like atom) or molecule is not possible. Instead, some reasonable approximate solutions can be obtained, practically always involving the calculation of a large number of molecular integrals and some algebraic manipulations on matrices built of these integrals. The reason for this is the efficiency of what is known as the algebraic approximation ("algebraization") of the Schrödinger equation. The algebraization is achieved by postulating a certain finite basis set {Φi}, i = 1, 2, …, M, and expanding the unknown wave function as a linear combination of the "known" Φi with unknown expansion coefficients. Such an expansion can be encountered in the one-electron case (e.g., the linear combination of atomic orbitals introduced by Bloch), and/or in the many-electron case, e.g., the expansion of the total wave function in Slater determinants related to configurations (Slater 1929), or in explicitly correlated many-electron functions (Hylleraas 1929). It is assumed for good-quality calculations (the arguments are as a rule of numerical character only) that the finite M chosen is large enough to produce sufficient accuracy with respect to what would be obtained with M = ∞ (the exact solution). The above-mentioned integrals appear because, after the expansion is inserted into the Schrödinger equation, one takes the scalar products (they represent the integrals, which should be easy to calculate) of the expansion with Φ1, Φ2, …, ΦM, consecutively. In this way the task of finding the wave function by solving the Schrödinger equation is converted into the algebraic problem of finding the expansion coefficients, usually by solving some matrix equation. It remains to take care of the choice of the basis set {Φi}. The choice represents a technical problem, but unfortunately it contains a lot of arbitrariness and, at the same time, is one of the most important factors influencing the cost and quality of the computed solution. Application of functions Φi based on Gaussian-type one-electron orbitals (GTO) (Boys et al.) provides a very favorable cost-to-quality ratio, and this fact is considered one of the most important factors that have made computational chemistry so efficient.
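The algebraization described above can be sketched in a few lines. The example below is not from the chapter: it applies the same recipe to a one-dimensional harmonic oscillator (a stand-in for a molecular Hamiltonian), with even Gaussians as the basis {Φi} and all integrals known in closed form; the exponents chosen are purely illustrative. The matrix equation that results is the generalized eigenproblem H c = E S c.

```python
import numpy as np
from scipy.linalg import eigh

# Model: one particle in a 1D harmonic well, H = -1/2 d^2/dx^2 + 1/2 x^2
# (dimensionless units). Basis: even Gaussians phi_i(x) = exp(-alpha_i x^2).
alphas = np.array([0.2, 0.5, 1.0, 2.0])   # illustrative exponents
p = alphas[:, None] + alphas[None, :]     # alpha_i + alpha_j

S = np.sqrt(np.pi / p)                           # overlap  <phi_i|phi_j>
T = (alphas[:, None] * alphas[None, :] / p) * S  # kinetic  <phi_i|-1/2 d2/dx2|phi_j>
V = S / (4.0 * p)                                # potential <phi_i|x^2/2|phi_j>
H = T + V

# Algebraization: the Schrodinger equation becomes the matrix equation H c = E S c
energies, coeffs = eigh(H, S)
E0 = energies[0]
print(f"variational ground-state energy: {E0:.8f}  (exact: 0.5)")
```

Because the exact ground state exp(−x²/2) happens to lie in the span of this basis, the lowest eigenvalue reproduces the exact energy; with a less fortunate basis it would be a strict upper bound, improving as M grows.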

Algebraization involves as a rule a large M, and therefore the whole procedure requires fast computing facilities. These facilities changed over time, from the very modest manual mechanical calculators at the beginning of the twentieth century to what we now consider powerful supercomputers. Almost immediately after the formulation of quantum mechanics in 1926, Douglas Hartree published several papers (Hartree 1928) presenting his manual-calculator-based solutions for the atoms of rubidium and chlorine. However amazing it looks now, these were self-consistent ab initio computations.

Computational chemistry contributed significantly to applied mathematics, because new methods had to be invented in order to treat algebraic problems of a previously unknown scale (like M of the order of billions); see, e.g., Roos.

That is, derived from the first principles of (non-relativistic) quantum mechanics.

In 1927, Walter Heitler and Fritz Wolfgang London clarified the origin of the covalent chemical bond (Heitler and London 1927), a concept crucial for chemistry. In that paper the authors demonstrated, by numerical calculations, that the nature of the covalent chemical bond in H2 is of quantum character, because the (semiquantitatively) correct description of H2 emerged only after including the exchange of electrons 1 and 2 between the nuclei in the formula a(1)b(2) (a, b are the 1s atomic orbitals centered on nucleus a and nucleus b, respectively), resulting in the wave function a(1)b(2) + a(2)b(1). Thus 1927, taking into account also the contribution of Hund (1927a, b, c), is the year of birth of computational chemistry.
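The exchange step above can be illustrated numerically. The sketch below (not from the paper; positions and the unnormalized 1s form are illustrative) shows that the simple product a(1)b(2) is not symmetric under exchange of the two electrons, while the Heitler–London combination a(1)b(2) + a(2)b(1) is:

```python
import numpy as np

# Unnormalized 1s orbitals e^{-|r - R|} centered on nuclei a and b on the
# x axis; the internuclear positions are illustrative, not optimized.
A = np.array([-0.7, 0.0, 0.0])
B = np.array([0.7, 0.0, 0.0])
def a(r): return np.exp(-np.linalg.norm(r - A))
def b(r): return np.exp(-np.linalg.norm(r - B))

def psi_product(r1, r2):   # bare product a(1)b(2)
    return a(r1) * b(r2)
def psi_HL(r1, r2):        # Heitler-London: a(1)b(2) + a(2)b(1)
    return a(r1) * b(r2) + a(r2) * b(r1)

r1 = np.array([-0.5, 0.2, 0.0])
r2 = np.array([0.6, -0.1, 0.3])
print(psi_product(r1, r2) == psi_product(r2, r1))   # False: no exchange symmetry
print(np.isclose(psi_HL(r1, r2), psi_HL(r2, r1)))   # True: symmetric under exchange
```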

Perhaps the most outstanding manual-calculator calculations were performed in 1933 by Hubert James and Albert Coolidge for the hydrogen molecule (James and Coolidge 1933). This variational result remained the best one in the literature for decades.

The 1950s marked the beginning of a new era – the time of programmable computers. Apparently, just another tool for number crunching became available. In fact, however, the idea of programming made a revolution, because it:

• Liberated humans from tedious manual calculations.

• Offered a speed of computation incomparable to any manual calculator. Also, the new data storage tools soon became massive in character.

• Resulted in more and more efficient programs, built on earlier versions (like standing "on the shoulders of giants") and offering the possibility to calculate dozens of molecular properties.

• Allowed dispersed, parallel, and remote calculations.

• Resulted in a new branch of chemistry: computational chemistry.

• Allowed calculations to be performed by anyone, even those not trained in chemistry, quantum chemistry, mathematics, etc.

The first ab initio Hartree–Fock calculations (based on the ideas of Douglas Hartree (1928) and Vladimir Fock (1930a, b)) on programmable computers for diatomic molecules were performed at the Massachusetts Institute of Technology, using a basis set of Slater-type orbitals. The first calculations with Gaussian-type orbitals were carried out by Boys and coworkers (Boys et al.). An unprecedented spectroscopic accuracy was obtained for the hydrogen molecule by Kołos and Roothaan (1960). In the early 1970s the era of gigantic programs began, with the possibility to compute many physical quantities at various levels of approximation. We currently live in an era of computational possibilities growing exponentially (the notorious "Moore's law" of doubling the computer power every two years). This enormous progress revolutionized our civilization in a very short time. The revolution in computational quantum chemistry changed chemistry in general, because computations became feasible for molecules of interest to experimental chemists. The progress has been accompanied by achievements in theory, although mainly of a character related to computational needs. Today, fairly accurate computations are possible for molecules composed

It is difficult to define what computational chemistry is. Obviously, whatever involves calculations in chemistry might be treated as part of it. This, however, sounds like a pure banality. The same is true of the idea that computational chemistry means chemistry that uses computers. It is questionable whether this problem needs any solution at all. If so, the author sticks to the opinion that computational chemistry means the quantitative description of chemical phenomena at the molecular level.

Perhaps the best known is GAUSSIAN, elaborated by a large team headed by John Pople.

The speed as well as the capacity of computer memory increased about a billion times over this period. This means that what now takes an hour of computations would then have required about 100,000 years of computing.

of several hundreds of atoms; spectroscopic accuracy is achievable for molecules with a dozen atoms, while QED calculations can be performed only for the smallest molecules (a few atoms).

A Hypothetical Perfect Computer

Suppose we have at our disposal a computer that is able to solve the Schrödinger equation exactly, for any system and in negligible time. Thus, we have free access to an absolutely detailed picture of any molecule. This means we may predict with high accuracy and confidence the value of any property of any molecule. We might be tempted to say that being able to give such predictions is the ultimate goal of science: "We know everything about our system. If you want to know more about the world, take other molecules and just compute; you will know."

Let us consider a system composed of six carbon nuclei, six protons, and forty-two electrons. Suppose we want to know the geometry of the system in its ground state. The computer answers that it does not know what we mean by the term "geometry". We are more precise now and say that we are interested in the carbon–carbon (CC) and carbon–hydrogen (CH) distances. The computer answers that it is possible to compute only the mean distances, and provides them together with the proton–proton, carbon–electron, proton–electron, and electron–electron distances, because it treats all the particles on an equal footing. We look at the CC and CH distances and see that they are much larger than we expected for the CC and CH bonds in benzene. The reason is that in our perfect wave function the permutational symmetry is correctly included. This means that the average carbon–proton distance takes into account all carbons and all protons, and the same holds for the other distances. To deduce more we may ask for other quantities, like angles involving three nuclei. Here, too, we will be confronted with numbers that include averaging over identical particles. These difficulties do not necessarily mean that the molecule has no spatial structure at all, although this can also happen. The numbers produced would be extremely difficult to translate into a 3D picture even for quite small molecules, not to mention such a floppy molecule as a protein.
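The averaging effect can be made concrete with plain geometry (no quantum mechanics involved). The sketch below uses an idealized benzene frame with textbook-style bond lengths (1.397 Å for the C ring radius, which for a regular hexagon equals the C–C bond length, and 1.084 Å for C–H; the values are illustrative) and compares the bonded C–H distance with the mean over all carbon–proton pairs, which is what a permutationally symmetric wave function would deliver:

```python
import numpy as np

# Idealized planar benzene: carbons on a ring of radius 1.397 A,
# hydrogens on a ring of radius 1.397 + 1.084 A (illustrative values).
ang = np.radians(np.arange(0, 360, 60))
C = np.stack([1.397 * np.cos(ang), 1.397 * np.sin(ang), np.zeros(6)], axis=1)
H = np.stack([2.481 * np.cos(ang), 2.481 * np.sin(ang), np.zeros(6)], axis=1)

# All 36 carbon-proton distances, averaged as an exchange-symmetric
# wave function would average them, versus the single "chemical" bond.
d = np.linalg.norm(C[:, None, :] - H[None, :, :], axis=2)
print(f"bonded C-H distance: {d.min():.3f} A")
print(f"mean   C-H distance: {d.mean():.3f} A")   # about 2.7 A, not 1.08 A
```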

In many cases we would obtain a 3D picture we did not expect. This is because many molecular structures we are familiar with represent higher-energy metastable electronic states (isomers). This is the case in our example. When solving the time-dependent Schrödinger equation, we are confronted with this problem. Let us use as a starting wave function the one corresponding to the benzene molecule. In the time evolution we will probably stay with a similar geometry for a long time. However, there is a chance that after a long period the wave function changes to that corresponding to three interacting acetylene molecules (three times HC≡CH). The Born–Oppenheimer optimized ground electronic state corresponds to benzene, which in the Hartree–Fock approximation has the lowest energy; the three isolated acetylene molecules (in the same approximation) and the diacetylene derivative H3C–C≡C–C≡C–CH3 (also with the same formula C6H6) lie higher in energy. Thus, the benzene molecule seems to be a stable ground state, while the three acetylenes and the diacetylene derivative are metastable states within the same ground electronic state of the system.

In addition, we assume the computer is so clever that it automatically rejects those solutions which are not square-integrable or do not satisfy the symmetry requirements for fermions and bosons. Thus, all non-physical solutions are rejected.

All three physically observed realizations of the C6H6 system are separated by barriers; this is the reason why they are observable.

What, therefore, is the most stable electronic ground state corresponding to a flask of benzene? This is a quite different question, which pertains to systems larger than a single molecule. If we multiply the number of atoms in a single molecule of benzene by a natural number N, we are confronted with new possibilities of combining the atoms into molecules, not necessarily of the same kind and possibly larger than C6H6. For a large N we are practically unable to find all the possibilities. In some cases, relying on chemical intuition and limiting ourselves to simple molecules, we may guess particular solutions. For example, to lower the energy of the flask of benzene we may allow the formation of methane molecules and graphite (the most stable form of carbon). Therefore, the flask of benzene represents a metastable state.

Suppose we wish to know the dipole moment of, say, the HCl molecule, a quantity that gives us important information about the charge distribution. We look at the output and do not find anything about the dipole moment. The reason is that all molecules have the same dipole moment in any of their stationary states Ψ, and this dipole moment equals zero; see, e.g., Piela (2007). Indeed, the dipole moment is calculated as the mean value of the dipole moment operator, i.e., μ = ⟨Ψ| μ̂ |Ψ⟩ = ⟨Ψ| (∑i qi ri) |Ψ⟩, where the index i runs over all electrons and nuclei. This integral can be calculated very easily: the integrand is antisymmetric with respect to inversion and therefore μ = 0. Let us stress that this conclusion pertains to the total wave function, which has to reflect the isotropy of space, leading to the zero dipole moment because all orientations in space are equally probable. If one applied the transformation r → −r only to some particles in the molecule (e.g., the electrons), and not to the others (e.g., the nuclei), then the wave function would show no parity (it would be neither symmetric nor antisymmetric). This is what we do in the adiabatic or Born–Oppenheimer approximation, where the electronic wave function depends on the electronic coordinates only. This explains why the integral μ = ⟨Ψ| μ̂ |Ψ⟩ (with the integration over the electronic coordinates only) does not equal zero for some molecules (which we call polar). Thus, to calculate the dipole moment we have to use the adiabatic or the Born–Oppenheimer approximation.
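The parity argument can be checked numerically on a one-dimensional toy model (not from the chapter; both densities below are made-up Gaussians). A density even in x gives a zero dipole by symmetry, while a Born–Oppenheimer-like electronic density shifted toward a clamped nucleus does not:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)   # grid symmetric about the origin

# Total-state-like density: |psi|^2 even in x -> zero dipole by parity
rho_sym = np.exp(-x**2)
mu_sym = np.sum(-x * rho_sym) / np.sum(rho_sym)     # <q x> with q = -1

# Electronic density shifted toward a clamped nucleus -> a "polar" case
rho_shift = np.exp(-(x - 0.5)**2)
mu_shift = np.sum(-x * rho_shift) / np.sum(rho_shift)

print(f"symmetric density: mu = {mu_sym:+.6f}")     # zero (up to round-off)
print(f"shifted density  : mu = {mu_shift:+.6f}")   # nonzero
```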

Now we decide to introduce the Born–Oppenheimer approximation (we give up the absolutely correct picture) and to focus on the most important features of the molecule. The first, most natural one is the molecular geometry, the one that leads to a minimum of the electronic energy. The problem is that usually we have many such minima of different energy, each minimum corresponding to its own electronic density distribution. Each such distribution corresponds to a particular pattern of chemical bonds. In most cases the user of computers does not even think of these minima, because he or she performs the calculations for a predefined configuration of the nuclei and forces the system (usually without being aware of it) to stay in its vicinity. This is especially severe for large molecules, such as proteins. They have an astronomical number of stable conformations, but often we take one of them and perform the calculations for this one only. It is difficult to say why we select this one, because we rarely even consider the other conformations. In this situation we usually take as the starting point a crystal-structure conformation (we believe in its relevance for the free molecule).

Bond patterns are almost the same for different conformers.

For a dipeptide one has something like ten energy minima, counting only the backbone conformations (and not counting the side-chain conformations, for simplicity). For a very small protein of, say, a hundred amino acids, the number of conformations is therefore of the order of 10^100, a very large number, exceeding the estimated number of atoms in the Universe.
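The arithmetic behind this footnote is a simple multiplicative explosion, easily checked with exact integers (the 10^80 estimate for the number of atoms in the Universe is a common order-of-magnitude figure, not a value from the chapter):

```python
# Ten backbone minima per amino acid compound multiplicatively along the chain.
minima_per_residue = 10
n_residues = 100
n_conformations = minima_per_residue ** n_residues   # 10**100

atoms_in_universe = 10 ** 80   # common order-of-magnitude estimate
print(n_conformations > atoms_in_universe)           # True
print(len(str(n_conformations)) - 1)                 # 100 orders of magnitude
```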

Moreover, one usually starts the calculations by setting a starting electronic density distribution. The choice of this density distribution may influence the final electronic density and the final geometry of the molecule. In routine computations one guesses the starting density according to the starting nuclear configuration chosen. This may seem a reasonable choice, except when a small deformation of the nuclear framework leads to large changes in the electronic density.

In conclusion, in practice the computer gives the solution that is close to what the computing person considers "reasonable" and sets as the starting point.

Does Predicting Mean Understanding?

The existing commercial programs allow us to make calculations for molecules, treating each molecule as a new task, as if every molecule represented a new world that has nothing to do with other molecules. We might not be satisfied with such a picture. We might be curious about the following:

• Living in 3D space, does the system have a certain shape or not?

• If yes, why is the shape of this particular kind?

• Is the shape fairly rigid or rather flexible?

• Are there some characteristic substructures in the system?

• How do they interact?

• How do they influence the calculated global properties?

• Are the same substructures present in other molecules?

• Does the presence of the same substructures determine similar properties?

It is of fundamental importance for chemistry that we do not study particular cases, case by case, but derive some general rules. Strictly speaking, these rules are false because, due to the approximations made, they are valid only to some extent. Despite this, however, they enable chemists to operate, to understand, and to be efficient. If we relied uniquely on exact solutions of the Schrödinger equation, there would be no chemistry at all; people would lose the power of rationalizing chemistry, in particular of designing the syntheses of new molecules. Chemists rely on molecular spatial structure (the nuclear framework), on the concepts of valence electrons, chemical bonds, electronic lone pairs, the importance of the HOMO and LUMO energies, etc. All these notions have no rigorous definition, but they are still of great importance in describing a model of a molecule. A chemist predicts that two OH bonds have similar properties, wherever they are in a molecule. Moreover, chemists are able to predict differences between OH bonds by considering what the neighboring atoms are in each case. It is of fundamental importance in chemistry that a group of atoms with a certain bond pattern (a functional group) represents an entity that behaves similarly when present in different molecules.

We have at our disposal various scales at which we can look at the details of the molecule under study. In the crudest approach we may treat the molecule as a point mass, which contributes to the gas pressure. Next we might become interested in the shape of the molecule, and we may first approximate it as a rigid rotor and get an estimate of the rotational levels we can expect. Then we may leave the rigid-body model and allow the atoms of the molecule to vibrate about their equilibrium positions. In such a case we need to know the corresponding force constants. This requires either choosing a structural formula (chemical bond pattern) for the molecule and taking the corresponding empirical force constants, or applying normal-mode analysis, first solving the Schrödinger equation in the Born–Oppenheimer approximation (we have a wide choice of methods of solution). In the first case we obtain an estimate of the vibrational levels; in the second we get a more reliable vibrational analysis, especially for larger atomic-orbital expansions. If we wish, we may also consider the anharmonicity of the vibrations.
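Two of the coarse levels in this hierarchy reduce to one-line formulas: the rigid rotor gives E_J = B J(J+1) and the harmonic oscillator gives E_v = ω(v + 1/2). A minimal sketch, with made-up constants for a generic diatomic (B and ω below are illustrative placeholders, not data from the chapter):

```python
# Rotational and vibrational level ladders in wavenumber units (cm^-1).
B = 10.0        # rotational constant, hypothetical molecule
omega = 2000.0  # harmonic frequency, hypothetical molecule

def rigid_rotor_level(J):   # E_J = B J(J+1)
    return B * J * (J + 1)

def harmonic_level(v):      # E_v = omega (v + 1/2)
    return omega * (v + 0.5)

print([rigid_rotor_level(J) for J in range(4)])   # [0.0, 20.0, 60.0, 120.0]
print([harmonic_level(v) for v in range(3)])      # [1000.0, 3000.0, 5000.0]
```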

At the same time we obtain the electronic density distribution from the wave function Ψ for N electrons:

ρ(r) = N ∑_{σ1} ∫ dτ2 dτ3 … dτN |Ψ(r, σ1, r2, σ2, …, rN, σN)|²

According to the Hellmann–Feynman theorem (Feynman 1939; Hellmann 1937), ρ is sufficient to compute the forces acting on the nuclei. We may compare the resulting ρ calculated at different levels of approximation, and even with the naive structural formula. The density distribution ρ can be analyzed in the way known as Bader analysis (Bader 1990). First, we find all the critical points, at which ∇ρ = 0. Then one analyzes the nature of each critical point by diagonalizing the Hessian matrix calculated at that point:

• If all three eigenvalues are negative, the critical point corresponds to a maximum of ρ.

• If two are negative and one is positive, the critical point corresponds to a covalent bond.

• If one is negative and two are positive, the critical point corresponds to the center of an atomic ring.

• If all three are positive, the critical point corresponds to an atomic cavity.
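The Hessian-signature step can be sketched on a toy density (this is only the classification step of Bader analysis, not a real QTAIM implementation; the two-Gaussian "promolecule" below is made up). At the midpoint between the two "atoms" the gradient vanishes by symmetry, and the eigenvalues reveal a bond-type critical point:

```python
import numpy as np

# Model density: two spherical "atomic" Gaussians on the x axis.
A = np.array([-1.0, 0.0, 0.0])
B = np.array([1.0, 0.0, 0.0])
def rho(r):
    return np.exp(-np.sum((r - A)**2)) + np.exp(-np.sum((r - B)**2))

def hessian(f, r, h=1e-4):
    # Central finite-difference Hessian of a scalar field at point r.
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei = np.eye(3)[i] * h
            ej = np.eye(3)[j] * h
            H[i, j] = (f(r + ei + ej) - f(r + ei - ej)
                       - f(r - ei + ej) + f(r - ei - ej)) / (4 * h * h)
    return H

mid = np.zeros(3)                        # midpoint critical point
ev = np.linalg.eigvalsh(hessian(rho, mid))
n_neg = int(np.sum(ev < 0))
print(ev)                                # two negative, one positive eigenvalue
print("bond critical point" if n_neg == 2 else "other critical point")
```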

The chemical-bond critical points correspond to some pairs of atoms; there are other pairs of atoms which do not form bonds. The Bader analysis enables chemists to see molecules in a synthetic way, nearly independent of the level of theory that has been used to describe them, by focusing on the ensemble of critical points. We may compare this density with the densities of other molecules similar to ours, to see whether one can note some local similarities. We may continue this, getting a more and more detailed picture, down to the almost exact solution of the Schrödinger equation.

It is important in chemistry to follow such a path, because both at its beginning and at its end we know very little about chemistry. We learn chemistry on the way.

The low-frequency vibrations may be used as indicators of possible instabilities of the molecule, such as dissociation channels, formation of new bonds, etc. Moving all the atoms, first according to a low-frequency normal-mode vibration and then continuing the atomic displacements according to the maximum gradient decrease, we may find the saddle point and then, sliding down, detect the products of a reaction channel.

The integration of |Ψ|² is over the coordinates (space and spin) of all the electrons except one (in our case the electron with coordinates r, σ1), together with a summation over its spin coordinate (σ1). As a result one obtains a function of the position of the electron in space: ρ(r). The wave function Ψ is antisymmetric with respect to the exchange of the coordinates of any two electrons and, therefore, |Ψ|² is symmetric with respect to such an exchange. Hence, the definition of ρ is independent of the label of the electron we do not integrate over. According to this definition, ρ represents nothing else but the density of the electron cloud carrying N electrons, and it is proportional to the probability density of finding an electron at position r.

Strictly speaking, the nuclear attractors do not represent critical points, because of the cusp condition (Kato 1957).

We may also analyze ρ using a "magnifying glass" represented by −Δρ.

Orbital Model

The wave function for identical fermions has to be antisymmetric with respect to the exchange of the coordinates (space and spin) of any two of them. This means that two electrons having the same spin coordinate cannot occupy the same position in space. Since wave functions are continuous, this means that electrons of the same spin coordinate avoid each other (there is a "Fermi hole" or "exchange hole" about each of them). This Pauli exclusion principle does not pertain to two electrons of opposite spins. However, electrons repel one another (the Coulomb force) at any finite distance, i.e., they have to avoid one another because of their charge (there is a "Coulomb hole" or "correlation hole" around each of them). It turned out (references in Piela 2007) that the Fermi hole is by far more important than the Coulomb hole. A high-quality wave function has to reflect both the Fermi and the Coulomb holes. The corresponding mathematical expression should have the antisymmetrization operator in front; this takes care of the Pauli principle (and introduces the Fermi hole). Besides this, it should have some parameters or mathematical structure controlling somehow the distance between any pair of electrons (this introduces the Coulomb repulsion). Since the Fermi hole is much more important, it is reasonable to consider first a wave function that takes care of the Fermi hole only.
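The Fermi hole can be seen directly in a tiny example (not from the chapter; the two 1D orbitals are illustrative). Antisymmetrizing the spatial function for two same-spin electrons, as a 2×2 determinant, forces the amplitude to vanish whenever the two electrons coincide:

```python
import numpy as np

# Two distinct 1D orbitals (a Gaussian and an x-weighted Gaussian).
def phi1(x): return np.exp(-x**2)
def phi2(x): return x * np.exp(-x**2)

def psi_antisym(x1, x2):
    # 2x2 "Slater determinant" (unnormalized) for two same-spin electrons
    return phi1(x1) * phi2(x2) - phi1(x2) * phi2(x1)

print(psi_antisym(0.3, 0.3))   # 0.0: zero amplitude at x1 = x2 (Fermi hole)
print(psi_antisym(0.3, 0.9))   # nonzero for distinct positions
```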

The simplest way to take the Fermi hole into account is the orbital model (approximation). Within the orbital model the most advanced is the Hartree–Fock method. In this method the Fermi hole is taken into account by construction (the antisymmetrizer). The Coulomb hole is not present, because the Coulomb interaction is calculated by averaging over the positions of the electrons.

The orbital model is wrong, because it neglects the Coulomb hole. Being wrong, it has, however, enormous scientific power, because:

• It allows one to see the electronic structure as contributions of individual electrons, with their own "wave functions", i.e., orbitals with a definite mathematical form, symmetry, energy ("orbital levels"), etc.

• We take the Pauli exclusion principle into account by not allowing occupation of an orbital by more than two electrons (if two, then of opposite spin coordinates). The occupation of all the orbital levels is known as the orbital diagram.

• The orbital energy may be interpreted as the energy needed to remove an electron from the orbital (assuming that the orbitals do not change during the removal; Koopmans' theorem, Koopmans 1934).

• Molecular electron excitations may often be identified with changes of the electron occupancy in the orbital diagram.

• We may even consider electron correlation (the Coulomb hole), either by allowing different orbitals for electrons of different spin, or by considering a wave function expansion composed of electron diagrams with various occupations.

• One may trace molecular perturbations to changes in the orbital diagram.

• One may describe chemical reactions as a continuous change from a starting to a final molecular diagram. Theory and computational experience bring some rules, e.g., that only those orbitals of the molecular constituents mix which have similar orbital energies and the same symmetry. This leads to important symmetry selection rules for chemical reactions (Fukui and Fujimoto; Woodward and Hoffmann) and for optical excitations (Cotton).

• The orbital model provides a language for communication among chemists (including quantum chemists). This language and the numerical experience supported by theory create a kind of quantum mechanical intuition coupled to experiment, which allows us to compare molecules, to classify them, and to predict their properties in a semiquantitative way. The majority of theoretical terms in quantum chemistry stem from orbital theory.
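A minimal sketch of the orbital-diagram idea is the Hückel π-model of benzene (a classic textbook model, not discussed explicitly in this chapter; here α = 0 and β = −1 in arbitrary units). Diagonalizing the one-electron Hamiltonian on the six-membered ring yields the familiar ladder of orbital levels, their degeneracies, and the HOMO–LUMO gap:

```python
import numpy as np

# Huckel pi-model of benzene: alpha on the diagonal (set to 0),
# beta = -1 between bonded neighbors on the 6-ring.
n = 6
H = np.zeros((n, n))
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = -1.0

orbital_energies = np.sort(np.linalg.eigvalsh(H))
print(np.round(orbital_energies, 6))   # [-2, -1, -1, 1, 1, 2] in units of |beta|

# Six pi electrons fill the three lowest orbitals (two per orbital).
homo, lumo = orbital_energies[2], orbital_energies[3]
print(f"HOMO-LUMO gap: {lumo - homo:.1f} |beta|")
```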

Power of Computer Experiments

In experiments we always see quantities averaged over all the molecules in the specimen, all the orientations allowed, all the states available at a given temperature, etc. In some experiments we are able to specify the external conditions in such a way as to receive the signal from molecules in a given state. Even in such a case the results are averaged over molecular vibrations, which introduce a (usually quite small) uncertainty in the positions of the nuclei, close to the minimum of the electronic energy (in the adiabatic or Born–Oppenheimer approximation).

This means that in almost all cases the experimenters investigate molecules close to a minimum of the electronic energy (a minimum of the PES). What happens to the electronic structure for other configurations of the nuclei is a natural question, sometimes of great importance (e.g., for chemical reactions). Only computational chemistry opens the way to see what would happen to the energy and to the electronic density distribution if:

•

•

•

•

Some internuclear distances increased, even to infinity (dissociation)

Some internuclear distances shortened, and the shortening may correspond even to collapsing the nuclei into a united nucleus, or approaching two atoms which in the minimum

of PES form or do not form a chemical bond. This allows us to investigate what happens

to the molecule under a gigantic pressure, etc

Some nuclei changed their mass or charge (beyond what one knows from experiment)

We apply to the system an electric field, whose character is whatever we imagine as appropriate. Sometimes such a field may approximate the influence of charge distributions in

neighboring molecules

This makes computational chemistry a unique tool: it can give the answer about the energy and the electronic density distribution (bond pattern) for any system and for any deformation of the system we imagine. This powerful feature can be used not only to see what happens in a particular experimental situation, but also to see what would happen if we were able to set conditions far beyond any imaginable experiment.
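A computational scan of the kind listed above — stretching an internuclear distance all the way to dissociation — can be sketched with a simple model potential. The Morse parameters below are rough, H2-like values (in eV and ångströms) chosen only for illustration; a real calculation would solve the electronic problem at each geometry instead:

```python
import numpy as np

def morse(r, d_e=4.75, a=1.94, r_e=0.741):
    """Morse model potential V(r) = D_e (1 - exp(-a (r - r_e)))^2 - D_e.

    d_e: well depth (eV), a: stiffness (1/Angstrom), r_e: equilibrium
    distance (Angstrom). The zero of energy is the dissociation limit.
    """
    return d_e * (1.0 - np.exp(-a * (r - r_e)))**2 - d_e

# Scan the internuclear distance from strong compression toward dissociation.
r = np.linspace(0.4, 8.0, 200)
v = morse(r)
print(f"minimum near r = {r[v.argmin()]:.2f} A, depth {v.min():.2f} eV")
print(f"energy at large r -> {v[-1]:.4f} eV (dissociation limit is 0)")
```

The same loop-over-geometries pattern, with the model potential replaced by an electronic-structure calculation, is how PES cuts are obtained in practice.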

Conclusions

What counts in computational chemistry is looking at molecules at various scales (using various models) and comparing the results for different molecules. If one could only obtain an exact picture of a molecule, without comparing the results for other molecules, we would be left with no chemistry. The power of chemistry comes from analogies, similarities, and trends, rather than from the mere ability to predict properties. Such ability is certainly important for being efficient in any particular case, but predicting by computation does not mean understanding. We need computers, with their impressive speed, capacity, and power to give us precise predictions, but we also need a language to speak about the computations: a model that simplifies reality yet allows us to understand what we are playing with in chemistry.

One has to be aware of a related mathematical trap concerning the electric fields mentioned earlier: applying even the smallest uniform electric field immediately transforms the problem into one with metastable energy (the global minimum corresponding to dissociation of the system, with energy equal to −∞); see, e.g., Piela (), p. .

Acknowledgments

The author is very grateful to his friends, Professor Andrzej J. Sadlej and Professor Leszek Z. Stolarczyk, for the joy of being with them and discussing all exciting aspects of chemistry, science, and beyond; part of those discussions is included in the present chapter.

References

Bader, R. F. W. (). Atoms in molecules. A quantum theory. Oxford: Clarendon Press.

Bloch, F. (). PhD thesis. University of Leipzig.

Born, M., & Oppenheimer, J. R. (). Zur Quantentheorie der Molekeln. Annalen Physik, , .

Boys, S. F., Cook, G. B., Reeves, C. M., & Shavitt, I. (). Automatic fundamental calculations of molecular structure. Nature, , .

Brown, G. E., & Ravenhall, D. G. (). On the interaction of two electrons. Proceedings of the Royal Society A, , .

Cotton, F. A. (). Chemical applications of group theory (3rd ed.). New York: Wiley.

Dirac, P. A. M. (a). The quantum theory of the electron. Proceedings of the Royal Society (London), A, .

Dirac, P. A. M. (b). The quantum theory of the electron. Part II. Proceedings of the Royal Society (London), A, .

Feynman, R. P. (). Forces in molecules. Physical Review, , .

Fock, V. (a). Näherungsmethode zur Lösung des quantenmechanischen Mehrkörperproblems. Zeitschrift für Physik, , .

Fock, V. (b). "Selfconsistent field" mit Austausch für Natrium. Zeitschrift für Physik, , .

Fukui, K., & Fujimoto, H. (). An MO-theoretical interpretation of nature of chemical reactions. I. Partitioning analysis of interaction energy. Bulletin of the Chemical Society of Japan, , .

Hartree, D. R. (). The wave mechanics of an atom with a non-Coulomb central field. Part I. Theory and methods. Proceedings of the Cambridge Philosophical Society, , .

Heitler, W., & London, F. W. (). Wechselwirkung neutraler Atome und homöopolare Bindung nach der Quantenmechanik. Zeitschrift für Physik, , .

Hellmann, H. (). Einführung in die Quantenchemie. Leipzig: Deuticke.

Hund, F. (a). Zur Deutung der Molekelspektren. I. Zeitschrift für Physik, , .

Hund, F. (b). Zur Deutung der Molekelspektren. II. Zeitschrift für Physik, , .

Hund, F. (c). Zur Deutung der Molekelspektren. III. Zeitschrift für Physik, , .

Hylleraas, E. A. (). Neue Berechnung der Energie des Heliums im Grundzustande, sowie des tiefsten Terms von Ortho-Helium. Zeitschrift für Physik, , .

James, H. M., & Coolidge, A. S. (). The ground state of the hydrogen molecule. Journal of Chemical Physics, , .

Kato, T. (). On the eigenfunctions of many-particle systems in quantum mechanics. Communications on Pure and Applied Mathematics, , .

Koopmans, T. C. (/). Über die Zuordnung von Wellenfunktionen und Eigenwerten zu den einzelnen Elektronen eines Atoms. Physica, , .

Kołos, W., & Roothaan, C. C. J. (). Accurate electronic wave functions for the H2 molecule. Reviews of Modern Physics, , .

Łach, G., Jeziorski, B., & Szalewicz, K. (). Radiative corrections to the polarizability of helium. Physical Review Letters, , .

Pestka, G., Bylicki, M., & Karwowski, J. (). Dirac–Coulomb equation: Playing with artifacts. In P. J. Grout, J. Maruani, G. Delgado-Barrio, & P. Piecuch (Eds.), Frontiers in quantum systems in chemistry and physics (pp. –). New York/Heidelberg: Springer.

Piela, L. (). Ideas of quantum chemistry. Amsterdam: Elsevier.

Roos, B. O. (). A new method for large-scale CI calculations. Chemical Physics Letters, , .

Schrödinger, E. (a). Quantisierung als Eigenwertproblem. Annalen Physik, , .

Schrödinger, E. (b). Quantisierung als Eigenwertproblem. Annalen Physik, , .

Schrödinger, E. (c). Quantisierung als Eigenwertproblem. Annalen Physik, , .

Schrödinger, E. (d). Quantisierung als Eigenwertproblem. Annalen Physik, , .

Slater, J. (). Cohesion in monovalent metals. Physical Review, , .

Woodward, R. B., & Hoffmann, R. (). Selection rules for sigmatropic reactions. Journal of the American Chemical Society, , .
