A very good summary and explanation of what “Computational Chemistry” is
Computational science (computational physics, chemistry, biology, etc…), which is distinct from computer science, emerged in recent years as a third way of doing science, alongside experiment and theory. It uses computers to perform extremely complex calculations or simulations, the outcome of which could never be guessed a priori even when the underlying theory is well known and the equations simple. One example is that of molecular dynamics simulations of the “melting” of argon clusters. The equations are simply those of Newton’s classical mechanics for N particles interacting via an additive 6-12 Lennard-Jones potential. But the details of how that “phase transition” takes place in small N-atom clusters can only be seen by performing a kind of computer experiment involving billions of evaluations of interatomic forces, updating atomic positions and velocities, and extracting meaningful quantities by time-averaging over thousands of time steps.

Another example is computing precise energy differences for relatively large molecules. Typically, the trial wavefunction is a linear combination of a very large number of atom-centered basis functions, and the Kohn-Sham or Schrödinger equation is cast in matrix form. The matrices involved in a typical calculation can be 200 by 200; some SCF calculations have been performed involving matrices with more than a thousand rows and columns. The results cannot be guessed without diagonalizing such matrices, which is clearly impossible without a powerful computer.
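To make the argon-cluster example concrete, here is a minimal toy sketch (my own illustration, not the actual calculations described above): a handful of Lennard-Jones particles in reduced units (epsilon = sigma = mass = 1), propagated with the standard velocity-Verlet integrator. All positions, step sizes, and step counts are made up for illustration.

```python
import numpy as np

def lj_forces(pos):
    """Forces and potential energy for the additive 6-12 Lennard-Jones potential."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            inv6 = 1.0 / r2**3                       # (sigma/r)^6 in reduced units
            energy += 4.0 * inv6 * (inv6 - 1.0)      # V = 4 (r^-12 - r^-6)
            # -dV/dr along rij: repulsive inside the minimum, attractive outside
            f = 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces, energy

def verlet_step(pos, vel, forces, dt=0.005):
    """One velocity-Verlet time step (mass = 1)."""
    vel_half = vel + 0.5 * dt * forces
    pos_new = pos + dt * vel_half
    forces_new, energy = lj_forces(pos_new)
    vel_new = vel_half + 0.5 * dt * forces_new
    return pos_new, vel_new, forces_new, energy

# Tiny 4-atom cluster started at rest near a tetrahedral arrangement.
pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0],
                [0.55, 0.95, 0.0], [0.55, 0.32, 0.9]])
vel = np.zeros_like(pos)
forces, e_pot = lj_forces(pos)
e_init = e_pot                    # total energy at start (atoms at rest)
for _ in range(1000):
    pos, vel, forces, e_pot = verlet_step(pos, vel, forces)
e_tot = e_pot + 0.5 * np.sum(vel**2)
```

A real "computer experiment" of the kind described above repeats exactly this force-update loop billions of times and time-averages quantities such as the kinetic temperature; a good integrator is judged by how well `e_tot` stays close to `e_init` over long runs.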
As computers become more powerful and accessible, and as chemists develop better algorithms and software, a larger variety of problems in chemistry can be tackled by computational methods. Not so long ago, computational chemistry was almost synonymous with quantum chemistry, but this is no longer true. Today, computational methods are used in chemistry to study electronic structure by matrix equations and diagonalization or by quantum Monte Carlo methods, but also:
- to calculate properties at equilibrium by Metropolis Monte Carlo and similar methods;
- to simulate processes occurring on different time scales with various dynamical methods:
- for time scales of one or a few vibrational periods: wavepacket propagation
- for time scales going roughly from many vibrational periods up to one nanosecond: molecular dynamics
- for time scales from one nanosecond all the way up to seconds, days, etc… (no theoretical limit): dynamic Monte Carlo simulation methods (similar in spirit to the next method below)
- to model reaction kinetics by integrating sets of coupled ordinary differential equations;
- to model polymer conformations with self-avoiding random walks;
- to study molecular conformations by finding minima of empirical potential energy functions;
- to predict the properties of non-existent molecules that look like interesting candidates for synthesis by searching and analysing huge databases of known molecular structures and properties;
- and many more.
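The first item on that list, Metropolis Monte Carlo, is easy to illustrate with a toy problem (my own sketch, not from the text above): sampling a particle in a 1D harmonic potential V(x) = x²/2 at reduced temperature kT = 1, for which the exact equilibrium average ⟨x²⟩ equals kT = 1. Step size and step count are illustrative.

```python
import math
import random

random.seed(42)

def metropolis_x2(n_steps=200_000, step=1.0, kT=1.0):
    """Estimate <x^2> for V(x) = x^2/2 by Metropolis Monte Carlo sampling."""
    x = 0.0
    total = 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)   # symmetric trial move
        dV = 0.5 * (x_new**2 - x**2)
        # Accept with probability min(1, exp(-dV/kT)); otherwise keep old x.
        if dV <= 0.0 or random.random() < math.exp(-dV / kT):
            x = x_new
        total += x * x
    return total / n_steps

avg_x2 = metropolis_x2()
```

The same accept/reject rule, applied to many-particle configurations and a realistic potential energy function, is how the equilibrium properties mentioned in the list are computed in practice.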
Why not call that theoretical chemistry instead of computational chemistry? Maybe it’s a matter of taste, but I think the latter is more appropriate to describe my work. I don’t theorize, I compute. I try to find new (better) approximations to existing theories and new algorithms; I use theories to calculate numbers that I can compare to experiment in order to explain experimental results, or that I can use to predict properties of unknown chemical species. That touches upon theory in that, if I do a good calculation (a good “translation” of theory into numbers) and it gives results that disagree with experiment, then it shows that the underlying theory is wrong, at least partially; the theory needs to be corrected. But my work usually stops there; I don’t try to fix the theory myself, that is what “theoretical” chemists or physicists do. Of course, there is no sharp distinction between theory, computation, and experiment; there is a continuum in the kinds of research. It’s just that I feel that what I do is mostly “computational chemistry”: something like 95% computation, 4.7% theory, and 0.3% experiment.
Everybody knows that computers have improved tremendously over the past 50 years: their speed has increased by a factor of roughly 10 every 7 years, a speed-up of about ten million over those 50 years! Few people know that improvements in algorithms, numerical methods, and the approximations we use have produced, maybe, an even greater speed-up of quantum chemistry calculations over the same period! I believe we would do much more with today’s knowledge of quantum chemistry methods and the computers of fifty years ago than with today’s computers and the quantum chemistry methods of that time. Among the things that made quantum chemistry methods more efficient, irrespective of the computers used to run the calculation, are:
- compact Gaussian basis sets
- efficient recurrence formulas and algorithms to calculate molecular integrals (matrix elements)
- screening of molecular integrals: neglect of very small ones by quick calculation of upper bounds
- model (or effective) core potentials for the approximate description of core electrons
- convergence acceleration methods (DIIS, level-shift, smearing)
- chemistry-specific strategies in geometry optimization algorithms (e.g., internal coordinates)
- analytical derivative methods
- approximate treatment of electron exchange and correlation by the Kohn-Sham method
- fast-multipole methods for evaluating long-range Coulomb interactions
- order-N approximations to SCF methods
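The integral-screening item deserves a small illustration (my own sketch; the shell-pair labels and magnitudes below are made up). The standard Cauchy-Schwarz bound |(ij|kl)| ≤ Q_ij·Q_kl, with Q_ij = sqrt((ij|ij)), gives a cheap upper bound: quartets whose bound falls below a drop threshold can be skipped before the expensive integral is ever computed.

```python
import itertools

def screened_quartets(Q, threshold=1e-10):
    """Return the integral quartets surviving the Schwarz upper-bound test.

    Q maps a shell pair to Q_ij = sqrt((ij|ij)); the product Q_ij * Q_kl
    bounds |(ij|kl)|, so small products can be discarded cheaply.
    """
    pairs = list(Q.keys())
    survivors = []
    for (ij, kl) in itertools.combinations_with_replacement(pairs, 2):
        if Q[ij] * Q[kl] >= threshold:   # cheap test, no integral evaluated
            survivors.append((ij, kl))
    return survivors

# Toy pair magnitudes: pairs involving a distant, diffuse function
# ("far") have tiny Q values and get screened out.
Q = {("1s", "1s"): 2.0, ("1s", "2p"): 0.5, ("2p", "2p"): 0.3,
     ("1s", "far"): 1e-7, ("far", "far"): 1e-12}
kept = screened_quartets(Q, threshold=1e-10)
```

In a large molecule most pair products fall below the threshold, which is why screening turns the formal O(N⁴) cost of the two-electron integrals into something far cheaper in practice.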
Considerable work and knowledge went into these developments, and a big part of modern chemistry research would be very different without those methods. In my view, nothing in that list can really be called “theory”, except maybe the Kohn-Sham treatment of exchange and correlation, which can be viewed either as a new approximation or as part of a new theory: it’s all “computational chemistry”.