Introduction to Theoretical Physics

From First Principles to Classical Mechanics to General Relativity

Theoretical physics is the branch of physics that develops and refines theories to explain the fundamental nature of the universe. It is arguably the most important branch of physics, for without it physics would stagnate and no new discoveries or ideas would emerge.

Theoretical physics is the earliest form of science; our earliest written records show that it began over 2,500 years ago in ancient Greece. The scholars of ancient Greece were the first we know of to attempt a thoroughgoing investigation of the universe. They did so through the systematic gathering of knowledge by human reason alone, the activity we today call philosophy. For many centuries afterward, the term philosophy was used as the equivalent of the word science, which is a comparatively recent coinage.

Greek natural philosophy was the first attempt to explain the universe without resorting to the supernatural. The Greek word for "natural" is physikos, which is where we get our modern word for the science of physics.

One of the first phenomena the Greeks contemplated was motion. Greek ideas on motion were put into sophisticated form by the philosopher Aristotle (384-322 B.C.). In Aristotle's theory, objects, or the elements composing them, had a natural place: objects on earth made of earth, water, fire, or air tended to return to that place. On earth, therefore, matter had a tendency to remain at rest, and matter put in motion eventually came to rest. The heavens, in Aristotle's theory, were made of a fifth element to which earthly laws did not apply, and there the laws included a theory of perpetual motion.

From this small beginning, theoretical physics had its foundation upon which the greats in physics from Kepler and Galileo to Isaac Newton and Albert Einstein and other modern physicists would begin to build physics as we know it today.

Any treatment of theoretical physics describes not only the history of the development of physics, but also the fundamental ideas and notions of the natural world that inspired physicists to create new models of it. Often new theories were based upon new facts coming to light, new methods of measurement, and new observations made possible by the development of technology. However, just as often, new theory was and is based upon existing theory, many times simply a clarification or adjustment of existing theories that makes them a more accurate model of reality. It is for this reason that Isaac Newton said: "If I have seen farther than others, it has been by standing on the shoulders of giants." Therefore, included in this discussion of theoretical physics are the prior theories that physicists built upon, the theories that led up to the mainstream theories physics upholds today.

Theory
To understand theoretical physics, one must understand the concept of theory. Theories are simple models of complex systems. The universe is a complex system. In order to analyze the universe as the science of physics does, it must be broken down into simple principles and basic ideas that create a visualization or model. A theory must be useful in that it should approximate reality and make it understandable, measurable, and predictable.

As Aristotle realized, there must be propositions that do not need, for whatever reason, to be proven. Such propositions he called the first principles (archai, principia) of demonstration.

To create a theory, one must begin with assumptions or postulates. An assumption is something accepted without proof, and the assumptions are therefore the weak point of any theory. For this reason, William of Ockham (c. 1287-1347), a medieval English philosopher, emphasized that the fewer the assumptions, the more useful the theory.

The assumptions should then lead to a theory that contains explanations of reality and the observations of reality should be able to be explained and even predicted by the theory. Therefore the predictions of the theory can be tested.

A theory can, however, be disproved if it can be shown that two contradictory conclusions follow from it. The scientific method used in modern times to test the viability of a theory is to draw a necessary conclusion from the theory and then check it against actual phenomena as rigorously as possible. This is called experimentation.

When assumptions become universally accepted as an established rule, they are called axioms and sometimes universal laws.

Mathematical Foundations
The most ancient written records from every culture show forms of measurement and basic arithmetic. Mathematics first arose out of the need to do calculations in commerce, to understand the relationships between numbers, to measure land, and to predict astronomical events. The history of mathematics ranges from Babylonian mathematics, with its sexagesimal (base 60) system dating back 4,000 years, to Euclidean geometry, which dates to 300 B.C., to modern calculus, developed independently by Isaac Newton and Gottfried Leibniz in the late 17th century.

The development of physics has always been intertwined with mathematics and measurements. Mathematics and measurement are the basic testing ground for determining the usefulness of a theory.

Mathematics can be broadly subdivided into the study of quantity, structure, space, and change (i.e. arithmetic, algebra, geometry and analysis). Each of these broad fields of mathematics has branched out into more abstract and complex forms due to the nature of the complex systems they must explain. Mathematical shorthand has been invented to create simple notations to denote not only simple values, but entire formulas, systems, or algorithms.

The test of theory in theoretical physics relies strongly upon mathematics. The necessary prerequisite mathematics includes algebra, geometry, trigonometry, analytic geometry, single-variable and multivariable calculus, linear algebra, ordinary differential equations, partial differential equations, complex numbers and the complex plane, and probability theory. Today much of theoretical physics relies on vector calculus. However, the mathematics itself is meaningless without the measurements it represents; therefore, the International System of Units must be understood as well, with its base units such as the kilogram, second, metre, ampere, kelvin, and mole.

In theoretical physics, if the mathematics of a theory is found to be inaccurate, the theory collapses. Therefore, the importance of mathematical rigor in theoretical physics cannot be overstated.

Arithmetic, Physics and Mathematics, or the Units of Measurement

Classical Mechanics

 * Where space is flat, h is zero, and c is infinite.

Light
The Greeks also had theories on light, its properties and its speed. They thought light did not have movement or speed, because if it did, the speed would be too great to contemplate. They concluded that light must simply either be there or not be there. Aristotle, the famous Greek natural philosopher, said that "...light is neither fire nor any kind whatsoever of body nor an efflux from any kind of body ... It is certainly not a body, ... clearly therefore, light is just the presence of that [or light is simply the presence of light itself]. Empedocles (and with him all others who used the same forms of expression) was wrong in speaking of light as 'traveling' or being at a given moment between the earth and its envelope, its movement being unobservable by us; that view is contrary both to the clear evidence of argument and to the observed facts; if the distance traversed were short, the movement might have been unobservable, but where the distance is from extreme East to extreme West [i.e. at dawn light fills the entire sky which is a great distance therefore], the draught upon our powers of belief is too great." -- Aristotle "On the Soul" Book 2, Part 7. (Note: comments in brackets added for clarification.)

This indicates that the Greeks believed the speed of light to be infinite, i.e. that light was present in all places at once when it appeared. This idea was not overturned even by Galileo, whose attempt to measure the speed of light in the 17th century proved inconclusive, and it continued to hold sway in Newton's day.

The Euclidean Space
Euclidean geometry, mentioned earlier as a form of mathematics originating with the Greeks in about 300 B.C., is the mathematical abstraction and extension of ordinary three-dimensional space. It can be described by the Cartesian system of coordinates.

In Euclid's Elements, Euclidean space is "flat". This flatness is obvious from Euclid's definition 23 which states: 'Parallel straight lines are straight lines which, being in the same plane and being produced indefinitely in both directions, do not meet one another in either direction.'

Motion of Objects
Classical mechanics is generally classified as pre-20th-century mechanics. Aristotle's views of motion held sway for 2,000 years, until an Italian scientist, Galileo Galilei (1564-1642), developed such ingenious and conclusive experiments that they not only began the destruction of Aristotelian physics but demonstrated the absolute necessity of experimentation in science.

Galileo studied the physics of motion by performing experiments in which he was able to calculate the speed and acceleration of falling bodies. He developed new theories to explain motion based upon his experiments. Galileo made advancements in theoretical physics by postulating a theory of inertia contrary to any previous assumptions. Galileo asserted his theory of motion which is stated as his Principle of Inertia: "A body moving on a level surface will continue in the same direction at constant speed unless disturbed."

Isaac Newton (1642-1727) capitalized upon Galileo's observations and created theories based upon this notion that objects tend to maintain momentum. Newton was an English scientist who systemized and generalized the assumptions made by Galileo, extended the predictions of the theory, and put them in a way that could be described by mathematical formulae that could be tested.

A direct translation from the Latin of Newton's first law of motion with Newton's own explanation is:

"LAW I: Every body continues ever in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. -- Projectiles continue ever in their motions, so far as they are not retarded by the resistance of the air, or impelled downwards by the force of gravity. A top, whose parts by their cohesion are perpetually drawn aside from rectilinear motions, does not cease its rotation, otherwise than as it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in more free spaces, continue ever their motions both progressive and circular for a much longer time."--end translation

Newton begins his Laws of Motion with Galileo's idea of inertia. Inertia in Newton's first law of motion is a concept and an assumption, and therefore cannot be proven by experimentation or mathematics. (Inertia has two meanings today; later, under the heading "Inertia", the term will be considered in its second, measurable sense.) The idea of inertia present in Newton's first law is Galileo's imagined concept that once something is set in motion, it stays in motion until a force affects it. In Newton's first law, rest is the state of zero momentum, so inertia can be stated simply as the tendency to maintain movement. The idea comes before the law and creates the law: without the idea of inertia there are no Newtonian laws of motion.

Working from Galileo's principle, Newton reasoned as follows. Aristotle had postulated continual circular movement in the heavens, with different laws on earth, because Aristotle believed the earth to be at rest, a fixed motionless reference frame at the center of the universe. Newton knew the earth was not at rest, since by his time the Copernican system with the sun at the center was the generally accepted theory. Newton therefore realized that the same laws that governed heavenly bodies had to apply to earthly bodies as well: the maintenance of movement, inertia, that had been postulated for celestial bodies had to exist on earth too. It was this reasoning that led Newton to conclude that something on earth was making things come to rest. What Galileo called a disturbance, Newton called a force.

Inertia was Newton's way of describing the "default state" of matter, which we now consider a law of the universe. Aristotle had described the default state of earthly objects as rest in their natural place. Newton contradicted, or enlarged upon, Aristotle's default state and said that on the contrary the default state is inertia, and upon that seminal idea he built the laws of motion. Without the idea, the mental abstraction, of inertia, Newton could never have formulated the Laws of Motion.

From his first law, Newton could then extrapolate how much force it would take to change the momentum of an object. Newton therefore stated his Second Law of Motion thus:

"LAW II: The alteration of motion is ever proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed. -- If a force generates a motion, a double force will generate double the motion, a triple force triple the motion, whether that force be impressed altogether and at once, or gradually and successively. And this motion (being always directed the same way with the generating force), if the body moved before, is added to or subducted from the former motion, according as they directly conspire with or are directly contrary to each other; or obliquely joined, when they are oblique, so as to produce a new motion compounded from the determination of both."

Newton here is basically saying that the change in the momentum of an object is proportional to the amount of force exerted upon the object. He also states that the change in direction of momentum is determined by the angle from which the force is applied. Interestingly, in his further explanation Newton is also restating another prior idea of Galileo, what we today call the Galilean transformation, or the addition of velocities.

An interesting fact when studying Newton's Laws of Motion from the Principia is that Newton himself does not explicitly write formulae for his laws, which was common in scientific writings of that time period. In fact, it is today commonly added when stating Newton's second law that Newton said "and proportional to the mass of the object." This, however, is not found in Newton's second law as directly translated above; the idea of mass is not introduced until the third law. Nevertheless, it has become a common convention to describe Newton's second law by the mathematical formula F = ma, where F is force, m is mass, and a is acceleration. This is actually a combination of Newton's second and third laws expressed in a very useful form. The formula did not even begin to be used in this form until the 18th century, after Newton's death, but it is implicit in his laws.
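The conventional form F = ma can be illustrated numerically; a minimal sketch, in which the masses and accelerations are arbitrary values chosen for illustration:

```python
# Newton's second law in its modern conventional form: F = m * a.
# All numbers below are arbitrary illustrative values.

def force(mass_kg, acceleration_ms2):
    """Return the net force in newtons needed to give `mass_kg`
    an acceleration of `acceleration_ms2`."""
    return mass_kg * acceleration_ms2

# "A double force will generate double the motion," as Law II states:
f1 = force(10.0, 2.0)   # 10 kg accelerated at 2 m/s^2
f2 = force(10.0, 4.0)   # same mass, double the acceleration
```

Doubling the acceleration of the same mass requires exactly double the force, which is the proportionality Newton describes in words.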

Newton's Third Law of Motion states: "LAW III: To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts. -- Whatever draws or presses another is as much drawn or pressed by that other. If you press a stone with your finger, the finger is also pressed by the stone. If a horse draws a stone tied to a rope, the horse (if I may so say) will be equally drawn back towards the stone: for the distended rope, by the same endeavour to relax or unbend itself, will draw the horse as much towards the stone, as it does the stone towards the horse, and will obstruct the progress of the one as much as it advances that of the other. If a body impinge upon another, and by its force change the motion of the other, that body also (because of the equality of the mutual pressure) will undergo an equal change, in its own motion, toward the contrary part. The changes made by these actions are equal, not in the velocities but in the motions of the bodies; that is to say, if the bodies are not hindered by any other impediments. For, because the motions are equally changed, the changes of the velocities made toward contrary parts are reciprocally proportional to the bodies. This law takes place also in attractions, as will be proved in the next scholium."

The role of mass is expressed here for the first time in the words "reciprocally proportional to the bodies", which has traditionally been carried into Law II as "proportional to the mass of the object." Thus we arrive at F = ma.

Newton's Third Law of Motion also expresses one of the fundamental symmetry principles of the universe. The idea is that forces in nature always exist in pairs. Newton's third law is often restated as, "For every action, there is an equal and opposite reaction."

Gravity
"I deduced that the forces which keep the planets in their orbs must be reciprocally as the squares of their distances from the centres about which they revolve, and thereby compared the force requisite to keep the moon in her orb with the force of gravity at the surface of the earth and found them to answer pretty nearly." -- Isaac Newton, 1667

Here Newton introduced a novel idea by imagining that the Earth's gravity influenced the moon, counter-balancing its orbital rotation about the earth, and from this he deduced the inverse-square law. A relationship equivalent to the inverse-square law had already been observed by Johannes Kepler in his book The Harmony of the World, which contains what is now known as Kepler's Third Law: for any two planets, the ratio of the squares of their periods is the same as the ratio of the cubes of the mean radii of their orbits.

Kepler had merely made an observation from studying the data of astronomer Tycho Brahe for some twenty years. Kepler did not understand why there was an inverse-square law, he merely stated this observation. What Newton did was explain that the inverse-square law was caused by the force of gravity between two massive objects.
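Kepler's observation can be checked against round modern values for the orbits of Earth and Mars (semi-major axes in astronomical units, periods in years; these figures are supplied here for illustration and are not taken from the text):

```python
# Kepler's third law: T^2 / a^3 is (very nearly) the same for every planet.
# Round textbook orbital values: a in astronomical units, T in years.
planets = {
    "Earth": {"a_au": 1.000, "T_yr": 1.000},
    "Mars":  {"a_au": 1.524, "T_yr": 1.881},
}

# With these units the ratio comes out close to 1 for every planet.
ratios = {name: p["T_yr"]**2 / p["a_au"]**3 for name, p in planets.items()}
```

That the same ratio appears for every planet is exactly the regularity Kepler recorded; Newton's contribution was to show that an inverse-square force of gravity produces it.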

Newton wrote a full treatment of his new physics including his laws of motion and gravity and its application to astronomy which was published in 1687 and entitled Philosophiae naturalis principia mathematica or commonly referred to as the Principia.

According to biographers J J O'Connor and E F Robertson, "the Principia is recognised as the greatest scientific book ever written. Newton analysed the motion of bodies in resisting and non-resisting media under the action of centripetal forces. The results were applied to orbiting bodies, projectiles, pendulums, and free-fall near the Earth. He further demonstrated that the planets were attracted toward the Sun by a force varying as the inverse square of the distance and generalised that all heavenly bodies mutually attract one another." Further generalisation led Newton to the law of universal gravitation: "... all matter attracts all other matter with a force proportional to the product of their masses and inversely proportional to the square of the distance between them."

Newton had based much of his theory upon the observations of Galileo, who had made a study of falling bodies and observed that all bodies fall with the same acceleration. The force of gravity was known before Newton's time; however, it had not been systematically explained.

What Galileo had not envisioned was that which was made clear in Newton's universal law of gravity: that the gravitational force is proportional to the masses of the objects. In other words, Galileo would never have conceived of objects falling to earth at the same rate had the objects had masses comparable to the earth. For objects on earth, the mass of the object is so small compared to the mass of the earth that it is negligible. Therefore, for gravity measurements on Earth, the acceleration is independent of the mass or weight of the object, and all objects fall toward the earth with the same acceleration of about 9.8 m/s^2.
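The figure of 9.8 m/s^2 follows directly from Newton's universal law: the acceleration of a small falling body is g = GM/r^2, which involves only the Earth's mass and radius, not the mass of the falling object. A minimal sketch using standard modern values (not given in the text):

```python
# Surface gravity from the universal law of gravitation: g = G * M / r^2.
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_earth = 5.972e24   # mass of the Earth, kg
r_earth = 6.371e6    # mean radius of the Earth, m

# The falling object's own mass cancels out of the acceleration.
g = G * M_earth / r_earth**2
```

The result is about 9.8 m/s^2 for every object near the Earth's surface, which is why Galileo saw all bodies fall with the same acceleration.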

This same negligible gravity of one object compared to another orbiting object helped Kepler in his third law to discover through observation the inverse-square law of the planetary orbits. It is only due to the mass of the sun being so extraordinarily large compared to the masses of the planets that the mass of the planets becomes negligible in the formula. Since Kepler did not understand the relation between mass and gravity, he would have never observed the inverse-square law had earth been comparable in mass to the sun.

So here we see in the development of theoretical physics the intertwining of observations with the development of theories of force and mathematics to explain the observations. The theories advanced by Kepler and Galileo were correct on a simple level, but it was Newton who introduced force and the relation between force and mass in order to explain why these observations were accurate. Obviously, the modern theories of physics and the description of universal laws did not develop in a vacuum. Physicists built upon prior knowledge and prior ideas. From these prior ideas, they produced new models of the natural world and made new inferences. In their turn these new models made predictions and required new explanations so that sometimes the developing theories could only be explained by the introduction of a new force, or a new constant such as the gravitational constant, or a new universal law such as inertia. Even Aristotle was working from prior notions that he systematically arranged into the first elements of theoretical physics.

Constant Acceleration
Galileo's third principle is summarized for falling bodies as: "The distance fallen is proportional to the time squared, $$d \propto t^2\,$$."

Galileo's third principle defines a constantly changing velocity. When there is a change in velocity, it is called acceleration. Therefore, bodies fall with a constant acceleration.
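Galileo's proportionality is exactly what constant acceleration produces; a minimal sketch using the modern relation $$d = \tfrac{1}{2}gt^2$$, with g taken as the standard 9.8 m/s^2:

```python
# Under constant acceleration g, the distance fallen in time t is
# d = 0.5 * g * t^2, so doubling the time quadruples the distance.
g = 9.8  # m/s^2

def distance_fallen(t_seconds):
    """Distance in metres fallen from rest after t_seconds."""
    return 0.5 * g * t_seconds**2

d1 = distance_fallen(1.0)  # distance after 1 s
d2 = distance_fallen(2.0)  # distance after 2 s: four times d1
```

The ratio d2/d1 = 4 for a doubling of time is precisely "distance proportional to time squared".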

Circular Motion
Keeping an object rotating in circular motion requires a centripetal force $$F = \frac{mv^2} {r} $$ where $$m$$ is the mass of the object, $$v$$ is the velocity of the object and $$r$$ is the radius of the circle.
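A quick numerical check of this formula; the mass, speed, and radius below are arbitrary illustrative values:

```python
# Centripetal force F = m * v^2 / r keeps an object moving in a circle.
def centripetal_force(m_kg, v_ms, r_m):
    """Force in newtons directed toward the centre of the circle."""
    return m_kg * v_ms**2 / r_m

# A 1 kg stone whirled at 10 m/s on a 2 m rope:
F = centripetal_force(1.0, 10.0, 2.0)
```

Note that the force grows with the square of the speed, so whirling the stone twice as fast requires four times the tension in the rope.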

Inertia
The philosopher René Descartes (1596 - 1650) used the idea of a greater God and an infinite universe with no special or privileged place to articulate the concept of inertia: a body at rest remains at rest, and one moving in a straight line maintains a constant speed and same direction unless it is deflected by a 'force'. We can see this is borrowed from Galileo's principle of inertia and was later also incorporated into Newton's first law of motion.

Possibly the hardest difficulty for science to overcome has been the influence of religious and philosophical implications upon the theoretical science of physics.

Momentum
$$p=mv$$

Force
$$F=ma$$

Energy
$$E=mc^2 $$

Wave Mechanics
$$ {\displaystyle \mathrm {i} \hbar {\frac {\partial }{\partial t}}|\,\psi (t)\rangle ={\hat {H}}|\,\psi (t)\rangle }$$

(time dependent Schrödinger equation)

Fluids
$$p=F/A $$

(p=pressure)

Heat, Temperature, Thermodynamics
$$Q=mc\Delta T $$

Electricity
$$V=IR $$

Magnetism
$$F=qvB\sin \theta $$

(force on a charge q moving with velocity v at angle θ to a magnetic field B)

Special Relativity

 * Where space is flat and c is finite (i.e. flat spacetime).

Electromagnetism
Early researchers differed in their explanations of the fundamental nature of what we now call electromagnetic radiation. In 1690, Christiaan Huygens explained the laws of reflection and refraction on the basis of a wave theory, while Sir Isaac Newton believed that light consists of particles, which he designated corpuscles. In the early 19th century, Thomas Young and Augustin-Jean Fresnel performed experiments on interference that showed a corpuscular theory of light to be inadequate.

Then in 1873 James Clerk Maxwell showed that by making an electrical circuit oscillate it should be possible to produce electromagnetic waves. His theory made it possible to compute the speed of electromagnetic radiation purely on the basis of electrical and magnetic measurements, and the computed value corresponded very closely to the empirically measured speed of light. In 1888, Heinrich Hertz built an electrical device that actually produced what we would now call microwaves, essentially radiation at a lower frequency than visible light. Everything up to that point suggested that Newton had been entirely wrong to regard light as corpuscular.

Then it was discovered that when light strikes an electrical conductor it causes electrons to move away from their original positions. Greater intensities of light at one frequency can cause more electrons to move, but they will not move any faster; higher frequencies of light can cause electrons to move faster. This appeared to raise a contradiction: light had been shown by earlier experiments to be a wave, and now in other experiments it appeared to behave as a particle.

Old quantum theory
Quantum mechanics developed from the study of electromagnetic waves: not only visible light, seen in the colors of the rainbow, but also more energetic waves such as ultraviolet light, x-rays, and gamma rays, as well as waves with longer wavelengths, including infrared waves, microwaves, and radio waves. We are not, however, speaking of sound waves, but only of waves that travel at the speed of light. Also, when the word "particle" is used, it refers to elementary or subatomic particles.

Planck's constant
Without knowing it, Max Planck discovered that there is a definite relation between the frequency of light and the energy that it carries. Planck was actually studying how the radiation of a body is related to its temperature, in an attempt to devise a new radiation law. His investigation ended up describing the energy of a wave in mathematical terms: he took a small segment of a wave, or as he called it "an oscillator", to describe the wave's total energy. What he found concerning radiation and oscillators was really a description of electromagnetic waves. Waves have crests and troughs; a cycle is the return from one point, such as a crest, to the next crest. So Planck used the frequency of the wave, meaning the number of cycles per second, to determine the energy of a wave. What he discovered was that, given any wave, a single number multiplied by the frequency gives the energy of the wave, and that this single number is the same for waves of all frequencies. This number is called Planck's constant and is represented by the letter h in physics formulas. The value of h itself is exceedingly small, about $$6.62618 \times 10^{-34}$$ joule-seconds.
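Planck's relation, energy equals h times frequency, can be illustrated numerically; a minimal sketch in which the 500 nm wavelength is an arbitrary example (roughly green light), not a value taken from the text:

```python
# Energy of one quantum of light: E = h * f, with f = c / wavelength.
h = 6.62618e-34  # Planck's constant, joule-seconds (value used in the text)
c = 2.998e8      # speed of light, m/s

wavelength = 500e-9   # 500 nm, roughly green light (illustrative choice)
f = c / wavelength    # frequency in cycles per second
E = h * f             # energy of a single quantum, in joules
```

The energy of one quantum comes out to a few times 10^-19 joules, so small that ordinary light appears perfectly continuous.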

However, when you investigate the energy of a wave in this manner, it seems that the wave is carrying its energy in a given number of little packets per second. This discovery seemed to give the wave a particle-like character. These packets of energy along the wave were called quanta by Planck. Quantum mechanics began with the discovery that energy is delivered in these packets, whose size is related to the color, or frequency, of the light.

There originally were two competing ways to describe light, either as a wave propagated through empty space, or as small particles traveling in a straight line. Because Planck showed that the energy of the wave is made up of packets, the particle analogy became useful to help understand how light delivers energy in multiples of certain set values. Light of one frequency delivers multiples of a certain unit amount designated as a quantum of energy. Nevertheless, the wave analogy is useful to help understand other phenomena. In 1905, Albert Einstein used Planck's constant to postulate that the energy in a beam of light occurs in concentrations that he called photons. A single photon is a single quantum of energy, but the energy associated with a photon varies depending upon the frequency of the wave. Photons can deliver more or less energy, depending on their frequency. This description sounds like Newton's corpuscular account except that a photon was said to have a frequency, and the energy of the photon was accounted proportional to its frequency. This explained all previous seeming contradictions in experiments where light sometimes displayed wave-like behavior and other times displayed particle-like behavior.

Both the idea of a wave and the idea of a particle are derived from our everyday experience. We cannot see photons. We can only investigate their properties indirectly. We look at some phenomena, such as the rainbow of colors that we see when a thin film of oil rests on the surface of a puddle of water, and we can explain that phenomenon to ourselves by saying that light is like waves. We look at other phenomena, such as the way a photoelectric meter in our camera works, and we explain it by analogy to particles colliding with the detection screen in the meter. In both cases we take concepts from our everyday experience and apply them to a world we have never seen. Neither form of explanation is entirely satisfactory. To remind us that both "wave" and "particle" are concepts imported from our macro world to explain the world of atomic-scale phenomena, some physicists such as George Gamow have used the term "wavicle" to refer to whatever it is that is really there. In the following discussion, "wave" and "particle" may both be used depending on which aspect of quantum mechanical phenomena is under discussion.

Reduced Planck's constant
Planck's constant originally represented the energy that a light wave carries as a function of its frequency. However, in 1925 when Werner Heisenberg developed his full quantum theory, which is discussed below, calculations involving wave analysis called Fourier series were fundamental, and so another version of Planck's constant showed up in the mathematical formula. This "reduced" version of Planck's constant includes a conversion factor to make calculations involving wave analysis easier to deal with. It showed up again in 1927 in the Uncertainty Principle, also discussed later in this article. Finally, this reduced Planck's constant appeared in Dirac's equation and was then given an alternate designation, "Dirac's constant." Therefore, it is appropriate to begin with an explanation of what this constant is, even though we haven't yet touched on the theories that made its use convenient.

As noted above, the energy of any wave is its frequency multiplied by Planck's constant. A wave is made up of crests and troughs; a cycle is defined by the return from a certain position to the same position, such as from the top of one crest to the next crest. A cycle is mathematically related to a circle: both comprise 360 degrees. A degree is a unit of measure for the amount of turn needed to produce a certain arc at a given distance, and a sine curve is generated by a point on the circumference of a circle as that circle rotates. There are $$2\pi$$ radians per cycle in a wave, just as the circumference of a circle is $$2\pi$$ times its radius; $$\pi$$ radians correspond to 180 degrees. Since one cycle is $$2\pi$$ radians, dividing h by $$2\pi$$ gives a constant that, when multiplied by the angular frequency of a wave (radians per second), gives the energy in joules. And $$h/2\pi$$ is h-bar: $$ \hbar = \frac{h}{2 \pi} \ $$.

The reduced Planck's constant is written in mathematical formulas as $$\hbar$$ and is simply called "h-bar". The reduced Planck's constant thus gives the energy of a wave per radian instead of per cycle. The fundamental packet of energy remains the photon, one quantum of energy corresponding to h times the frequency in cycles; the reduced Planck's constant merely divides that single quantum of energy into mathematically manageable parts.

The two constants h and $$\hbar$$ are merely conversion factors between energy units and frequency units. The reduced Planck's constant is used more often than h itself in quantum mechanical formulas, chiefly because angular velocity, or angular frequency, is measured in radians per second. When equations relevant to such problems are written in terms of $$\hbar$$, the $$2 \pi$$ factors in numerator and denominator cancel out, saving a computation.
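The conversion described above can be checked numerically. The sketch below uses the standard value of Planck's constant and an illustrative light frequency (neither figure is from the text) to show that energy per cycle and energy per radian agree once the 2π factors are accounted for:

```python
import math

# Standard value of Planck's constant in J*s (exact since the 2019 SI redefinition).
h = 6.62607015e-34
hbar = h / (2 * math.pi)  # reduced Planck's constant: energy per radian per second

# Illustrative example: light at 540 THz (roughly green light).
f = 5.4e14               # frequency in cycles per second (Hz)
omega = 2 * math.pi * f  # angular frequency in radians per second

E_per_cycle = h * f          # photon energy computed from cycles
E_per_radian = hbar * omega  # same photon energy computed from radians

# The two routes give the same energy: the 2*pi factors cancel.
assert math.isclose(E_per_cycle, E_per_radian)
print(E_per_cycle)  # on the order of 3.6e-19 joules per photon
```
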

Bohr atom


In 1897 J. J. Thomson discovered the particle called the electron, and by means of the gold foil experiment physicists later established that a dense nucleus sits at the center of the atom. At first it was natural to picture the atom as a miniature solar system, with negatively charged electrons orbiting the positively charged nucleus that attracts them. But classical electromagnetism predicts that an orbiting, and therefore accelerating, charge continuously radiates away energy, so that simple analogy implied the electron should spiral into the nucleus within a tiny fraction of a second. The great question of the early 20th century was therefore, "Why do electrons normally maintain a stable orbit around the nucleus?"

In 1913, Niels Bohr resolved this substantial problem by applying the idea of discrete (non-continuous) quanta to the orbits of electrons. This account became known as the Bohr model of the atom. Bohr theorized that electrons can only inhabit certain orbits around the nucleus. These orbits could be derived, long before the 1931 development of the electron microscope, by looking at the spectral lines produced by atoms.

The way that Bohr quantized the orbits of the electrons was by assuming that the angular momentum of each orbit was derived from the value of h, Planck's constant. In 1913 Bohr still considered the electron to be a small particle in a solar-system-type orbit, but he took Planck's constant to mark a fundamental division at the subatomic level. Bohr argued that the orbits were circular and that the only allowed orbits had an angular momentum equal to some integer times h divided by 2$$\pi$$, that is, times the reduced Planck's constant. Each orbit after the initial orbit has an angular momentum that is a whole-number multiple of h/2$$\pi$$.
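Bohr's quantization rule can be written directly as L = n·ħ for n = 1, 2, 3, …; a minimal sketch (the function name is ours, not standard notation):

```python
import math

h = 6.62607015e-34        # Planck's constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck's constant

def bohr_angular_momentum(n):
    """Angular momentum of the n-th allowed Bohr orbit: L = n * hbar."""
    if n < 1:
        raise ValueError("Bohr orbits are numbered from n = 1")
    return n * hbar

# Each orbit's angular momentum is a whole-number multiple of the first orbit's.
for n in (1, 2, 3):
    ratio = bohr_angular_momentum(n) / bohr_angular_momentum(1)
    assert math.isclose(ratio, n)
```
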



Bohr's analysis of electron orbits as circular: a little math on circular orbits. Bohr was very familiar with the dynamics of simple circular orbits in an inverse square field as described in classical mechanics. An object moving on a circle at constant speed is continually accelerated toward the center, and this circular (centripetal) acceleration equals the speed squared divided by the radius: $$a = v^2/r$$, where v is speed and r is radius. That is, for a given speed the acceleration is inversely proportional to the radius of the circle; if the radius is doubled, the acceleration is halved. For an electron held by the inverse-square Coulomb attraction of the nucleus, setting the centripetal force equal to the attractive force shows that the square of the orbital speed is inversely proportional to the radius (Kepler's Third Law expresses the same dynamics by relating the square of the orbital period to the cube of the radius). It follows that if the radius of orbit number n is proportional to n squared, then the speed in that orbit is proportional to 1/n. Mass times speed times radius gives angular momentum, which therefore goes as n-squared over n, that is, it is simply proportional to n. Bohr argued then that the angular momentum in any orbit n was nKh, where h is Planck's constant and K is some multiplying factor, the same for all the orbits, which was later determined to be 1/2$$\pi$$.
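The scaling argument above can be verified numerically. This sketch assumes the standard values of the Bohr radius and electron mass (not given in the text) and checks that radius ∝ n², speed ∝ 1/n, and angular momentum = n·ħ:

```python
import math

# Standard physical constants, assumed here for illustration.
hbar = 1.054571817e-34    # reduced Planck's constant, J*s
m_e  = 9.1093837015e-31   # electron mass, kg
a0   = 5.29177210903e-11  # Bohr radius (radius of the n = 1 orbit), m

for n in (1, 2, 3):
    r = a0 * n**2               # orbit radius scales as n squared
    v = hbar / (m_e * a0 * n)   # orbital speed scales as 1/n
    L = m_e * v * r             # angular momentum = mass * speed * radius
    # n squared over n leaves n, so L should come out to exactly n * hbar.
    assert math.isclose(L, n * hbar, rel_tol=1e-12)
```
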

Bohr treated one circular orbit as one cycle in an oscillator (as in Planck's initial work defining the constant h), which is similar to one cycle in a wave. The number of orbits per second of the electron around its nucleus therefore defines a frequency. Multiplying the frequency of each orbit by Planck's constant h singles out only certain allowed orbits, thus fixing the sizes of the orbits. Through this analysis Bohr found that the multiplying factor K for the first allowed orbit was equal to 1/2$$\pi$$. This was the argument Bohr used to establish that angular momentum in his model is quantized in units of h/2$$\pi$$.

Bohr's theory showed electrons orbiting the nucleus of an atom in a way amazingly different from what we see in the world of everyday experience. He showed that when an electron changed orbits it did not move in a continuous trajectory from one orbit around the nucleus to another. Instead, it suddenly disappeared from its original orbit and reappeared in another orbit. Each distance at which an electron can orbit corresponds to a quantized amount of energy: the closer to the nucleus an electron orbits, the less energy it takes to remain in that orbital. Electrons that absorb a photon gain a quantum of energy and so jump to an orbit that is farther from the nucleus, while electrons that emit a photon lose a quantum of energy and so jump to an inner orbital. Electrons cannot gain or lose a fractional quantum of energy, and so, it is argued, they cannot sit at a fractional distance between allowed orbitals. Allowed orbitals were designated by whole numbers using the letter n, with the innermost orbital designated n = 1, the next out n = 2, and so on. All orbitals sharing the same value of n are collectively called an electron shell.
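The photon-emission picture above has a standard quantitative form for hydrogen, E_n = −13.6 eV / n² (a textbook result, not derived in the text). A short sketch computes the photon released when an electron drops from shell n = 3 to n = 2, which is the red line of hydrogen's visible spectrum:

```python
# Hydrogen shell energies and an example transition (standard values assumed).
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, eV
h_eVs = 4.135667696e-15  # Planck's constant in eV*s
c = 2.99792458e8         # speed of light, m/s

def level_energy(n):
    """Energy of shell n in eV; more negative means more tightly bound."""
    return -RYDBERG_EV / n**2

# Dropping from n = 3 to n = 2 releases one photon carrying the difference.
dE = level_energy(3) - level_energy(2)  # positive: energy released, eV
wavelength = h_eVs * c / dE             # photon wavelength = h*c / E
print(round(wavelength * 1e9))          # wavelength in nanometres (red light)
```
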

Bohr's model of the atom was essentially two-dimensional because it depicts electrons as particles in circular orbits. In this context, two-dimensional means something that can be described on the surface of a plane. One-dimensional means something that can be described by a line. Because circles can be described by their radius, which is a line segment, sometimes Bohr's model of the atom is described as one-dimensional.

De Broglie, Schroedinger, Bohr and Einstein debated which model of the atom was best suited to describe it, and Bohr's model came to be accepted as the only viable solution. Bohr, however, was making an assumption: having found that the factor for the first orbit was 1/2$$\pi$$, he assumed that all subsequent orbits were multiples of the first, first by h, then 2h, then 3h, and so on. This was assumed without proof. The only circumstantial evidence was that the lines in the atomic spectrum appeared only in certain discrete places for each element, so the orbits had to be in discrete places. Bohr assumed they were multiples of h (Planck's constant) because h was how wave energy had already been quantized. As reference [1] discusses, Bohr's model was built on unprovable assumptions, as are most theories. Yet if Bohr's assumptions were true, they would predict that the electron would not fall into the nucleus of every atom. So it was a case of creating a theory to fit nature, not much different from Aristotle saying that all objects on earth come to rest in their own "natural place". In Bohr's time and even now, we must accept that the orbits are quantized by h, or else we have no other explanation for the electron not falling into the nucleus. For the outer orbits, the division by 2pi is not merely a matter of mathematical convenience leading to the reduced Planck's constant; rather, the first orbit carried the factor 1/2pi, and all the other orbits were integer multiples of h times that first-orbit factor. There were other theories circulating in the early twentieth century. Some new force might counter-balance the electron away from the nucleus, but electrons would also have to be counter-balanced away from each other in their orbits, so two new kinds of forces would have to be added for this type of theory. Choosing Bohr's atom became a matter of Ockham's Razor, i.e. the least number of new assumptions, since h was already known.

Wave/Particle Duality
Niels Bohr determined that it is impossible to describe light adequately by sole use of either the wave analogy or the particle analogy. He therefore enunciated the principle of complementarity, a theory of pairs, such as the pairing of wave and particle or the pairing of position and momentum. In quantum mechanics it was found that electromagnetic waves could react in certain experiments as though they were particles and in other experiments as though they were waves, and it was then discovered that subatomic particles could likewise sometimes be described as particles and sometimes as waves. Louis de Broglie worked out the mathematical consequences of these findings. This led to the theory of wave-particle duality, proposed by Louis-Victor de Broglie in 1924, which states that subatomic entities exhibit the properties of both waves and particles at the same time.

The Bohr atom model was enlarged upon with de Broglie's discovery that the electron has wave-like properties. For Bohr's atom, an orbit is therefore stable only if it meets the condition for a standing wave. A standing wave can be made if a string is fixed at both ends and made to vibrate; the only waves that can then occur are those with zero amplitude at the fixed ends, and they appear to oscillate in place, simply exchanging crest for trough in an up-and-down motion. These vibrations are called standing waves. If the string is bent into a circle, in the same way that the wave of an electron wraps around its orbit in the Bohr atom, a standing wave forms only when the wave fits the available space around the circle. In other words, no partial fragments of wave crests or troughs are allowed: the wave must be a continuous succession of whole crests and troughs all the way around the circle.
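The standing-wave condition says a whole number of electron wavelengths must fit the orbit's circumference: n·λ = 2πr. Using de Broglie's relation λ = h/(mv) and standard constants (assumed here, not given in the text), this sketch checks the condition for the first few Bohr orbits:

```python
import math

# Standard constants, assumed for illustration.
h    = 6.62607015e-34     # Planck's constant, J*s
hbar = h / (2 * math.pi)
m_e  = 9.1093837015e-31   # electron mass, kg
a0   = 5.29177210903e-11  # Bohr radius, m

for n in (1, 2, 3):
    r = a0 * n**2               # Bohr orbit radius
    v = hbar / (m_e * a0 * n)   # Bohr orbit speed
    wavelength = h / (m_e * v)  # de Broglie matter wavelength, lambda = h/(m*v)
    circumference = 2 * math.pi * r
    # Standing-wave condition: exactly n whole wavelengths around the orbit.
    assert math.isclose(circumference, n * wavelength, rel_tol=1e-12)
```
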

Full quantum mechanical theory
Werner Heisenberg developed the full quantum mechanical theory in 1925 at the young age of 24. Following his teacher, Niels Bohr, Heisenberg set out to work out a theory for the quantum behavior of electron orbitals. Because electron orbits could not be observed, he built a mathematical description of quantum mechanics on what could be observed, namely the light emitted from atoms in their atomic spectra, which shows light emitted by electrons only at certain discrete places. Instead of trying to explain every possible orbit, he began with the assumption that electron orbitals that cannot be observed in the spectrum do not exist. By this time it was known that the electron orbital was three-dimensional, but working out the mathematics for a three-dimensional atom proved too complicated, so he imagined the orbital flattened out to one dimension. Heisenberg modeled the electron orbital as a charged ball on a spring, an oscillator whose motion is not quite regular, called anharmonic. He used the laws of classical mechanics known in the macroscopic world, then applied quantum, meaning discrete (non-continuous), properties to the motion, leaving the gaps between the orbitals so that the mathematics would represent only the observed electron orbitals. The multiplication in his formula turned out to follow the rules for special arrays of numbers called matrices. This means that in Heisenberg's quantum mechanical mathematics, the normal commutative law of multiplication, where A × B = B × A, did not apply.
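The failure of the commutative law can be seen with any two small matrices. This illustrative sketch (the matrices are arbitrary examples, not Heisenberg's actual arrays) multiplies a pair of 2×2 matrices in both orders:

```python
# Multiply two small matrices in both orders to show that A*B != B*A.
def matmul(A, B):
    """Plain matrix multiplication on nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2],
     [0, 1]]
B = [[1, 0],
     [3, 1]]

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)         # [[7, 2], [3, 1]]
print(BA)         # [[1, 2], [3, 7]]
print(AB != BA)   # True: for matrices, the order of multiplication matters
```
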

Heisenberg was approaching quantum mechanics from the particle-like perspective of an oscillating charged electron. Particles appeared to make quantum jumps because of Planck's constant, which showed energy delivered in packets along the wave. Heisenberg described mathematically the intensity of a wave: the energy per unit volume multiplied by the velocity at which the energy is moving. Amplitude is the maximum height of a wave crest or depth of a trough, and amplitudes of position and momentum that have a period of 2$$\pi$$, like a cycle in a wave, are expressed as Fourier series. In his matrix mechanics, Heisenberg described the particle-like properties of the electron in a wave as having position and momentum. When the amplitudes of position and momentum are measured and multiplied together, they give intensity. However, he found that when position and momentum were multiplied together in that order, and then momentum and position were multiplied together in the reverse order, there was a difference, or deviation, of h/2$$\pi$$ between the two products. As noted earlier, h/2$$\pi$$ is h-bar, the amount corresponding to one radian of a cycle in the wave. Heisenberg would not understand the reason for this deviation for another two years, but for the time being he understood that the math worked and gave an exact description of the quantum behavior of the electron. Max Born reviewed Heisenberg's paper and recognized that this type of multiplication, which does not obey the commutative property of ordinary arithmetic, was matrix mathematics.

No one prior to this had applied this type of mathematics to quantum mechanics and Heisenberg's new matrix theory was able for the first time to fully calculate the quantum behavior of the electron and was later applied to all subatomic particles.

Schroedinger wave equation
Because particles could be described as waves, in late 1925 Erwin Schroedinger analyzed what an electron would look like as a wave around the nucleus of the atom. He came up with a wave equation that describes each electron by a wavefunction. He thus showed that the atom was not at all like a miniature solar system: the electron in a hydrogen atom is more like a wave that covers the entire sphere of its orbital all at once, meaning it is three-dimensional. Each electron has its own unique wavefunction, described in Schroedinger's equation by three properties (later Paul Dirac added a fourth). The three properties were 1.) which orbital the electron occupies, meaning closer to the nucleus with less energy or farther from the nucleus with more energy; 2.) the shape of the orbital, meaning that orbitals are not just spherical but take on other shapes; and 3.) the magnetic moment of the orbital, which arises from the charge of the electron as it moves around the nucleus.

These three properties were collectively called the wavefunction of the electron and are said to describe its quantum state. "Quantum state" means the collective properties of the electron describing what we can say about its condition at a given time. For the electron, the quantum state is described by its wavefunction, designated in physics by the Greek letter $$\psi$$ (psi, pronounced "sigh"). The three properties of Schroedinger's equation that describe the wavefunction, and therefore the quantum state, of the electron are each called quantum numbers. The first property, which describes the orbital, is numbered according to Bohr's model, where n is the letter used to describe the energy of each orbital; this is called the principal quantum number. The next quantum number, which describes the shape of the orbital, is called the azimuthal quantum number and is represented by the letter l (lower case L). The shape is caused by the angular momentum of the orbital. Angular momentum measures an object's tendency to continue to spin, and the azimuthal quantum number l represents the orbital angular momentum of the electron around the nucleus. Each value of l is also assigned a letter describing the shape of the orbital: the first shape is spherical and is described by the letter s; the next is shaped like a dumbbell and is described by the letter p; the more complicated shapes that follow are described by the letters d, f, and g. The third quantum number of Schroedinger's equation describes the magnetic moment of the electron and is designated by the letter m, sometimes written as m with a subscript l because the magnetic moment depends upon the second quantum number l.
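The allowed combinations of these quantum numbers follow simple counting rules (standard results, assumed here rather than stated in the text): l runs from 0 to n−1, and m runs from −l to +l, giving n² states per shell. A short sketch enumerates them:

```python
# Enumerate the allowed (n, l, m) quantum-number combinations for each shell.
SHAPE_LETTERS = "spdfg"  # orbital-shape letters for l = 0, 1, 2, 3, 4

def orbitals(n):
    """All (n, l, m) states in shell n: l in 0..n-1, m in -l..+l."""
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

for n in (1, 2, 3):
    states = orbitals(n)
    shapes = sorted({SHAPE_LETTERS[l] for _, l, _ in states})
    # The number of (n, l, m) states in shell n is n squared.
    print(n, len(states), shapes)
```
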

In May 1926 Schroedinger published a proof that Heisenberg's matrix mechanics and Schroedinger's wave mechanics gave equivalent results: mathematically they were the same theory. Nevertheless, both men claimed to have the superior theory. Heisenberg insisted on the existence of discontinuous quantum jumps in his particle-like picture of an oscillating charged electron, while Schroedinger insisted that a theory based on continuous wave-like properties, which he called "matter-waves", was better.

Uncertainty Principle
In 1927, Heisenberg drew from his quantum theory a new discovery with further practical consequences for this new way of looking at matter and energy on the atomic scale. In his matrix mechanics, Heisenberg had encountered a difference of h/2$$\pi$$ between position and momentum, representing a deviation of one radian of a cycle when the particle-like aspects of the wave were examined. Heisenberg analyzed this deviation of one radian and divided it equally between the measurements of position and momentum. This made it possible to describe the electron as a point particle in the center of one cycle of a wave, its position carrying a standard deviation of plus or minus one-half of one radian of the cycle (1/2 of h-bar). A standard deviation can be either plus or minus the measurement, i.e. it can add to the measurement or subtract from it; in three dimensions a standard deviation is a displacement in any direction. What this means is that when a moving particle is viewed as a wave, it is less certain where the particle is. In fact, the more certainly the position of a particle is known, the less certainly its momentum is known. This conclusion came to be called "Heisenberg's Indeterminacy Principle", or Heisenberg's Uncertainty Principle. To understand the idea behind the uncertainty principle, imagine a wave with its undulations, its crests and troughs, moving along. A wave is also a moving stream of particles, so you have to superimpose a stream of particles moving in a straight line along the middle of the wave. An oscillating ball of charge creates a wave larger than its own size, depending upon the length of its oscillation. The energy of a moving particle is therefore spread over the cycle of the wave, yet the particle itself has a location. Because the particle and the wave are the same thing, the particle is really located somewhere in the width of the wave; its position could be anywhere from the crest to the trough. The math of the uncertainty principle says that the uncertainty in the position of a moving particle is one-half the width from crest to trough, or one-half of one radian of a cycle in a wave.
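In modern notation the principle reads Δx·Δp ≥ ħ/2, with equality for a Gaussian (minimum-uncertainty) wave packet, for which the momentum spread is ħ/(2·Δx). This numerical restatement, assuming that standard Gaussian relation, shows the inverse trade-off: sharpening the position forces the momentum spread wider, while their product stays fixed at ħ/2:

```python
import math

hbar = 1.054571817e-34  # reduced Planck's constant, J*s

previous_sigma_p = None
for sigma_x in (1e-8, 1e-9, 1e-10):   # progressively sharper position, metres
    sigma_p = hbar / (2 * sigma_x)    # Gaussian minimum-uncertainty momentum spread
    # The product of the two spreads never drops below hbar/2.
    assert math.isclose(sigma_x * sigma_p, hbar / 2)
    # Narrowing the position spread widens the momentum spread.
    if previous_sigma_p is not None:
        assert sigma_p > previous_sigma_p
    previous_sigma_p = sigma_p
```
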

For moving particles in quantum mechanics, there is simply a certain degree of exactness and precision that is missing. You can be precise when you take a measurement of position and you can be precise when you take a measurement of momentum, but there is an inverse imprecision when you try to measure both at the same time as in the case of a moving particle like the electron. In the most extreme case, absolute precision of one variable would entail absolute imprecision regarding the other.

The consequence of the uncertainty principle was that the electron could no longer be considered to occupy an exact location in its orbital. Rather, the electron had to be described by every point it could possibly inhabit. Plotting the probable locations of the electron in its known orbital creates a cloud of points, spherical for the orbital of a hydrogen atom, that gradually fades out both nearer to the nucleus and farther from it. This is called a probability distribution. The Bohr atom number n for each orbital therefore became known as an n-sphere in the three-dimensional atom, pictured as a probability cloud in which the electron surrounds the nucleus all at once.

This led Heisenberg further to the description that, if no measurement is being made, the electron cannot be said to be in one particular location but is everywhere in the electron cloud at once. In other words, quantum mechanics cannot give exact results, only the probabilities for the occurrence of a variety of possible results. Heisenberg went further and said that the path of a moving particle only comes into existence once we observe it. However strange and counter-intuitive this may seem, quantum mechanics does still tell us the location of the electron's orbital, its probability cloud: Heisenberg was speaking of the particle itself, not of its orbital, which remains a known probability distribution.

Classical physics had shown since Newton that if you know the positions of stars and planets and details about their motions, you can predict where they will be in the future. For subatomic particles, Heisenberg denied this notion, showing that due to the uncertainty principle one cannot know the precise position and momentum of a particle at a given instant, so its future motion cannot be determined; only a range of possibilities for its future motion can be described.

These notions arising from the uncertainty principle appear only at the subatomic level and were a consequence of wave-particle duality. As counter-intuitive as they may seem, quantum mechanical theory, with its uncertainty principle, has been responsible for major improvements in the world's technology, from computer components to fluorescent lights to brain-scanning techniques.

Wavefunction collapse
Schroedinger's wave equation, with its unique wavefunction for a single electron, is also spread out in a probability distribution, like Heisenberg's quantized particle-like electron. This is because a wave is naturally a widespread disturbance and not a point particle. Schroedinger's wave equation therefore yields the same predictions as the uncertainty principle, because uncertainty of location is built into the very definition of a widespread disturbance like a wave. Uncertainty only needed to be stated separately in Heisenberg's matrix mechanics because that treatment started from the particle-like aspects of the electron. Schroedinger's wave equation shows the electron in its probability distribution at all times, as a wave that is spread out. But when you measure the position of an electron, it ceases to have wave-like properties, and without wave-like properties none of Schroedinger's wave-like definitions of the electron make sense anymore. The measurement of the position of the particle nullifies the wave-like properties, and Schroedinger's equation then fails. Because the electron can no longer be described by its wavefunction when measured, having become particle-like, this is called wavefunction collapse.

Eigenstates and eigenvalues
The term eigenstate is derived from the German word "eigen," which means "inherent" or "characteristic." The word eigenstate describes the measured state of some entity that possesses quantifiable characteristics such as position, momentum, etc. The state being measured and described must be an "observable" (i.e. something that can be observed and measured, like position or momentum), and must have a definite value. In the everyday world, it is natural and intuitive to think of everything as being in its own eigenstate: everything appears to have a definite position, a definite momentum, a definite value of measure, and a definite time of occurrence. However, quantum mechanics affirms that it is impossible to pinpoint an exact value for the momentum of a particle like an electron in a given location at a particular moment in time, or, alternatively, that it is impossible to give an exact location for such an object when the momentum has been measured. Due to the inevitable disturbance of the entity being measured by the very act of measurement, statements regarding both the position and momentum of particles can only be given in terms of a range of probabilities, a "probability distribution." Eliminating uncertainty in one parameter maximizes uncertainty in regard to the other.

Therefore it became necessary to have a way to clearly formulate the difference between the state of something that is uncertain in the way just described, such as an electron in a probability cloud, and effectively contrast it to the state of something that is not uncertain, something that has a definite value. When something is in the condition of being definitely "pinned-down" in some regard, it is said to possess an eigenstate. If the position of, e.g., an electron has been made definite, it is said to have an eigenstate of position.

A definite value, such as the position of an electron that has been successfully located, is called the eigenvalue of the eigenstate of position. The German word "eigen" was first used in this context by the mathematician David Hilbert in 1904. Schroedinger's wave equation gives wavefunction solutions, meaning the possibilities where the electron might be, just as does Heisenberg's probability distribution. As stated above, when a wavefunction collapse occurs because something has been done to locate the position of an electron, the electron's state becomes an eigenstate of position, meaning that the position has a known value.

Dirac wave equation
In 1928, Paul Dirac worked out a variation of Schroedinger's equation that accounted for a fourth property of the electron in its orbital. There was a doublet, meaning a pair of lines, in the spectrum of a hydrogen atom that was unaccounted for: there was more energy in the electron orbital from magnetic moment than had previously been described. Wolfgang Pauli, while studying alkali metals, had introduced what he called a "two-valued quantum degree of freedom" associated with the electron in the outermost shell. This led to the Pauli Exclusion Principle, which predicted that no more than two electrons can inhabit the same orbital, and more generally that no two fermions (the class of particles that includes electrons, protons, and neutrons) can exist in the same quantum state. Schroedinger's equation gave three quantum numbers for the electron, but if two electrons could share the same orbital, there had to be a fourth quantum number to distinguish those two electrons from each other and to describe the extra magnetic moment seen in the atomic spectrum. In early 1925, the young physicist Ralph Kronig had suggested to Pauli that the electron rotates in space in the same way that the earth rotates on its axis. This would account for the missing magnetic moment and allow two electrons in the same orbital to differ if their spins were in opposite directions, thus satisfying the Exclusion Principle. Dirac therefore included the fourth quantum number, called the spin quantum number and designated by the letter s, in the new Dirac equation for the wavefunction of the electron. In 1930, Dirac combined Heisenberg's matrix mechanics with Schroedinger's wave mechanics into a single quantum mechanical representation in his Principles of Quantum Mechanics. The quantum picture of the electron was now complete.
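With spin included, each (n, l, m) state holds at most two electrons under the Exclusion Principle, so shell n holds 2n² electrons in total (a standard consequence, assumed here rather than stated in the text). A minimal sketch:

```python
# Shell capacities under the Pauli Exclusion Principle.
def shell_capacity(n):
    """Maximum electrons in shell n: two spins per (l, m) state, n**2 states."""
    orbital_states = sum(2 * l + 1 for l in range(n))  # equals n squared
    return 2 * orbital_states

print([shell_capacity(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
```

These capacities (2, 8, 18, 32) are the familiar maximum occupancies of the first four electron shells in the periodic table.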

All of the above development of quantum theory was based mainly on the atomic spectrum of the hydrogen atom. Each atom of each element produces a unique pattern of spectral lines when light from that element is passed through a prism. Scientists could not study the electron and nucleus of the atom directly because they cannot be seen; even today, with high-resolution scanning tunneling electron microscopes, we can only get images of the atom as a blurry fuzzball. The spectral lines of the atom, however, reveal the orbits of electrons and the energies that can be expected. It was essentially the spectroscopic analysis of first the hydrogen atom and then the helium atom that led to quantum theory. The mathematical formulas were therefore made to fit the picture of the atomic spectrum, which is why quantum mechanics is sometimes referred to as a form of mathematical physics.

Quantum entanglement
However, it took a great mind to challenge quantum mechanics and to draw out a concept hidden in Heisenberg's quantum theory. That mind was Albert Einstein, who rejected Heisenberg's Uncertainty Principle. Collectively, Heisenberg's quantum mechanics, resting on Bohr's initial explanation, became known as the Copenhagen Interpretation of quantum mechanics. Einstein, in trying to show that it was not a complete theory, recognized that the theory predicted that two or more particles which have interacted in the past exhibit surprisingly strong correlations when various measurements are made on them. Einstein called this "spooky action at a distance". In 1935, this argument was set out in what became known as the Einstein-Podolsky-Rosen (EPR) paper, and Schroedinger published a paper analyzing it. Einstein showed that the Copenhagen Interpretation predicted quantum entanglement, which he was trying to prove incorrect on the ground that it would defy physics. Quantum entanglement means that measuring one entangled particle defines its properties and seems to influence the properties of its partner or partners instantaneously, no matter how far apart they are: because the particles are entangled, interaction with one causes instantaneous effects on the other. Einstein calculated that quantum theory would predict this, saw it as a flaw, and therefore challenged it. Instead of exposing a weakness in quantum mechanics, however, the argument led to the recognition that quantum entanglement does in fact exist, and it became another foundational part of quantum mechanics.

Historically there has been much controversy over quantum mechanics among the very physicists involved in the theory. Because the theory was in a sense an ad hoc construction, mathematics made to fit the measurements of the atomic spectrum, it has been controversial and was criticized by Albert Einstein, Erwin Schroedinger, Alfred Lande and other physicists involved in its development. In this respect the theory resembles the Ptolemaic earth-centered model of the solar system. Ptolemy's system, with the earth at the center, worked very well, predicting all the movements of the planets with great accuracy, even including retrograde motion. In fact, when Copernicus proposed his sun-centered model, it did not predict the positions of the planets as well as the earth-centered system, because Copernicus still used perfectly circular orbits. It was not until Galileo saw through the telescope that Venus went through all its phases that it could be proven beyond a doubt that the sun was at the center. Today quantum mechanics works like Ptolemy's earth-centered system: it may not be reality (even according to Niels Bohr), but it predicts things very well. If a better microscope, analogous to Galileo's telescope, is ever invented that can actually see atomic structure, quantum mechanics may have to be revised. There is so much material on the controversy over quantum mechanics that it would make an interesting article just quoting the physicists who invented it and Einstein, who rejected many of its principles. For now, however, quantum mechanics is so extremely useful in predicting outcomes at the atomic level that it probably will not be revised for some time to come.

General Relativity


Motion of Objects
In general relativity, the motion of objects on which no force acts is along a geodesic. A geodesic is a line of extremal length in the spacetime.
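The statement that free objects follow geodesics is usually written as the geodesic equation (a standard result, not given in the original text):

$$ \frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau} = 0 $$

where $$\tau$$ is the proper time along the path and the $$\Gamma^\mu_{\alpha\beta}$$ are the Christoffel symbols encoding the curvature of spacetime. In flat spacetime the $$\Gamma$$ terms vanish and the equation reduces to unaccelerated straight-line motion.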

References and source information
1. Asimov, Isaac. Understanding Physics, 1993.

2. Isaac Newton, AXIOMS, OR LAWS OF MOTION, original 1729 translation from Latin by Andrew Motte.

3. Sir Isaac Newton, Article by: J J O'Connor and E F Robertson, http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Newton.html.

4. J. Hermann, "Phoronomia", Amsterdam, Wetsten, (1716).

5. On the Soul By Aristotle Translated by J. A. Smith.

6. Tuomo Suntola, "Photon – the minimum dose of electromagnetic radiation", Suntola Consulting Ltd., Tampere University of Technology, Finland.

Authors
Contributing authors: Janeen Hunt