In optical light, astronomers measure the brightness of astronomical objects using the magnitude system. Historically, the magnitude system is based on a concept first introduced by the Greek astronomer Hipparchus (c. 190 BCE–120 BCE). In about 129 BCE, Hipparchus produced the first well-known star catalog in the western world. In this catalog, Hipparchus ranked stars by what he called magnitude. He called the brightest stars he could see stars of the first magnitude. Stars that were not as bright he called second magnitude. Using this system, he called the faintest stars he could just barely see sixth magnitude. This basic system has survived to the present day. Galileo forced a slight change to the system. In 1610, when he used a telescope to view the sky, Galileo discovered stars fainter than any the unaided eye can see, and the magnitude scale became open ended. As telescopes became bigger and better, astronomers kept adding more magnitudes to the faint end of the scale.

By the middle of the 19th century, astronomers realized it was necessary to define a more rigorous magnitude system. It had been determined that a first-magnitude star was approximately 100 times brighter than a sixth-magnitude star. Accordingly, in 1856, Norman Pogson (1829–1891) proposed that a difference of 5 magnitudes be defined as exactly a factor of 100 in brightness. Because it was believed in the western world at the time that all human senses respond logarithmically to stimuli, it seemed perfectly reasonable to define the magnitude difference between two sources in the following manner:
\begin{equation} m_{1} - m_{2} = 2.5 \log_{10} \left( \frac{F_{2}}{F_{1}} \right) \end{equation}
where the Fs are the fluxes, or amounts of light received, from the two objects labeled 1 and 2. This gives the magnitude difference between the first object and the second. Keep in mind that fainter objects have larger magnitude numbers than brighter objects. (Very bright objects can even have negative magnitudes. Hipparchus set up the scale with brighter objects taking smaller numbers more than 2,000 years ago, and it is easier to keep his convention than to go back and redefine all past observations using a new system.) This rule describes how brightness measured by a light-sensitive instrument can be represented as an astronomical magnitude. The magnitude of an object is also known as its apparent magnitude (m), because it describes how bright the object appears to us on Earth; as we can see from the equation, it is related to flux. There is also a quantity known as absolute magnitude (M), which describes how bright an object is inherently (formally, the apparent magnitude the object would have at a standard distance of 10 parsecs); it is related to the object's luminosity.
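As a quick worked check of Pogson's definition, suppose object 2 is exactly 100 times brighter than object 1, so that $F_{2} = 100\,F_{1}$. Substituting into the equation above gives

\begin{equation} m_{1} - m_{2} = 2.5 \log_{10} \left( \frac{100\,F_{1}}{F_{1}} \right) = 2.5 \log_{10}(100) = 2.5 \times 2 = 5, \end{equation}

so the fainter object (object 1) has a magnitude exactly 5 larger, just as the definition requires. The same relation implies that a difference of a single magnitude corresponds to a flux ratio of $100^{1/5} \approx 2.512$.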