We can determine how big an earthquake is by measuring the size of the signal directly from the seismogram. However, we also need to know how far away the earthquake was, because the amplitude of seismic waves decreases with distance and we must correct for this.
In 1935 Charles Richter devised the first magnitude scale for measuring earthquake size, commonly known as the Richter scale. Richter used observations of earthquakes in California to define a reference event; the magnitude of an earthquake is calculated by comparing the maximum amplitude of its signal, corrected to a standard distance, with that of the reference event. The Richter scale is logarithmic, which means that the amplitude of a magnitude 6 earthquake is ten times that of a magnitude 5 earthquake.
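The logarithmic relationship can be sketched in a few lines of Python. The function name here is my own; the only fact it encodes is that each whole unit of magnitude corresponds to a tenfold change in measured amplitude.

```python
def amplitude_ratio(mag_a: float, mag_b: float) -> float:
    """Ratio of maximum trace amplitudes implied by two Richter magnitudes.

    Each whole unit of magnitude corresponds to a tenfold change in the
    amplitude measured on the seismogram.
    """
    return 10 ** (mag_a - mag_b)

print(amplitude_ratio(6, 5))  # 10 -- a magnitude 6 signal is ten times larger
print(amplitude_ratio(7, 5))  # 100
```

Note that this tenfold step applies to the recorded amplitude; the energy released grows faster still (roughly 32 times per magnitude unit).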
Since then, a number of different magnitude scales have been developed based on different seismic wave arrivals observed on a seismogram. Body wave magnitude, mb, is determined by measuring the amplitude of P-waves from distant earthquakes. Similarly, surface wave magnitude, Ms, is determined by measuring the amplitude of surface waves.
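As a rough illustration of how such scales are built, here is a sketch of the standard forms of mb and Ms: each takes the logarithm of amplitude over period and adds a correction for distance (and, for mb, focal depth). The specific argument values in the comments are illustrative, not measurements.

```python
import math

def body_wave_magnitude(amplitude_um, period_s, q_correction):
    """mb = log10(A/T) + Q(delta, h).

    A: ground amplitude of the P wave in micrometres, T: its period in
    seconds, Q: a tabulated correction for distance and focal depth.
    """
    return math.log10(amplitude_um / period_s) + q_correction

def surface_wave_magnitude(amplitude_um, period_s, distance_deg):
    # Ms = log10(A/T) + 1.66 * log10(delta) + 3.3 (standard IASPEI form),
    # with delta the epicentral distance in degrees
    return (math.log10(amplitude_um / period_s)
            + 1.66 * math.log10(distance_deg) + 3.3)
```

In practice the amplitude is read from a narrow time window around the expected P-wave or surface-wave arrival, which is why each scale is tied to a particular part of the seismogram.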
However, many magnitude scales tend to underestimate the size of large earthquakes, which led to the development of the moment magnitude scale, Mw. The advantage of Mw is that it is directly related to a physical property of the source: the seismic moment, a measure of the size of an earthquake based on the area of fault rupture, the average amount of movement, and the force that was required to overcome the friction holding the rocks together.
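A minimal sketch of that relationship, assuming a typical crustal rigidity of 30 GPa and using the standard IASPEI conversion from seismic moment (in newton-metres) to Mw:

```python
import math

SHEAR_MODULUS_PA = 3.0e10  # typical crustal rigidity (assumed value)

def seismic_moment(rupture_area_m2, average_slip_m, rigidity_pa=SHEAR_MODULUS_PA):
    # M0 = rigidity * rupture area * average slip  (newton-metres)
    return rigidity_pa * rupture_area_m2 * average_slip_m

def moment_magnitude(m0_newton_metres):
    # Mw = (2/3) * (log10(M0) - 9.1), the standard IASPEI form
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# Illustrative numbers: a 50 km x 20 km rupture with 2 m of average slip
m0 = seismic_moment(50e3 * 20e3, 2.0)
print(round(moment_magnitude(m0), 1))  # 7.1
```

Because Mw is computed from the physics of the rupture rather than from a single peak on a seismogram, it does not saturate for very large earthquakes.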
How the Richter magnitude scale works: the maximum amplitude is measured from the seismogram, as is the time difference between the arrival of the P- and S-waves, which gives the distance to the earthquake. A line connecting these two values on the nomogram gives the magnitude of the earthquake.
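The two steps above can be sketched numerically. The distance formula follows from both waves leaving the source together, with the S-wave lagging because it travels more slowly; the magnitude formula is one textbook approximation of Richter's nomogram, not his original tables, and the wave speeds are assumed typical crustal values.

```python
import math

V_P, V_S = 6.0, 3.5  # assumed crustal P- and S-wave speeds, km/s

def epicentral_distance_km(sp_seconds):
    # Both waves start together, so the S-P lag fixes the distance:
    # d = dt / (1/Vs - 1/Vp)
    return sp_seconds / (1.0 / V_S - 1.0 / V_P)

def local_magnitude(amplitude_mm, sp_seconds):
    # A common classroom approximation of the nomogram (assumption):
    # ML = log10(A) + 3 * log10(8 * dt) - 2.92
    return math.log10(amplitude_mm) + 3.0 * math.log10(8.0 * sp_seconds) - 2.92

print(round(epicentral_distance_km(10.0), 1))  # 84.0 km for a 10 s S-P lag
```

With these assumed speeds, every second of S-P delay corresponds to roughly 8 km of distance, which is why the S-P time alone is enough to place the event on the distance axis of the nomogram.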