
Geo-Sounder v1

This p5.js sketch turns live earthquake data into an audio-visual replay. It pulls recent earthquake events from the USGS feed, filters them to a chosen geographic region, and compresses that activity into a short looping timeline on screen. As each quake appears in the replay, it is drawn as a pulsing visual marker and also triggers a bell-like tone, creating a shifting map of seismic activity that can be both seen and heard.

The code was created with the help of AI. The following is an AI-generated description of the code's functionality:

Main functions

fetchQuakes()

This is the data-loading function. It connects to the selected USGS GeoJSON feed, reads the earthquake records, and converts them into a simpler internal format containing magnitude, place name, time, longitude, latitude, and depth. It then filters the data in three ways: only valid timestamps are kept, only earthquakes inside the chosen time window are included, and only earthquakes inside the selected region are retained. Finally, the results are sorted from oldest to newest so the replay can run in chronological order.
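The three filters and the final sort can be sketched as below. This is an illustrative reconstruction, not the original code: the field mapping follows the USGS GeoJSON layout, but `WINDOW_HOURS` and the inline region bounds are assumed names (the region check is duplicated here only to keep the snippet self-contained).

```javascript
// Assumed control values; the real sketch defines these at the top.
const WINDOW_HOURS = 24;
const REGION = { lonMin: -103.0, lonMax: -94.4, latMin: 33.6, latMax: 37.0 };

// Minimal stand-in for the region check described in the next section.
function isInSelectedRegion(lon, lat) {
  return lon >= REGION.lonMin && lon <= REGION.lonMax &&
         lat >= REGION.latMin && lat <= REGION.latMax;
}

function filterAndSortQuakes(features, now) {
  return features
    .map(f => ({
      mag: f.properties.mag,
      place: f.properties.place,
      time: f.properties.time,          // milliseconds since epoch
      lon: f.geometry.coordinates[0],
      lat: f.geometry.coordinates[1],
      depth: f.geometry.coordinates[2]
    }))
    // 1. keep only valid timestamps
    .filter(q => Number.isFinite(q.time))
    // 2. keep only quakes inside the chosen time window
    .filter(q => now - q.time <= WINDOW_HOURS * 3600 * 1000)
    // 3. keep only quakes inside the selected region
    .filter(q => isInSelectedRegion(q.lon, q.lat))
    // oldest first, so the replay runs chronologically
    .sort((a, b) => a.time - b.time);
}
```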

isInSelectedRegion(lon, lat)

This function controls the geographic filter. It checks whether an earthquake’s longitude and latitude fall inside the bounding box defined at the top of the sketch. In practical terms, this is the part you edit when switching from Oklahoma to another state, country, or local study area.
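A minimal version of that bounding-box test looks like this; the bounds shown are illustrative (roughly Oklahoma), and editing the four numbers is all that is needed to retarget the sketch:

```javascript
// Illustrative region bounds; edit these to change the study area.
const REGION = { lonMin: -103.0, lonMax: -94.4, latMin: 33.6, latMax: 37.0 };

function isInSelectedRegion(lon, lat) {
  // true only when the point falls inside the bounding box
  return lon >= REGION.lonMin && lon <= REGION.lonMax &&
         lat >= REGION.latMin && lat <= REGION.latMax;
}
```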

addToneData(q)

This function gives each earthquake its sound identity. It takes the quake data and derives a stable bell pitch using a combination of location, depth, magnitude, and timestamp. The result is converted from MIDI note values into frequency in Hertz, so each quake has both a visual presence and a repeatable tone. Larger magnitudes gently push the pitch upward, helping stronger events stand out sonically.
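The mapping could be sketched as below. The exact formula and weights are assumptions in the spirit of the description, not the sketch's actual code; `fracPart()` and `midiToHz()` are the helpers described later.

```javascript
const fracPart = x => x - Math.floor(x);
const midiToHz = m => 440 * Math.pow(2, (m - 69) / 12);

function addToneData(q) {
  // Hash location, depth and timestamp into a stable 0..1 value
  // (weights are illustrative, chosen only to scatter the pitches).
  const seed = fracPart(q.lon * 0.173 + q.lat * 0.931 +
                        q.depth * 0.057 + q.time * 1e-7);
  // Spread over two octaves above middle C, with magnitude gently
  // pushing the pitch upward so stronger events stand out.
  q.midi = 60 + seed * 24 + q.mag * 1.5;
  q.freq = midiToHz(q.midi);
  return q;
}
```

Because the seed is derived only from the quake's own data, the same event always sounds the same pitch on every loop.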

drawPlayback()

This is the core replay engine. It maps the filtered dataset onto a looping playback timeline, so a long stretch of real seismic activity can be compressed into a short screen-based performance. As the virtual replay time moves forward, earthquakes are revealed one by one. Each event is drawn in the correct screen position based on longitude and latitude, and each one is only sounded once per loop. Recent events get an expanding pulse ring, which makes the replay feel active rather than static.
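The timing logic at the heart of this can be sketched as follows; `LOOP_SECONDS` is an assumed name, and the actual drawing and sound-triggering calls are omitted:

```javascript
// Assumed loop length; the real sketch makes this an editable control.
const LOOP_SECONDS = 60;

// Map elapsed wall-clock milliseconds to a "virtual" timestamp inside
// the dataset's real time span, looping endlessly.
function virtualTime(elapsedMs, minTime, maxTime) {
  const playNorm = (elapsedMs / 1000 % LOOP_SECONDS) / LOOP_SECONDS;
  return minTime + playNorm * (maxTime - minTime);
}

// Which quakes have been revealed at this point in the loop?
// (quakes are assumed sorted oldest-to-newest, as fetchQuakes() leaves them)
function revealedQuakes(quakes, vTime) {
  return quakes.filter(q => q.time <= vTime);
}
```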

drawFrame()

This draws the visual container for the work: the border, guide grid, title, and coordinate labels. It gives the sketch a structured display and makes clear which region is being shown. It also helps frame the work as a mapped replay rather than a random animation.

drawTimeline(playNorm, minTime, maxTime, virtualTime)

This function shows where the replay currently is within the compressed time span. It draws a progress bar across the bottom of the screen and displays the current replayed date and time, making the relationship between real earthquake time and screen time easier to follow.
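The underlying arithmetic is simple normalisation; the sketch below is illustrative (the bar geometry and label format are assumptions, not the original code):

```javascript
function timelineInfo(virtualTime, minTime, maxTime, barWidth) {
  // 0 at the start of the compressed span, 1 at the end
  const playNorm = (virtualTime - minTime) / (maxTime - minTime);
  return {
    barX: playNorm * barWidth,                 // progress-bar position in pixels
    label: new Date(virtualTime).toISOString() // currently replayed date/time
  };
}
```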

drawTooltip(x, y, q)

This is the hover-information layer. When the cursor moves over an earthquake marker, a small tooltip appears showing the quake’s magnitude, location, depth, pitch, and timestamp. This adds a more forensic or exploratory mode to the piece, allowing viewers to inspect individual events in detail.
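A plausible hover test for that layer is a nearest-marker search within a pixel radius; the `HIT_RADIUS` value and the `toScreen` projection helper here are assumptions, not the original code:

```javascript
const HIT_RADIUS = 8; // pixels

// Return the quake whose screen marker is closest to the cursor,
// or null when nothing is within HIT_RADIUS.
function quakeUnderCursor(quakes, mouseX, mouseY, toScreen) {
  let best = null, bestD = HIT_RADIUS;
  for (const q of quakes) {
    const { x, y } = toScreen(q.lon, q.lat);
    const d = Math.hypot(mouseX - x, mouseY - y);
    if (d <= bestD) { best = q; bestD = d; }
  }
  return best;
}
```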

drawHUD()

The HUD is the lightweight information panel displayed over the sketch. It reports how many earthquakes are currently loaded, how large the selected time window is, when the USGS feed was last updated, and whether audio is enabled. This is useful both for debugging and for public display, because it keeps the dataset status visible.

initAudio()

Because browser audio must usually be enabled by user interaction, this function starts the sound engine after a click or tap. It creates the reverb effect, builds a pool of bell voices, and switches the sketch into audio mode. Until this happens, the visualisation still works, but remains silent.

triggerBell(q)

This is called when the replay timeline passes an earthquake for the first time in a loop. It chooses the next available synthesiser voice from the pool and tells it to play the earthquake’s mapped frequency, magnitude, and depth characteristics. In effect, this is the moment where data becomes sound.
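The voice-pool rotation can be sketched as a simple round-robin; the `play()` interface on each voice is an assumption standing in for the `BellVoice` class described next:

```javascript
class VoicePool {
  constructor(voices) {
    this.voices = voices;
    this.next = 0;
  }
  trigger(q) {
    const v = this.voices[this.next];
    this.next = (this.next + 1) % this.voices.length; // cycle through the pool
    v.play(q.freq, q.mag, q.depth);                   // data becomes sound here
    return v;
  }
}
```

Reusing a fixed pool of voices keeps the audio cost bounded even when many quakes fall close together on the timeline.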

BellVoice

This class defines the bell instrument itself. Each voice uses three oscillators and three envelopes to create a layered struck-metal tone rather than a plain sine beep. Reverb is added to make the sound more spacious and resonant. Magnitude affects loudness, while depth slightly darkens the tone, so the audio retains some relationship to the physical properties of the earthquake.
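The two data-to-timbre mappings could look like this; the curves and ranges are illustrative assumptions, since the description only says that magnitude affects loudness and depth darkens the tone:

```javascript
// Map magnitude (roughly 0..7) onto an amplitude range, clamped.
function magToAmp(mag) {
  const t = Math.min(Math.max(mag / 7, 0), 1);
  return 0.05 + t * 0.95;
}

// Shallow quakes ring brightly; deep ones get a lower filter cutoff,
// so the tone darkens with depth.
function depthToCutoffHz(depthKm) {
  const t = Math.min(Math.max(depthKm / 100, 0), 1);
  return 8000 - t * 6000; // 8 kHz down to 2 kHz
}
```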

Helper functions: midiToHz() and fracPart()

These are small support functions. midiToHz() converts musical note numbers into actual frequencies, while fracPart() extracts the fractional part of a number and is used in the pitch-mapping process. They are simple, but they support the larger logic of turning seismic data into stable musical output.
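Reconstructed from the standard formulas, the two helpers are:

```javascript
function midiToHz(midi) {
  // equal-tempered tuning with A4 (MIDI note 69) at 440 Hz
  return 440 * Math.pow(2, (midi - 69) / 12);
}

function fracPart(x) {
  // fractional part, e.g. fracPart(3.25) === 0.25
  return x - Math.floor(x);
}
```

Note that `fracPart()` always returns a value in [0, 1), even for negative inputs, which is what makes it usable as a stable hashing step in the pitch mapping.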

Editable controls

The sketch is designed so a few variables at the top can reshape the whole piece. The main ones are the region bounds, the replay duration, the number of real hours being compressed, and the USGS feed source. Together, these let you decide whether the work behaves like a sparse, slow-moving trace of local events or a much denser sonic map of regional seismic activity.
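That control block might look like the following; the identifier names and values are illustrative rather than the sketch's actual ones, though the feed URL is a real USGS GeoJSON endpoint:

```javascript
// Editable controls, illustrative names and values.
const REGION_BOUNDS = { lonMin: -103.0, lonMax: -94.4,  // longitude range
                        latMin: 33.6,  latMax: 37.0 };  // latitude range
const LOOP_SECONDS  = 60;   // on-screen replay duration
const WINDOW_HOURS  = 24;   // real hours compressed into one loop
const USGS_FEED     =       // which USGS GeoJSON feed to poll
  'https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson';
```

Widening the bounds or lengthening the window makes the replay denser; narrowing them produces the sparser, slower-moving trace described above.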

antonyhall
Artist, educator, and researcher working between the fields of science and art.
http://antonyhall.net/blog