# Seismic event relocation techniques


## Seismic event relocation overview

In the study of mining-induced seismicity, the accuracy of event locations and magnitudes has improved through the technological advancement of hardware and processing algorithms. Seismic events are typically located using P- and S-wave arrival times at different receivers paired with a velocity model. One of the most influential developments in microseismic data processing is event relocation. Research in the field of crustal seismology has developed a wide array of techniques to increase the accuracy of event location. These techniques fall into the broad category of "event relocation" or "relative location methods". Adding event relocation to a microseismic data processing flow aids interpretation by concentrating previously diffuse event clouds through reduced positional uncertainty. This increased confidence in event location can add context to the geometry of the source mechanism, adding value to the seismic data set.

Initial (absolute) event location accuracy is generally hampered by the following:

- Errors in P- and S-wave arrival time picks, whether from auto-picking or human error
- Limited understanding or over-simplification of the regional velocity model
- Limited spatial coverage of geophones in mining applications

The techniques outlined below are deemed "relocation" techniques because they are applied after an initial (absolute) location solution has been calculated. These types of processes can also be considered "post-processing" or "reprocessing" in the seismic data processing workflow. The case examples presented relate specifically to mining-induced seismicity, though the development and use of the techniques transcend the field, with applications in hydraulic fracture monitoring and crustal seismology.

## When is relocation performed?

In the seismic data processing workflow, event relocation can be performed immediately after an initial (absolute) solution for an event's location has been calculated. Often a relocation algorithm will be applied to heritage data as a relatively inexpensive way to add value to pre-existing datasets. Due to advances in the computational power of seismic monitoring solutions, relocation algorithms can also be run "on the fly" as part of the standard event location processing flow to reduce uncertainty in calculated event locations.

## How is it done?

Seismic event relocation algorithms describe a style of process, with many variations on a handful of themes. The most popular, and arguably most effective, theme is the double-difference technique, which differs from other relocation algorithms in its assumptions and in its simultaneous relocation of large numbers of events over large distances. Another theme is "master event relocation", where events are moved relative to a single "master" event; this is computationally straightforward, but propagates location errors from the initial placement of the master event through to the relocated "relative events". Alternatively, there are "simultaneous relocation" approaches, first implemented by Got et al. (1994), who determined cross-correlation time delays for all possible event pairs in question and combined them into a system of linear equations, solved via least-squares approximation and converted to positional corrections. For the remainder of this article, reference is made to these three classifications (double-difference, master event, and simultaneous event relocation algorithms) to summarize the current state of event relocation techniques and enable direct comparison between them.

### Double-difference (DD) method

The double-difference approach, first introduced by Waldhauser and Ellsworth (2000), has been applied successfully as an event relocation algorithm option in mining-induced seismicity applications.

#### Double-difference (DD) workflow

The following procedure for executing the double-difference algorithm is adapted from Waldhauser and Ellsworth (2000).

First, both P- and S-wave differential travel times are derived from cross-spectral (cross-correlation) methods, with additional travel-time differences formed from catalog data. Residual differences (or "double differences") for pairs of earthquakes are then minimized by adjusting the vector difference between their hypocenters. From this calculated double difference we are able to determine interevent distances between correlated events that form a single multiplet (group of similar events) to the accuracy of the cross-correlation data, while simultaneously determining the relative locations of other multiplets and uncorrelated events to the accuracy of the absolute travel-time data, without the use of station corrections.

The arrival time, $t_k^i$, for an earthquake $i$ to a seismic station $k$ is expressed using ray theory as a path integral along the ray,

$$ t_k^i = \tau^i + \int_i^k u \, ds \qquad (1) $$

where $\tau^i$ is the origin time of event $i$, $u$ is the slowness field, and $ds$ is an element of path length. Due to the nonlinear relationship between travel time and event location, a truncated Taylor series expansion (Geiger, 1910) is generally used to linearize equation (1). The resulting problem is then one in which the travel-time residuals, $r$, for an event $i$ are linearly related to perturbations, $\Delta m$, to the four current hypocentral parameters for each observation $k$:

$$ r_k^i = \frac{\partial t_k^i}{\partial m} \, \Delta m^i \qquad (2) $$

where $r_k^i = (t^{obs} - t^{cal})_k^i$, with $t^{obs}$ and $t^{cal}$ the observed and theoretical travel time, respectively, and $\Delta m^i = (\Delta x^i, \Delta y^i, \Delta z^i, \Delta\tau^i)$. Equation (2) is appropriate for use with measured arrival times. However, cross-correlation methods measure travel-time differences between events, $(t_k^i - t_k^j)^{obs}$, and as a consequence, equation (2) cannot be used directly. Fréchet (1985) obtained an equation for the relative hypocentral parameters between two events $i$ and $j$ by taking the difference of equation (2) for a pair of events,

$$ \frac{\partial t_k^{ij}}{\partial m} \, \Delta m^{ij} = dr_k^{ij} \qquad (3) $$

where $\Delta m^{ij} = (\Delta dx^{ij}, \Delta dy^{ij}, \Delta dz^{ij}, \Delta d\tau^{ij})$ is the change in the relative hypocentral parameters between the two events, and the partial derivatives of $t$ with respect to $m$ are the components of the slowness vector of the ray connecting the source and receiver, measured at the source (e.g., Aki and Richards, 1980). Note that in equation (3) the source is actually the centroid of the two hypocenters, assuming a constant slowness vector for the two events. $dr_k^{ij}$ in equation (3) is the residual between observed and calculated differential travel time between the two events, defined as

$$ dr_k^{ij} = (t_k^i - t_k^j)^{obs} - (t_k^i - t_k^j)^{cal} \qquad (4) $$

We define equation (4) as the double-difference. Note that equation (4) may use either phases with measured arrival times, where the observables are absolute travel times, $t$, or cross-correlation relative travel-time differences. The assumption of a constant slowness vector is valid for events that are sufficiently close together, but breaks down in the case where the events are farther apart. A generally valid equation for the change in hypocentral distance between two events $i$ and $j$ is obtained by taking the difference of equation (2) using the appropriate slowness vector and origin time term for each event,

$$ \frac{\partial t_k^i}{\partial m} \, \Delta m^i - \frac{\partial t_k^j}{\partial m} \, \Delta m^j = dr_k^{ij} \qquad (5) $$

or, written out in full,

$$ \frac{\partial t_k^i}{\partial x}\Delta x^i + \frac{\partial t_k^i}{\partial y}\Delta y^i + \frac{\partial t_k^i}{\partial z}\Delta z^i + \Delta\tau^i - \frac{\partial t_k^j}{\partial x}\Delta x^j - \frac{\partial t_k^j}{\partial y}\Delta y^j - \frac{\partial t_k^j}{\partial z}\Delta z^j - \Delta\tau^j = dr_k^{ij} \qquad (6) $$

The partial derivatives of the travel times, $t$, for events $i$ and $j$, with respect to their locations $(x, y, z)$ and origin times $(\tau)$, respectively, are calculated for the current hypocenters and the location of the station where the $k$th phase was recorded. $\Delta x$, $\Delta y$, $\Delta z$, and $\Delta\tau$ are the changes required in the hypocentral parameters to make the model better fit the data. We combine equation (6) from all hypocentral pairs for a station, and for all stations, to form a system of linear equations of the form

$$ W G m = W d \qquad (7) $$

where $G$ defines a matrix of size $M \times 4N$ ($M$, number of double-difference observations; $N$, number of events) containing the partial derivatives, $d$ is the data vector containing the double differences (4), $m$ is a vector of length $4N$, $[\Delta x, \Delta y, \Delta z, \Delta\tau]^T$, containing the changes in hypocentral parameters we wish to determine, and $W$ is a diagonal matrix to weight each equation. We may constrain the mean shift of all earthquakes during relocation to zero by extending (7) by four equations so that

$$ \sum_{i=1}^{N} \Delta m^i = 0 \qquad (8) $$

for each coordinate direction and origin time, respectively. Note that this is a crude way to apply a constraint, but appropriate for a solution constructed by conjugate gradients (see Lawson and Hanson [1974] for more exact solutions of constrained least squares). As shown later, the double-difference algorithm is also sensitive to errors in the absolute location of a cluster. Thus, equation (8) is usually downweighted during inversion to allow the cluster centroid to move slightly and correct for possible errors in the initial absolute locations.
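As a concrete illustration, the system above can be assembled and solved for a toy two-event cluster. The sketch below assumes a homogeneous velocity model with straight rays, and the velocity, station layout, and event coordinates are all illustrative (none of these numbers come from the cited sources); it builds the double differences of equation (4), the linearized rows of equation (6), and the mean-shift constraint of equation (8), then solves by least squares.

```python
import numpy as np

v = 3.0  # km/s, assumed uniform P-wave velocity (illustrative)

stations = np.array(
    [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0],
     [10.0, 10.0, 0.0], [5.0, -5.0, 0.0]])

hypo_true = np.array([[5.0, 5.0, 3.0], [5.3, 5.1, 3.2]])  # true hypocenters
tau_true = np.array([0.00, 0.10])                         # true origin times
hypo = np.array([[5.2, 4.8, 2.5], [5.0, 5.4, 3.6]])       # initial locations
tau = np.array([0.00, 0.00])                              # initial origin times

def travel_time(src, sta):
    return np.linalg.norm(sta - src) / v

for _ in range(10):  # Gauss-Newton iterations of equations (6)-(8)
    rows, dd = [], []
    for sta in stations:
        obs = (tau_true[0] + travel_time(hypo_true[0], sta)
               - tau_true[1] - travel_time(hypo_true[1], sta))
        cal = (tau[0] + travel_time(hypo[0], sta)
               - tau[1] - travel_time(hypo[1], sta))
        dd.append(obs - cal)  # double difference, equation (4)
        # slowness-vector components: d t / d x_src = (x_src - x_sta)/(v r)
        gi = (hypo[0] - sta) / (v * np.linalg.norm(sta - hypo[0]))
        gj = (hypo[1] - sta) / (v * np.linalg.norm(sta - hypo[1]))
        rows.append(np.concatenate([gi, [1.0], -gj, [-1.0]]))  # equation (6)
    # equation (8): constrain the mean shift of both events to zero
    G = np.vstack([np.array(rows), np.tile(np.eye(4), (1, 2))])
    d = np.concatenate([dd, np.zeros(4)])
    dm = np.linalg.lstsq(G, d, rcond=None)[0].reshape(2, 4)
    hypo = hypo + dm[:, :3]
    tau = tau + dm[:, 3]

# the relative vector between the two events is recovered to high accuracy
rel_err = np.linalg.norm((hypo[1] - hypo[0]) - (hypo_true[1] - hypo_true[0]))
print(round(rel_err, 3))
```

Because only differential data enter the system, the absolute cluster position stays near its (possibly wrong) initial centroid, which is exactly the sensitivity to absolute cluster location noted above.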

### Master event (ME) method

In contrast to the double-difference method, all relocated events are linked back to a single master event. This is a major hindrance to the method, as any location error in the master event will propagate through all relocated events.

The procedure for executing the master event algorithm is adapted from Ito (1985). Other examples of the master event approach include Scherbaum and Wendler (1986), Fremont and Malone (1987), Van Decar and Crosson (1990), Deichmann and Garcia-Fernandez (1992), and Lees (1998). This example was chosen for its relative simplicity and straightforward execution.

#### Master event (ME) workflow

Differences of P-wave onset times are used to determine the relative hypocenters of two events. The P-wave onsets of the i-th and j-th events at the k-th station, $P_{ik}$ and $P_{jk}$, are written as

$$ P_{ik} = O_i + T_{ik} + d_k $$

and

$$ P_{jk} = O_j + T_{jk} + d_k $$

respectively, where $O_i$ is the origin time of the i-th event, $T_{ik}$ the travel time of the P-wave from the i-th event to the k-th station, $d_k$ the total instrumental delay at the k-th station, and so on. Since seismic waves from the two events are recorded at a station using exactly the same observation system, all the instrumental time delays, $d_k$, cancel out in the arrival time differences. Therefore, the difference of P-wave onset times between the i-th and j-th events observed at the k-th station, $\tau_{ijk}$, is written as

$$ \tau_{ijk} = P_{ik} - P_{jk} = (O_i - O_j) + (T_{ik} - T_{jk}) $$

The difference in $\tau_{ij}$ between the k-th and l-th stations, $\Delta\tau_{kl}$, is

$$ \Delta\tau_{kl} = \tau_{ijk} - \tau_{ijl} = (T_{ik} - T_{jk}) - (T_{il} - T_{jl}) $$

Both the origin-time difference and the instrumental delays cancel in this double difference; $\Delta\tau_{kl}$ is therefore due entirely to the difference in hypocenters between the i-th and j-th events.

In the determination of relative hypocenters, we assume that the medium is uniform, with constant P- and S-wave velocities. This simplification is acceptable because structure parameters are needed only for the source region. Relative hypocenters are determined from station-to-station differences in arrival time differences between two events, referred to the location of a master event. This is done by using 4 station-to-station time differences in order to determine the three coordinates of a relative hypocenter (X, Y, Z). The limitation to 4 time differences, out of the 6 expected from 4 independent signals, results from the fact that the signal of S-2 cannot be matched to those of S-3 and S-4, because of the difference in the time delay unit.

Note that, with the origin times and instrumental delays eliminated by the differencing, only the three relative coordinates (X, Y, Z) remain as unknowns, so at least three independent station-to-station time differences are required for the system to be uniquely determined; using four differences makes the system slightly overdetermined and allows a least-squares solution.
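The scheme above can be made concrete with a small numerical sketch. The velocity, station layout, and relative offset below are invented for illustration (they are not Ito's network); origin times and instrumental delays drop out of the station-to-station differences, leaving a small least-squares problem for the three relative coordinates.

```python
import numpy as np

v = 5.0  # km/s, assumed uniform P-wave velocity (illustrative)
stations = np.array(
    [[0.0, 0.0, 0.0], [8.0, 0.0, 0.0], [0.0, 8.0, 0.0],
     [8.0, 8.0, 0.0], [4.0, -6.0, 0.0]])
master = np.array([4.0, 4.0, 3.0])      # master-event hypocenter
rel_true = np.array([0.2, -0.1, 0.15])  # relative hypocenter to recover

def travel_times(src):
    return np.linalg.norm(stations - src, axis=1) / v

# tau_ijk: arrival-time differences (the origin-time difference is set to
# zero here; it cancels in the station-to-station differences anyway)
tau = travel_times(master + rel_true) - travel_times(master)

# station-to-station differences Delta tau_kl for all pairs k < l
pairs = [(k, l) for k in range(len(stations))
         for l in range(k + 1, len(stations))]
dtau = np.array([tau[k] - tau[l] for k, l in pairs])

# linearize about the master location: d t_k / d src = (src - sta_k)/(v r_k)
g = (master - stations) / (v * np.linalg.norm(stations - master,
                                              axis=1)[:, None])
G = np.array([g[k] - g[l] for k, l in pairs])

rel = np.linalg.lstsq(G, dtau, rcond=None)[0]  # relative hypocenter (X, Y, Z)
print(np.round(rel, 2))
```

Any three independent rows of `G` would already determine the solution; using all pairs simply averages down the pick and linearization errors.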

### Simultaneous event method

Got et al. (1994) overcame these restrictions by determining cross-correlation time delays for all possible event pairs and combining them in a system of linear equations that is solved by least-squares methods to determine hypocentroid separations (see also Fréchet, 1985). For simplicity, a constant slowness vector was used for each station from all sources. Because only cross-correlation data are considered, this approach cannot relocate uncorrelated clusters relative to each other.

#### Simultaneous event relocation workflow

To relocate one event of a doublet relative to the other, we follow a method derived from that of Fréchet (1985).

Let us call $x$, $y$, and $z$ the coordinates of one event relative to the other (positive to the east, north and down, respectively), and $T$ the difference in origin times. We can write the time delay between the two events of a doublet recorded at the kth station, situated at azimuth $Az_k$ and take-off angle $Ain_k$ from their common hypocenter (estimated as the barycenter of both absolute locations), as

$$ \Delta t_k = -\frac{1}{v}\left(x \sin Ain_k \sin Az_k + y \sin Ain_k \cos Az_k + z \cos Ain_k\right) + T \qquad (2) $$

where $v$ is the P-wave velocity at hypocentral depth. Expressing the whole set of NSTA time delays as a function of the $x$, $y$, $z$ and $T$ coordinates, we find a system of linear equations

$$ d = G\,m \qquad (3) $$

where $d$ is the data vector (time delays) and, for a doublet, $G$ is an NSTA × 4 matrix containing the partial derivatives of the time delays relative to the unknown vector $m = (x, y, z, T)^T$.

For a multiplet including NEV events recorded by NSTA stations, (3) is a (sparse) system of up to NEV(NEV − 1)NSTA/2 linear equations and 4(NEV − 1) unknowns.

In classic earthquake location methods, as in Fréchet's (1985) method of relative location, similar systems are solved using the Lanczos decomposition of $G$. However, as the size of $G$ increases as NEV^3, this computation becomes very expensive for large multiplets (NEV > 50). First computations with the Lanczos method showed that the system was generally well conditioned, with condition numbers ranging from 20 to 50.

In that case, there is an attractive way to reduce the size of our system: it consists of following the classical least-squares approach and solving the (weighted) normal equations

$$ G^T C_d^{-1} G \, m = G^T C_d^{-1} d $$

to find

$$ m = (G^T C_d^{-1} G)^{-1} G^T C_d^{-1} d $$

where $C_d$ is the data variance-covariance matrix.

Since each element of $G^T C_d^{-1} G$ can be computed directly, and $G^T C_d^{-1} G$ is a symmetric positive definite matrix, the solution $m$ is found by performing (after an appropriate scaling) the Cholesky decomposition of $G^T C_d^{-1} G$.
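A minimal sketch of this step, with a synthetic $G$ and diagonal $C_d$ standing in for real cross-correlation data (the sizes and values are illustrative, not from Got et al., 1994):

```python
import numpy as np

rng = np.random.default_rng(0)
nsta = 12                                 # one doublet: m = (x, y, z, T)
G = rng.normal(size=(nsta, 4))            # partial derivatives (stand-in)
m_true = np.array([0.05, -0.02, 0.03, 0.001])
sigma = np.full(nsta, 1e-3)               # per-delay standard errors
Cd_inv = np.diag(1.0 / sigma**2)          # inverse data covariance
d = G @ m_true + rng.normal(scale=sigma)  # noisy time delays

A = G.T @ Cd_inv @ G                      # symmetric positive definite
b = G.T @ Cd_inv @ d
L = np.linalg.cholesky(A)                 # A = L L^T
m = np.linalg.solve(L.T, np.linalg.solve(L, b))  # two triangular solves
print(np.round(m, 3))
```

The normal-equations matrix is only 4(NEV − 1) square, however many delay pairs feed into it, which is the memory and speed advantage over a singular value decomposition of the full $G$.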

It is often useful to relocate $p$ new events relative to $n - p$ already relocated events. To that aim, the weighted normal equations can be written (subscripts refer to the dimensions of the matrices)

$$ (G^T C_d^{-1} G)_{4p,4p} \, m_{4p} + (G^T C_d^{-1} G)_{4p,4(n-p)} \, m_{4(n-p)} = (G^T C_d^{-1} d)_{4p} \qquad (8) $$

And the solution m for the p new relative locations becomes

$\displaystyle m_{4p} = (G^T C_d^{-1} G)^{-1}_{4p,4p}\left((G^T C_d^{-1} d)_{4p} - (G^T C_d^{-1} G)_{4p,4(n-p)}\, m_{4(n-p)}\right) \qquad (9)$

Since each time delay is estimated with an error, each equation is weighted to take this error into account. However, time delays are computed with two kinds of errors: coherency-dependent errors and possible errors due to different instrumental delays. A weight merely inversely proportional to the error in the time delay, which we compute from the linear fit of the cross-spectrum phase, would therefore not be appropriate, since it corresponds to only part of the error and strongly favors very coherent pairs of signals. To retain a maximum number of significant time delays and to reject those that are aberrant (but possibly coherent), we follow Fréchet (1985) and use the bi-square weighting proposed by Mosteller and Tukey (1979),

$$ w_k = \begin{cases} \left[1 - \left(\dfrac{R_k}{a\,R_{med}}\right)^2\right]^2, & |R_k| < a\,R_{med} \\ 0, & \text{otherwise} \end{cases} $$

where $R_k = \Delta t_k - \Delta\tau_k$ is the residual on each time delay $\Delta t_k$, $\Delta\tau_k$ being the theoretical time delay computed for the kth station after relative relocation, and $R_{med}$ is the median of the set of absolute values of $R_k$. This weight rejects observations that give residuals whose absolute value is $a$ times greater than the median; $a$ is usually chosen between 4 and 6. The initial solution is obtained with unit weights. The inversion process is then iterated, and the weights are modified at each step. The process stops when the rms reaches a minimum value or when the maximum iteration number is reached. Well-coherent multiplets usually require two or three iterations to be relocated.
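The reweighting loop can be sketched on a synthetic weighted least-squares problem with one aberrant delay injected; the matrix and delay values below are illustrative, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(20, 4))              # stand-in partial derivatives
m_true = np.array([0.04, -0.03, 0.02, 0.0])
d = G @ m_true + rng.normal(scale=1e-3, size=20)
d[5] += 0.2                               # one aberrant (but coherent) delay

a = 5.0                                   # rejection factor, between 4 and 6
w = np.ones(len(d))                       # initial solution: unit weights
for _ in range(3):                        # iterate, modifying weights each step
    sw = np.sqrt(w)
    m = np.linalg.lstsq(G * sw[:, None], d * sw, rcond=None)[0]
    R = d - G @ m                         # residual on each time delay
    Rmed = np.median(np.abs(R))
    u = R / (a * Rmed)
    # bi-square weight: zero beyond a times the median residual
    w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)

print(w[5])
```

After the first reweighting the aberrant delay receives (near-)zero weight, and the remaining delays recover the model unbiased, which is the point of rejecting on the residual median rather than on coherency alone.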

The variance-covariance matrix of the model estimates is given by

$$ C_m = (G^T C_d^{-1} G)^{-1} $$

This method of relative relocation is much faster and less expensive in memory than a singular value decomposition. It allows the use of time delays computed for the whole set of event pairs, even when these pairs are numerous. From that point of view, the method proposed is not a master event technique, as used by Fréchet (1985), Ito (1985), Scherbaum and Wendler (1986), Fremont and Malone (1987), and Deichmann and Garcia-Fernandez (1992). In a master event relocation, each event is relocated relative to only one event (the master event), with no additional constraint provided by the other events. The use of all possible pairs of events strongly increases the relocation accuracy, which is furthermore no longer dependent on the choice of master event. Indeed, in the original master event technique, a good master event needs to be coherent with each event; that is, it should occupy a central position in a multiplet of small spatial extension. Ito (1985), Scherbaum and Wendler (1986), and Fremont and Malone (1987) already noticed that the relative relocation error increases with distance from the master event.

## Successful applications of event relocation in mining


### Application of extended double difference relocation

The location of the seismic event hypocenter is the very first task undertaken when studying any seismological problem. The accuracy of the solution can significantly influence subsequent stages of analysis, so there is a continuous demand for new, more efficient and accurate location algorithms. It is important to recognize that there is no single universal location algorithm that will perform equally well in every situation. The type of seismicity, the geometry of the recording seismic network, the size of the controlled area, and tectonic complexity are the most important factors influencing the performance of location algorithms. The extended double difference (EDD) algorithm combines the insensitivity of the double-difference (DD) algorithm to the velocity structure with the special demands imposed by mining: continuous change of network geometry and a very local recording capability of the network for the dominating small induced events. The method provides significantly better estimation of hypocenter depths and origin times compared to the classical and double-difference approaches, the price being greater sensitivity to the velocity structure than the DD approach. The efficiency of the algorithms for the epicentral coordinates is similar.

### Relocation of mining-induced seismic events in the Upper Silesian Coal Basin by use of the double-difference method

The application of the double-difference (DD) algorithm to the relocation of induced seismic events from the Upper Silesian Coal Basin is discussed. The method has been enhanced by combining it with a Monte Carlo sampling technique in order to evaluate relocation errors. Results of both synthetic tests and the relocation of real events are shown. They are compared with estimates from the classical single-event (SE) approach obtained through Monte Carlo sampling of the a posteriori probability. On the basis of this comparison, the double-difference approach is concluded to yield better estimates of depth than the classical location technique.