EUROPEAN RESEARCH INFRASTRUCTURE ON SOLID EARTH

TCS Services

An important effort was to identify the metadata standards of NFO specific data and data products and to create a service for them.


For this purpose, FRIDGE (EU NFO Federated specific data and products gateway and Virtual Laboratory), as an NFO service, is currently under construction. It will be the common gateway to all NFOs for discovering and downloading NFO specific data and high-level data products. It will also host simple visualization tools for multidisciplinary data.

The IT team is currently creating common DB standards for NFO specific data and data products, taking advantage of the data structures already available in the NFOs, e.g. the geochemical database schema used by TABOO and QuakeML, which is used in all NFOs. The IT team also introduced and demonstrated a proof-of-concept of two new web services, related to the Vp/Vs ratio and to radon time series. Details are given in the Use Case section; an illustrative client sketch follows.
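A client for one of these proof-of-concept services might look like the following Python sketch. The endpoint URL, parameter names and response format are assumptions made purely for illustration, since the service interface is still under construction.

    # Hypothetical client for the proof-of-concept Vp/Vs time-series service.
    # The URL and query parameters below are illustrative assumptions.
    import requests

    BASE_URL = "https://fridge.example.org/api/vpvs"  # placeholder endpoint

    def fetch_vpvs_series(station, starttime, endtime):
        """Download a Vp/Vs time series for one station as JSON."""
        params = {"station": station, "starttime": starttime, "endtime": endtime}
        response = requests.get(BASE_URL, params=params, timeout=30)
        response.raise_for_status()
        return response.json()  # assumed: a list of (time, vp_vs) pairs

    series = fetch_vpvs_series("STA01", "2016-01-01", "2016-12-31")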

CREW (EU Testing Centre for Early Warning & Source Characterization) is defined as another EU-level service. It is a testing facility, built on real-time and offline high-resolution data, aimed at fostering the development of next-generation methodologies and software for the real-time monitoring of faulting processes.

The TCS has also planned to propose Transnational Access (TNA) at the NFOs. The NFO-TCS offers opportunities for free-of-charge TNA to selected research groups (candidate NFOs) and companies (to develop/test new sensors) within a multidisciplinary platform.



 

The Data, Data Products, Software and Services (DDSS) provided by this TCS are:

  • Raw (standard and specific) Data coming from dense near-fault multidisciplinary networks;
  • Multidisciplinary high-level Data Products.

 

 

46 DDSS (Data, Data Products, Software and Services) elements have been proposed. All elements are related to the continuous acquisition and archiving of long time series of multidisciplinary data and data products. The high-priority group contains 30 DDSS elements: 12 are Data elements, 15 are Data Product elements and 3 are Services. 12 DDSS are seismological data or data products, 5 are geodetic data or data products, 4 are geochemical data and 1 is strain data, indicating the effective multidisciplinarity within each NFO. The non-high-priority group contains 20 DDSS elements from different disciplines. Almost all data and data products concern more than one NFO, depending on the specific national or international RIs. Each DDSS is linked to the corresponding harmonization group. Most of the elements from seismology, geodesy and satellite data are or will be exposed through the existing services of the authoritative WPs (e.g. EIDA). Specific data, high-level data and data products produced by the NFO(s) will be exposed through the NFO-TCS data gateway.

 

The current DDSS priority list for the NFO-TCS is organized so as to contain information on the current state of the metadata standards, the data formats, the service names, the DDSS categories in ICS, the names of the related Harmonization Groups and the one-to-one link between each DDSS element and the institution acting as its data supplier.

The NFO community relies on the services provided by other Thematic Core Services for standard data (e.g. seismic and geodetic) and on direct access to the e-infrastructures of the individual NFOs via the Integrated Core Services web services for the access and distribution of non-standard data (e.g. strain- and tilt-meter, geochemical and electro-magnetotelluric data). Within EPOS-IP the TCS will mainly work on the standardisation and provision of the HIGH-priority DDSS.

 

Several kinds of data (seismological/GNSS raw data) can be discovered and downloaded using services developed by other TCS, which are authoritative for this kind of data. However, the NFO-TCS manages a variety of data and data products that need specific formats and metadata descriptions, as well as services allowing them to be discovered.

 

FRIDGE – the EU NFO Federated specific data and products gateway and Virtual Laboratory – is the common gateway to all NFOs, able to identify the metadata standards of NFO specific data and data products and to download them. It will also host simple visualization tools for the comparison of multidisciplinary data.

 

 

[Figure: FRIDGE – EU NFO Federated specific data and products gateway and Virtual Laboratory (VL) display architecture]

 

FRIDGE is planned to be:

  • a virtual environment offering simple visualization tools that describe multidisciplinary data products and fault anatomy, aimed at promoting and disseminating Earth Science at different levels, including the state of scientific knowledge concerning the earthquake source and the tectonic processes generating catastrophic events;
  • a novel and advanced e-infrastructure for the visualization, analysis, comparison and mining of multidisciplinary time series and high-level data and products for scientific purposes;
  • the specific data and data products gateway for data discovery and download, common to the European NFOs.

The implementation of the FRIDGE prototype will be carried out in collaboration with the ICS, and our final proposal is to move the VL (in the mid-term) under ICS control in order to develop a proper virtual and modern working environment.

Effective risk mitigation relies on the monitoring of seismicity to study the preparation phase and the initiation of large events, for earthquake early warning (EEW) and ground-shaking characterization. NFOs are natural facilities for testing and comparing procedures and codes for real-time source characterization and related hazard products, such as EEW.

 

CREW – the EU Testing Centre for Early Warning & Source Characterization based on high-density networks – provides researchers with a testing facility to evaluate and compare, in a transparent and equal manner, new methods, data and software. This will help the community build the next generation of real-time systems, promoting leading-edge science and technology.

 

 

CREW tests fast and accurate real-time procedures based on high-density networks. It is a testing centre for:

  • comparing procedures with metrics defined by the community;
  • comparing new software/methods while developing leading-edge science;
  • using existing software on selected datasets.

CREW operates on ISNet (the Irpinia Seismic Network). Minimum-latency waveform data from the network are made available on a single SeedLink server to the EEW algorithms, which run on separate virtual machines. Each software package delivers its EEW alerts in a standardized format (QuakeML) to a single database, which is then used for performance evaluation. The performance criteria will include location, magnitude, lead time and ground-motion estimation (with uncertainties) and will make use of official authoritative bulletins. The performance results are finally published on a dedicated web page. A minimal sketch of the waveform-consumption side of such a setup is given below.
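The following Python sketch uses ObsPy's SeedLink client to consume packets the way an EEW module plugged into CREW might. The server address and the network/station/channel codes are placeholders, not the actual CREW configuration.

    # Sketch of an EEW module consuming minimum-latency waveforms over
    # SeedLink with ObsPy. Server and stream codes are placeholders.
    from obspy.clients.seedlink.easyseedlink import create_client

    def on_data(trace):
        # Each packet arrives as an ObsPy Trace; a real EEW algorithm would
        # feed it to its picker/magnitude estimator and emit QuakeML alerts.
        print(trace.id, trace.stats.starttime, trace.stats.npts)

    client = create_client("seedlink.example.org:18000", on_data=on_data)
    client.select_stream("IX", "STA01", "HH?")  # assumed network/station codes
    client.run()  # blocks, handling packets as they arrive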

Taking advantage of the multidisciplinary data sets, we can build many simple use cases.

Generally, to create these use cases, we solved complex problems related to the following points (a sketch illustrating the query-building and defaults points is given after the list):

  • the scientific concept
  • the query building
  • the common DB structure
  • the query performance
  • the input parameters standard definitions and defaults
  • the output (data and metadata) format standard
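To make the query-building and default-parameter problems concrete, here is a minimal Python sketch under assumed table and column names; each NFO database would map these onto its own schema.

    # Sketch: merging community defaults with user input and building a
    # parameterized SQL query. Table/column names are hypothetical.
    DEFAULTS = {"min_magnitude": 1.0, "max_rms": 0.5, "max_gap": 180}

    def build_location_query(user_params):
        """Return an SQL statement plus bind parameters (placeholders keep
        the query safe and let the DB reuse the execution plan)."""
        params = {**DEFAULTS, **user_params}
        sql = (
            "SELECT origin_time, latitude, longitude, depth, magnitude "
            "FROM locations JOIN quality USING (event_id) "
            "WHERE magnitude >= %(min_magnitude)s "
            "  AND rms <= %(max_rms)s AND azimuthal_gap <= %(max_gap)s "
            "  AND origin_time BETWEEN %(start)s AND %(end)s"
        )
        return sql, params

    sql, bind = build_location_query({"start": "2016-08-01", "end": "2016-09-01"})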

Example use cases

Time series:
  • X chemical component in time
  • Geodetic displacement (all components)
  • Number of earthquakes per day
  • b-value in time
  • Vp/Vs, Vp, Vs and attenuation in time at a site
  • Heat flux in time

Maps with surfaces:
  • X chemical component distribution
  • Map layers from 3D models (Vp, Vs, Vp/Vs, Poisson ratio, etc.)
  • Fault surfaces
  • Geological layer top and bottom surfaces
  • Seismological discontinuity surfaces
  • Heat flux
  • Vertical sections through 3D models

Positional maps:
  • Interactive maps of selected site positions
  • Interactive maps of selected earthquake positions
  • Space-time earthquake distribution

Histograms and dispersion:
  • Error distributions of selected earthquake parameters
  • Depth distribution of selected earthquakes
  • Depth errors vs origin-time errors
  • Dromochrones (distance vs P and S travel times)

VLAB:
  • Vp/Vs, Vp, Vs, number of earthquakes per day, CO2 flux, radon flux and geodetic displacements, all at a site

 

 

In the following, we specify one internal (single-discipline) and one cross-disciplinary use case in detail.

 

Use case name/topic
Use case 1: Selecting and viewing earthquakes' distribution in maps and vertical sections.
Use case 2: Viewing and comparing the Vp/Vs ratio with radon concentration in time.

Use case domain
Use case 1: This use case is related to a single discipline, seismology. The goal is to allow the user to select earthquake locations from the specific NFO DB based on spatial, temporal and quality criteria and to plot their distribution on a geographic map and along a vertical section whose extremes are interactively selected on the map. The aim is to help understand the fault-system geometry and the spatio-temporal trend of the seismicity pattern.
Use case 2: This use case is cross-disciplinary. The goal is the comparison of two "entities" belonging to the seismological (Vp/Vs time series) and geochemical (Rn concentration time series) disciplines. The scientific motivation is to look for changes (in space and time) related to the deformation process (e.g. fracturing and/or fluid-migration processes) occurring during the pre-, co- or post-seismic phase.
Use case description

Use case 1: As a seismologist, I want to observe the spatial and temporal distribution of earthquake sources in the fault-system volume in different 2D views. I want to be able to choose:
  • the type of locations to use for the plots (1D or 3D, obtained with different location codes)
  • the quality of such locations (rms, gap, h_err, v_err, nP, nS and so on)
  • the time window
  • the volume (geographic area + depth range)
  • the magnitude range
  • the magnitude threshold above which a different symbol is used and focal mechanisms are shown, if available

I also want to use a color palette to identify the time range of the plotted locations. As soon as the map is ready, I also want to be able to draw an oriented rectangle on the interactive map to produce a vertical section with locations, as wide as the rectangle's short side, as long as its long side and as deep as the deepest earthquake.

Use case 2: As a seismologist, I want to observe and compare the spatial and temporal behavior of the P- to S-wave velocity ratio (referable to the elastic parameters of the rock volume) with the temporal and spatial pattern of the Rn concentration in a defined rock volume (referable to ongoing deformation processes), looking for statistically coherent change points, thus possibly ascribable to the same underlying physical process, such as for example the earthquake preparatory phase. A minimal sketch of such a change-point comparison is given after this description.
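The sketch below illustrates one crude way to compare change points in the two series (a CUSUM-style single-change-point estimate). It is only an example: the file names are hypothetical, and the actual statistical criteria would be agreed by the two research groups.

    # Crude single-change-point comparison between Vp/Vs and radon series.
    import numpy as np

    def cusum_change_point(x):
        """Index where the cumulative deviation from the mean peaks:
        a basic single-change-point estimate."""
        deviations = np.cumsum(x - np.mean(x))
        return int(np.argmax(np.abs(deviations)))

    vpvs = np.loadtxt("vpvs_series.txt")    # hypothetical file, one value per sample
    radon = np.loadtxt("radon_series.txt")  # hypothetical file, same sampling
    k1, k2 = cusum_change_point(vpvs), cusum_change_point(radon)
    print("change points differ by", abs(k1 - k2), "samples")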
Actors involved in the use case
Use case 1:
  • User seismologist
  • Seismology researcher
Use case 2:
  • User seismologist
  • Geochemistry researcher
  • Seismology researcher

Priority: High (use case 1); Medium (use case 2).

Pre-conditions: the user must have logged in (both use cases).
Flow of events – user view

Use case 1: The following steps are needed to answer the question:

1. The seismologist user chooses
     a. the specific NFO [the use case is focused on just one NFO]
     b. a lapse of time
     c. a geographic region inside the NFO area (or the whole area)

2. The seismologist user chooses
     a. criteria to select earthquake locations by
          i. type of location
          ii. quality of location
          iii. magnitude range
     b. the magnitude threshold above which the strongest earthquakes are shown with a different symbol and, if available, a focal mechanism is attributed
     c. whether to use a color palette for time
     d. the symbols to use for earthquakes beyond the magnitude threshold

3. The seismologist user produces a geographical map with gray-shaded topography, the seismic station distribution, earthquake locations plotted as dots colored as a function of time, and the strongest earthquakes plotted with the same color code but a different symbol and the associated focal mechanism, if available.

4. As soon as the map is ready, the seismologist user
     a. draws an oriented rectangle on the map
     b. obtains a new distance/depth (XY Cartesian) plot with all the locations included within the rectangle, from 0 km depth down to the deepest earthquake (scaled plot), with the same color and symbol code as the map and focal mechanisms if available.

5. The locations, magnitudes, focal mechanisms and related location quality parameters are returned to the user in a QuakeML catalog with all metadata included; a short sketch of reading such a catalog follows.
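For illustration, the returned QuakeML catalog can be inspected with ObsPy's standard reader (read_events is a real ObsPy function; only the file name is a placeholder):

    # Reading the QuakeML catalog produced in step 5 with ObsPy.
    from obspy import read_events

    catalog = read_events("selected_events.xml")  # placeholder file name
    for event in catalog:
        origin = event.preferred_origin() or event.origins[0]
        magnitude = event.preferred_magnitude() or event.magnitudes[0]
        print(origin.time, origin.latitude, origin.longitude,
              origin.depth, magnitude.mag)  # depth is in metres in QuakeML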

Use case 2: The following steps are needed to answer the question:

1. The seismologist user chooses
     a. the specific NFO [the use case is focused on just one NFO]
     b. a lapse of time

2. The seismologist user chooses
     a. one specific seismic station
     b. criteria to select seismic rays by earthquake quality and takeoff angle
     c. criteria to generate a moving mean (boundary and default criteria are suggested by the Seismology researchers group)
     d. a magnitude threshold to show the strongest earthquakes on top of the two time series

3. The seismologist user chooses
     a. one specific site from the suggested neighboring radon sites active in the same period, or part of it, based on step 2
     b. criteria to correct for meteorological data and to produce a moving mean (boundary and default criteria are suggested by the Geochemistry researchers group)

4. The seismologist user produces a combined X (time), Y (Vp/Vs ratio) and Y' (Rn concentration) dispersion plot with symbols and strokes.

Alternative sequences and needed steps (user view): the seismologist user may go through step 3 (Geochemistry) before step 2 (Seismology); step 1 is forced to be the beginning.

System workflow – system view

Use case 1:
1. The system activates one task (query_locations) connecting to the specific NFO database.
2. The task connects to the database and performs a query, typically combining information from the locations, quality, magnitudes and focal-mechanism tables.
3. The task creates a temporary VIEW (see MySQL) in the DB storing the results of the combined query (one line per location); a sketch is given after this list.
4. The data contained in the VIEW are also stored in a downloadable QuakeML catalog file.
5. A listening service takes the data from the view and produces the interactive map.
6. The interactive map allows the user to draw the rectangle; based on its extremes, a sub-task is activated that selects earthquakes from the view with a polygonal in/out function, and the locations are plotted in a vertical section.
7. A button to reset the vertical-section plot and redraw the rectangle is made available.
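A minimal sketch of step 3 follows. The table and column names are hypothetical, and conn is assumed to be a DB-API connection to the NFO database (e.g. obtained from mysql.connector.connect):

    # Sketch: materializing the combined location query as a session view.
    # Table and column names are hypothetical. MySQL does not allow bind
    # parameters inside a view definition, so the (validated) date strings
    # are embedded directly in the SQL text.
    def create_session_view(conn, view_name, start, end):
        """conn: DB-API connection; start/end: validated 'YYYY-MM-DD' strings."""
        sql = (
            f"CREATE OR REPLACE VIEW {view_name} AS "
            "SELECT l.event_id, l.origin_time, l.latitude, l.longitude, "
            "       l.depth, m.magnitude, q.rms, q.azimuthal_gap, "
            "       f.strike, f.dip, f.rake "
            "FROM locations l "
            "JOIN quality q USING (event_id) "
            "JOIN magnitudes m USING (event_id) "
            "LEFT JOIN focal_mechanisms f USING (event_id) "
            f"WHERE l.origin_time BETWEEN '{start}' AND '{end}'"
        )
        cursor = conn.cursor()
        cursor.execute(sql)  # one row per location, as in step 3
        cursor.close()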
Use case 2:
1. The user interface receives the input parameters for query_VpVs and query_Radon.
2. The system activates two tasks, one per CPU, connecting to the specific NFO database.
3. Each task connects to the database and searches, for the chosen recording site, the records that match the required criteria. The SQL queries might be complex or simple, depending on the DB structure, operating on the basic tables containing:
     a. Vp/Vs: P arrival times (and related quality parameters), S arrival times (and related quality parameters), earthquake locations, quality of earthquake locations, takeoff angles, back-azimuth angles
     b. Radon concentration: Rn counts, site correction parameters, meteorological site parameters
4. Each task writes a file to disk in a standardized format for time series (to be defined).
5. A listening service takes the files as soon as they are ready and produces an interactive plot where only scales, symbols and colors can be changed; a sketch of this smoothing and plotting step follows.
6. A button labelled "Change criteria", returning to the selection page of step 2, is made available (see post-conditions).
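Steps 4 and 5, combined with the moving means requested in the flow of events, could be sketched as follows. The file names, column names and 30-day window are placeholders, since the standardized time-series format is still to be defined:

    # Sketch: smoothing the two series and producing a dual-axis comparison
    # plot. File/column names and the 30-day window are placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt

    vpvs = pd.read_csv("vpvs.csv", parse_dates=["time"], index_col="time")
    radon = pd.read_csv("radon.csv", parse_dates=["time"], index_col="time")

    vpvs_smooth = vpvs["vp_vs"].rolling("30D").mean()   # 30-day moving mean
    radon_smooth = radon["rn"].rolling("30D").mean()

    fig, ax1 = plt.subplots()
    ax1.plot(vpvs_smooth.index, vpvs_smooth, color="tab:blue")
    ax1.set_xlabel("time")
    ax1.set_ylabel("Vp/Vs")
    ax2 = ax1.twinx()  # second Y axis (Y') for the radon concentration
    ax2.plot(radon_smooth.index, radon_smooth, color="tab:red")
    ax2.set_ylabel("Rn concentration")
    plt.show()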
Post-conditions
Use case 1: the VIEW is kept in the DB while the user session is active or until the session timeout has passed.
Use case 2: the request is kept in memory to be reloaded as the default in step 6 of the system workflow.
Extension points
No extension points (both use cases).

«Used» use cases
No other use cases (both use cases).

Other requirements
Non-functional requirements related to the use cases: privacy legislation and the response time of the system (both use cases).