1        Purpose

Create a collaborative immersive environment in which citizen scientists and researchers can participate physically, or virtually via an avatar or a volumetric representation of themselves, to navigate, inspect, analyse, and simulate scientific or industrial 3D/spatial models and datasets ranging from the microscopic to the macroscopic.

Examples are:

  • View data in its actual 3D or 4D (3D over time) form through Immersive Reality.
  • Present very large datasets generated by microscopes and by medical (patient) and industrial scanners.
  • Format/reformat, qualify, and quantify sliced datasets with enhanced visualisation and analysis tools, or import results for rapid correction of metadata before volumetric import.
  • Provide tools for investigators to understand complex datasets completely and communicate their findings efficiently.

Objective of an exemplary case: to define the interfaces of AI Modules (AIMs) that create a 3D model of the fascia from 2D slices sampled from microscopic medical images, classify cells based on their spatial phenotype morphology, and enable the user to explore, interact with, and zoom into the 3D model, count cells, and jump from one portion of the endoderm to another.

2        Description

There is a file containing the digital capture of 2D slices, e.g., of the endocrine system.

An AIM reads the file and creates the 3D model of the fascia.

Another AIM finds the cells in the model and classifies them.
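The two AIMs' roles can be sketched in code, purely as an illustration of the data flow and not of the actual module interfaces: the first stacks 2D slices into a volume, and the second stands in for the cell classifier with a simple threshold-and-label step (using NumPy and SciPy).

```python
import numpy as np
from scipy import ndimage

def build_volume(slices):
    """Stack equally sized 2D slices into a 3D volume, as the first AIM might."""
    return np.stack(slices, axis=0)

def find_cells(volume, threshold):
    """Toy stand-in for the classifying AIM: threshold the volume and
    label connected components as candidate cells."""
    labels, count = ndimage.label(volume > threshold)
    return labels, count

# Synthetic example: two bright blobs in an otherwise dark 3-slice stack.
slices = [np.zeros((8, 8)) for _ in range(3)]
slices[1][2, 2] = 1.0   # first "cell"
slices[1][5, 5] = 1.0   # second "cell"
volume = build_volume(slices)
labels, count = find_cells(volume, threshold=0.5)
print(volume.shape)  # (3, 8, 8)
print(count)         # 2
```

A real classifier would of course use the cells' spatial phenotype morphology rather than a brightness threshold; the sketch only shows where such a module plugs into the pipeline.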

A human

  1. navigates the 3D model.
  2. interacts with the 3D model.
  3. zooms into the 3D model (e.g., ×2000).
  4. converts a confocal image stack into a volumetric model.
  5. analyses the movement of an athlete to set peak performance goals.

Relevant data formats are:

  1. Image Data: TIFF, PNG, JPEG, DICOM, VSI, OIR, IMS, CZI, ND2, and LIF files
  2. Mesh Data: OBJ, FBX, and STEP files
  3. Volumetric Data: OBJ, PLY, XYZ, PCG, RCS, RCP and E57[1]
  4. Supplemental slides from PowerPoint/Keynote/Zoom
  5. 3D Scatterplots from CSV files
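As a minimal illustration of items 3 and 5 above, the following sketch parses a hypothetical x,y,z CSV into scatterplot points and serialises them as an ASCII PLY point cloud; the column names and the writer are assumptions, not a prescribed format mapping.

```python
import csv, io

def csv_to_points(csv_text):
    """Parse an x,y,z CSV (with a header row) into a list of 3-tuples,
    the kind of data a 3D scatterplot view would consume."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(float(r["x"]), float(r["y"]), float(r["z"])) for r in reader]

def points_to_ply(points):
    """Serialise points as a minimal ASCII PLY point cloud (one of the
    volumetric formats listed above)."""
    header = (
        "ply\nformat ascii 1.0\n"
        f"element vertex {len(points)}\n"
        "property float x\nproperty float y\nproperty float z\n"
        "end_header\n"
    )
    body = "\n".join(f"{x} {y} {z}" for x, y, z in points)
    return header + body + "\n"

sample = "x,y,z\n0,0,0\n1,2,3\n"
pts = csv_to_points(sample)
print(len(pts))                            # 2
print(points_to_ply(pts).splitlines()[2])  # element vertex 2
```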

3        Specific application areas

3.1       Microscopic dataset visualisation

  1. Deals with different object types, e.g.:
    1. 3D Visual Output of a microscope.
    2. 3D model of the brain of a mouse.
    3. Molecules captured as 3D objects by an electron microscope.
  2. Create and add metadata to a 3D audio-visual object:
    1. Define a portion of the object – manual or automatic.
    2. Assign physical properties to (different parts of) the 3D AV object.
    3. Annotate a portion of the 3D AV object.
    4. Create links between different parts of the 3D AV object.
  3. Enter, navigate and act on 3D audio-visual objects:
    1. Define a portion of the object – manual or automatic.
    2. Count objects per assigned volume size.
    3. Detect structures in a (portion of) the 3D AV object.
    4. Deform/sculpt the 3D AV object.
    5. Combine 3D AV objects.
    6. Call an anomaly detector on a portion with an anomaly criterion.
    7. Follow a link to another portion of the object.
    8. 3D print (portions of) the 3D AV object.
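The metadata operations in item 2 above (define a portion, assign physical properties, annotate, link) could be captured by a schema along these lines; the class, field names, and values are purely illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Portion:
    """Axis-aligned sub-volume of a 3D AV object (illustrative schema only)."""
    name: str
    bounds: tuple                                    # ((z0, z1), (y0, y1), (x0, x1)) in voxels
    properties: dict = field(default_factory=dict)   # assigned physical properties
    annotations: list = field(default_factory=list)  # free-text annotations
    links: list = field(default_factory=list)        # names of linked portions

# Define a portion, assign a (hypothetical) physical property, annotate it,
# and link it to another portion, mirroring operations 2.1-2.4.
nucleus = Portion("nucleus-12", ((0, 4), (10, 20), (10, 20)))
nucleus.properties["stiffness_kpa"] = 1.2
nucleus.annotations.append("possible anomaly: irregular boundary")
nucleus.links.append("nucleus-13")

print(nucleus.links[0])  # nucleus-13
```

Following a link (operation 3.7) then amounts to looking up the named portion and moving the viewpoint to its bounds.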

3.2       Macroscopic dataset visualisation and simulation

  1. Deals with different dataset types, e.g.:
    1. Stars, 3D star maps (HIPPARCOS, Tycho Catalogues, etc.).
    2. Deep-sky objects (galaxies, star clusters, nebulae, etc.).
    3. Deep-sky surveys (galaxy clusters, large-scale structures, distant galaxies, etc.).
    4. Satellites and man-made objects in the atmosphere and above, space junk, planetary and Moon positions.
    5. Real-time air traffic.
    6. Geospatial information including CO2 emission maps, ocean temperature, weather, etc.
  2. Simulation data
    1. Future/past positions of celestial objects.
    2. Stellar and galactic evolution.
    3. Weather simulations.
    4. Galaxy collisions.
    5. Black hole simulation.
  3. Create and add metadata to datasets and simulations:
    1. Assign properties to (different parts of) the datasets and simulations.
    2. Define a portion of the dataset – manual or automatic.
    3. Annotate a portion of the datasets and simulations.
    4. Create links between different parts of the datasets and simulations.
  4. Enter, navigate, and act on 3D audio-visual objects:
    1. Search data for extra-solar planets.
    2. Count objects per assigned volume size.
    3. Detect structures and trends in a (portion of) the datasets and simulations.
    4. Call an anomaly detector on a portion with an anomaly criterion.
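The "future/past positions of celestial objects" simulation in item 2 can be hinted at with a toy linear proper-motion propagation; this deliberately ignores parallax and the cos(dec) convention for the RA component, and the figures used for Barnard's Star are approximate catalogue values.

```python
def future_position(ra_deg, dec_deg, pm_ra_masyr, pm_dec_masyr, years):
    """Linearly propagate a star's sky position by its proper motion,
    given in milliarcseconds per year (toy model: no parallax, no
    cos(dec) factor on the RA component)."""
    mas_to_deg = 1.0 / 3_600_000.0
    return (ra_deg + pm_ra_masyr * years * mas_to_deg,
            dec_deg + pm_dec_masyr * years * mas_to_deg)

# Barnard's Star moves roughly 10,000 mas/yr northwards.
ra, dec = future_position(269.45, 4.69, -802.8, 10362.5, years=100)
print(round(ra, 2), round(dec, 2))  # 269.43 4.98
```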

3.3       Educational lab

  1. Experiential learning through models and simulations for humans.
  2. Group navigation across datasets and simulations.
  3. Group interactive curricula.
  4. Evaluation maps.

3.4       Collaborative CAD

  1. Building information management.
  2. Collaborative design and art.
  3. Collaborative design reviews.
  4. Event simulation (emergency planning etc.).
  5. Material behaviour simulation (thermal, stress, collision, etc.).
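Material behaviour simulation of the thermal kind (item 5) can be sketched, in a deliberately minimal way, as explicit finite-difference heat diffusion along a 1D rod; the grid size and diffusion coefficient below are arbitrary assumptions.

```python
def heat_step(temps, alpha=0.25):
    """One explicit finite-difference step of 1D heat diffusion with fixed
    end temperatures (alpha <= 0.5 keeps the scheme stable)."""
    out = list(temps)
    for i in range(1, len(temps) - 1):
        out[i] = temps[i] + alpha * (temps[i-1] - 2*temps[i] + temps[i+1])
    return out

# A hot spot in the middle of a cold rod gradually spreads out and decays.
rod = [0.0, 0.0, 100.0, 0.0, 0.0]
for _ in range(50):
    rod = heat_step(rod)
print(rod)
```

A collaborative review session could run such a simulation on a shared model while participants annotate regions where, say, stress or temperature exceeds a criterion.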

[1] https://info.vercator.com/blog/what-are-the-most-common-3d-point-cloud-file-formats-and-how-to-solve-interoperability-issues