Polysomnography (PSG) Tutorial
This use case illustrates a demo configuration of an Extra Horizon (ExH) environment to support a polysomnography application.
Polysomnography, also called a sleep study, is a comprehensive test used to diagnose sleep disorders. Polysomnography records your brain waves, the oxygen level in your blood, heart rate, and breathing, as well as eye and leg movements during the study. Polysomnography may be done at a sleep disorders unit within a hospital, at a sleep center, or at home. A polysomnography study records raw, multichannel time-series data from channels such as EEG, EMG, ECG, and pulse oximetry, typically at a sampling rate of approximately 256 Hz.
The objective of this demo configuration is to ingest, process, store, and annotate a data file generated by a medical device developed by the customer, and to make that data available for retrieval and visualization via the API. The dataset is an EDF file containing multiple hours of data.
The following image shows a conceptual overview of Extra Horizon. On the left, you’ll find client interfacing applications such as a web front-end and mobile app. These clients connect to the customer's API.
Access management in Extra Horizon relies on two services: the Authentication service and the User service for user management.
Security is an essential feature of every web-based application, and authorization is required for all Extra Horizon services. Authorization is handled by the Authentication service, which grants a token that is used to validate requests to the other services.
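The snippet below sketches how a client could obtain such a token and attach it to later requests, assuming a password-style grant. The cluster URL, endpoint path, and payload fields are illustrative assumptions and should be checked against the Authentication service documentation.

```python
# Sketch only: the cluster URL, endpoint path and payload fields are
# assumptions; consult the Authentication service documentation.
import requests

API = "https://api.example.extrahorizon.io"  # hypothetical cluster URL

# Exchange user credentials for an access token.
auth_response = requests.post(
    f"{API}/auth/v2/oauth2/tokens",
    json={
        "grant_type": "password",
        "client_id": "<client-id>",
        "username": "jane.doe@example.com",
        "password": "<password>",
    },
)
access_token = auth_response.json()["access_token"]

# The token is sent along with every request to the other services.
headers = {"Authorization": f"Bearer {access_token}"}
```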
The ExH User service manages standard user interactions such as registering new users, activating prescriptions, and resetting passwords. It also provides special features such as roles to manage user privileges and groups to connect or aggregate any number of patients and staff members.
As illustrated above, the User service controls a user's privileges by means of roles. Users can also be connected to a group, where privilege levels are controlled through group roles. Group roles determine a user's permissions within a group, provided the user is enlisted in that group as a staff member.
In addition to storing data in structured documents, the Data service allows you to configure the structure and behavior of these documents using data schemas. With this feature, the behavior of the data itself can be programmed.
The Task service provides a way to execute code on demand by scheduling tasks. Tasks do not contain code themselves, but instead contain the information necessary to invoke code that is stored elsewhere, such as an AWS Lambda function. Tasks can either be queued to be executed as soon as possible, or scheduled for execution at a later moment.
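As a rough illustration, queueing such a task through the API could look like the sketch below. The endpoint path and field names (functionName, data) are assumptions and should be verified against the Task service documentation.

```python
# Sketch only: endpoint path and field names are assumptions.
import requests

API = "https://api.example.extrahorizon.io"           # hypothetical cluster URL
headers = {"Authorization": "Bearer <access-token>"}  # token from the Authentication service

task = {
    "functionName": "edf-segmentation",               # registered function (e.g. an AWS Lambda)
    "data": {"documentId": "<token-document-id>"},    # payload handed to that function
}
response = requests.post(f"{API}/tasks/v1/tasks", json=task, headers=headers)
print(response.json())
```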
The Notification service makes it easy to send notifications to users and to check whether they have been received. The Mail service allows for more formal communication with users and is based on mail templates. E-mails and notifications can be sent in multiple languages using the Localization service.
In this example, we illustrate a typical data-related use case: annotating biological signals. For this, an EDF file containing multiple biological signals and some metadata is provided.
The figure illustrates the data pathway, from the EDF file to the point where the data it contains is used for a given purpose.
This pathway can be summarized as follows:
A new EDF file is stored on the Files service (1a). The file token returned in the response from the Files service is stored in a collection on the Data service in the form of a JSON document (1b).
When a new document is uploaded to the Data service, it automatically triggers a task in the Task service.
The Task service receives the file token when it is triggered by the Data service, and the token is used to retrieve the corresponding file from the Files service.
The task, a Python script, opens the file and segments all the signals into 1-minute chunks (see the sketch after this list). Each synchronized segment is then uploaded to a second collection on the Data service in the form of a JSON document.
The JSON documents are ready to be used for further applications. For instance, the signals can be annotated.
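As an illustration of the segmentation step, the sketch below reads an EDF file and cuts every channel into 1-minute chunks. It assumes the pyedflib package is available and uses a hypothetical upload_segment() helper that posts each chunk to the signals collection of the Data service.

```python
# Sketch of the segmentation task; upload_segment() is a hypothetical
# helper that posts one JSON document per segment to the Data service.
import pyedflib

SEGMENT_SECONDS = 60  # 1-minute chunks

def segment_edf(path, upload_segment):
    edf = pyedflib.EdfReader(path)
    try:
        labels = edf.getSignalLabels()
        n_samples = edf.getNSamples()                  # samples per channel
        duration = int(edf.file_duration)              # recording length in seconds
        for start in range(0, duration, SEGMENT_SECONDS):
            segment = {"start_second": start, "signals": {}}
            for ch in range(edf.signals_in_file):
                fs = int(edf.getSampleFrequency(ch))
                n = min(SEGMENT_SECONDS * fs, n_samples[ch] - start * fs)
                values = edf.readSignal(ch, start=start * fs, n=n)
                segment["signals"][labels[ch]] = {
                    "sampling_rate": fs,
                    "values": values.tolist(),
                }
            upload_segment(segment)                    # one JSON document per minute
    finally:
        edf.close()
```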
The Files service allows uploading any kind of data, structured or unstructured. After the upload, a token is returned that can be used to access the file at a later time.
To upload a new file to the Files service, a POST request is used:
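Below is a minimal sketch of such an upload using Python's requests library; the endpoint path and form fields are assumptions and should be checked against the Files service documentation.

```python
# Sketch only: the endpoint path and form fields are assumptions.
import requests

API = "https://api.example.extrahorizon.io"           # hypothetical cluster URL
headers = {"Authorization": "Bearer <access-token>"}  # token from the Authentication service

with open("sleep_study.edf", "rb") as edf_file:
    response = requests.post(
        f"{API}/files/v1",
        headers=headers,
        files={"file": ("sleep_study.edf", edf_file)},
        data={"tags": "psg-demo"},
    )
print(response.json())
```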
Example response:
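An illustrative sketch of what the response could look like; field names and values are placeholders, the essential part being the returned token that is used to retrieve the file later on.

```python
# Illustrative response structure; field names and values are placeholders.
{
    "name": "sleep_study.edf",
    "mimetype": "application/octet-stream",
    "size": 52428800,
    "tags": ["psg-demo"],
    "tokens": [{"token": "<file-token>", "accessLevel": "full"}],
    "creationTimestamp": "2021-06-01T08:00:00.000Z",
}
```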
To retrieve a file from the Files service, a GET request is used:
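A corresponding sketch of the download, again with an illustrative endpoint path:

```python
# Sketch only: the endpoint path is an assumption.
import requests

API = "https://api.example.extrahorizon.io"
headers = {"Authorization": "Bearer <access-token>"}

file_token = "<file-token>"  # returned when the file was uploaded
response = requests.get(f"{API}/files/v1/{file_token}/file", headers=headers)
with open("sleep_study.edf", "wb") as out:
    out.write(response.content)
```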
The Data service is meant for storing structured data.
The Data service can contain multiple collections to hold data in different structures. Each collection is characterized by a schema that specifies the data structure it accepts, as well as the different statuses documents can be in, how they transition between statuses, and so on.
Token schema
The schema for the token collection specifies that documents can contain two properties, token and info. In addition, a task is triggered at the creation of a document.
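A simplified sketch of what such a schema could look like is shown below; the field names and the task action follow common Data service conventions but are assumptions to be verified against the Data service documentation.

```python
# Simplified, illustrative schema for the token collection; field names
# and the task action are assumptions.
token_schema = {
    "name": "edf-tokens",
    "description": "File tokens referencing uploaded EDF files",
    "properties": {
        "token": {"type": "string"},   # token returned by the Files service
        "info": {"type": "string"},    # free-form metadata about the recording
    },
    "statuses": {"created": {}},
    "creationTransition": {
        "type": "manual",
        "toStatus": "created",
        "actions": [
            # Trigger the segmentation task when a document is created.
            {"type": "task", "functionName": "edf-segmentation"},
        ],
    },
}
```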
Signals schema
To store the 1-minute signals, another collection is used, with its own schema.
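Again, a simplified and illustrative sketch, with one synchronized 1-minute segment per document; field names, statuses, and the transition condition are assumptions.

```python
# Simplified, illustrative schema for the signals collection.
signals_schema = {
    "name": "psg-signals",
    "description": "1-minute segments of synchronized PSG signals",
    "properties": {
        "start_second": {"type": "number"},   # offset of the segment in the recording
        "signals": {"type": "object"},        # one entry per channel (EEG, EMG, ECG, ...)
        "annotations": {"type": "array"},     # added later by the annotator tool
    },
    "statuses": {"unannotated": {}, "annotated": {}},
    "creationTransition": {"type": "manual", "toStatus": "unannotated"},
    "transitions": [
        {
            "name": "mark-annotated",
            "type": "automatic",
            "fromStatuses": ["unannotated"],
            "toStatus": "annotated",
            # Condition: fires once the annotations field is present on the
            # document (exact condition syntax depends on the service).
            "conditions": [{"type": "document", "configuration": {"annotations": {"type": "array"}}}],
        }
    ],
}
```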
Different statuses can be defined in the schema. The documents can go from one status to another by means of transitions. A transition can be manual or automatic. A manual transition is triggered by sending a request, while an automatic transition is triggered by the data service itself when some specified set of conditions is fulfilled (field present, check on values, etc.).
To write a new document to a collection of the Data service, a POST request is used:
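For instance, creating a document in the token collection could look like this (the endpoint path and payload shape are assumptions):

```python
# Sketch only: the endpoint path and payload shape are assumptions.
import requests

API = "https://api.example.extrahorizon.io"
headers = {"Authorization": "Bearer <access-token>"}

document = {"token": "<file-token>", "info": "overnight PSG recording"}
response = requests.post(f"{API}/data/v1/edf-tokens/documents", json=document, headers=headers)
print(response.json())  # returns the created document, including its id and status
```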
To read documents from a specified collection of the Data service, a GET request is used:
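And reading back the 1-minute segments from the signals collection (the endpoint path is an assumption, and the exact query/filter syntax depends on the Data service version):

```python
# Sketch only: the endpoint path is an assumption; filter syntax (e.g. RQL)
# depends on the Data service version.
import requests

API = "https://api.example.extrahorizon.io"
headers = {"Authorization": "Bearer <access-token>"}

response = requests.get(f"{API}/data/v1/psg-signals/documents", headers=headers)
# Paginated responses commonly wrap results in a "data" field (assumption).
for doc in response.json().get("data", []):
    print(doc["id"], doc["data"]["start_second"])
```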
Once an EDF file is stored in the Files service and the signals have been uploaded to the Data service, the data is easily accessible for further applications. One example is annotating the signals. The following figure shows an annotator tool developed by FibriCheck. The data is retrieved from the Data service, annotated, and the annotations are stored back in the Data service. A demo is available here.