Application Information¶
Architecture¶
The Alert Archive database’s design is described in detail in DMTN-183. The Alert Archive is a combination of three systems: the Ingester, the Server, and an S3 bucket. The Ingester is a Kafka consumer running alongside the Alert Stream that reads alerts from active Alert Stream topics. The Ingester worker checks the Alert Stream topics at regular intervals, consumes new alerts, compresses each alert individually, and posts them to the S3 bucket located at USDF. To retrieve these alerts, the Alert Archive Server allows retrieval by the alert's source ID via an HTTP handler.
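The per-alert compression and key-based storage described above can be sketched in a few lines. This is an illustrative sketch only: the compression codec, key layout, and helper names (`compress_alert`, `object_key`, the `alert_archive/v1/` prefix) are assumptions for the example, not the actual Ingester implementation.

```python
import gzip


def compress_alert(alert_bytes: bytes) -> bytes:
    """Compress a single serialized alert packet; the Ingester compresses
    alerts individually before posting them to the S3 bucket."""
    return gzip.compress(alert_bytes)


def object_key(source_id: int) -> str:
    """Hypothetical S3 object key derived from the alert's source ID.
    A deterministic key like this is what lets the Server do simple
    key-based retrieval by source ID."""
    return f"alert_archive/v1/{source_id}.avro.gz"
```

With a real S3 client (e.g. boto3), the upload step would then be something like `s3.put_object(Bucket=bucket, Key=object_key(source_id), Body=compress_alert(raw_alert))`, and the Server's HTTP handler would reverse the mapping: parse the source ID from the request, fetch the object by key, and decompress it.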
Architecture Diagram¶
Associated Systems¶
The Alert Archive is dependent on the Alert Stream Broker, which receives alert packets from the Prompt Processing pipelines.
Configuration Location¶
The Alert Archive's configuration is split across three GitHub repositories. The Server and Ingester live in separate LSST DM repositories, and their deployment is managed via Phalanx Helm charts.
| Config Area | Location |
|---|---|
| Configuration | |
| Server Application Code Repository | |
| Ingester Application Code Repository | |
| Vault Secrets Dev | secret/rubin/usdf-alert-stream-broker-dev/alert-stream-broker/ |
| Vault Secrets Prod | |
Data Flow¶
The Alert Archive relies on the Alert Stream. Alerts are read from the Alert Stream via the Ingester. The Ingester checks for alerts every 60 minutes; when it begins reading new alerts, it continues reading until either all available alerts have been read and sent to the alert-archive S3 bucket or a 30-minute timeout is reached. From the bucket, alerts can be retrieved via the Server, which allows for simple key-based retrieval.
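The drain-until-empty-or-timeout behavior described above can be sketched as a small loop. The function name and the injected `poll`/`upload` callables are illustrative assumptions so the logic can be shown without a live Kafka consumer or S3 client; the real Ingester's implementation may differ.

```python
import time


def drain_alerts(poll, upload, timeout_s=30 * 60, clock=time.monotonic):
    """Read alerts until the topic is drained or the timeout elapses.

    `poll` returns the next batch of alerts (an empty list once caught up);
    `upload` sends one alert to the archive bucket. Both are injected here
    purely for illustration. Returns the number of alerts uploaded.
    """
    deadline = clock() + timeout_s
    uploaded = 0
    while clock() < deadline:
        batch = poll()
        if not batch:
            break  # caught up with the stream; stop until the next cycle
        for alert in batch:
            upload(alert)
            uploaded += 1
    return uploaded
```

An outer scheduler would then invoke this loop once per 60-minute cycle; the 30-minute deadline bounds how long any single cycle can run even if the stream never drains.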
Dependencies - S3DF¶
ArgoCD
S3 Storage
Phalanx
Dependencies - External¶
None
Disaster Recovery¶
If the Alert Archive Ingester or the Alert Archive Server goes down, follow the recovery steps in DMTN-214.