Make stream-looper more robust and evaluate its cost #223

@troyraen

Description

Background

Our "stream-looper" is a small VM running a python script that publishes alerts to "loop" (i.e., "heartbeat") topics at a rate of ~1/sec, which is useful for testing. It is not part of our main broker pipeline. We currently run stream-loopers for ZTF and Elasticc. Each is configured a little differently (especially w.r.t. where they get the alerts from) and is tweaked manually as needed. There are docs in my working-notes describing each setup (note that the python script is in the same directory but not visible in the docs, so look for it in the repo).

(Using ZTF for the example)
Ideally, the stream-looper VM gets the alerts from a "reservoir" subscription on the ztf-alerts topic, which is published by our ZTF consumer.

  • Advantages: the ztf-loop alerts are
    • always current (e.g., Avro schema version)
    • in exactly the same format as published by our consumer (including metadata from the ZTF Kafka broker that is required by some of our modules)
  • Disadvantages are:
    • Pub/Sub leaves messages in a subscription for a max of 7 days (so ztf-loop will not receive alerts if ZTF has been offline for >7 days)
    • We pay for a lot of messages in the "reservoir" subscription that we never use (since ZTF publishes more than 1/sec on average). This may become more important since Google started charging to keep messages in a subscription for more than 24(?) hours.
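
If we decide shorter retention is the fix for the cost issue, it can be set through the Pub/Sub admin API. A sketch (the subscription name here is a guess, and the 1-day value is just an example):

```python
# Sketch: inspect and shorten the reservoir subscription's message retention.
# The subscription name is hypothetical; the default retention is the 7-day maximum.
from google.cloud import pubsub_v1
from google.protobuf import duration_pb2, field_mask_pb2

PROJECT_ID = "my-gcp-project"           # hypothetical
RESERVOIR_SUB = "ztf-alerts-reservoir"  # hypothetical subscription name

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT_ID, RESERVOIR_SUB)

# Check what we are currently retaining.
current = subscriber.get_subscription(request={"subscription": sub_path})
print("current retention:", current.message_retention_duration)

# Shorten retention to 1 day so unconsumed alerts are dropped sooner.
updated = subscriber.update_subscription(
    request={
        "subscription": pubsub_v1.types.Subscription(
            name=sub_path,
            message_retention_duration=duration_pb2.Duration(seconds=24 * 3600),
        ),
        "update_mask": field_mask_pb2.FieldMask(paths=["message_retention_duration"]),
    }
)
print("new retention:", updated.message_retention_duration)
```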

When alerts are unavailable from a "reservoir" subscription, the stream-looper may be configured either to publish random data or to get alerts from somewhere else (e.g., a GCS bucket).

Work to be done

In my opinion, the overall setup works well. It has required almost no maintenance, other than manually changing the source of alerts when needed/wanted. The VM costs very little. It could benefit from some relatively simple work:

  • Make the python script more robust w.r.t. the source of alerts. Perhaps it should work like this: try the "reservoir" subscription first; if that is empty, fall back gracefully to a bucket and/or to data that is either on-disk or randomly generated (see the sketch after this list).
  • Evaluate the cost of the stream-looper module, particularly "reservoir" subscriptions, but also "loop" topics, etc. (and part of the work is figuring out "etc.", though I don't think there's anything as significant as the reservoirs). We probably need to change something. We may want to keep messages in the reservoir for a shorter amount of time, or otherwise do something smarter than just dumping the entire ZTF stream in there (and soon, LSST).
  • Add the stream-looper instructions to our main docs, and the python script to a less obscure location. (Currently, both are only in my working-notes, linked above.)
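
As a starting point for the first bullet, here is one possible shape for the fallback chain. The subscription and bucket names are hypothetical, and it assumes the google-cloud-pubsub and google-cloud-storage clients:

```python
# Sketch of a source-fallback chain: reservoir subscription -> GCS bucket -> random data.
import os

from google.api_core import exceptions
from google.cloud import pubsub_v1, storage

PROJECT_ID = "my-gcp-project"            # hypothetical
RESERVOIR_SUB = "ztf-alerts-reservoir"   # hypothetical subscription name
FALLBACK_BUCKET = "my-ztf-alert-bucket"  # hypothetical bucket of archived alerts

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT_ID, RESERVOIR_SUB)


def from_reservoir():
    """Pull one alert from the reservoir subscription; return None if it is empty."""
    try:
        response = subscriber.pull(
            request={"subscription": sub_path, "max_messages": 1}, timeout=10
        )
    except exceptions.DeadlineExceeded:
        return None
    if not response.received_messages:
        return None
    received = response.received_messages[0]
    subscriber.acknowledge(
        request={"subscription": sub_path, "ack_ids": [received.ack_id]}
    )
    return received.message.data, dict(received.message.attributes)


def from_bucket():
    """Fetch one archived alert from a GCS bucket; return None if none are found."""
    bucket = storage.Client().bucket(FALLBACK_BUCKET)
    blob = next(iter(bucket.list_blobs(max_results=1)), None)
    if blob is None:
        return None
    return blob.download_as_bytes(), {"source": "gcs"}


def from_random():
    """Last resort: random bytes so the loop topic always has a heartbeat."""
    return os.urandom(1024), {"source": "random"}


def get_next_alert():
    """Try each source in order; return the first (data, attributes) pair found."""
    for source in (from_reservoir, from_bucket, from_random):
        alert = source()
        if alert is not None:
            return alert
```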

Labels

  • Documentation (Documentation needs to be added or corrected)
  • Enhancement (New feature or request)
  • Pipeline: Periphery (Tasks that rely on the pipeline, but do not impact it and do not warrant their own repo)
  • good first issue (Good first issue for anyone new to Pitt-Google, python, GCP, etc.)
