
Extend API so you can return events with specific event_ids #14

Open
yalisassoon opened this issue Oct 18, 2019 · 10 comments

@yalisassoon

This would make it easier to write tests that check for the presence of very specific events, i.e. 1:1 checks.

Hat tip @miike
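
For illustration, a request against such an endpoint might look like the following from a Swift test client. This is a hypothetical sketch: the event_id query parameter is the proposed extension, not an existing Micro API, and the localhost URL assumes a default local Micro instance.

import Foundation

// Hypothetical sketch only: `event_id` is the *proposed* query parameter,
// not something Micro supports today.
func fetchGoodEvent(withId eventId: String, completion: @escaping (Data?) -> Void) {
    // Assumes Micro is listening on localhost:9090.
    var components = URLComponents(string: "http://localhost:9090/micro/good")!
    components.queryItems = [URLQueryItem(name: "event_id", value: eventId)]

    URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        completion(data)
    }.resume()
}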

@yalisassoon
Author

It would also be great if this could return whether or not the specific event passed validation (i.e. you could confirm that the event was received but did not end up in "good").

@markst

markst commented Jul 10, 2024

We are using Snowplow Micro for validating our iOS unit tests. Each test sends specific tracking events, and we need to verify these as good or bad. However, we face two main challenges:

1. The current API provides a summary but lacks detailed information about each event. Including full event payloads in the /micro/good and /micro/bad responses would be beneficial.

2. We need a way to tag events with a test identifier (e.g., test_id) and query events by this identifier. This would allow us to trace and validate events specific to each test (a sketch of this idea follows the list below).

To work around these limitations, we currently have to:

  • Run our tests sequentially.
  • Reset Snowplow Micro after each test to ensure a clean state.
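
On the tagging point above, here is a minimal sketch of what attaching a test identifier could look like with the iOS tracker, assuming a custom entity is used to carry it. The iglu:com.example/... schema is hypothetical (it would need to exist in an Iglu registry that Micro can resolve), and the tracker calls follow the v5+ Swift API as I understand it, so treat the exact signatures as approximate.

import SnowplowTracker

// Sketch: tag an event with a hypothetical test_id entity so that good/bad
// results in Micro could be matched back to the test that produced them.
// Assumes `tracker` is an existing TrackerController pointed at Micro.
func trackTaggedEvent(using tracker: TrackerController) {
    // Illustrative schema only; it must be resolvable by Micro's Iglu client.
    let testContext = SelfDescribingJson(
        schema: "iglu:com.example/test_identifier/jsonschema/1-0-0",
        andData: ["test_id": "testTrackEventWithMinimumParameters"]
    )

    let event = Structured(category: "test_category", action: "test_action")
    event.entities.append(testContext) // attach the entity to this event only
    _ = tracker.track(event)
}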

@markst

markst commented Jul 10, 2024

The response could potentially return more details on events, e.g.:

{
  "total": 7,
  "good": 5,
  "bad": 2,
  "events": {
    "good": [
      {
        "event_id": "abc123",
        "details": { /* detailed event payload */ }
      },
      /* more good events */
    ],
    "bad": [
      {
        "event_id": "xyz789",
        "error": "Validation error",
        "details": { /* detailed event payload */ }
      },
      /* more bad events */
    ]
  }
}
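
If the response took that shape, a test client could decode it with something like the following. This is purely illustrative, since the endpoint returns only counts today; the type names are made up.

import Foundation

// Illustrative Codable mirror of the proposed response shape above.
// None of this exists in Micro today; it only shows how a test client
// could consume the richer payload if it were added.
struct MicroSummary: Codable {
    let total: Int
    let good: Int
    let bad: Int
    let events: EventLists

    struct EventLists: Codable {
        let good: [GoodEvent]
        let bad: [BadEvent]
    }

    struct GoodEvent: Codable {
        let eventId: String
        // `details` is omitted; a real model would need a type for the
        // full event payload.

        enum CodingKeys: String, CodingKey {
            case eventId = "event_id"
        }
    }

    struct BadEvent: Codable {
        let eventId: String
        let error: String

        enum CodingKeys: String, CodingKey {
            case eventId = "event_id"
            case error
        }
    }
}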

@miike
Contributor

miike commented Jul 10, 2024

@markst How do you poll now and perform your assertions? Generally we assert on the contents of /micro/good and /micro/bad - often by including a debugging context (this is similar to how the Micro UI works).

@markst

markst commented Jul 10, 2024

@miike same sort of setup as here: https://github.com/snowplow/snowplow-ios-tracker/blob/master/Integrationtests/Utils/Micro.swift

An example:

final class ABCTrackerTests: XCTestCase {
    var abcTracker: ABCTracker?
    var eventStore: EventStore?
    let collectorUrl: String = Constant.TrackerConfig.CollectorURL.snowplowMicro

    override func setUp() {
        super.setUp()

        abcTracker = ABCTracker(
            collectorURL: collectorUrl,
            appId: TestData.appId,
            base64Encoded: false,
            namespace: TestData.nameSpace,
            environment: .testing
        )

        eventStore = abcTracker?.eventStore

        if let count = eventStore?.count(), count > 0 {
            _ = eventStore?.removeAllEvents()
        }

        if collectorUrl == Constant.TrackerConfig.CollectorURL.snowplowMicro {
            wait(for: [MicroEventValidationHelper.reset()], timeout: MicroEventValidationHelper.timeout)
        }
    }

    func testTimeoutConfigurations() {
        XCTAssertEqual(abcTracker?.backgroundTimeoutValue, 1800)
        XCTAssertEqual(abcTracker?.foregroundTimeoutValue, 1800)
    }

    func testTrackEventWithMinimumParameters() {
        // Act
        let eventId = abcTracker?.trackEvent(forEvent: "test_action", contentType: "test_category")

        // Assert
        PayloadValidationHelper.validatePayload(eventStore: eventStore, eventId: eventId?.uuidString, testables: [
            .init(type: .property(
                operations: [
                    .assertEqual(key: "se_ac", value: "test_action"),
                    .assertEqual(key: "se_ca", value: "test_category"),
                    .assertEqual(key: "se_la", value: "unknown"),
                    .assertNil("se_pr")
                ]
            )),
            .init(type: .contextAvailability(availableSchemas: []))
        ])

        abcTracker?.flushEventBuffer()

        validateEventSchemaWithMicro()
    }
}


@markst

markst commented Jul 10, 2024

@faceLessGod might have some comments. Was it that the /micro/good and /micro/bad responses could not be associated with a given unit test? Meaning we need to be able to associate a good/bad event, via a unique identifier, with a given unit test?

@markst

markst commented Jul 11, 2024

@miike perhaps including a debugging context could be our solution. Mind sharing more info?

@miike
Contributor

miike commented Jul 11, 2024

The idea of a debugging context is to attach a context (ideally global) that annotates the event with something you can later assert against. The /micro/good endpoint does support filtering, although the filtering is basic (event type, schema, and contexts), which can help avoid needing to iterate through the full array of events.
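
As a concrete illustration of that filtering route, a test could POST a filter body to /micro/good rather than iterating over all events. The field names below (contexts, limit) follow Micro's documented filter parameters, but treat them as assumptions to verify against your Micro version; the schema URI is the hypothetical test-identifier schema from the earlier sketch.

import Foundation

// Sketch: query Micro's /micro/good with a POST filter instead of fetching
// and iterating the whole array. Field names are based on Micro's documented
// filters (event_type, schema, contexts, limit); verify against your version.
var request = URLRequest(url: URL(string: "http://localhost:9090/micro/good")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try? JSONSerialization.data(withJSONObject: [
    // Only return good events carrying the (hypothetical) tagging entity.
    "contexts": ["iglu:com.example/test_identifier/jsonschema/1-0-0"],
    "limit": 10
])

URLSession.shared.dataTask(with: request) { data, _, _ in
    // Assert on the filtered events here.
}.resume()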

@miike
Contributor

miike commented Jul 22, 2024

@markst To update on this one: it's something we've been discussing internally.

Adding a specific filter isn't particularly difficult at this stage, but adding it in a generic way that scales to multiple different fields isn't easy, due to the way the code has been factored.

One experiment we've been running is to embed an in-memory database, which allows for a greater degree of flexibility with fields and operators, as well as better performance (though a database may be overkill).

Can I ask: for each test, are you asserting against a single event, or are there tests where you assert against the presence of multiple events?

@markst

markst commented Jul 30, 2024

Will need to get back to you on this.

But the unit tests in snowplow-ios-tracker are a good example of the use case.

func setUp()
Provides an opportunity to reset state before calling each test method in a test case.

As you can see, wait(for: [Micro.reset()], timeout: Micro.timeout) occurs before each test.
Then the validation Micro.expectCounts(good: 1) happens in each test.

https://github.com/snowplow/snowplow-ios-tracker/blob/master/Integrationtests/TestTrackEventsToMicro.swift
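
In outline, that pattern looks like this. Micro.reset(), Micro.timeout, and Micro.expectCounts(good:) are helpers from the linked Micro.swift; this paraphrases their use rather than copying the file verbatim.

import XCTest

// Outline of the reset-then-assert pattern described above. The Micro
// helpers come from the linked Micro.swift; treat signatures as approximate.
class TestTrackEventsToMicro: XCTestCase {

    override func setUp() {
        super.setUp()
        // Reset Micro before every test so the good/bad counts start at zero.
        wait(for: [Micro.reset()], timeout: Micro.timeout)
    }

    func testTracksOneGoodEvent() {
        // ... track exactly one event against the Micro collector here ...

        // Assert that exactly one good event reached Micro.
        wait(for: [Micro.expectCounts(good: 1)], timeout: Micro.timeout)
    }
}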
