Use cases and principles for Attribution/Measurement proposals #14

Open
lbdvt opened this issue Oct 4, 2022 · 1 comment

Comments

lbdvt commented Oct 4, 2022

We'd like to propose a set of use cases and principles for assessing the coverage of Attribution/Measurement proposals from a utility standpoint. We hope it will help in defining the roadmap of these proposals.

Which use cases a proposal needs to cover depends on the constraints of the bidding/ad-display mechanism. For example, when using FLEDGE and Fenced Frames, part or all of the "monitoring" use case may need to be supported by a specific Attribution/Measurement proposal. In less constrained environments, some data (usually data not linked to attribution) may be readily available.

We'd like to gather feedback on the proposal below, and add it as a document for the group.

Use cases

An advertising measurement system must support the following use cases, if the data is not otherwise accessible:

Billing

The system must provide data for computing advertiser spend and publisher revenue.

  • Publisher revenue is generally linked to auction clearing prices, but can also be linked to additional information (e.g. discounts or premiums linked to specific deals or volumes).
  • Advertiser spend can be linked to auction clearing prices (CPM), but also conversions such as clicks on a display, visits to the advertiser site, or sales.
The system shall provide ways for advertisers, publishers, and ad tech providers to audit that data and/or reconcile it with external data sources.
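As a minimal sketch of the spend computation described above, consider reconciling advertiser spend from CPM-priced impressions plus action-priced conversions. The field names (`clearing_price_cpm`) and pricing split are illustrative assumptions, not part of any proposal:

```python
def advertiser_spend(impressions, cpa_conversions=0, cost_per_action=0.0):
    """Spend = CPM-priced impressions plus action-priced conversions.

    Hypothetical schema: each impression carries its auction clearing
    price expressed as a CPM (cost per thousand displays).
    """
    cpm_spend = sum(imp["clearing_price_cpm"] for imp in impressions) / 1000.0
    return cpm_spend + cpa_conversions * cost_per_action

imps = [{"clearing_price_cpm": 2.5}, {"clearing_price_cpm": 4.0}]
spend = advertiser_spend(imps, cpa_conversions=3, cost_per_action=0.5)
# spend == 1.5065  (6.5/1000 CPM spend + 3 conversions at $0.50)
```

An auditable system would expose enough per-impression or aggregated detail for a computation like this to be reconciled against the ad tech provider's invoice.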

Reporting

The system must provide data for reporting for advertisers, publishers, and ad tech providers.

  • Common dimensions in these reports include campaign, product, day, hour, type of device, ad display characteristics (e.g. size), creative type, click zone, etc.
  • Common KPIs in these reports include bid win rate, yield, number of displays, user exposure, click-through rate, attributed visits, attributed sales, etc.
    • Note that attribution can be used for billing, reporting, campaign optimization and piloting.
    • It must be possible to use various events for attribution, such as visits and sales.
  • Flexibility around these dimensions and KPIs is important.
  • Reporting must also plan for after-the-fact control of key functionalities such as:
    • Brand Safety for advertisers (i.e. “as an advertiser, I want to control where my ads have been displayed, to make sure that none have been shown on a site I don’t want them on”).
    • Ad Quality for publishers (i.e. “as a publisher, I want to control the ads that have been displayed on my site, to make sure that there was no offensive/refused ad or brand”).
  • Reporting on incrementality / lift measurement (combined with A/B testing capabilities).
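The dimension/KPI reporting above can be sketched as a simple grouped aggregation. The event schema and derived KPI (CTR) are illustrative assumptions:

```python
from collections import defaultdict

def aggregate(events, dims):
    """Group events by the requested dimensions and derive KPIs.

    Hypothetical event schema: each event carries dimension values
    (e.g. campaign, device) and additive counters (displays, clicks).
    """
    rows = defaultdict(lambda: {"displays": 0, "clicks": 0})
    for e in events:
        key = tuple(e[d] for d in dims)
        rows[key]["displays"] += e.get("displays", 0)
        rows[key]["clicks"] += e.get("clicks", 0)
    # Derive CTR per row; more KPIs (win rate, yield...) would follow the same shape.
    return {k: {**v, "ctr": v["clicks"] / v["displays"] if v["displays"] else 0.0}
            for k, v in rows.items()}

events = [
    {"campaign": "c1", "device": "mobile", "displays": 100, "clicks": 4},
    {"campaign": "c1", "device": "mobile", "displays": 50, "clicks": 2},
    {"campaign": "c1", "device": "desktop", "displays": 200, "clicks": 2},
]
report = aggregate(events, ["campaign", "device"])
# report[("c1", "mobile")] -> displays 150, clicks 6, ctr 0.04
```

The flexibility point above is precisely that `dims` and the derived KPIs should be caller-chosen, not fixed by the measurement system.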

Legal Reporting

Some jurisdictions have legal requirements related to web advertising (e.g. French digital advertising reporting law, which requires disclosing to a marketer, among other things, the domains where its ads have been displayed). The system must support this.

Campaign optimization

The system must provide data for training the machine learning models that select campaigns, products, ad layouts and features, calculate bid amounts, or power dynamic attribution models (i.e. models able to highlight the main contributors among several impressions or clicks).
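A dynamic attribution model, in its simplest form, splits conversion credit across touchpoints rather than giving everything to the last click. The weighting scheme below (clicks weighted 3x over views) is an illustrative assumption, not a recommended model:

```python
def attribute(touchpoints, click_weight=3.0, view_weight=1.0):
    """Split one unit of conversion credit across touchpoints.

    Hypothetical multi-touch scheme: each touchpoint is "click" or "view";
    clicks receive proportionally more credit than views.
    """
    weights = [click_weight if t == "click" else view_weight for t in touchpoints]
    total = sum(weights)
    return [w / total for w in weights]

credits = attribute(["view", "click", "view"])
# credits[1] == 0.6 -- the click gets 3x the credit of each view
```

Training a model of this kind requires per-touchpoint data, which is why flexibility in the events available for attribution matters.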

Campaign piloting

It must be possible to control campaign delivery pace and enforce budget constraints in real-time.
Marketers must be able to get feedback on their campaigns' performance (e.g. cost per action) to adapt the related objectives, with a latency of around an hour.
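Pacing and budget enforcement reduce to a feedback loop comparing realized spend against a delivery target. The linear pacing target and tolerance factor below are illustrative assumptions:

```python
def should_bid(spent, budget, elapsed_h, total_h, tolerance=1.1):
    """Throttle delivery when spend runs ahead of a linear pacing target.

    Hypothetical policy: allow bidding while spend stays within
    `tolerance` of the pro-rata budget for the elapsed campaign time.
    """
    target = budget * (elapsed_h / total_h)  # linear pacing target
    return spent <= target * tolerance

# Halfway through a 24h, $100 campaign the pacing target is $50 (+10% tolerance):
should_bid(spent=48.0, budget=100.0, elapsed_h=12, total_h=24)   # True: on pace
should_bid(spent=60.0, budget=100.0, elapsed_h=12, total_h=24)   # False: overspending
```

The real-time requirement above comes from the fact that this check needs fresh spend data: stale inputs translate directly into overspend.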

User feedback collection

Ads may include a method for users to provide feedback on what they see. The system must enable reporting this feedback in an aggregated way, with a latency of around an hour.

Monitoring and incident detection

It must be possible to detect incidents (at campaign, client, or company level) within 5 minutes, as they can cause large overspend or loss of opportunities.
One standard way of detecting incidents is to detect unexpected changes to KPIs, such as win rate, yield, ad displays, etc. A certain level of data breakdown enabling a quick and detailed diagnosis of the incident is required.
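Detecting "unexpected changes to KPIs" can be sketched as a deviation test against recent history. The z-score threshold below is an illustrative choice, not a prescribed method:

```python
import statistics

def is_incident(history, current, threshold=3.0):
    """Flag a KPI reading deviating more than `threshold` standard
    deviations from its recent history (hypothetical detection rule)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

win_rates = [0.30, 0.31, 0.29, 0.30, 0.32, 0.30]  # recent 5-minute readings
is_incident(win_rates, 0.05)   # True: win rate collapsed
is_incident(win_rates, 0.31)   # False: within normal variation
```

The 5-minute detection requirement implies the measurement system must deliver KPI readings at least that frequently, with enough breakdown (campaign, client) to localize the incident.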

Allow for external data ingestion

The solution should cover the ingestion and processing of external data (e.g. brick-and-mortar store sales) in combination with web events.

Principles

We propose seven key principles to consider:

Fairness

The system must treat all participants equally, and shall not introduce a competitive advantage for those who do not need to use it; that is to say, open web measurement shall not be at a disadvantage compared to large first parties.
Delegation to third parties should be possible, enabling small and medium sites to rely on external suppliers.

Accuracy

The system must provide accurate enough data for the use cases it supports.

Velocity

The system must provide data at a rate fast enough for the use cases it supports.

Flexibility

The system must not be locked to specific measurement methods (e.g. last-click attribution, specific attribution windows, etc.), machine learning frameworks, etc. Lack of flexibility would limit participants' ability to innovate.
As an example, video advertising has some specific measurement KPIs, such as view percentage. The system must easily adapt to such innovations.
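One way to express the flexibility requirement concretely: the attribution method and lookback window become parameters rather than fixed policy. The event shapes and method names below are illustrative assumptions:

```python
from datetime import datetime, timedelta

def attributed_touch(touches, conversion_time, window_days=30, method="last"):
    """Pick the credited touchpoint under a configurable policy.

    Hypothetical API: `window_days` and `method` ("last" or "first")
    are caller-chosen rather than hard-coded by the measurement system.
    """
    window = timedelta(days=window_days)
    eligible = [t for t in touches
                if timedelta(0) <= conversion_time - t["time"] <= window]
    if not eligible:
        return None
    eligible.sort(key=lambda t: t["time"])
    return eligible[-1] if method == "last" else eligible[0]

conv = datetime(2022, 10, 4)
touches = [{"id": "a", "time": datetime(2022, 9, 10)},
           {"id": "b", "time": datetime(2022, 10, 1)}]
# Last-click within a 30-day window attributes to "b";
# first-click attributes to "a"; a 1-day window matches nothing.
```

A system locked to last-click with a fixed window could not express even this small variation, let alone video-specific KPIs such as view percentage.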

Interoperability

The system must provide for interoperability between advertisers, publishers, and Ad Tech participants, i.e. it should be possible for multiple participants to share, compare, and combine data. In particular, multiple participants should be able to get their own reports, investigate, and resolve any dispute.

Scalability

The system shall scale to the needs of all Ad Tech participants, at a reasonable cost.
On-device processing should not incur undue load on users' devices.

Compliance

The system shall improve compliance with applicable laws and regulations on user data processing.

bmayd commented Oct 17, 2022

Adding my feedback from the October 17 meeting (minutes should be posted here):

  • There are several efforts to compile advertising use cases, with two of the bigger ones being the Partnership for Responsible Addressable Media (aka PRAM) Priority Business Use Cases and the W3C Improving Web Advertising Business Group Advertising Use Cases. It would be very helpful to have a consolidated master list of measurement use cases that the several efforts pursuing measurement solutions and/or seeking to evaluate proposals could align on. In particular, it would help the subgroup of folks in the PATCG working on a comparison of the major existing proposals who want to provide an indication of the level of support each proposal offers for a core set of use cases. Such a list would also be helpful for folks working on related areas, such as anti-fraud.
  • Some of the use-cases outlined above have dependencies on the user-agent and some can be supported with server-to-server interactions independent of the user-agent. It would be helpful to note which use-cases have user-agent dependencies so the scope of effort could be limited accordingly and those that are out of scope could be taken on by groups better positioned to own them.
