Conversation

@jelly commented Nov 21, 2025

No description provided.

@jelly added the no-test label ("For doc/workflow changes, or experiments which don't need a full CI run") on Nov 21, 2025
@jelly force-pushed the dbus-remove-match branch from 434806e to dba9f41 on November 21, 2025 15:27
@allisonkarlitskaya left a comment:


I've gotten too old to read code without types ;)

self.ready()

def add_signal_handler(self, handler, **kwargs):
def get_r(self, **kwargs):
Member:

"r"? rule?

also: please add types as you go
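
For illustration, something like this; the handler's message type and the return type of get_r are assumptions, not taken from this PR:

from typing import Any, Callable, Dict


class DBusChannelSketch:
    # Hypothetical class name; only the two signatures under discussion are sketched.

    def add_signal_handler(self, handler: Callable[[Any], int], **kwargs: Any) -> None:
        ...

    def get_r(self, **kwargs: Any) -> Dict[str, str]:
        # Presumably the match-rule components keyed by name (an assumption).
        ...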

self.create_task(self.do_call(message))
elif 'add-match' in message:
self.create_task(self.do_add_match(message))
elif 'remove-match' in message:
Member:

huh. just flat out unimplemented. fascinating.

Member Author:

There is more:

# - removing matches
# - removing watches
# - emitting of signals
# - publishing of objects
# - failing more gracefully in some cases (during open, etc)

func = filter_owner
else:
func = handler
r_string = ','.join(f"{key}='{value}'" for key, value in r.items())
Member:

whoa, déjà vu

elif 'add-match' in message:
self.create_task(self.do_add_match(message))
elif 'remove-match' in message:
self.create_task(self.do_remove_match(message))
Member:

I feel slightly like we should try to handle this synchronously, at least the part about removing the entry from the list...

On the other hand, the entire wire protocol is wildly async, so maybe it's easier to keep with the prevailing style in this function of sending everything to tasks...
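
A toy sketch of that split, where the list bookkeeping happens synchronously and only the bus round-trip goes to a task; "matches" mirrors the (rule string, slot) pairs from the diff, everything else is made up for illustration:

import asyncio

def remove_match_sync(matches: list, r_string: str) -> None:
    for index, (r_key, slot) in enumerate(matches):
        if r_key == r_string:
            del matches[index]                      # synchronous bookkeeping
            asyncio.create_task(cancel_slot(slot))  # only the bus traffic is async
            break

async def cancel_slot(slot) -> None:
    slot.cancel()   # systemd_ctypes sends RemoveMatch when the slot is cancelled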

r = self.get_r(**remove_match)
r_string = self.get_r_string(r)
for index, (r_key, slot) in enumerate(self.matches):
if r_key == r_string:
Member:

This starts to look like a multidict...

Member Author:

Better data structure is indeed welcome :)
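
As a starting point, here is a sketch of what a dict-of-lists ("multidict") version could look like, keyed by the canonical rule string; the names are illustrative, not the final code:

from collections import defaultdict

matches: dict[str, list] = defaultdict(list)   # rule string -> slots

def add_match(r_string: str, slot) -> None:
    matches[r_string].append(slot)

def remove_match(r_string: str) -> None:
    slots = matches.get(r_string)
    if slots:
        slots.pop().cancel()        # drop one subscription for this rule
        if not slots:
            del matches[r_string]   # last one gone: forget the rule entirely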

for index, (r_key, slot) in enumerate(self.matches):
if r_key == r_string:
logger.debug('got match %s %d', self.matches, index)
slot.cancel()
Member:

...and systemd will send the RemoveMatch for us?

Member Author:

I think so, the test passes :) But how would I verify this?

Member Author:

Ok I have tested this with a simple systemd_ctypes example :)

import asyncio

from systemd_ctypes import Bus, EventLoopPolicy


def property_changed(message):
    print('Property changed:', message.get_body())
    return 0


async def main():
    system = Bus.default_system()
    slot1 = system.add_match("interface='org.freedesktop.DBus.Properties'", property_changed)
    slot2 = system.add_match("interface='org.freedesktop.DBus.Properties'", property_changed)
    await asyncio.sleep(10)
    print("canelling slot1")
    slot1.cancel()
    print("canelled slot1")
    await asyncio.sleep(1000)
    print(slot2)


asyncio.set_event_loop_policy(EventLoopPolicy())
asyncio.run(main())

Combined with busctl monitor, this shows:

‣ Type=method_call  Endian=l  Flags=0  Version=1 Cookie=2  Timestamp="Tue 2025-12-02 15:34:29.972283 UTC"
  Sender=:1.535  Destination=org.freedesktop.DBus  Path=/org/freedesktop/DBus  Interface=org.freedesktop.DBus  Member=AddMatch
  UniqueName=:1.535
  MESSAGE "s" {
          STRING "interface='org.freedesktop.DBus.Properties'";
  };

‣ Type=method_return  Endian=l  Flags=1  Version=1 Cookie=-1  ReplyCookie=2  Timestamp="Tue 2025-12-02 15:34:29.972293 UTC"
  Sender=org.freedesktop.DBus  Destination=:1.535
  MESSAGE "" {
  };

‣ Type=method_call  Endian=l  Flags=0  Version=1 Cookie=3  Timestamp="Tue 2025-12-02 15:34:29.972303 UTC"
  Sender=:1.535  Destination=org.freedesktop.DBus  Path=/org/freedesktop/DBus  Interface=org.freedesktop.DBus  Member=AddMatch
  UniqueName=:1.535
  MESSAGE "s" {
          STRING "interface='org.freedesktop.DBus.Properties'";
  };

‣ Type=method_return  Endian=l  Flags=1  Version=1 Cookie=-1  ReplyCookie=3  Timestamp="Tue 2025-12-02 15:34:29.972314 UTC"
  Sender=org.freedesktop.DBus  Destination=:1.535
  MESSAGE "" {
  };

So two matches are added! Then one is removed, so systemd does not combine them.

‣ Type=method_call  Endian=l  Flags=1  Version=1 Cookie=4  Timestamp="Tue 2025-12-02 15:35:39.333566 UTC"
  Sender=:1.536  Destination=org.freedesktop.DBus  Path=/org/freedesktop/DBus  Interface=org.freedesktop.DBus  Member=RemoveMatch
  UniqueName=:1.536
  MESSAGE "s" {
          STRING "interface='org.freedesktop.DBus.Properties'";
  };

for index, (r_key, slot) in enumerate(self.matches):
if r_key == r_string:
logger.debug('got match %s %d', self.matches, index)
slot.cancel()
Member:

We need to do "use counting" for matches. If "add-match" has been received 5 times, there need to be 5 matching "remove-match"es before the slot can be cancelled. See protocol.md.

Is this taken care of properly? I guess so, because the 5 "add-match"es have made five slots and five entries in self.matches, right?

Member Author:

It does, the old C code seems to do this:

add-match

  peer = ensure_peer (self, name);
  if (cockpit_dbus_rules_add (peer->rules,
                              path ? path : path_namespace,
                              path_namespace ? TRUE : FALSE,
                              interface, signal, arg0))
cockpit_dbus_rules_add (CockpitDBusRules *rules,
                        const gchar *path,
                        gboolean is_namespace,
                        const gchar *interface,
                        const gchar *member,
                        const gchar *arg0)
{
  RuleData *rule = NULL;

<snip>

  if (rule == NULL)
    {
      rule = g_slice_new0 (RuleData);
      rule->refs = 1;
      rule->path = g_strdup (path);
      rule->is_namespace = is_namespace;
      rule->interface = g_strdup (interface);
      rule->member = g_strdup (member);
      rule->arg0 = g_strdup (arg0);
      g_hash_table_add (rules->all, rule);
      recompile_rules (rules);
      return TRUE;
    }
  else
    {
      rule->refs++;
      return FALSE;
    }

So it is actually more efficient: it keeps one slot for the same match rule.

Removing:

  rule = rule_lookup (rules, path, is_namespace, interface, member, arg0);
  if (rule == NULL)
    return FALSE;

  rule->refs--;
  if (rule->refs == 0)
    {
      g_hash_table_remove (rules->all, rule);
      recompile_rules (rules);
      return TRUE;
    }

  return FALSE;

return r

def get_r_string(self, r):
return ','.join(f"{key}='{value}'" for key, value in r.items())
Member:

The order matters here. I think we need to explicitly sort the keys alphabetically or something.

Member Author:

Agreed.
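
For example, canonicalising by sorting the keys would make equivalent match dicts map to the same rule string (sketch only):

def get_r_string(r: dict) -> str:
    # Sort the keys so that equivalent match dicts always produce the same string.
    return ','.join(f"{key}='{value}'" for key, value in sorted(r.items()))

# Both of these now produce "interface='i',path='/p'":
#   get_r_string({'path': '/p', 'interface': 'i'})
#   get_r_string({'interface': 'i', 'path': '/p'})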

assert.equal(signals, 2, "received exactly two signals, watch was removed");
subscription.remove();
await dbus.call("/otree/frobber", "com.redhat.Cockpit.DBusTests.Frobber", "RequestSignalEmission", [0]);
assert.equal(signals, 2, "received exactly three signals, ");
Member:

"two signals"?

subscription = dbus.subscribe({
interface: "com.redhat.Cockpit.DBusTests.Frobber",
path: "/otree/frobber"
}, on_signal);
Member:

It would be good to subscribe twice with the same rule but two different signal handlers, and then check that both handlers are called, and that after removing one of them, the other is still called.

@jelly commented Dec 2, 2025

I've done my research and added a new test which subscribes twice on the same cockpit.DBusClient(); this leads to duplicate notify() events for all subscribers, as add-match is called twice. So the Python implementation has been "wrong" all this time: the C implementation used to group identical add-match requests into a single AddMatch D-Bus call and kept track of the extra "references".

We should do the same here.
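
Roughly, a Python equivalent of the C reference counting could look like this; this is only a sketch, and make_slot stands in for whatever actually registers the match on the bus:

rules: dict[str, list] = {}   # rule string -> [use count, slot]

def add_match(r_string: str, make_slot) -> None:
    if r_string in rules:
        rules[r_string][0] += 1                      # just bump the use count
    else:
        rules[r_string] = [1, make_slot(r_string)]   # one AddMatch on the bus

def remove_match(r_string: str) -> None:
    entry = rules[r_string]
    entry[0] -= 1
    if entry[0] == 0:
        entry[1].cancel()                            # one RemoveMatch on the bus
        del rules[r_string]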

@mvollmer commented Dec 4, 2025

> We should do the same here.

I am afraid that's not enough. It is nice to not send two exactly equivalent AddMatch requests to the D-Bus broker, but we also have to worry about overlapping but not exactly equivalent match expressions.

So if one part of the code requests

{ 
  interface: "com.redhat.Cockpit.DBusTests.Frobber",
  path: "/otree/frobber"
}

and another part requests the overlapping but not equivalent

{
   interface: "com.redhat.Cockpit.DBusTests.Frobber"
}

then we must send two AddMatch requests to the broker, and if we do that with two calls to sd_bus_add_match, then we will get two slot callback invocations for each single signal coming from the broker. The current code will then call send_json(signal_info) twice, which is incorrect.

Thus, I think we should not call send_json(signal_info) from the sd_bus_add_match slot callback. Instead, we should unconditionally install a single slot (per connection) with sd_bus_match_signal, and call send_json(signal_info) from its callback.

We can continue to use sd_bus_add_match to send AddMatch and RemoveMatch messages to the broker if that is convenient, but the callback of those slots should do nothing.

With this, we don't have to identify identical AddMatch requests to achieve correctness, although it might still be an important optimization, depending on how many identical "add-match" requests our JavaScript actually produces.

Does this make sense?
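
To make that concrete, here is a rough sketch; send_json and the payload shape are placeholders, not the bridge's real code:

import json

def send_json(payload: dict) -> None:
    print(json.dumps(payload))   # stand-in for the channel's real send

def noop_match_handler(message) -> int:
    # Per-rule slot callback: deliberately does nothing.  The slot only
    # exists so that AddMatch/RemoveMatch reach the broker.
    return 0

def dispatch_signal(message) -> int:
    # Installed once per connection; called once per incoming signal, no
    # matter how many overlapping rules matched it.
    send_json({'signal': message.get_body()})   # simplified payload
    return 0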

@mvollmer commented Dec 9, 2025

> Instead, we should unconditionally install a single slot (per connection) with sd_bus_match_signal

I misunderstood what sd_bus_match_signal does; I thought it does not send an AddMatch message to the broker. In fact, it is just a convenience wrapper around sd_bus_add_match that formats the match expression for you.

We can use sd_bus_add_filter instead. See allisonkarlitskaya/systemd_ctypes#38 for an old PR that adds the necessary bits to systemd_ctypes. I have a feeling I have been down this path before, but then we decided not to proceed for some reason.
