-
**Describe the issue you are experiencing**
Running as a Docker container, the radio seems to work normally for approx. 5 minutes after start. Past the 5-minute mark after a reboot, sending a command through the radio from an HM device still seems to work, albeit very slowly; sending to an HMIP device seems broken, and listening by the Raspberry Pi seems broken as well.

**Describe the behavior you expected**
A working radio, both for send and receive.

**Steps to reproduce the issue**
**What is the version this bug report is based on?**
3.79.6.20241122

**Which base platform are you running?**
rpi4 (RaspberryPi4, ARM64/aarch64)

**Which HomeMatic/homematicIP radio module are you using?**
HM-MOD-RPI-PCB

**Anything in the logs that might be useful for us?**
Docker: `Version: 27.3.1`
Host: `Linux rpihm 6.1.21-v8+ #1642 SMP PREEMPT Mon Apr 3 17:24:16 BST 2023 aarch64 GNU/Linux`
dmesg shows:
```
...
[ 54.026317] eq3loop: created slave mmd_hmip
[ 54.026965] eq3loop: created slave mmd_bidcos
[ 54.089354] eth2: renamed from veth7af4ea5
[ 54.118730] IPv6: ADDRCONF(NETDEV_CHANGE): veth2c88575: link becomes ready
[ 54.118871] docker_gwbridge: port 4(veth2c88575) entered blocking state
[ 54.118884] docker_gwbridge: port 4(veth2c88575) entered forwarding state
[ 58.027434] eq3loop: eq3loop_open_slave() mmd_bidcos
[ 58.027497] eq3loop: eq3loop_close_slave() mmd_bidcos
[ 58.032235] eq3loop: eq3loop_open_slave() mmd_hmip
[ 58.032409] eq3loop: eq3loop_close_slave() mmd_hmip
[ 58.167238] eq3loop: eq3loop_open_slave() mmd_bidcos
[ 87.104816] eq3loop: eq3loop_open_slave() mmd_hmip
[ 87.105022] eq3loop: eq3loop_close_slave() mmd_hmip
[ 87.108711] eq3loop: eq3loop_open_slave() mmd_hmip
[ 87.108901] eq3loop: eq3loop_close_slave() mmd_hmip
[ 87.110634] eq3loop: eq3loop_open_slave() mmd_hmip
[ 87.110843] eq3loop: eq3loop_close_slave() mmd_hmip
[ 87.120854] eq3loop: eq3loop_open_slave() mmd_hmip
[ 356.692669] raw-uart raw-uart: generic_raw_uart_open(): Too many open connections.
[ 356.692904] sysfs: cannot create duplicate filename '/devices/virtual/eq3loop/mmd_hmip'
[ 356.692914] CPU: 3 PID: 8836 Comm: multimacd Tainted: G C O 6.1.21-v8+ #1642
[ 356.692921] Hardware name: Raspberry Pi 4 Model B Rev 1.4 (DT)
[ 356.692925] Call trace:
[ 356.692927] dump_backtrace+0x120/0x130
[ 356.692937] show_stack+0x20/0x30
[ 356.692942] dump_stack_lvl+0x8c/0xb8
[ 356.692950] dump_stack+0x18/0x34
[ 356.692955] sysfs_warn_dup+0x6c/0x88
[ 356.692961] sysfs_create_dir_ns+0xe8/0x100
[ 356.692965] kobject_add_internal+0x98/0x218
[ 356.692971] kobject_add+0xa0/0x108
[ 356.692975] device_add+0xf0/0x748
[ 356.692981] device_create_groups_vargs+0xe8/0x150
[ 356.692985] device_create+0x6c/0x90
[ 356.692989] eq3loop_ioctl+0x174/0x2dc [eq3_char_loop]
[ 356.693004] __arm64_compat_sys_ioctl+0x168/0x180
[ 356.693011] invoke_syscall+0x4c/0x110
[ 356.693018] el0_svc_common.constprop.3+0xfc/0x120
[ 356.693024] do_el0_svc_compat+0x24/0x48
[ 356.693030] el0_svc_compat+0x30/0x88
[ 356.693036] el0t_32_sync_handler+0xe4/0x100
[ 356.693042] el0t_32_sync+0x190/0x194
[ 356.693049] kobject_add_internal failed for mmd_hmip with -EEXIST, don't try to register things with the same name in the same directory.
[ 356.693058] eq3loop: created slave mmd_hmip
[ 356.693412] sysfs: cannot create duplicate filename '/devices/virtual/eq3loop/mmd_bidcos'
[ 356.693420] CPU: 3 PID: 8836 Comm: multimacd Tainted: G C O 6.1.21-v8+ #1642
[ 356.693426] Hardware name: Raspberry Pi 4 Model B Rev 1.4 (DT)
[ 356.693429] Call trace:
[ 356.693431] dump_backtrace+0x120/0x130
[ 356.693437] show_stack+0x20/0x30
[ 356.693441] dump_stack_lvl+0x8c/0xb8
[ 356.693446] dump_stack+0x18/0x34
[ 356.693451] sysfs_warn_dup+0x6c/0x88
[ 356.693456] sysfs_create_dir_ns+0xe8/0x100
[ 356.693460] kobject_add_internal+0x98/0x218
[ 356.693464] kobject_add+0xa0/0x108
[ 356.693468] device_add+0xf0/0x748
[ 356.693472] device_create_groups_vargs+0xe8/0x150
[ 356.693476] device_create+0x6c/0x90
[ 356.693480] eq3loop_ioctl+0x174/0x2dc [eq3_char_loop]
[ 356.693491] __arm64_compat_sys_ioctl+0x168/0x180
[ 356.693498] invoke_syscall+0x4c/0x110
[ 356.693504] el0_svc_common.constprop.3+0xfc/0x120
[ 356.693510] do_el0_svc_compat+0x24/0x48
[ 356.693516] el0_svc_compat+0x30/0x88
[ 356.693522] el0t_32_sync_handler+0xe4/0x100
[ 356.693528] el0t_32_sync+0x190/0x194
[ 356.693533] kobject_add_internal failed for mmd_bidcos with -EEXIST, don't try to register things with the same name in the same directory.
[ 356.693540] eq3loop: created slave mmd_bidcos
[ 366.855332] eq3loop: eq3loop_open_slave() mmd_bidcos
[ 464.659375] eq3loop: eq3loop_open_slave() mmd_hmip
[ 464.665511] eq3loop: eq3loop_open_slave() mmd_hmip
[ 464.666409] eq3loop: eq3loop_open_slave() mmd_hmip
[ 464.667384] eq3loop: eq3loop_open_slave() mmd_hmip
...
```
-
Despite reinstalling everything according to the manual procedure, the issue remains fully reproducible. The system functions perfectly for the first 5 minutes only. After that, it becomes erratic and slow, and the dmesg log shows the duplicate-filename errors quoted above.
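For anyone trying to catch the moment it breaks, a minimal sketch of how one might watch the kernel log live (this is not from the original report; the grep pattern is only an illustration):

```bash
# Follow the kernel ring buffer and surface the relevant eq3loop /
# raw-uart messages as they appear (around the ~5 minute mark).
dmesg -w | grep -E 'eq3loop|raw-uart|duplicate filename'
```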
-
Well, this smells like you have multiple Docker containers trying to access the same device, or something like that. In fact, this definitely smells like a local installation issue on your end, so please seek help in the discussion fora instead.
-
@jens-maus thanks for the feedback. I have been using the same setup (apart from updates) for years, but following your advice I stopped all other containers and checked the running processes.
REM: one sees the 5-minute gap between the start times of the two processes.
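The exact command and output were not preserved above; as a hypothetical example, a process listing like the following would expose the two multimacd instances and their start times:

```bash
# List every multimacd process with its PID and full start time.
# The [m] bracket trick keeps the grep process itself out of the output.
ps -eo pid,lstart,args | grep '[m]ultimacd'
```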
-
Indeed, it’s quite strange. The same behavior occurs even with an unconfigured setup. I’ve searched for what triggers the second instance but haven’t found anything. I’m considering creating a wrapper around multimacd to prevent two instances from running simultaneously.
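A minimal sketch of such a wrapper, assuming the real binary has been moved aside to a hypothetical /bin/multimacd.real and flock(1) is available in the container:

```bash
#!/bin/sh
# Single-instance wrapper: flock -n fails immediately if the lock is
# already held, so a second invocation exits instead of starting a
# competing multimacd instance.
exec flock -n /var/run/multimacd.lock /bin/multimacd.real "$@"
```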
-
Are you suggesting that piVCCU is no longer necessary? It is still mentioned in the manual installation process here. As a temporary workaround, I manually kill the second process that spawns within 5 minutes of the first, and then everything seems to work fine. While I could automate this, it really shouldn’t be happening in the first place.
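If one did want to automate that workaround, a hedged sketch using pgrep (hypothetical, and obviously a band-aid rather than a fix):

```bash
# If more than one multimacd is running, kill the newest instance and
# keep the oldest; -c counts matching processes, -n selects the newest PID.
if [ "$(pgrep -c multimacd)" -gt 1 ]; then
    kill "$(pgrep -n multimacd)"
fi
```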