
Commit 4b7ff45

dverbeir authored and VictorNogueiraRio committed
bpf: Zero-fill re-used per-cpu map element
Zero-fill element values for all other cpus than the current one, just as when not using prealloc. This is the only way the bpf program can ensure known initial values for all cpus ('onallcpus' cannot be set when coming from the bpf program).

The scenario is: a bpf program inserts some elements into a per-cpu map, then deletes some (or userspace does). When it later adds new elements using bpf_map_update_elem(), the bpf program can only set the value of the new elements for the current cpu. When prealloc is enabled, previously deleted elements are re-used. Without the fix, values for other cpus remain whatever they were when the re-used entry was previously freed.

A selftest is added to validate correct operation in the above scenario as well as in the case of LRU per-cpu map element re-use.

Fixes: 6c90598 ("bpf: pre-allocate hash map elements")
Signed-off-by: David Verbeiren <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Matthieu Baerts <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
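To make the failure mode concrete, here is a minimal BPF-side sketch of the scenario described above (a hypothetical illustration, not the selftest this commit adds; the map, section, and function names are invented). PERCPU_HASH maps are preallocated unless BPF_F_NO_PREALLOC is set, so an insert from the program can re-use an element that was previously populated and then freed:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
	__uint(max_entries, 16);
	__type(key, __u32);
	__type(value, __u64);
} pcpu_map SEC(".maps");

SEC("tp/syscalls/sys_enter_getpgid")
int insert_elem(void *ctx)
{
	__u32 key = 1;
	__u64 val = 42;

	/* From bpf program context this sets the value for the current
	 * cpu only ('onallcpus' is false). If 'key' re-uses a
	 * preallocated element that userspace previously populated on
	 * all cpus and then deleted, the other cpus' slots kept their
	 * stale contents before this fix; with it, they read back as
	 * zero.
	 */
	bpf_map_update_elem(&pcpu_map, &key, &val, BPF_ANY);
	return 0;
}

char _license[] SEC("license") = "GPL";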
1 parent 38ca23a commit 4b7ff45

File tree: 1 file changed (+28/-2 lines)


kernel/bpf/hashtab.c

Lines changed: 28 additions & 2 deletions
@@ -703,6 +703,32 @@ static void pcpu_copy_value(struct bpf_htab *htab, void __percpu *pptr,
 	}
 }
 
+static void pcpu_init_value(struct bpf_htab *htab, void __percpu *pptr,
+			    void *value, bool onallcpus)
+{
+	/* When using prealloc and not setting the initial value on all cpus,
+	 * zero-fill element values for other cpus (just as what happens when
+	 * not using prealloc). Otherwise, bpf program has no way to ensure
+	 * known initial values for cpus other than current one
+	 * (onallcpus=false always when coming from bpf prog).
+	 */
+	if (htab_is_prealloc(htab) && !onallcpus) {
+		u32 size = round_up(htab->map.value_size, 8);
+		int current_cpu = raw_smp_processor_id();
+		int cpu;
+
+		for_each_possible_cpu(cpu) {
+			if (cpu == current_cpu)
+				bpf_long_memcpy(per_cpu_ptr(pptr, cpu), value,
+						size);
+			else
+				memset(per_cpu_ptr(pptr, cpu), 0, size);
+		}
+	} else {
+		pcpu_copy_value(htab, pptr, value, onallcpus);
+	}
+}
+
 static bool fd_htab_map_needs_adjust(const struct bpf_htab *htab)
 {
 	return htab->map.map_type == BPF_MAP_TYPE_HASH_OF_MAPS &&
@@ -778,7 +804,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 			}
 		}
 
-		pcpu_copy_value(htab, pptr, value, onallcpus);
+		pcpu_init_value(htab, pptr, value, onallcpus);
 
 		if (!prealloc)
 			htab_elem_set_ptr(l_new, key_size, pptr);
@@ -1033,7 +1059,7 @@ static int __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
 		pcpu_copy_value(htab, htab_elem_get_ptr(l_old, key_size),
 				value, onallcpus);
 	} else {
-		pcpu_copy_value(htab, htab_elem_get_ptr(l_new, key_size),
+		pcpu_init_value(htab, htab_elem_get_ptr(l_new, key_size),
 				value, onallcpus);
 		hlist_nulls_add_head_rcu(&l_new->hash_node, head);
 		l_new = NULL;
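For completeness, here is a hedged userspace sketch in the spirit of the selftest the commit message mentions (the actual test's code and names may differ; this function is illustrative). From userspace, bpf_map_update_elem() on a per-cpu map supplies values for all possible cpus ('onallcpus' is true), and bpf_map_lookup_elem() returns one 8-byte-padded slot per possible cpu, matching the round_up(value_size, 8) in the kernel code above:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Fill 'key' on all cpus with a nonzero pattern, free the element, let
 * the bpf program re-insert it, then check that every cpu slot other
 * than 'updating_cpu' reads back as zero.
 */
static int check_reused_elem_zeroed(int map_fd, __u32 key, int updating_cpu)
{
	int nr_cpus = libbpf_num_possible_cpus();
	int cpu, err = 0;

	if (nr_cpus <= 0)
		return -1;

	__u64 vals[nr_cpus];

	for (cpu = 0; cpu < nr_cpus; cpu++)
		vals[cpu] = 0xdeadbeef;
	if (bpf_map_update_elem(map_fd, &key, vals, BPF_ANY) ||
	    bpf_map_delete_elem(map_fd, &key))
		return -1;

	/* Trigger the program sketched earlier (attached to the
	 * sys_enter_getpgid tracepoint) so it re-inserts 'key' from
	 * bpf context, re-using the just-freed preallocated element.
	 */
	syscall(SYS_getpgid, 0);

	if (bpf_map_lookup_elem(map_fd, &key, vals))
		return -1;
	for (cpu = 0; cpu < nr_cpus; cpu++) {
		if (cpu == updating_cpu)
			continue;
		if (vals[cpu] != 0) {
			fprintf(stderr, "cpu %d: stale value 0x%llx\n",
				cpu, (unsigned long long)vals[cpu]);
			err = -1;
		}
	}
	return err;
}

A real test would also pin itself to a single cpu with sched_setaffinity() before the trigger, so that updating_cpu is known and the one slot the program legitimately wrote can be told apart from the slots that must now be zero.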
