cmd/gpu_nfdhook/README.md (+1 −67)

@@ -25,70 +25,4 @@ by the NFD, allowing for finer grained resource management for GPU-using PODs.

In the NFD deployment, the hook requires the `/host-sys` folder to have the host `/sys` folder content mounted. Write access is not necessary.

## GPU memory

GPU memory amount is read from sysfs `gt/gt*` files and turned into a label. There are two supported environment variables named `GPU_MEMORY_OVERRIDE` and `GPU_MEMORY_RESERVED`. Both are expected to hold numeric byte amounts. For systems with older kernel drivers or GPUs which do not support reading the GPU memory amount, the `GPU_MEMORY_OVERRIDE` environment variable value is turned into a GPU memory amount label instead of a read value. The `GPU_MEMORY_RESERVED` value is subtracted from the GPU memory amount found in sysfs.

## Default labels

The following labels are created by default. You may turn numeric labels into extended resources with NFD.

| name | type | description |
|------|------|-------------|
| `gpu.intel.com/millicores` | number | Node GPU count * 1000. Can be used as a finer-grained shared execution fraction. |
| `gpu.intel.com/memory.max` | number | Sum of detected [GPU memory amounts](#gpu-memory) in bytes, OR environment variable value * GPU count. |
| `gpu.intel.com/cards` | string | List of card names separated by '`.`'. The names match host `card*` folders under `/sys/class/drm/`. Deprecated, use `gpu-numbers`. |
| `gpu.intel.com/gpu-numbers` | string | List of numbers separated by '`.`'. The numbers correspond to device file numbers for the primary nodes of the given GPUs in the kernel DRI subsystem, listed as `/dev/dri/card<num>` in devfs and `/sys/class/drm/card<num>` in sysfs. |
| `gpu.intel.com/tiles` | number | Sum of all detected GPU tiles in the system. |
| `gpu.intel.com/numa-gpu-map` | string | List of NUMA node to GPU mappings. |

If the value of the `gpu-numbers` label does not fit into the 63-character length limit, you will also get labels `gpu-numbers2`, `gpu-numbers3`, and so on, until all the GPU numbers have been labeled.

The tile count `gpu.intel.com/tiles` describes the total number of tiles on the system. The system is expected to be homogeneous, so the number of tiles per GPU can be calculated by dividing the tile count by the GPU count.

The `numa-gpu-map` label is a list of NUMA-to-GPU mapping items separated by `_`. Each item combines a NUMA node id with a list of GPU indices. For example, `0-1.2.3` means that NUMA node 0 has GPUs 1, 2 and 3. A more complex example is `0-0.1_1-3.4`, where NUMA node 0 has GPUs 0 and 1, and NUMA node 1 has GPUs 3 and 4. As with `gpu-numbers`, this label will be extended to multiple labels if the length of the value exceeds the max label length.

## PCI-groups (optional)

GPUs which share the same PCI paths under `/sys/devices/pci*` can be grouped into a label. GPU numbers are separated by '`.`' and groups are separated by '`_`'. The label is created only if the environment variable `GPU_PCI_GROUPING_LEVEL` has a value greater than zero. GPUs are considered to belong to the same group if at least as many identical folder names are found for the GPUs as the value of the environment variable. Counting starts from the folder name which starts with `pci`.

For example, the SG1 card has 4 GPUs, which end up sharing PCI folder names under `/sys/devices`. With a `GPU_PCI_GROUPING_LEVEL` of 3, a node with two such SG1 cards could produce a `pci-groups` label with a value of `0.1.2.3_4.5.6.7`.

| name | type | description |
|------|------|-------------|
| `gpu.intel.com/pci-groups` | string | List of PCI groups separated by '`_`'. GPU numbers in the groups are separated by '`.`'. The numbers correspond to device file numbers for the primary nodes of the given GPUs in the kernel DRI subsystem, listed as `/dev/dri/card<num>` in devfs and `/sys/class/drm/card<num>` in sysfs. |

If the value of the `pci-groups` label does not fit into the 63-character length limit, you will also get labels `pci-groups2`, `pci-groups3`, and so on, until all the PCI groups have been labeled.

## Capability labels (optional)

Capability labels are created from information found inside debugfs, and therefore unfortunately require running the NFD worker as root. Since debugfs is not guaranteed to be stable, these labels are not guaranteed to be stable either. If you do not need them, simply do not run the NFD worker as root; that is also more secure. Depending on your kernel driver, running the NFD hook as root may introduce the following labels:

| name | type | description |
|------|------|-------------|
| `gpu.intel.com/platform_gen` | string | GPU platform generation name, typically an integer. Deprecated. |
| `gpu.intel.com/media_version` | string | GPU platform media pipeline generation name, typically a number. Deprecated. |
| `gpu.intel.com/graphics_version` | string | GPU platform graphics/compute pipeline generation name, typically a number. Deprecated. |
| `gpu.intel.com/platform_<PLATFORM_NAME>.count` | number | GPU count for the named platform. |
| `gpu.intel.com/platform_<PLATFORM_NAME>.tiles` | number | GPU tile count in the GPUs of the named platform. |
| `gpu.intel.com/platform_<PLATFORM_NAME>.present` | string | "true" for indicating the presence of the GPU platform. |

## Limitations

For the above to work as intended, GPUs on the same node must be identical in their capabilities.

For detailed info about the labels created by the NFD hook, see the [labels documentation](../gpu_plugin/labels.md).

cmd/gpu_plugin/README.md

If installed with NFD and started with resource management, the plugin will export a set of labels for the node. For detailed info, see the [labeling documentation](./labels.md).

cmd/gpu_plugin/labels.md

GPU labels originate from two main sources: NFD rules and the GPU plugin (& NFD hook).

## NFD rules

An NFD rule is a method to instruct NFD to add certain label(s) to a node based on the devices detected on it. There is a generic rule to identify all Intel GPUs; it will add labels for each PCI device type. For example, a Tigerlake iGPU (PCI ID 0x9a49) will show up as:

```
gpu.intel.com/device-id.0300-9a49.count=1
gpu.intel.com/device-id.0300-9a49.present=true
```

For data center GPUs, there are more specific rules which will create additional labels for GPU family, product and device count. For example, Flex 170:

```
gpu.intel.com/device.count=1
gpu.intel.com/family=Flex_Series
gpu.intel.com/product=Flex_170
```

For Max 1550:

```
gpu.intel.com/device.count=2
gpu.intel.com/family=Max_Series
gpu.intel.com/product=Max_1550
```

Currently covered platforms/devices are: Flex 140, Flex 170, Max 1100 and Max 1550.

To identify other GPUs, see the graphics processor table [here](https://dgpu-docs.intel.com/devices/hardware-table.html#graphics-processor-table).

## GPU Plugin and NFD hook

In the GPU plugin, these labels are only applied when [Resource Management](README.md#fractional-resources-details) is enabled. With the NFD hook, labels are created regardless of how the GPU plugin is configured.

Numeric labels are converted into extended resources for the node (with NFD) and other labels are used directly by [GPU Aware Scheduling (GAS)](https://github.com/intel/platform-aware-scheduling/tree/master/gpu-aware-scheduling). Extended resources should only be used with GAS, as the Kubernetes scheduler doesn't properly handle resource allocations with multiple GPUs.
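
To see where the converted extended resources end up, you can inspect a node's capacity. Below is a minimal client-go sketch, assuming in-cluster configuration; the node name `worker-1` is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config; use clientcmd instead for out-of-cluster access.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// "worker-1" is an illustrative node name.
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "worker-1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Numeric labels converted into extended resources appear in node capacity.
	for name, quantity := range node.Status.Capacity {
		if strings.HasPrefix(string(name), "gpu.intel.com/") {
			fmt.Printf("%s = %s\n", name, quantity.String())
		}
	}
}
```
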
### Default labels

The following labels are created by default.

| name | type | description |
|------|------|-------------|
| `gpu.intel.com/millicores` | number | Node GPU count * 1000. |
| `gpu.intel.com/memory.max` | number | Sum of detected [GPU memory amounts](#gpu-memory) in bytes, OR environment variable value * GPU count. |
| `gpu.intel.com/cards` | string | List of card names separated by '`.`'. The names match host `card*` folders under `/sys/class/drm/`. Deprecated, use `gpu-numbers`. |
| `gpu.intel.com/gpu-numbers` | string | List of numbers separated by '`.`'. The numbers correspond to device file numbers for the primary nodes of the given GPUs in the kernel DRI subsystem, listed as `/dev/dri/card<num>` in devfs and `/sys/class/drm/card<num>` in sysfs. |
| `gpu.intel.com/tiles` | number | Sum of all detected GPU tiles in the system. |
| `gpu.intel.com/numa-gpu-map` | string | List of NUMA node to GPU mappings. |

If the value of the `gpu-numbers` label does not fit into the 63-character length limit, you will also get labels `gpu-numbers2`, `gpu-numbers3`, and so on, until all the GPU numbers have been labeled.
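
Consumers of these labels need to join the pieces back together before parsing. A minimal Go sketch of that; the helper name is illustrative, and plain concatenation at the seams is an assumption rather than a documented guarantee:

```go
package main

import (
	"fmt"
	"strings"
)

// joinSplitLabel joins a label value that was split across "<base>",
// "<base>2", "<base>3", ... keys due to the 63-character value limit.
// Assumes the pieces concatenate directly, with no extra separator.
func joinSplitLabel(labels map[string]string, base string) string {
	var sb strings.Builder
	for i := 1; ; i++ {
		key := base
		if i > 1 {
			key = fmt.Sprintf("%s%d", base, i)
		}
		piece, ok := labels[key]
		if !ok {
			break
		}
		sb.WriteString(piece)
	}
	return sb.String()
}

func main() {
	labels := map[string]string{
		"gpu.intel.com/gpu-numbers":  "0.1.2",
		"gpu.intel.com/gpu-numbers2": ".3.4",
	}
	full := joinSplitLabel(labels, "gpu.intel.com/gpu-numbers")
	fmt.Println(strings.Split(full, ".")) // [0 1 2 3 4]
}
```
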
The tile count `gpu.intel.com/tiles` describes the total number of tiles on the system. The system is expected to be homogeneous, so the number of tiles per GPU can be calculated by dividing the tile count by the GPU count.

The `numa-gpu-map` label is a list of NUMA-to-GPU mapping items separated by `_`. Each item combines a NUMA node id with a list of GPU indices. For example, `0-1.2.3` means that NUMA node 0 has GPUs 1, 2 and 3. A more complex example is `0-0.1_1-3.4`, where NUMA node 0 has GPUs 0 and 1, and NUMA node 1 has GPUs 3 and 4. As with `gpu-numbers`, this label will be extended to multiple labels if the length of the value exceeds the max label length.
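
To make the format concrete, here is a small Go sketch that parses such a value into a NUMA-node-to-GPUs map; the function name is illustrative, not part of the plugin:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseNumaGPUMap parses a numa-gpu-map value such as "0-0.1_1-3.4" into a
// map from NUMA node id to the GPU indices attached to it.
func parseNumaGPUMap(value string) (map[int][]int, error) {
	result := make(map[int][]int)
	for _, item := range strings.Split(value, "_") {
		// Each item is "<numa-node>-<gpu>.<gpu>...".
		nodeGPUs := strings.SplitN(item, "-", 2)
		if len(nodeGPUs) != 2 {
			return nil, fmt.Errorf("malformed mapping item %q", item)
		}
		node, err := strconv.Atoi(nodeGPUs[0])
		if err != nil {
			return nil, err
		}
		for _, s := range strings.Split(nodeGPUs[1], ".") {
			gpu, err := strconv.Atoi(s)
			if err != nil {
				return nil, err
			}
			result[node] = append(result[node], gpu)
		}
	}
	return result, nil
}

func main() {
	m, err := parseNumaGPUMap("0-0.1_1-3.4")
	if err != nil {
		panic(err)
	}
	fmt.Println(m) // map[0:[0 1] 1:[3 4]]
}
```
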
### PCI-groups (optional)

GPUs which share the same PCI paths under `/sys/devices/pci*` can be grouped into a label. GPU numbers are separated by '`.`' and groups are separated by '`_`'. The label is created only if the environment variable `GPU_PCI_GROUPING_LEVEL` has a value greater than zero. GPUs are considered to belong to the same group if at least as many identical folder names are found for the GPUs as the value of the environment variable. Counting starts from the folder name which starts with `pci`.

For example, the SG1 card has 4 GPUs, which end up sharing PCI folder names under `/sys/devices`. With a `GPU_PCI_GROUPING_LEVEL` of 3, a node with two such SG1 cards could produce a `pci-groups` label with a value of `0.1.2.3_4.5.6.7`.
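
A minimal Go sketch of the grouping rule, assuming each GPU's sysfs device path has already been resolved; the paths and the helper name are illustrative, not the hook's exact code:

```go
package main

import (
	"fmt"
	"strings"
)

// pciGroupKey returns the grouping key for one GPU: the first `level` folder
// names of its sysfs device path, counting from the first folder that starts
// with "pci". GPUs with equal keys belong to the same pci-group.
func pciGroupKey(devicePath string, level int) string {
	parts := strings.Split(strings.Trim(devicePath, "/"), "/")
	for i, folder := range parts {
		if strings.HasPrefix(folder, "pci") {
			end := i + level
			if end > len(parts) {
				end = len(parts)
			}
			return strings.Join(parts[i:end], "/")
		}
	}
	return "" // no pci folder found; the GPU stays ungrouped
}

func main() {
	// Illustrative paths: GPUs 0 and 1 behind the same bridge, GPU 2 elsewhere.
	paths := map[int]string{
		0: "/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0",
		1: "/sys/devices/pci0000:00/0000:00:01.0/0000:02:00.0",
		2: "/sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0",
	}
	groups := map[string][]int{}
	for gpu, p := range paths {
		key := pciGroupKey(p, 2)
		groups[key] = append(groups[key], gpu)
	}
	fmt.Println(groups)
}
```
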
| name | type | description |
|------|------|-------------|
| `gpu.intel.com/pci-groups` | string | List of PCI groups separated by '`_`'. GPU numbers in the groups are separated by '`.`'. The numbers correspond to device file numbers for the primary nodes of the given GPUs in the kernel DRI subsystem, listed as `/dev/dri/card<num>` in devfs and `/sys/class/drm/card<num>` in sysfs. |

If the value of the `pci-groups` label does not fit into the 63-character length limit, you will also get labels `pci-groups2`, `pci-groups3`, and so on, until all the PCI groups have been labeled.

### Limitations

For the above to work as intended, GPUs on the same node must be identical in their capabilities.

### GPU memory

GPU memory amount is read from sysfs `gt/gt*` files and turned into a label. There are two supported environment variables named `GPU_MEMORY_OVERRIDE` and `GPU_MEMORY_RESERVED`. Both are expected to hold numeric byte amounts. For systems with older kernel drivers or GPUs which do not support reading the GPU memory amount, the `GPU_MEMORY_OVERRIDE` environment variable value is turned into a GPU memory amount label instead of a read value. The `GPU_MEMORY_RESERVED` value is subtracted from the GPU memory amount found in sysfs.
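
A small Go sketch of the resulting arithmetic; the function and the per-GPU fallback behavior are illustrative simplifications, not the hook's exact code:

```go
package main

import "fmt"

// memoryMaxValue computes a gpu.intel.com/memory.max style total.
// sysfsBytes holds per-GPU amounts read from gt/gt* files, with 0 meaning
// "could not be read". Unreadable GPUs fall back to the GPU_MEMORY_OVERRIDE
// value; GPU_MEMORY_RESERVED is subtracted from amounts read from sysfs.
func memoryMaxValue(sysfsBytes []uint64, override, reserved uint64) uint64 {
	var total uint64
	for _, b := range sysfsBytes {
		switch {
		case b == 0:
			total += override
		case b > reserved:
			total += b - reserved
		}
	}
	return total
}

func main() {
	// Two 16 GiB GPUs read from sysfs plus one unreadable GPU using an
	// 8 GiB override, with 512 MiB reserved per readable GPU.
	const GiB, MiB = 1 << 30, 1 << 20
	fmt.Println(memoryMaxValue([]uint64{16 * GiB, 16 * GiB, 0}, 8*GiB, 512*MiB))
}
```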