assignments/false-sharing/assignment.md
In all of these examples `tid` is the thread id (starting at `0` up to `threads - 1`),
`threads` is the number of threads that we're using, and `length` is the number of elements in the array.
Also, in all examples we will assume that the threads can *race* on the `result`, so we must declare it as a `std::atomic` to make sure that all accesses are completed consistently.
You can find the Makefile to compile the binaries under `workloads/array_sum/naive-native`.
Run `make all-native` to compile the binaries.
You can use these binaries to run the program on real hardware.
Here is an example of how you could run the binary on native hardware.
This example sums up an array of `16384` elements with `4` threads.
```shell
./naive-native 16384 4
```
**CAUTION**: You **SHOULD NOT** run `workloads/array_sum/naive-gem5` on real hardware.
Here is an example of how you should create an object of this workload.
This example creates a workload of this binary that sums up `16384` elements with `4` threads.
```python
from workloads.array_sum_workload import NaiveArraySumWorkload
```