Use multiple computers on a network to run a single GPU.js kernel!
The package is available on npm and can be installed using npm/yarn:
npm i gpujs-hive-compute
OR
yarn add gpujs-hive-compute
See browser-hive-compute.
There is no default CLI for this because building one is really easy. See examples/squares.js and examples/helper-cli.js.
You can clone the repository, run yarn install and yarn build, then run node examples/helper-cli.js to use a simple CLI for the Helper. You can use examples/squares.js as a template for real CLI usage of the library, or use it for testing.
NOTE: This library uses WebSockets for communication because they are standard, browser-compatible and easy to use.
The library has two core components: the Helper and the Leader. The Leader is the main device, which you control, and which asks the other connected devices, i.e. the Helpers, to build and run parts of the kernel. The Leader-side code reads just like any GPU.js kernel; the library handles all the splitting of work between devices. The Helper and Leader can communicate as long as they are on the same local network (or if the Leader's public IP and port are exposed and known).
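For intuition, the work splitting can be pictured as partitioning the kernel's output range among the connected helpers. The sketch below is purely illustrative (splitOutput is a hypothetical function, not part of the library's API or its actual algorithm):

```javascript
// Hypothetical sketch: divide an output of `outputSize` elements among
// `numHelpers` helpers as evenly as possible. Each helper would compute
// the slice [offset, offset + size) of the final output.
function splitOutput(outputSize, numHelpers) {
  const base = Math.floor(outputSize / numHelpers);
  const remainder = outputSize % numHelpers;
  const chunks = [];
  let offset = 0;
  for (let i = 0; i < numHelpers; i++) {
    const size = base + (i < remainder ? 1 : 0); // spread the remainder over the first helpers
    chunks.push({ offset, size });
    offset += size;
  }
  return chunks;
}

console.log(splitOutput(20, 3)); // chunks of size 7, 7 and 6
```

The Leader would then send each helper its slice, collect the partial results, and stitch them back together into the full output.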
Example Leader: (This will work with TypeScript as well)
const { hiveRun } = require('gpujs-hive-compute');
const GPU = require('gpu.js'); // This is required to be installed separately
const gpu = new GPU(); // Instantiate
hiveRun({
  gpu: gpu, // give the GPU object
  func: function (arg1, arg2) {
    return arg1 + arg2; // A normal GPU.js kernel function
  },
  options: {
    output: [20] // Standard GPU.js kernel settings/options
  },
  onWaitingForHelpers: url => console.log(url),
  doContinueOnHelperJoin: (numHelpers) => { // Fired whenever a new helper joins; return true to run the kernel
    return numHelpers > 3; // Once more than 3 helpers have joined, the kernel runs; no new helper can join while it runs
  },
  inputs: [ // Inputs for the kernel; leave blank if there are no inputs
    5, // arg1
    6  // arg2
  ],
}).then(output => console.log(output)).catch(e => console.log(e)); // Or use async...await
See examples/squares.js.
Example Helper: (This will work with TypeScript as well)
const { hiveHelp } = require('gpujs-hive-compute');
const GPU = require('gpu.js'); // This is required to be installed separately
const gpu = new GPU(); // Instantiate
hiveHelp({
  gpu: gpu,
  url: `ws://192.168.0.10:8782` // This URL will be logged to the console by the Leader and will differ from device to device
}).then(() => console.log('successfully converted')).catch(e => console.log(e)); // Or use async...await
The library exports the following functions:
hiveRun(options)

Where options is an object with the following properties:

- gpu (GPU): Instance of a GPU.js GPU object.
- func (Function): The GPU.js kernel function.
- port (number): The port for the WebSocket server. (8782 by default)
- kernelOptions (Object): GPU.js kernel settings/options.
- onWaitingForHelpers(url) => void (Function): A callback that is fired when the hive is accepting helpers; the only parameter is the join URL.
- doContinueOnHelperJoin(numHelpers) => boolean (Function): A callback that is fired whenever a new helper joins. The parameter numHelpers is the number of helpers currently active. Return true to run the kernel, or false to wait for more helpers to join. No new helper can join while the kernel is running.
- logFunction(...args) => void (Function): A custom log function if you don't want console logs. (console.log by default)
- inputs (Array): An array of kernel inputs in the form [arg1, arg2, arg3].

Returns a promise with the output or an error.
hiveHelp(options)

Where options is an object with the following properties:

- gpu (GPU): Instance of a GPU.js GPU object.
- url (string): The WebSocket URL used by the Leader and Helper to communicate. The URL will be logged to the console by the Leader, e.g. ws://192.168.0.10:8782.
- logFunction(...args) => void (Function): A custom log function if you don't want console logs. (console.log by default)

Returns a promise which either rejects with an error or resolves when the whole process is complete.
- 3-D kernel outputs: Will be supported soon
- Graphical Output: There is no straightforward way of doing this. (Basically impossible)
- Pipelining: The task is distributed among multiple GPUs so there is no single texture that can be pipelined.
- Not all kernel constant names are available: Kernel constants are supported, but the following names are reserved by the library: hive_offset_x, hive_offset_y, hive_offset_z, hive_output_x, hive_output_y and hive_output_z.
- Slightly network intensive: The data between Helpers and the Leader is sent as JSON. In one test with a single helper, the leader and helper each received or transmitted a total of about 50 MB for a 1000x1000 matrix multiplication. This can be slow over Wi-Fi and will be much slower for the larger input sizes that are quite common. At least 100 Mbit/s Ethernet is recommended. (Or 5 GHz Wi-Fi)