SharonGoliath edited this page Aug 12, 2010 · 3 revisions

The CANFAR Data Processing Services are described in the CANFAR Infrastructure High Level Design Document. The components of the Data Processing Services are being developed by both the HEP Group at the University of Victoria and the CADC. The division of responsibility for this development is identified in the CANFAR Grid Infrastructure Statement of Work.

The HEP Group is developing a Virtual Machine provisioning system called the Cloud Scheduler. The Cloud Scheduler deploys CANFAR VMs remotely to clusters so that CANFAR jobs can be executed on those VMs.
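In practice, Cloud Scheduler jobs are expected to arrive through a Condor-style job description that also names the VM to boot. The sketch below is illustrative only: the `+VM*` attribute names are assumptions for this page, not confirmed Cloud Scheduler syntax, and should be checked against the current Cloud Scheduler documentation.

```
# Hypothetical Condor submit file for a Cloud Scheduler job.
# All +VM* attribute names below are illustrative assumptions.
Universe   = vanilla
Executable = process_data.sh
Log        = job.log
Output     = job.out
Error      = job.err

# Attributes the Cloud Scheduler might read to provision the VM:
+VMType     = "canfar-worker"   # name of the user-provided VM image
+VMCPUCores = "1"               # cores requested (user story 6)
+VMMem      = "2048"            # memory in MB (user story 5)
+VMStorage  = "10"              # scratch space in GB (user story 7)

Queue
```

The VM-related attributes correspond to the resource requirements (memory, cores, scratch space) that appear in the user stories below.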

The following user stories describe the expected behaviour of the Cloud Scheduler software as it responds to identified user requests. They describe which users are expected to interact with the Cloud Scheduler and what those users can do as part of that interaction. Together, this set of user stories defines the scope of the capabilities the Cloud Scheduler software will provide.

The order of development for these user stories is determined by their priority. The priority is set by the CADC and is expected to change during the execution of the project.

  1. A single user submits a single job for execution on a constrained VM
    (a VM provided and maintained by the development team).
  2. A single user submits a small number of jobs for execution on a constrained VM. A small number
    of jobs is the minimum of the number of cores available and 10, so that the total number of
    running jobs does not exceed the number of cores.
  3. A single user submits a small number of jobs for execution on a small number of constrained VMs.
  4. A single user submits a small number of jobs for execution on a user-provided VM.
  5. A single user specifies memory requirements when submitting a small number of jobs for
    execution on a user-provided VM.
  6. A single user specifies memory and number of cores requirements
    when submitting a small number of jobs for execution on a user-provided VM.
  7. A single user specifies memory, number of cores, and scratch space
    requirements when submitting a small number of jobs for execution on a user-provided VM.
  8. A user is informed of user-provided VM provisioning failure.
  9. A small number of users submit a small number of jobs on user-provided VMs, where the VMs require
    homogeneous architectures.
  10. A small number of users submit a small number of jobs on user-provided VMs. The number of cores
    available is less than the number of jobs submitted.
  11. A single user submits a medium total number of jobs for execution on multiple user-provided VMs,
    where the VMs require heterogeneous architectures. A medium number of jobs is the maximum of
    5 times the number of cores available and 100.
  12. A single user submits a medium total number of jobs for execution on multiple user-provided VMs,
    where the VMs require heterogeneous architectures. The number of cores
    available is less than the number of jobs submitted.
  13. Multiple users submit a medium total number of jobs for execution on multiple user-provided VMs,
    where the VMs require heterogeneous architectures.
  14. Multiple users submit a medium total number of jobs for execution on multiple user-provided VMs,
    where the VMs require heterogeneous architectures. The number of cores
    available is less than the number of jobs submitted.
  15. Multiple users submit a large total number of jobs for execution on multiple user-provided VMs,
    where the VMs require heterogeneous architectures. A large number of jobs is the maximum of
    50 times the number of cores available and 1000.
  16. Multiple users submit a large total number of jobs for execution on multiple user-provided VMs,
    where the VMs require heterogeneous architectures. The number of cores
    available is less than the number of jobs submitted.
  17. Multiple users specify memory requirements when submitting a medium number of jobs for execution on
    user-provided VMs.
  18. Multiple users specify memory and number of cores requirements when
    submitting a medium number of jobs for execution on user-provided VMs.
  19. Multiple users specify memory, number of cores, and scratch space
    requirements when submitting a medium number of jobs for execution on user-provided VMs.
  20. A single user submits a high-priority job for execution on a user-provided VM, to a job queue with a
    medium number of waiting entries. The high-priority job is submitted for execution within a
    reasonable timeframe.
  21. Each of multiple users submits a high-priority job for execution on a user-provided VM,
    to a job queue with a medium number of waiting entries. The high-priority jobs are submitted for
    execution within a reasonable timeframe.
  22. An administrator configures the provisioning system to use a single test bed cluster.
  23. An administrator configures the provisioning system to use a single test bed cluster, where the
    processing nodes have both 32-bit and 64-bit architectures.
  24. An administrator configures the provisioning system to use two test bed clusters, where the
    processing nodes have both 32-bit and 64-bit architectures.
  25. An administrator triggers a configuration update of the provisioning system.
  26. An administrator configures the provisioning system to use a single production cluster.
  27. An administrator configures the provisioning system to use multiple production clusters.
  28. A user submits a job for execution, where the job requires access to services external to the VM.
  29. An administrator audits VM network traffic.
  30. A user optimizes data access for job execution, by specifying the physical cluster for execution, as
    part of job submission.
  31. A user deletes a queued job.
  32. A user deletes a running job.
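The small/medium/large job-count definitions from stories 2, 11, and 15 can be summarized in a short sketch. The function name and return shape are illustrative, not part of the Cloud Scheduler; only the arithmetic comes from the stories above.

```python
def job_size_thresholds(cores_available: int) -> dict:
    """Job-count thresholds as defined in the user stories.

    small  : at most min(cores_available, 10) jobs (story 2)
    medium : max(5 * cores_available, 100) jobs    (story 11)
    large  : max(50 * cores_available, 1000) jobs  (story 15)
    """
    return {
        "small": min(cores_available, 10),
        "medium": max(5 * cores_available, 100),
        "large": max(50 * cores_available, 1000),
    }

# With 16 cores: small batches top out at 10 jobs, a medium batch is
# 100 jobs (max(80, 100)), and a large batch is 1000 jobs (max(800, 1000)).
print(job_size_thresholds(16))
```

Note that for small clusters the fixed floors (100 and 1000) dominate, so the medium and large cases deliberately oversubscribe the available cores, as stories 12, 14, and 16 require.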