UltraViolet

A "Digital twin" emulation environment for IoT cyber-infrastructure. UltraVIoLET extends from its prior verison, "VIoLET: A Large-Scale Virtual Environment for Internet of Things" which is available here.

IoT deployments have been growing manifold, encompassing sensors, networks, edge, fog, and cloud resources. Yet, most researchers and practitioners do not have access to large-scale IoT testbeds for validation. Simulation environments are a poor substitute for evaluating software platforms or application workloads in realistic environments. VIoLET is a virtual environment for validating IoT at scale. It is an emulator for defining and launching large-scale IoT deployments within cloud VMs. It allows users to declaratively specify container-based compute resources that match the performance of native IoT compute devices using Docker. These can be inter-connected by complex topologies on which bandwidth and latency rules are enforced. Users can configure synthetic sensors for data generation as well. We also incorporate models for CPU resource dynamism, and for failure and recovery of the underlying devices. This IoT emulation environment fills an essential gap between IoT simulators and real deployments.

Attribution

If you use this work, please cite the following:

"VIoLET: An Emulation Environment for Validating IoT Deployments at Large Scales", Shrey Baheti and Shreyas Badiger and Yogesh Simmhan, ACM Transactions on Cyber-Physical Systems (TCPS), 5(3), 2001, 10.1145/3446346

@article{10.1145/3446346,
author = {Baheti, Shrey and Badiger, Shreyas and Simmhan, Yogesh},
title = {VIoLET: An Emulation Environment for Validating IoT Deployments at Large Scales},
year = {2021},
issue_date = {July 2021},
publisher = {ACM},
volume = {5},
number = {3},
issn = {2378-962X},
url = {https://doi.org/10.1145/3446346},
doi = {10.1145/3446346},
journal = {ACM Transactions on Cyber-Physical Systems (TCPS)},
pages = {1--39},
}

UltraViolet installation

Automatic Installation

Run the install script.

Manual Installation

Set Python3 as default

sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10

1. Install dependencies

sudo apt-get install git autoconf screen cmake build-essential sysstat python3-matplotlib uuid-runtime python3-pip -y

2. Install ContainerNet

Containernet is a fork of Mininet that allows Docker containers to be used as Mininet hosts. More at the Containernet GitHub.

sudo apt-get install ansible aptitude -y

cd ~/UltraViolet

cd containernet/ansible && sudo ansible-playbook -i "localhost," -c local install.yml

cd .. && sudo make install

3. Install metis

Metis is a graph-partitioning library, and its usage in UltraViolet is similar to VIoLET: we use Metis to partition the emulated Docker containers across 'n' VMs based on constraints such as CPU, memory, bandwidth, and latency.

cd ~/UltraViolet/metis-5.1.0

make config

make

sudo make install

4. Install Pyro4

pip3 install Pyro4

5. Install MaxiNet

Mininet is limited to one physical host; MaxiNet lets us deploy Mininet across several physical machines. MaxiNet runs on a pool of physical machines called Workers. Each Worker runs a Mininet emulation and emulates only a part of the whole network. Switches and hosts on different Workers are interconnected using GRE tunnels. MaxiNet provides a centralized API for controlling the emulation. This API is invoked at a specialized Worker called the Frontend. The Frontend partitions and distributes the virtual network onto the Workers and keeps track of which node resides on which Worker, so all nodes can be accessed through the Frontend.

cd ~/UltraViolet

cd Maxinet3

sudo make install

6. Set up cluster and configure MaxiNet

Repeat the above steps (1-5) on every machine you want to use as a Worker or the Frontend.

On the Frontend/Worker machine, copy the MaxiNet-cfg-sample file to ~/.MaxiNet.cfg and edit it:

cp share/MaxiNet.cfg ~/.MaxiNet.cfg

vi ~/.MaxiNet.cfg

Please note that every Worker connecting to the MaxiNet server needs a corresponding entry in the configuration file, named by its hostname and containing its IP; UltraViolet tries to guess a Worker's IP if it is not found in the configuration file. In the MaxiNet config file, "share" indicates the load a Worker can take relative to the other Workers; for UltraViolet we keep it at 1 (see the sketch below).

More details here.
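
For reference, the Worker entries you add to ~/.MaxiNet.cfg might look like the minimal sketch below. The hostnames (worker1, worker2) and IP addresses are placeholders for your own machines; the remaining settings come from the MaxiNet-cfg-sample file.

[worker1]
ip = 192.168.0.11
share = 1

[worker2]
ip = 192.168.0.12
share = 1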

Start MaxiNet

On the Frontend machine, run:

sudo MaxiNetFrontendServer

POX: Start SDN controller

cd "your ultraviolet folder" && python3 pox.py forwarding.l2_learning

On every Worker machine, run the command below. When asked, select the IP of the machine by typing the corresponding option. The Frontend server can be a Worker as well!

sudo MaxiNetWorker

In the logs of the Frontend server, you should see the Workers connecting to the Frontend.

Optionally, run in a new terminal:

python3 /usr/local/share/MaxiNet/examples/simplePing.py

python3 /usr/local/share/MaxiNet/examples/testconfig.py

Congratulations, you just set up your own SDN!

UltraViolet Execution Pipeline

  • Run CoreMark to decide the number of VMs and the CPU share for each container (see the worked sizing sketch after this list)

    • Download CoreMark and then run make XCFLAGS="-DMULTITHREAD=<number_of_VM_cpus> -DUSE_FORK=1" REBUILD=1 on the VM to get the VM's CoreMark score
    • No. of CPUs for the VM = output of lscpu
    • No. of CPUs for a container = (coremark_of_device / VM_coremark) * number_of_cores_in_VM
    • No. of VMs = ceil(sum_of_coremark_of_all_devices / coremark_of_VM)
    • The emulated container's CoreMark is within ±5% of the original device's
  • Create your own directory by copying d20-test-pipeline.

  • CoreMark has been run on most devices; their scores are specified in device_types.json under the key "coremark". If yours is not listed, run the CoreMark step on the physical device.

    • In case you don't see your device, it needs to be benchmarked.
    • First, run the CoreMark benchmark on the hardware device: make XCFLAGS="-DMULTITHREAD=<number_of_VM_cpus> -DUSE_FORK=1" REBUILD=1
    • Calculate the CPU needed for the device on the VM: (coremark_of_device / VM_coremark) * number_of_cores_in_VM
    • Find a cpu_quota/cpu_period ratio that matches the above ratio, and then run the CoreMark benchmark on the containers.
    • For example, if the above ratio is 0.5, you can keep cpu_period = 25000 and cpu_quota = 12500, and run the benchmark on the containers.
    • Read more here: readme
  • infra_gen.json: This is the only file that needs to be created/changed. It specifies information such as the types of fogs and edges and the number of fogs and edges (#FOGs & #EDGEs). More

  • Violet input generator - generates infra_config.json

    • Run: python3 infra-gen.py <config_folder>/infra_gen.json
    • infra_gen.json needs to be changed for each cluster architecture; refer to the d20 and d272 pipeline folders in the UltraViolet repo for examples of how to change it.
    • Bandwidth and latency are randomly selected from infra_gen.json; to prevent this, you can fix values for each private or public switch/network.
    • Note that the number of VMs that might be needed for a successful deployment is calculated from this file. You can check infra_config.json in the config folder for the VM count; the Python script also prints it.
  • Assumptions made for cluster

    • Uses infra_gen.json (also used in VIoLET to generate large deployments)
      • specifies the number of devices, the networks, and which devices connect to each network
    • Any device connected to the global switch is a fog
    • There is only one global switch and all fogs connect to it
    • The order of devices in infra_gen.json determines the gateway used in the private network.
    • All connections to a switch have the same bandwidth and latency.
    • Fogs and edges can only be connected to a single private network.
      • No edge/fog can connect to multiple private networks
    • Any device connected to a private network is an edge (except the gateway)
    • All fogs have generated ip 10.1.*.*
    • All edges have generated ip 10.3.*.*
    • Additional fog interfaces will start with 10.2.*.* (for private gateway connections to edge devices)
    • There is only one VM type
    • All edges and fogs connect to the global switch. Edges are connected to the global switch via their private switch for TorqueDB.
      • Be careful with the link properties of global switch connections when deploying TorqueDB; keep all link properties uniform.
    • Interface and switch names cannot be long.
  • Violet Metis input generator - generates the metis-input file in the config folder. If your VM count == 1, skip this step.

    • Run python3 metis-input-gen.py
    • Make sure the file paths point to the right cluster files (the same goes for every script used).
  • Run gpmetis on the generated Metis input - this generates metis_input.part.<numberofVMs>. If your VM count == 1, skip this step.

    • Run gpmetis config/metis_input <numberofVMs>
    • The number of VMs can be found in the infra_config.json file, in its last entry, vm_number.
    • The number of VMs should be greater than 1, as gpmetis does not accept 1 as the number of partitions.
  • Violet Metis partition check - makes sure Metis doesn't over-allocate

    • python3 metis-check.py config/metis_input.part.<numberofVMs> <numberofVMs>
    • The sum of the CoreMarks of all devices allocated to a VM must be <= the CoreMark of the VM.
    • If the check fails, increase the number of VMs and rerun gpmetis with the extra VM count.
    • The path to the generated Metis output and the number of VMs must be provided.
    • The number of VMs is produced by the input generator and can be found in the generated infra_config.json file.
  • UltraViolet Metis mapper - parses the Metis output.

    • If a link to a vertex crosses VMs/Workers, an extra switch is added in switches.json. The UltraViolet Metis mapper also creates switches.json, a static map of switches to their VMs in accordance with the Metis partition.
    • Run python3 mapper.py config/metis_input.part.<numberofVMs>
    • Do multiple switches connected to each other need to be split across systems? This is assumed to never happen.
    • Sanity check: a switch must be duplicated only once per VM.
    • The switch with the most connections inside a single VM stays in that VM.
  • deploy.py is run on the new JSON files to deploy the cluster.

  • If all goes well... the cluster is deployed!
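
As a worked illustration of the sizing arithmetic above, the Python sketch below plugs in hypothetical CoreMark numbers; the device names and all scores are made up for the example, so substitute your own benchmark results.

import math

# Hypothetical CoreMark scores (replace with your own benchmark results).
device_coremark = {"dev_a": 10000.0, "dev_b": 20000.0}   # per-device CoreMark
vm_coremark = 100000.0                                   # CoreMark of one VM
vm_cores = 8                                             # from `lscpu` on the VM

# CPU share a container needs in order to emulate a device on this VM:
# (coremark_of_device / VM_coremark) * number_of_cores_in_VM
def container_cpus(dev_score):
    return (dev_score / vm_coremark) * vm_cores

# Docker enforces that share through cpu_quota / cpu_period; a ratio of 0.5
# maps to cpu_period = 25000 and cpu_quota = 12500, as in the example above.
def cpu_quota_for(ratio, cpu_period=25000):
    return int(ratio * cpu_period)

# Number of VMs = ceil(sum of all device CoreMarks / CoreMark of one VM)
devices = ["dev_a"] * 10 + ["dev_b"] * 5
total_coremark = sum(device_coremark[d] for d in devices)
num_vms = math.ceil(total_coremark / vm_coremark)

print("CPUs for a dev_b container:", container_cpus(device_coremark["dev_b"]))  # 1.6
print("cpu_quota for ratio 0.5:", cpu_quota_for(0.5))                           # 12500
print("VMs needed:", num_vms)                                                   # 2
# Partition check: the CoreMark sum of the devices placed on a VM must not
# exceed that VM's CoreMark.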

Sanity Checks

  • Run CoreMark on every generated container in parallel (see the sketch after this list).
    • The CoreMark achieved while all devices are in use needs to match the expected CoreMark of the devices.
  • Run pingall (already present in the UltraViolet CLI) and iperf between all containers.
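
A minimal sketch of these checks, assuming exp is the maxinet.Experiment object built in deploy.py and that the CoreMark binary was baked into the container image at /coremark/coremark.exe; the node names and the edge IP below are placeholders for your own deployment.

# Kick off CoreMark inside every container in parallel, then do a quick ping.
def sanity_check(exp, node_names=("fog1", "edge1")):
    for name in node_names:
        node = exp.get_node(name)
        # Run CoreMark in the background so all containers benchmark at once.
        node.cmd("cd /coremark && ./coremark.exe > /tmp/coremark.log 2>&1 &")
    # Basic connectivity check: ping one container from another
    # (10.3.0.2 stands in for one of the generated edge IPs).
    print(exp.get_node("fog1").cmd("ping -c 3 10.3.0.2"))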

Running your TEST code in the Containers:

  • First, you should have a container image that contains all your dependencies.
  • Put that image name in the device_types.json file in the config folder.
  • MaxiNet offers an experiment object that can be used to execute any command in a container.
  • In deploy.py, after the topology deployment code (i.e., exp.setup()), you may insert your own logic to execute a particular script/command in a running container. For example, check this file: here we pass the exp object from deploy.py. You can access each of the deployed nodes through this variable and run .cmd on the selected node to execute a command in the container (see the sketch below).
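
A minimal sketch of that pattern, assuming exp is the maxinet.Experiment instance created in deploy.py and that exp.setup() has already been called; the node name fog1 and the script path /app/run_test.sh are placeholders for whatever your container image ships.

# Sketch only: `exp` comes from deploy.py, after exp.setup() has completed.
def run_test(exp):
    node = exp.get_node("fog1")                 # look up a deployed container by name
    output = node.cmd("bash /app/run_test.sh")  # execute your test script inside it
    print(output)                               # .cmd returns the command's output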

HELP

Find example code in Maxinet3/MaxiNet/Frontend/examples

To see how a VIoLET file is read, see the Python file in Maxinet3/MaxiNet/Frontend/dev

Container-related help

infra_gen.json

  • This file has two sections: one for public networks and one for private networks.

CREDITS

We thank Harshil Gupta, Jeet Ahuja, Bishal Ranjan Swain, Shreya Mishra, Animesh ND and Suman Raj for their contributions to UltraVioLET.

Copyright 2005 DREAM:Lab, Indian Institute of Science
