
Releases: LABSS/PROTON-T

Chasing opinions

11 Oct 10:11
Pre-release

We need to better understand the role of opinions.

running plan T13

The purpose of this plan is:

  • to execute again a large set of repeated scenarios at maximum values, in order to show that the chosen interventions do have an effect after some modification of the opinion dynamics (OD). The OD should be much slower. It will first be run with 10K agents.

files to run

We run each of the rpT13 files on one machine:

rpT13_base.xml  
rpT13_cworkers10.xml  
rpT13_cworkers5.xml 
rpT13_high-risk.xml  

and each can be run as usual with something along the lines of nohup time /opt/netlogo/6.0.4/netlogo-headless-2G.sh --model PROTON-T.nlogo --setup-file experiments-xml/XXX.xml --table XXX.csv > XXX.out 2>&1 &

summary

We will run control + 2 conditions (no cpos) and one more "high power" community workers condition. That makes 4 conditions.

To these, we will add 2 levels of alpha (0.2 and 0.5) and two different "speeds": one even slower (0.005) and one at real speed (0.1).

This makes 16 configurations divided into the usual 4 files, each file with 4 parameter combinations. If we set repetitions to 10, each file will cover the 40 processors neatly.

We can thus run each file on one machine (10 repetitions each).
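As a cross-check of the counting above, here is a small sketch in plain Python; the condition labels are illustrative (taken from the file names), not identifiers from the model:

```python
from itertools import product

# Hypothetical labels for the 4 experimental conditions described above
conditions = ["base", "cworkers5", "cworkers10", "high-risk"]
alphas = [0.2, 0.5]       # 2 levels of alpha
speeds = [0.005, 0.1]     # slower OD speed vs. "real" speed
repetitions = 10

configs = list(product(conditions, alphas, speeds))
print(len(configs))                              # 16 configurations in total
print(len(alphas) * len(speeds))                 # 4 parameter combinations per file
print(len(alphas) * len(speeds) * repetitions)   # 40 runs per file, one per processor
```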

wow, so many files

12 Sep 19:25
Pre-release

This is planned to run the 4 xml files (10 simulations each) on a single machine. I have removed the xml clutter, so they are easy to select.

This configuration will use more disk space, saving some extra files with the detailed configuration of opinions for the 10K and 40K agents (but not at all steps). It refers to the following running plan rp1.1:

running plan 1.1

The purpose of this plan is:

  • to find out how much we must scale the recruitable to get that five percent at the end.

The purpose here is just to save a good opinion and demographic distribution, so that we know how to tune from the beginning to obtain 5.6 in the steady state.

We will first test 10K, then 40K again. No interventions.

base - the reference experiment

A run with 10K agents saving all ingredients of risk.

more recruitables.

Side experiment: let's start with 10% recruitable to see how they decrease.

later on

Let's also reduce OD speed (bringing talk-effect-size to 0.01) to see what difference it makes.

Less for those that worked well

02 Sep 15:23
Pre-release

This release is a first step towards computing the thresholds for the interventions that work.

We should run the rp1.0 plan, which, if I haven't made any mistakes, contains the following files:

cworkers_10K_rp1.0.xml
cworkers_rp1.0.xml
high-risk_10K_rp1.0.xml
high-risk_25_rp1.0.xml
base_10K_rp1.0.xml
base_t001_rp1.0.xml + base_r10_rp1.0.xml 

The first five should all contain 40 runs each. The ones with 10K should be faster. The last two files are on the same line because they could be run together, being 20 repetitions each.
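A quick tally of the runs implied above (a sketch in plain Python; the per-file counts come from the text, the file names from the list above):

```python
# 40 runs in each of the first five files, 20 in each of the last two
runs_per_file = {
    "cworkers_10K_rp1.0.xml": 40,
    "cworkers_rp1.0.xml": 40,
    "high-risk_10K_rp1.0.xml": 40,
    "high-risk_25_rp1.0.xml": 40,
    "base_10K_rp1.0.xml": 40,
    "base_t001_rp1.0.xml": 20,
    "base_r10_rp1.0.xml": 20,
}
print(sum(runs_per_file.values()))  # 240 runs overall
# the two 20-run files together fill one 40-slot machine
print(runs_per_file["base_t001_rp1.0.xml"] + runs_per_file["base_r10_rp1.0.xml"])  # 40
```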

running plan 1.0

The purpose of this plan is:

  • to scale down interventions and find the minimum level of effect.

We will first test with a reduced number of agents, then run something with 40K again.

summary

This is again a variation on the three interventions. We will also start running some experiments at 10K agents.

base - the reference experiment

Just a run with 10K agents.

more recruitables.

Side experiment: let's start with 10% recruitable to see how they decrease, and let's also reduce OD speed (bringing talk-effect-size to 0.01) to see what difference it makes.

high-risk

We are now addressing just MOST citizens - the 75%. We should test 25% at 40K to get a reference. In the meantime, we can also test 75, 50, 25, and 10% at 10K.

communityworkers

In this case the effect on recruitment wasn't significant. We can try with plenty more - let's say 20 - but we must test whether the system starts with them.

Likewise, we could try 5 and 20 at 10K.

cpos

This requires a further code change, specifically here:

to setup-police
  create-police (4 * total-citizens / 1000) [   ; 4 officers per 1000 citizens
    set shape "person soldier"
    set cpo? false
    setxy random-xcor random-ycor
  ]
  set police-action-probability 0.23 * 1000 / 4 / 365 / (ticks-per-day - 1)
  ;                             23% of num-citizens / num-policemen / 365 / (hours - 1)
  ; flag the requested share of officers as community police officers (CPOs)
  ask n-of round (count police * cpo-% / 100) police [ set cpo? true ]
end
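As a sanity check on the arithmetic in setup-police, the same computation can be sketched in plain Python. The value ticks_per_day = 17 is an assumption inferred from the "(hours - 1)" comment, and cpo_percent = 25 is purely illustrative:

```python
def police_parameters(total_citizens, cpo_percent, ticks_per_day):
    """Mirror of the NetLogo arithmetic in setup-police (a sketch, not model code)."""
    n_police = 4 * total_citizens // 1000          # 4 officers per 1000 citizens
    # 23% of num-citizens per year, spread over officers, days, and waking ticks
    action_probability = 0.23 * 1000 / 4 / 365 / (ticks_per_day - 1)
    n_cpos = round(n_police * cpo_percent / 100)   # officers flagged as CPOs
    return n_police, action_probability, n_cpos

n_police, p, n_cpos = police_parameters(10_000, 25, ticks_per_day=17)
print(n_police)  # 40 officers for 10K citizens
print(n_cpos)    # 10 CPOs at 25%
print(round(p, 6))
```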

Chasing opinions - borked version

10 Oct 20:57
Pre-release

We need to better understand the role of opinions.

running plan T13

The purpose of this plan is:

  • to execute again a large set of repeated scenarios at maximum values, in order to show that the chosen interventions do have an effect after some modification of the opinion dynamics (OD). The OD should be much slower. It will first be run with 10K agents.

summary

We will run control + 2 conditions (no cpos) and one more "high power" community workers condition. That makes 4 conditions.

To these, we will add 2 levels of alpha (0.2 and 0.5) and two different "speeds": one even slower (0.005) and one at real speed (0.1).

This makes 16 configurations divided into the usual 4 files, each file with 4 parameter combinations. If we set repetitions to 10, each file will cover the 40 processors neatly.

We can thus run each file on one machine (10 repetitions each).

files to run

We run each of the rpT13 files on one machine:

rpT13_base.xml  
rpT13_cworkers10.xml  
rpT13_cworkers5.xml 
rpT13_high-risk.xml  

and each can be run as usual with something along the lines of nohup time /opt/netlogo/6.0.4/netlogo-headless-2G.sh --model PROTON-T.nlogo --setup-file experiments-xml/XXX.xml --table XXX.csv > XXX.out 2>&1 &

40 by 40 we will overcome

19 Aug 19:58
Pre-release

This is the first big release of the PROTON-T simulator.

It contains a set of xml files for running the experiments in blocks of 40 experiments. In this version, the blocks ready to run are the 0.9 ones, specifically:

base_rp0.9-ch1020.xml
base_rp0.9-ch25.xml
communityworkers_rp0.9_g45.xml
communityworkers_rp0.9_g50.xml
communityworkers_rp0.9_g55.xml
cpos_rp0.8.xml
cpos_rp0.9_g45.xml
cpos_rp0.9_g50.xml
cpos_rp0.9_g55.xml
high-risk_rp0.8.xml
high-risk_rp0.9_g45.xml
high-risk_rp0.9_g50.xml
high-risk_rp0.9_g55.xml

Please behave..

18 Aug 19:21
Pre-release

.. as expected.

This is another calibration, this time running all the way, because we also need to know whether the behaviour is stable after the first 800 steps.

give them more recruiters (enrollers)

18 Aug 13:34
Pre-release

This is not much different from the last version, but with a different orientation in the calibrate.xml file.

No proselitism

15 Aug 20:38
Pre-release

We had a horrible bug in which clever recruiters talked normal citizens into becoming recruiters.

Now we have removed it and have to calibrate everything again.

It is all the same as the last release from the point of view of running - except that the suggested runs are in the file calibration.xml.

Less recruits

13 Aug 20:25
Pre-release

This version will be used for the last phases of calibration.

It can be run with

nohup time /opt/netlogo/6.0.4/netlogo-headless-2G.sh --model PROTON-T.nlogo --setup-file experiments-xml/base_rp0.8.xml  --table calm.august.3882389.1.csv > calm.august.3882389.1.out 2>&1 &

with the obvious substitutions. I've run with 2GB and 5 repetitions, but these can be changed. We just need to get as close as possible to 800 steps in the next few days. For that, the base xml alone is enough.

4 experiments and a short night

04 Aug 21:13

This implements running plan 0.8 (with a marvelous tag alignment).

The code has been cleaned up and the night now lasts just 1 step. Thus, the target for one simulated year is 6205 steps.
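The 6205-step target is consistent with 17 ticks per simulated day (16 waking steps plus the single night step); treat the 17 as an inference from the numbers above, not a documented model constant:

```python
ticks_per_day = 16 + 1   # assumed: 16 waking ticks plus a 1-step night
days_per_year = 365
print(ticks_per_day * days_per_year)  # 6205 steps for one simulated year
```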

For running on the supercomputer, @GKeren, you will find 4 files with 5 repetitions each. These should be launched to accumulate the 50 repetitions needed for the single experiment - we can start with fewer and accumulate more runs, or we could use 4 machines with one file each; that would give us 40 runs, not 50, but from what I see that will be more than enough.

For the rest, runs should be similar to the ones in version @0.7a, but much longer.

For our checks it is important to know the Java version under which we are running, so please take note of the output of java -version.