After I "mitigated" some memory issues (toruseo/UXsim#143, #144, #145, #146), I did a little seed sensitivity analysis.
I performed 5 runs with identical settings but without specifying a seed (so each run used a random one). The settings were:
```python
def __init__(self, step_time=1/12, start_time=5, end_time=11,
             choice_model="rational_vot", enable_av=True,
             av_cost_factor=0.5, av_vot_factor=0.5, ext_vehicle_load=0.6,
             uxsim_platoon_size=10, car_comfort=0.5, bike_comfort=1.2,
             simulator=None):
```
So some AVs, for 6 hours in the morning.
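The replication setup above can be sketched as follows. This is a minimal illustrative sketch, not the actual model code: `run_model` and its return value are hypothetical stand-ins for one simulation run with the settings listed above.

```python
import random

def run_model(seed, step_time=1/12, start_time=5, end_time=11):
    """Hypothetical stand-in for one simulation run; returns one metric."""
    rng = random.Random(seed)
    # Placeholder "mode share" value; the real model would simulate here.
    return rng.gauss(mu=0.5, sigma=0.01)

# Five runs with identical settings, each with a fresh random seed
# (no seed specified up front, mirroring the experiment described above).
seeds = [random.randrange(2**32) for _ in range(5)]
results = [run_model(s) for s in seeds]
```

The per-run metrics in `results` can then be compared across seeds, as done below.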
Where visual inspection was enough, I kept it to visual inspection.
High-level sensitivity
The mode choice share looks very similar:
Also the mode choice patterns over time, distance and costs look very similar:
If we take a look at traffic measures, they also look really similar, both in the averages and in the 50% range band. average_delay looks a bit noisy during peak traffic, but only in the exact numbers; the patterns stay the same.
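The "average and 50% range" comparison above can be sketched like this. A hedged sketch only: the toy array and its shape (runs × timesteps) are assumptions, and the 50% range is taken here as the 25th–75th percentile band across runs.

```python
import numpy as np

# Toy data: 5 runs x 4 timesteps of a delay-like traffic measure.
delays = np.array([
    [1.0, 2.0, 3.0, 2.0],
    [1.1, 2.1, 3.2, 2.1],
    [0.9, 1.9, 2.8, 1.9],
    [1.0, 2.0, 3.1, 2.0],
    [1.0, 2.0, 2.9, 2.0],
])

mean = delays.mean(axis=0)                        # per-timestep average over runs
lo, hi = np.percentile(delays, [25, 75], axis=0)  # 50% range band over runs
```

Plotting `mean` with the band between `lo` and `hi` gives the kind of averages-plus-range figure described above.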
Per-area sensitivity
The per-area results are a bit of a mixed bag:
On the one hand the patterns are quite similar, but on the other hand the numbers do vary a bit.
What does help is that the inter-seed noise is lower than the inter-area differences. Visually there are vertical stripes, meaning the seeds correlate more strongly with each other than the areas do.
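The "vertical stripes" observation amounts to comparing two spreads on a seed × area matrix. A minimal sketch with illustrative numbers (not the actual results): columns (areas) differ far more than rows (seeds), which is exactly the stripe pattern.

```python
import numpy as np

# Illustrative metric matrix: rows = 5 seeds, columns = 3 areas.
# Columns differ a lot (inter-area signal), rows only slightly (seed noise).
metric = np.array([
    [0.30, 0.60, 0.90],
    [0.31, 0.61, 0.88],
    [0.29, 0.59, 0.91],
    [0.30, 0.62, 0.90],
    [0.30, 0.60, 0.89],
])

seed_noise = metric.std(axis=0).mean()   # average spread across seeds, per area
area_signal = metric.mean(axis=0).std()  # spread between the area means
```

When `seed_noise` is clearly below `area_signal`, per-area conclusions survive the seed variation, matching the observation above.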
Preliminary conclusion
The seed noise is low enough that, for all metrics studied so far, the signal is stronger than the noise. This means that while the exact numbers might differ a little, conclusions can be drawn from a single run.
As additional metrics are added, this should be re-verified for them.
Raw results in acd8b0d, notebook in 16b3f8b, graphs in 1c07085.
CC @quaquel