# MeerKAT Simulation
In this tutorial I demonstrate how to run a telescope simulation: I'll simulate a sky model (from NVSS) into a simulated MeerKAT dataset. This type of simulation pipeline consists of three parts:
- Create an empty visibility dataset (a.k.a. measurement set, or MS). We use the simms tool to do this
- Simulate the sky model into the MS. For this we'll use the MeqTrees-based simulator tool
- Image the MS. Imaging is done using the imager tool; the available imagers are CASA clean, lwimager, and WSClean
Before we start the pipeline we need to decide on its I/O flow. That is, we need to tell Penthesilea where to find input and where to dump output.
In your working directory (in this tutorial, I'm working in Penthesilea/examples), create an input folder (I'm calling mine meerkat-sim-input):

```
mkdir meerkat-sim-input  # this does the trick on my Linux terminal. You do you
```

Then also create an output folder:

```
mkdir meerkat-sim-output
```
With that sorted out, we enter a Python environment (I recommend ipython); all the commands that follow are executed there. Let's slither away!
The user interacts with Penthesilea via a Python interface called otrera. We need the Pipeline class from the otrera module.

```python
from otrera import Pipeline
```
Let's create variables pointing to our I/O folders.

```python
INPUT = "meerkat-sim-input"
OUTPUT = "meerkat-sim-output"
```
The Pipeline class has to know where to find some data products (antenna position files for telescopes, etc.). This data is stored in Penthesilea/data, or ../data from my working directory.

```python
DATA = "../data"
```
The measurement set is a special I/O product, since it is often both an input and an output. For this reason it is treated differently: all MSs are stored in their own folder. I'll use msdir; this folder will be created if it doesn't exist (the same is true for the output folder).

```python
MSDIR = "msdir"
```
Each component of the pipeline requires a configuration file. These are JSON files which hold the parameters for a given process (e.g. imaging parameters for an imager). You can find template configurations for the Penthesilea tools in Penthesilea/data/configs. Pipeline needs to know this folder.

```python
CONFIGS = "../data/configs"
```
These are the templates we'll need for this tutorial.

```python
simms_template = "simms_params.json"
simulator_template = "simulator_params.json"
imager_template = "imager_params.json"
```
Now we create an MS variable. This is the name of the MS that will be created and used in the simulation.

```python
MS = "meerkat_simulation_example.ms"
```
We will also need a sky model. This is the model that we'll simulate (the NVSS sky model mentioned earlier).

```python
LSM = "nvss1deg.lsm.html"
```
We are now ready to start writing the pipeline. First, we create a Pipeline instance.
```python
pipeline = Pipeline("Simulation Example", configs=CONFIGS, data=DATA, ms_dir=MSDIR)
```
We always start from the template config
```python
simms_dict = pipeline.readJson(simms_template)
```
Then we update the parameters we need for our purposes. **You should have a look at all the configuration files to get an idea of what is available.**

```python
simms_dict["telescope"] = "meerkat"
simms_dict["msname"] = MS
simms_dict["synthesis"] = 2
```
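Under the hood, readJson presumably just parses the template file into an ordinary Python dict, which is why updating parameters is a plain dict assignment. Here is a minimal sketch of that pattern using only the standard library; the template contents below are hypothetical (mirroring the keys updated above), and the real readJson may differ.

```python
import json
import os
import tempfile

def read_json(path):
    # A sketch of what Pipeline.readJson presumably does:
    # parse the JSON template into a plain Python dict.
    with open(path) as f:
        return json.load(f)

# Write a hypothetical simms template to disk, for illustration only.
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump({"telescope": "kat-7", "msname": "", "synthesis": 8}, f)

# Load the template, then override parameters like any dict.
params = read_json(path)
params["telescope"] = "meerkat"
params["synthesis"] = 2
os.remove(path)
```

Any key present in the template can be overridden the same way before the step is added to the pipeline.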
Now that we have configured this step, we add it to the pipeline.

```python
pipeline.add("ares/simms", "simms_example", simms_dict, input=INPUT,
             output=OUTPUT, label="Creating MS")
```
In the above function call:
- `ares/simms` is the Docker executor image that has the simms script to be executed
- `simms_example` is the name of the container that will perform the execution
- `simms_dict` holds the parameters for the MS that will be created
Same drill as before.

```python
simulator_dict = pipeline.readJson(simulator_template)
simulator_dict["msname"] = MS
simulator_dict["skymodel"] = LSM
pipeline.add("ares/simulator", "simulator_example", simulator_dict,
             input=INPUT, output=OUTPUT, label="Simulating visibilities")
```
Now that we've created our MS and simulated our sky model into it, we can image it.

```python
imager_dict = pipeline.readJson(imager_template)
```
To make things a bit more interesting, let's image it with three different uv-weightings: Briggs robust = 2 (roughly equivalent to natural), 0, and -2 (roughly equivalent to uniform).

```python
imager_dict["weight"] = "briggs"
briggs_robust = (2, 0, -2)
prefix = imager_dict["imageprefix"]
```
Then we add an imaging step for each uv-weighting. Note that we pass a copy of the parameter dict to each step, in case pipeline.add keeps a reference to it rather than copying it.

```python
for i, robust in enumerate(briggs_robust):
    imager_dict["msname"] = MS
    imager_dict["robust"] = robust
    imager_dict["imagename"] = "%s_robust-%d" % (prefix, i)
    pipeline.add("ares/imager", "imager_example_%d" % i, dict(imager_dict),
                 input=INPUT, output=OUTPUT,
                 label="Imaging MS, robust=%.1f" % robust)
```
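One thing to watch here: Python dicts are passed by reference, so if pipeline.add merely stores the dict it is given, all three imaging steps would end up seeing the last robust value. Passing `dict(imager_dict)` (a shallow copy) to each call sidesteps this regardless of what pipeline.add does internally. A minimal illustration in plain Python (no Penthesilea needed):

```python
steps = []
params = {"robust": None}

# Without copying: every stored "step" points at the same dict,
# so mutating it later changes all of them.
for robust in (2, 0, -2):
    params["robust"] = robust
    steps.append(params)
print([s["robust"] for s in steps])   # [-2, -2, -2] -- all the last value

# With a copy per step, each one keeps its own robust value.
steps = []
for robust in (2, 0, -2):
    params["robust"] = robust
    steps.append(dict(params))
print([s["robust"] for s in steps])   # [2, 0, -2]
```

A shallow copy is enough here because the values being changed are top-level keys; nested structures would need copy.deepcopy.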
With the pipeline assembled, we are set to go!

```python
pipeline.run()
```