The process of building light curves was discussed in detail in Evans et al. (2007), and slightly revised by Evans et al. (2009). Since then, there have been many small modifications and many larger under-the-hood improvements in efficiency, which are detailed in the changelog. This page provides an up-to-date overview of the algorithm. This is deliberately written generically, to cover both automatically generated GRB light curves and products created on-demand: the only difference between these two cases is that for GRBs a specific binning method is always used, whereas for on-demand products, the user can select the method and parameters.
The process can be split up into three phases which, in the original paper, we named alliteratively as the preparation, production and presentation phases. These phases can be summarised thus:
The following sections describe the algorithms of these three phases. For GRBs, light curve data can also be constructed from the data obtained in WT mode while the spacecraft is slewing to the newly-discovered burst. These ‘WT settling-mode’ data are analysed via a separate script and only combined with the normal pointing-mode data at the presentation phase.
Back to contents | Back to repository
This phase begins with the data from the quick-look site and archive, and ends by producing the following files (per mode):
including a correction file produced by the xrtlccorr software. For WT mode data, if any snapshots were marked as unreliable (see below), two source and background event lists are created: one comprising all available data, and one excluding data from the unreliable snapshots.
The preparation phase begins by attempting to detect and localise the source (unless centroiding has been disabled), initially using only the data from the first observation, but using later datasets if the source cannot be found in the first one. A deep image of all observations is also created, and source detection is run on this image to identify all sources in the field, so that they can be excluded from the background extraction region. The code then steps over each observation, working independently first on WT mode and then on PC mode data. Each observation is split into snapshots (periods of continuous observation), and the following steps are run for each snapshot:
The xrtlccorr code is run to build a file giving the time-dependent PSF corrections.1 An event list has a single roll angle keyword (PA_PNT), so if there are multiple spacecraft orbits in the list it will contain the mean value. The s.mkf file in the data auxil directory gives the roll angle as a function of time, and we use this to determine the per-snapshot roll angle.
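The per-snapshot roll angle can be obtained by averaging the time-dependent roll angle from the s.mkf file over the snapshot's time range. A minimal sketch in Python (the function name and flat input arrays are illustrative assumptions, not the pipeline's actual code):

```python
def snapshot_roll_angle(mkf_times, mkf_rolls, t_start, t_stop):
    """Mean roll angle over one snapshot, from (time, roll) samples
    taken from the s.mkf attitude file. Assumes the roll angle does
    not wrap through 0/360 degrees within the snapshot."""
    samples = [r for t, r in zip(mkf_times, mkf_rolls) if t_start <= t <= t_stop]
    if not samples:
        raise ValueError("no attitude samples fall within the snapshot")
    return sum(samples) / len(samples)
```

For example, with attitude samples at t = 0, 10, 20, 30 s and a snapshot spanning t = 5-25 s, only the samples at t = 10 and 20 contribute to the mean.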
Once all snapshots in the observation have been processed in this way, the source event lists are merged into a single source event list, and likewise for the background event lists, the correction files produced by xrtlccorr, and the WT mode systematic error files. This process is repeated for all observations, and the per-observation files are then merged and sorted into time order. Note that the PC and WT mode files are not merged with each other, but remain distinct.
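The merge-and-sort step above can be sketched as follows; this is an illustration of the behaviour (representing events as simple (time, energy) tuples), not the pipeline's implementation:

```python
def merge_event_lists(per_obs_events):
    """Merge per-observation event lists for a single XRT mode into one
    list sorted into time order. PC and WT mode lists are merged
    separately, never with each other."""
    merged = [ev for obs in per_obs_events for ev in obs]
    merged.sort(key=lambda ev: ev[0])  # sort by event arrival time
    return merged
```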
At this stage the preparation phase is complete.
The next step is to take the event lists produced in the preparation phase and bin the events into a light curve. If the data contain unreliable WT mode points the binning process is performed twice for WT mode: once using only the reliable WT mode snapshots, and once using all snapshots.
The binning software runs on PC mode and WT mode separately, following the same process for each (apart from the WT mode systematic errors, see below). It reads in the files produced by the preparation phase, and the first step is to identify the snapshot times. This is done using the GTI information in the event lists.
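Identifying snapshots from the GTI information amounts to grouping Good Time Intervals that are (nearly) contiguous in time. A minimal sketch, in which the gap threshold is an illustrative assumption rather than the pipeline's actual value:

```python
def gtis_to_snapshots(gtis, max_gap=10.0):
    """Group Good Time Intervals (start, stop) into snapshots: periods
    of continuous observation. GTIs separated by no more than `max_gap`
    seconds (an illustrative threshold) belong to the same snapshot."""
    snapshots = []
    for start, stop in sorted(gtis):
        if snapshots and start - snapshots[-1][1] <= max_gap:
            snapshots[-1][1] = stop          # extend the current snapshot
        else:
            snapshots.append([start, stop])  # begin a new snapshot
    return [tuple(s) for s in snapshots]
```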
The four binning methods are: time, snapshot, observation, and counts binning.
Time binning is probably familiar to all readers: the bins are of a predefined duration, so the bin times are defined first, and then source and background events are allocated to them. Snapshot binning and observation binning are similar, except that instead of the duration of the bin being set in seconds, it is set in Swift's observing units: one bin is constructed for each snapshot, or for each observation, so the bins can be of non-uniform duration. In each of these cases, events are allocated to the appropriate bin based on their time; therefore, if the dataset contains observations which overlap in time, per-observation binning cannot be used, because there is no unique mapping from time to bin. Counts binning, as employed for GRBs, is different: in this case a bin accumulates events until it is deemed to be ‘full’ (described below).
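The allocation of events to predefined bins can be illustrated for the time-binning case; this is a schematic sketch, not the pipeline's code:

```python
def bin_by_time(event_times, t0, bin_width):
    """Fixed-duration time binning: bin edges are defined first (bin i
    spans [t0 + i*bin_width, t0 + (i+1)*bin_width)), then each event is
    allocated to a bin by its arrival time. Returns counts per bin index."""
    counts = {}
    for t in event_times:
        i = int((t - t0) // bin_width)
        counts[i] = counts.get(i, 0) + 1
    return counts
```

Snapshot and observation binning follow the same pattern, with the bin edges taken from the snapshot or observation boundaries instead of a fixed width.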
Whichever binning method is used, once a bin has been filled, certain operations are carried out before the bin is saved. The decisions made at this point depend upon the binning parameters that have been specified; much of this functionality has been added to the software since the publication of the papers describing the facility, so we detail it in full.
For time, snapshot and observation binning, once the above steps have been done, details of the newly created bin are written to disk. For counts (GRB-style) binning, there are more steps which must be taken, as we now explain.
When binning by counts, a bin that has undergone the above steps is not necessarily finalised. If the source was found to be undetected in the bin, the bin is instead kept ‘open’: events continue to be added to it until either the bin represents a detection, or an observation gap longer than some specified maximum duration is encountered. In the latter case, the bin must be written to disk and a new bin started. In the former case, even though the bin represents a detection and meets the binning criteria, it is not immediately finalised: a new bin is started, but the previous bin is held in memory rather than written to disk. If the new bin is still incomplete at the end of the current snapshot of data, the software decides whether to append its counts to the previous bin (in which case the bin calculations are re-performed), or to keep the new bin open and carry it forward to the next snapshot; whichever option maximises the fractional exposure is accepted. A bin is only finalised and saved when the observation-gap criterion mentioned above is met, or when the next bin is considered ‘complete’.
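The core of the counts-binning loop can be sketched as below. This is a heavily simplified illustration: it applies only the counts threshold and the maximum-gap rule, and omits the detection test and the snapshot-boundary fractional-exposure comparison described above.

```python
def bin_by_counts(event_times, counts_per_bin, max_gap):
    """Simplified counts (GRB-style) binning: a bin stays open until it
    has accumulated `counts_per_bin` events, or until the gap to the next
    event exceeds `max_gap` seconds, which forces the bin to be written
    out. Returns the list of bins, each a list of event times."""
    bins, current = [], []
    for t in event_times:
        if current and t - current[-1] > max_gap:
            bins.append(current)   # gap too long: finalise the open bin
            current = []
        current.append(t)
        if len(current) >= counts_per_bin:
            bins.append(current)   # bin is full: finalise it
            current = []
    if current:
        bins.append(current)       # flush the last, possibly underfull bin
    return bins
```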
The hardness ratios are produced in a manner identical to that described above, except that events are allocated to the hard or soft channel depending on their energy, and only frequentist calculations are permitted. Also, if counts binning is selected, for a bin to be considered complete the number of counts must exceed the specified threshold in both channels.
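The two differences for hardness ratios (energy-based channel allocation, and the both-channels completeness rule) can be sketched as follows; the energy boundary and function names are illustrative assumptions:

```python
def hardness_ratio_bin(event_energies_kev, boundary_kev, threshold):
    """Allocate events to soft/hard channels by energy (split at
    `boundary_kev`, an illustrative value), and report whether a
    counts-binned bin is complete: the threshold must be exceeded
    in BOTH channels. Returns (soft_counts, hard_counts, complete)."""
    soft = sum(1 for e in event_energies_kev if e < boundary_kev)
    hard = len(event_energies_kev) - soft
    complete = soft > threshold and hard > threshold
    return soft, hard, complete
```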
At the end of the production phase, binned data files exist separately for WT and PC mode, and these files contain much more than simply count-rate data (they are in fact the detailed data files which you can download). The bins comprising detections are in separate files from the upper limits. The presentation phase parses these files and combines the appropriate data from each to produce the final light curve plots presented online. These are plotted as PostScript files using qdp and then converted to GIF format for publication on the website. The different light curves produced by this phase of the process are described in the light curve web page documentation.
If the light curve contains WT mode data, the presentation phase produces two light curves: in the first, the count-rates and errors are taken directly from the output of the production phase; in the second, the systematic errors are removed before plotting. If the light curve contains unreliable WT mode points, the production phase produced two WT mode datasets (one including only the reliable WT mode data and one using all data), and the presentation phase will produce separate plots based on these two datasets. The presentation phase therefore produces up to four sets of light curves: with and without systematic errors, and, for each of these, with and without the unreliable WT mode data.
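The combinatorics of the up-to-four output sets can be made explicit with a small sketch (the labels and the PC-only behaviour shown here are illustrative assumptions):

```python
def presentation_variants(has_wt_data, has_unreliable_wt):
    """Enumerate the light-curve sets the presentation phase produces.
    With WT data there are versions with and without the WT systematic
    errors; if some WT snapshots were unreliable, each of those also
    comes in 'all WT data' and 'reliable WT only' versions."""
    if not has_wt_data:
        return ["PC only"]
    syst = ["with systematics", "without systematics"]
    if not has_unreliable_wt:
        return syst
    return [f"{s}, {d}" for s in syst
            for d in ("all WT data", "reliable WT only")]
```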
Swift collects data in WT mode while the spacecraft is slewing. For most objects these data have little value: they add perhaps 5–10 s of data at the start of the exposure, but they cannot always be analysed reliably, especially if the source is not bright, so it is better to rely purely on the standard data available once Swift has settled on the target. For GRBs, however, speed is of the essence, and those few seconds of data potentially contain interesting information. We therefore include the settling-mode data in GRB light curves, provided the source is bright enough to produce at least 15 events in the settling-mode data (if the source is fainter than this, the light curve will be background-dominated).
The settling-mode data cannot be analysed using the same software as the normal, pointing-mode data, because the spacecraft attitude information during the settling phase is less accurate than after Swift has settled and, moreover, the degree of inaccuracy changes during the slew: the position of the GRB in RA & Dec appears to change while the spacecraft slews! To analyse the settling-mode data it is therefore necessary to determine the source position in the XRT coordinate frame with high cadence. Based on a detailed analysis of the data, we find that repeated centroids are necessary until the slew rate has fallen below 1 pixel/s in the CCD x and y axes, at which point the attitude information has stabilised; from this point onwards, if necessary, an RA & Dec position derived from pointing-mode data can be used.
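The slew-rate criterion can be sketched as a simple check on consecutive centroid positions in CCD coordinates; this is an illustration of the rule, not the pipeline's implementation:

```python
def needs_recentroid(x_pix, y_pix, dt):
    """During settling, repeat the centroid while the slew rate exceeds
    1 pixel/s in either CCD axis; below that, the attitude solution is
    stable enough to fall back on a position derived from pointing-mode
    data. `x_pix` and `y_pix` are pairs of consecutive centroid
    positions; `dt` is the time between them in seconds."""
    rate_x = abs(x_pix[1] - x_pix[0]) / dt
    rate_y = abs(y_pix[1] - y_pix[0]) / dt
    return rate_x > 1.0 or rate_y > 1.0
```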
Of course, despite this difference, the overall approach to constructing a settling-mode light curve is the same as for the pointing-mode data. The settling-mode algorithm can be summarised thus: