Things to know, or to look out for, when analysing XRT data. See also the XRT threads page for step-by-step guides to analysing the data.
See also the XRT Calibration digest page
Bright optical sources can deposit significant charge in the CCD pixels, affecting the detection of X-rays. This is described on the optical loading page, where we provide graphs showing when optical loading becomes an issue for different stellar types. We also provide an optical loading calculator to predict the level of optical loading for a given star.
At the end of May 2005, the XRT CCD was hit by a micrometeorite. This has led to a small number of hot columns in PC and WT modes being vetoed in order to prevent saturation of the telemetry. Because of the way the CCD is read out, this method cannot be used for Photo-Diode mode, so this mode is currently disabled.
Unfortunately, because some of the bad columns run down the centre of the CCD, sources occasionally lie on top of, or very close to, them. In this case, the loss of counts has to be corrected for, both when extracting spectra and light-curves. Examples of such a chance alignment are shown below.
The bad columns closest to the centre of the field of view (and, therefore, the most likely to affect any observations) are located at DETX positions of 291-294 for PC (291-295 for WT) and 319-321 (both PC and WT). DETX=290 is an additional partial bad column for PC mode (between DETY=199 and 290). Note that the columns will only be obvious when plotting the image in detector coordinates, or for a single orbit at a time, since using sky coordinates will blur different pointings together, concealing the fact that some orbits may be over the bad columns while others are not.
To correct for the bad columns, exposure maps need to be created, and incorporated into the ARFs generated. This procedure is covered by the XRT exposure map thread.
For sources that lie close to the bad columns, the correction
factor which compensates for the exposure lost to those columns
is very sensitive to the exact location of the object on the CCD.
When using the standard attitude information (from the
sw*sat.fits.gz file), the XRT astrometry is accurate to
3.5′′ 90% of the time. Thus a catalogued position of
the object in question may not correspond to the correct position
in the XRT astrometric frame, and this can result in the
correction factor being wrong by up to a factor of ~2 in extreme
cases. It is therefore strongly advised that the object position
used to calculate the correction factors should be found from XRT
Photon Counting mode data taken as close as possible in time to
the WT mode data. Where no such data are available, we advise
analysing the data (from
xrtpipeline onwards) using the most accurate celestial co-ordinates (e.g. optical or
radio) available. However even in these cases, if the
object centre is within 2-3 pixels of the bad columns (this can
be determined by plotting the image in ds9) users should be aware
that the correction factor may be uncertain, and flares/drop-outs seen in the WT mode could be spurious.
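To see why the correction factor is so sensitive to position, it helps to note that moving the assumed source centre by only a couple of pixels changes the fraction of the PSF hidden by a three-pixel-wide column strip substantially. The sketch below is purely illustrative: it approximates the XRT core PSF as a 1-D Gaussian (an assumption; the real PSF is better described by a King profile, and the real correction uses the exposure map), with an assumed illustrative width.

```python
import math

def gaussian_psf_loss(offset_pix, col_width_pix=3, sigma_pix=3.8):
    """Fraction of a 1-D Gaussian PSF falling on a vetoed column strip.

    offset_pix    : distance of the source centre from the strip centre.
    col_width_pix : width of the bad-column strip (e.g. DETX 319-321).
    sigma_pix     : Gaussian width used to approximate the XRT core PSF
                    (an assumption -- the real PSF is a King profile).
    """
    a = offset_pix - col_width_pix / 2.0
    b = offset_pix + col_width_pix / 2.0

    def phi(x):
        # Cumulative distribution of a zero-mean Gaussian of width sigma_pix
        return 0.5 * (1.0 + math.erf(x / (sigma_pix * math.sqrt(2.0))))

    return phi(b) - phi(a)

# The lost fraction falls off steeply as the source moves off the columns:
for off in (0, 1, 2, 3, 5):
    print(off, round(gaussian_psf_loss(off), 3))
```

This is only meant to convey the scale of the sensitivity; the actual correction must come from the exposure-map/ARF machinery described above.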
Spacecraft movement may also affect this correction; see the Pointing stability section.
During the first ~150 sec of a snapshot the spacecraft may still
be slewing very slightly. If the source is situated near to the
bad columns it is necessary to calculate a time-dependent
correction factor to compensate for this movement. This is the
default behaviour of the xrtlccorr code, although in some cases
residual features of 5-10% of the mean flux level may still
exist. If these are seen early in the snapshot, we advise users
to examine the
sw*s.mkf.gz file in the
auxil directory of the
observation. By plotting RA or Dec against time in this file, one
can determine whether the spacecraft is moving at the time of a
flare or dip. If it is, and the source is near the bad columns
(as can be determined by viewing the WT mode image in, e.g., ds9)
the feature should be viewed with scepticism.
As of 2014 July 02, the user objects tools correctly calculate time-dependent correction factors.
The bad column correction factor is also strongly sensitive to the assumed position of the X-ray source. This was discussed above (under Bad columns).
Occasionally, when Swift is settled on a target (rather than slewing), an instability can cause the source to drift around on the detector. If the source is close to one of the bad columns, a small movement can cause the core of the PSF to become hidden by the bad columns, leading to a drop in the detected count rate. An example of such an event is shown below.
The xrtlccorr tool allows time-dependent corrections to be made, which compensate for this effect; this option is enabled by default. See the Exposure Correction thread for details.
When analysing PC-mode data, it is common practice to determine the background level using the
back command in
ximage. However, because the Swift-XRT has
a very low background level, this tool often gives biased or incorrect results for XRT images, and we advise against its use. The problems
are twofold, as described below. For estimating the background we instead suggest
examining the image and exposure map by eye, and identifying a region on the detector which is source free
and uniformly illuminated. You can then place a region on this area and measure the number of counts per pixel
within it, using the
counts command in
ximage or a region in
ds9, for example. This value can then be supplied to
ximage; the
detect command, for example, takes an option
to specify the background level across the image, in counts per pixel.
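The by-hand estimate described above is simple arithmetic. A minimal sketch, with made-up numbers, of the value you would then pass to ximage:

```python
# Estimate the mean background (counts per pixel) from a source-free,
# uniformly exposed region, as measured with e.g. the "counts" command
# in ximage or a region in ds9. The numbers below are illustrative only.
def background_per_pixel(total_counts, region_area_pix):
    """Mean background in counts/pixel from a source-free region."""
    if region_area_pix <= 0:
        raise ValueError("region area must be positive")
    return total_counts / region_area_pix

# e.g. 57 counts in a 100x100-pixel source-free box:
bkg = background_per_pixel(57, 100 * 100)
print(f"{bkg:.2e} counts/pixel")
```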
The first issue with the
back command is that at present it does not
make use of the exposure map: the effects of vignetting, bad columns, and the field of view
are therefore not included in the background calculation. Additionally, any cell containing zero counts
(the back command splits the image into a series of cells and measures the number of counts in each)
is excluded from the estimation of the mean background. This has the advantage of ignoring parts of the image outside
the field of view; however, it also means that any fully exposed portions of the image which contain no events
are ignored, biasing the measured background.
The second issue is with the sigma clipping method employed by the back command.
Background cells containing a number of counts more than 3-σ from the mean value of all cells
are discarded and the mean level recalculated. However, whether a cell is more than 3-σ from
the mean is determined using Gaussian statistics, which is not appropriate for the low-background Swift-XRT
data, where Poisson statistics should be used. The practical upshot of this is that the 3-σ lower limit, below which
cells are ignored, is frequently negative, e.g. cells containing fewer than -2 counts are excluded; this of course
is impossible in Poissonian datasets such as XRT images, and therefore in reality the sigma clipping only
rejects cells lying above the mean, again biasing the background.
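The negative lower bound is easy to demonstrate with XRT-like numbers. The sketch below computes the Gaussian 3-σ clipping bounds for a low mean counts-per-cell, and also includes a Poisson-aware tail probability as an illustrative alternative (an assumption of ours, not what ximage's back command actually does):

```python
import math

def gaussian_clip_bounds(mean_counts, nsigma=3):
    """Gaussian n-sigma clipping bounds for a given mean counts-per-cell."""
    sigma = math.sqrt(mean_counts)  # Poisson variance equals the mean
    return mean_counts - nsigma * sigma, mean_counts + nsigma * sigma

lo, hi = gaussian_clip_bounds(0.5)  # an XRT-like low cell mean
# lo is negative, so no cell can ever fall below it: the clipping only
# ever rejects cells above the mean, biasing the estimate downwards.
print(lo, hi)

def poisson_upper_tail(k, mean):
    """P(X >= k) for X ~ Poisson(mean): a Poisson-aware way to judge
    whether a cell count is improbably high (illustrative alternative,
    not the algorithm used by ximage)."""
    return 1.0 - sum(math.exp(-mean) * mean**i / math.factorial(i)
                     for i in range(k))
```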
There are three types of Swift attitude file, with between one and three of these being available for any given observation:
The sat file contains the attitude determined from the spacecraft
star trackers. The pat file is almost identical to the sat file,
but the task
attjumpcorr has been applied to it. From
the point of view of XRT analysis, these files are
indistinguishable. The attitude in the uat file has been
determined using the UVOT as a star tracker.
There are a number of tasks which use an attitude file (e.g.,
pointxform). It is essential that the
attitude file used by such tasks is the same as that used to create
the corresponding event list. In principle, the file used by
xrtpipeline can be determined by examining the ATTFLAG
keyword in the EVENTS extension of the event list.
NB This must be read from the EVENTS extension, since this is the only
one to be altered when
xrtpipeline is run.
However, a significant number of datasets taken before
August 2007 have incorrectly formatted ATTFLAG values -- these have been
corrected in the UKSSDC archive. When running
on data from this time frame which have been downloaded from the GSFC
SDC, versions of HEASOFT more recent than 6.15.1 will correct the issue.
Pile-up in CCD cameras occurs when there is a significant probability that two or more photons registering within a given CCD frame will have overlapping charge distributions. This can lead to a spectral distortion if the resulting distribution is recognised as a single event whose energy is the sum of the overlapping events (i.e., two or more soft X-ray photons can be registered as a single higher-energy photon), or a flux loss if the charge distribution has a pattern, or grade, outside that classified as a true X-ray event (0-12 for Swift Photon Counting mode; 0-2 for Windowed Timing).
For basic details on how to estimate the level of pile-up, and how to correct for it, see the pile-up analysis thread.
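The scale of the effect can be gauged with a simple Poisson estimate. The sketch below, assuming Poisson photon arrivals and the 2.507 s PC-mode frame time, gives the fraction of frames containing two or more source photons. It ignores where on the CCD the charge lands (the PSF spreads counts over many pixels), so it only indicates roughly when pile-up becomes worth checking; use the pile-up analysis thread for the real procedure.

```python
import math

def multi_photon_frame_fraction(rate_cps, frame_time_s=2.507):
    """Fraction of CCD frames containing >= 2 source photons, assuming
    Poisson arrivals. frame_time_s=2.507 is the PC-mode frame duration.
    This ignores the spatial distribution of charge on the CCD, so it
    is only a rough indicator of when pile-up may matter."""
    mu = rate_cps * frame_time_s          # mean photons per frame
    return 1.0 - math.exp(-mu) * (1.0 + mu)

for rate in (0.1, 0.5, 1.0):              # count rates in count/s
    print(rate, round(multi_photon_frame_fraction(rate), 3))
```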
Useful papers containing details about Swift pile-up analysis:
There are a number of issues of which users should be aware when it comes to data screening.
Because of the failure of the Thermo-Electric Cooler power supply, the XRT
CCD routinely operates between about -70 and -50C. By default, xrtpipeline
excludes any data which were obtained at temperatures higher than -47C (from
v1.2 of the software onwards). This can be changed either by including a GTI
expression directly when running the pipeline (this method is needed to
include data taken at higher temperatures which would usually be thrown away, e.g.
xrtpipeline gtiexpr="CCDTemp>=-102&&CCDTemp<=-45"), or by later filtering
within XSELECT (e.g.
select mkf mkf_dir=./ mkf_name=sw[obsid]s.mkf "CCDTemp<-55").
While less stringent filtering on the temperature will sometimes lead to more exposure time, there is the possibility of more hot pixels appearing within the data.
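The temperature screening amounts to building good-time intervals from the CCDTemp column of the mkf file. A minimal sketch of that logic, assuming regularly sampled (time, temperature) pairs; the real filtering is done by xrtpipeline or XSELECT as above:

```python
def temperature_gtis(samples, max_temp=-47.0):
    """Build good-time intervals from (time, ccd_temp) samples, keeping
    stretches with ccd_temp <= max_temp. The -47 C default mirrors
    xrtpipeline's cut. A sketch of the screening logic only -- the
    real filtering operates on the sw*s.mkf file."""
    gtis, start = [], None
    for t, temp in samples:
        if temp <= max_temp and start is None:
            start = t                       # entering a good stretch
        elif temp > max_temp and start is not None:
            gtis.append((start, t))         # leaving a good stretch
            start = None
    if start is not None:
        gtis.append((start, samples[-1][0]))
    return gtis

samples = [(0, -60), (10, -55), (20, -40), (30, -52), (40, -50)]
print(temperature_gtis(samples))
```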
When the angle between the XRT and the limb of the Earth is small, the average background level is higher. An extreme example is shown below.
Velocity aiding was switched off onboard the spacecraft at 21:57 UT on 31st
January 2005. Data obtained before this date should include the command
aberration = yes when running
xrtpipeline (the default for all
recent versions of the pipeline is
aberration = no).
When the CCD temperature is around -52C, certain hot pixels become active. Although many are masked out, this can lead to "mode-switching"; this occurs when the count-rate within the centre of the field of view is close to the PC/WT switch point. This means that, even if you expect your object to be in PC mode, a substantial portion can end up in WT event files. Exposure time is lost with each mode change. This is annoying, but there is nothing that the user can do about it. The science planners at PSU do a very good job at keeping the temperature down, but sometimes it's not possible (or a new burst turns out to be in a part of the sky which causes the XRT to become particularly hot).
Prior to version 3.7 of the software (HEASoft 6.10), extractor/xselect ignored the TIMEPIXR keyword which defines whether the TIME keyword refers to the start (TIMEPIXR=0), middle (TIMEPIXR=0.5) or end (TIMEPIXR=1) of the frame. Instead, it always assumed that the time corresponded to the centre.
The values of the TIME column of PC event files actually refer to the beginning of the CCD frame (duration=2.507s) - i.e., they have the TIMEPIXR keyword of 0. For software earlier than version 3.7, light-curves produced via xselect/extractor will have the keyword incorrectly set to 0.5. This can be changed (for a file called lightcurve.lc) using the following command:
fparkey 0. lightcurve.lc+1 TIMEPIXR
This corrected file can then be analysed as usual within xronos/lcurve, which does take account of the TIMEPIXR keyword.
This is only the case for PC data, however. The event time values for WT data correspond to the middle of the bin, meaning that the extractor/xselect bug for these earlier software versions is not a problem.
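The TIMEPIXR convention amounts to a simple shift. The sketch below, an illustration of the keyword's meaning rather than a substitute for fixing the keyword in the FITS file, converts times to the mid-frame convention:

```python
def to_mid_frame(times, timepixr, timedel):
    """Shift times so they refer to the middle of the CCD frame.
    timepixr: 0 (frame start), 0.5 (middle) or 1 (end), per the
    TIMEPIXR keyword; timedel is the frame duration (2.507 s for PC)."""
    return [t + (0.5 - timepixr) * timedel for t in times]

# PC-mode event times are frame-start times (TIMEPIXR=0):
print(to_mid_frame([100.0], timepixr=0.0, timedel=2.507))
```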
Extractor/xselect now checks for the TIMEPIXR keyword, with an event now being defined as good if the middle of the frame lies within the Good Time Interval (GTI). However, the GTI START and STOP times (and event file TSTART and TSTOP keywords) produced by extractor are not set to the frame boundaries but fixed at the user-specified values; this means that, for short exposures the exposure times may be significantly incorrect. It also means that events can be included in an extracted dataset which have TIME values which lie outside the GTI range. This can cause problems in downstream software such as xrtexpomap. At present the only solution to this is to ensure that your time filter values match up with frame boundaries.
A new version of extractor has been produced which should be included in the next release of the software. This has a (hidden) parameter: adjustgti. If set to "yes" this will automatically cause user-supplied time-filters to be reset to frame boundaries (without changing which events are included), which solves the problem.
TIMEPIXR is incorrectly set to 0 in the WT event lists; while the
extractor/xselect bug cancelled out this error in earlier versions of
the software, this is no longer the case. The keyword will be corrected
in a future release of the software, however will not immediately be retroactively
applied to the archival Swift data, or to new Quick Look data. Users
should issue the command
fparkey 0.5 wtmodeeventfile.evt+1 TIMEPIXR
for any WT mode event list they use. Note that we always recommend that users run xrtpipeline (see analysis thread) themselves, rather than using the cleaned event lists taken directly from the archive.
This section explains how to check for a periodicity in XRT data.
The times listed in an event file occur at intervals of the fundamental time resolution (TIMEDEL keyword) for the mode being analysed. When binned light curves are created with arbitrary time bins close to small number multiples of TIMEDEL, artefacts can sometimes be seen in subsequent powerspectral analysis of the data.
A common method to deal with this is to randomise the event times over the timebin interval (TIMEDEL) before the data are binned. The Swift-XRT software does not currently perform this randomisation, but it can be achieved with the ftcalc ftool and its built-in RANDOM() function, using the expression 'TIME-TIMEPIXR*TIMEDEL+RANDOM()*TIMEDEL'. For example,
ftcalc infile=input.evt+1 outfile=output.evt column=TIME expression='TIME-TIMEPIXR*TIMEDEL+RANDOM()*TIMEDEL'
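The same expression can be mirrored in Python to see exactly what the randomisation does; this is an illustration only, not a substitute for operating on the FITS file with ftcalc:

```python
import random

def randomise_event_times(times, timepixr, timedel):
    """Mirror of the ftcalc expression
    'TIME-TIMEPIXR*TIMEDEL+RANDOM()*TIMEDEL': spread each event time
    uniformly over its frame before binning, suppressing artefacts in
    power spectra near small multiples of TIMEDEL."""
    return [t - timepixr * timedel + random.random() * timedel
            for t in times]

# Two illustrative PC-mode frame-start times (TIMEPIXR=0, TIMEDEL=2.507 s):
times = randomise_event_times([100.0, 102.507], timepixr=0.0, timedel=2.507)
```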
Up to HEASOFT v6.16, there are known issues with the
xrtgrblc ftool: by default it does not correct for pile-up below
300 count s-1 in WT mode (correction down to ~100 count s-1 is needed; this can be enabled by including
wtreglist=2 on the command line), and the
exposure corrections do not have the necessary time
resolution (see Pointing stability for details).
However, the defaults in the latest version (HEASOFT 6.17) have been updated so that the pile-up limits are more suitable, with 100 count s-1 being the lower limit used for WT data, and the exposure corrections are time-dependent.
For information on how to create XRT light curves by hand, see the XRT analysis thread. Alternatively, the online XRT product generator can be used.
There are 3 "hot spots" or "burn marks" in the centre of the XRT CCD. These
are areas of enhanced dark current due to focussing of X-rays during the
ground calibration before launch. These would not be visible below about -90C,
but, because the CCD is not as cool as expected, the spots were sometimes
visible in earlier data and could be mistaken for a GRB afterglow if the user
is not vigilant! The positions are known in detector coordinates. These are
labelled in the image below. Filtering out the lowest energies tends to make
these spots disappear. Do this by using the pha_cut command in XSELECT (e.g.,
filter pha_cut 30 1000 filters between 0.3 and 10 keV).
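The channel/energy relation implied by the example (channel 30 corresponds to 0.3 keV, i.e. 10 eV per channel) can be wrapped in a small helper; the 10 eV/channel scale is taken from the example itself and is only assumed to hold across the band:

```python
def pha_cut_for_band(e_lo_kev, e_hi_kev, ev_per_channel=10.0):
    """Convert an energy band to XSELECT pha_cut channel numbers,
    assuming 10 eV per channel as implied by the example in the text
    (channels 30-1000 <-> 0.3-10 keV)."""
    def to_chan(e_kev):
        return int(round(e_kev * 1000.0 / ev_per_channel))
    return to_chan(e_lo_kev), to_chan(e_hi_kev)

lo, hi = pha_cut_for_band(0.3, 10.0)
print(f"filter pha_cut {lo} {hi}")
```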
These areas are now masked out as bad pixels, so should seldom be a problem.
For some of the early (start of 2005) Swift data, the RA and Dec in the headers of the raw
files are set to zero or 90, rather than the position of the target. This
causes the pipeline to crash in normal circumstances. To side-step this
problem (very rare now), use the dummy attitude file sw00000000000sat.fits.gz.
The problem is believed to be fixed for all recently-processed versions of the data.
This event can only occur if the XRT is in Manual State, rather than its usual Auto State; some of the earlier (before April 2005) datasets were interrupted in this manner. If this happens, the event-list for the original target may also include data for the slew to the burst and subsequent snapshots. This complicates matters, since the RA and Dec have changed part way through the observation. The header files will include the coordinates of the original target, rather than those for the burst, thus processing the data in the normal way will show a field of view not containing the GRB.
To fix this problem, RA_PNT and DEC_PNT in the xhd.hk file must be changed to the required position: in the
sw<obsid>/xrt/hk directory there will be a file called
sw<obsid>xhd.hk, within which can be found keywords called
RA_PNT and DEC_PNT (in 2 separate extensions).
These should be changed from the values of
the original target to those of the GRB of interest using the fparkey tool.
For example, if the required RA and Dec are 278.103333 and 42.365000 respectively, use the commands (obviously including the correct <obsid>):
fparkey 278.103333 sw<obsid>xhd.hk+0 RA_PNT
fparkey 278.103333 sw<obsid>xhd.hk+1 RA_PNT
fparkey 42.365000 sw<obsid>xhd.hk+0 DEC_PNT
fparkey 42.365000 sw<obsid>xhd.hk+1 DEC_PNT
xrtpipeline can then be run as normal. Following this, the cleaned
event-list should be read into XSELECT as normal. The housekeeping file
also needs to be read in, using the command
read hk; XSELECT then prompts for
the location of the housekeeping directory and the name of the HK file. The
select hk
command can then be used to choose the relevant RA and Dec. For the example
here, this would be:
select hk "RA>260&&Dec<50"
xrthotpix failed with older versions of the Swift software,
giving the message:
ERROR: Operation not permitted Task xrthotpix 0.1.5 terminating with status 1
If this happens, the cleaned event-list will not be generated for the pointing observation.
The work-around is to include impfac=2000 on the command
line. This parameter is used to compute the background level. The problem has
been fixed for v1.2 (and later versions) of the software.