The swifttools.ukssdc.data.GRB module

Jupyter notebook version of this page

Summary

This module provides direct access to the GRB products created at the UKSSDC, as well as the means to manipulate them (e.g. rebinning light curves). Being part of the swifttools.ukssdc.data module, it is designed for use where you already know which GRB(s) you want to get data for. If you don't know that - for example, you want to select GRBs based on their characteristics - you will need the GRB query module.

Tip You can read this page straight through, and you may want to, but there is a lot in it. So if you want a full overview, read through. If you want to know how to do something specific, I'd advise you to read the introduction, and then use the contents links to jump straight to the section you are interested in.

You will notice that the functions in this module all begin with get (whereas the parent module used download). As you'll realise pretty quickly, this is because this module lets you do more than just download data files: you can, for example, pull the data into variables instead.

An important point to note is that this documentation is about the API, not the GRB products. So below we will talk about things like WT mode systematic errors, and unreliable light curve data, without any real explanation; you can read the documentation for the products if you don't understand.

OK, first let's import the module, using a short form to save our fingers:

import swifttools.ukssdc.data.GRB as udg

After a short introduction, this page is split into sections according to the product type as below.

Page contents


Introduction and common behaviour

Before we dive into things, let me introduce a couple of important concepts which are common to everything that follows.

Firstly, you specify which GRB(s) you are interested in either by their name, or their targetID. You do this (prepare for a shock) by supplying the GRBName or targetID argument to the relevant function: all of the data-access functions in this module take these arguments. You must supply one or the other though; if you supply both GRBName and targetID you will get an error. These arguments can be either a single value (e.g. GRBName="GRB 060729") or a list/tuple (e.g. targetID=[282445, 221755]) if you want more than one GRB. Hold that thought for just a moment, because we'll come back to it in a tick.

The second common concept is how the data you request are stored. This is controlled by two arguments, which again are common to all of the data-access functions in this module. These are:

  - saveData - whether the data should be saved to disk as files (default: True).
  - returnData - whether the function should return the data to you in a variable (default: False).

You can set both to True to return and save. (You can also set both to False if you like wasting time, CPU and making TCP packets feel that they serve no real purpose in life, but why would you do that?) If you are using saveData=True you may also need the clobber parameter. This specifies whether existing files should be overwritten, and is False by default.
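
For instance, a minimal sketch (getLightCurves() is introduced properly below, and the destination directory is purely illustrative):

# Save the files to disk AND return the data, overwriting anything
# left over from an earlier run:
lcData = udg.getLightCurves(GRBName="GRB 060729",
                            saveData=True,
                            returnData=True,
                            clobber=True,
                            destDir='/tmp/APIDemo_intro')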

You should still be holding onto a thought… the fact that you can supply either a single value or a list (or tuple) to identify the GRBs you want. What you do here affects how the data are stored as well.

If you gave a single value, then saveData=True will cause the data for your object to be downloaded to the specified directory. returnData=True will cause the function to return a dict containing your data.

If you supplied a list of objects then of course you're going to get multiple datasets. saveData=True will (by default) create a subdirectory per GRB to stick the data in. returnData=True will return a dict with an entry per GRB; that entry will be the dict containing the data.

But, you may ask, what will the names of these subdirectories, or the keys of these dicts, be? They will be whatever identifier you used to specify the GRBs you wanted. If you used the GRBName argument, the directories and dict keys will be the GRB names; if you used targetID then they will be, er, the target IDs.

That may not make so much sense in the abstract, but all will become clear as we see some demos. Oh, but just to further confuse you, please do note that a list (or tuple) with one entry is still a list (unless it's a tuple), and so the returned data will still have the subdirectories/extra dict layer that you get when you supply a list (or tuple), even though there is only one entry in it. Confused? Me too, but it will make sense when we start playing, so let's do so.
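
As a quick taster of those shapes (real demos follow):

# A single name gives one data dict; a list gives a dict keyed by the
# identifier we supplied (GRB names here):
one  = udg.getLightCurves(GRBName="GRB 060729",
                          saveData=False, returnData=True)
many = udg.getLightCurves(GRBName=["GRB 060729", "GRB 080319B"],
                          saveData=False, returnData=True)

list(many.keys())   # -> ['GRB 060729', 'GRB 080319B']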

Light curves

There are basically three ways we can play with light curve data; let's start with just downloading them straight from the UKSSDC website.

Saving directly to disk

Our function for getting light curves is cunningly named getLightCurves(). As I'm sure you remember from above, if we want to save the data to disk, we supply saveData=True (this is the default but I think it's helpful to be explicit). Let's try that:

udg.getLightCurves(GRBName="GRB 060729",
                   saveData=True,
                   destDir='/tmp/APIDemo_GRBLC1',
                   silent=False)
Resolved `GRB 060729` as `221755`.
Making directory /tmp/APIDemo_GRBLC1

Downloading light curves:   0%|          | 0/5 [00:00<?, ?files/s]

In the above I turned silent off just to show you that something was happening; feel free to go and look at '/tmp/APIDemo_GRBLC1' to see what was downloaded.

getLightCurves() mainly just wraps the common getLightCurve() function, so see its documentation for a description of the various arguments, but there is one extra argument worth mentioning here: subDirs.

This variable, a boolean which defaults to True, only matters if you supplied a list/tuple of GRBs. If it is True then a subdirectory will be created for each GRB (as I mentioned above). However, you can set it to False if you want to put all the data in the same directory; in this case the GRB name or targetID (depending on which argument you called getLightCurves() with) will be prepended to the file names.

Let's demonstrate this with a couple of simple downloads:

lcData = udg.getLightCurves(GRBName=("GRB 060729","GRB 080319B"),
                            destDir='/tmp/APIDemo_GRBLC2',
                            silent=False
                            )
Resolved `GRB 060729` as `221755`.
Resolved `GRB 080319B` as `306757`.
Making directory /tmp/APIDemo_GRBLC2
Making directory /tmp/APIDemo_GRBLC2/GRB 060729

Downloading light curves:   0%|          | 0/5 [00:00<?, ?files/s]

Making directory /tmp/APIDemo_GRBLC2/GRB 080319B

Downloading light curves:   0%|          | 0/4 [00:00<?, ?files/s]

And as you can see, and hopefully expected, the GRBs were each saved into their own directory. Now let's do exactly the same thing, but disable the subdirs:

lcData = udg.getLightCurves(GRBName=("GRB 060729","GRB 080319B"),
                            destDir='/tmp/APIDemo_GRBLC3',
                            subDirs=False,
                            silent=False,
                            verbose=True
                            )
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['OK', 'targetID', 'APIVersion'])
Resolved `GRB 060729` as `221755`.
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['OK', 'targetID', 'APIVersion'])
Resolved `GRB 080319B` as `306757`.
Making directory /tmp/APIDemo_GRBLC3
Getting GRB 060729
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['Datasets', 'WT_incbadData', 'WTHard_incbadData', 'WTSoft_incbadData', 'WTHR_incbadData', 'PC_incbadData', 'PCUL_incbadData', 'PCHard_incbadData', 'PCSoft_incbadData', 'PCHR_incbadData', 'OK', 'Binning', 'TimeFormat', 'T0', 'APIVersion'])
Checking returned data for required content.

Downloading light curves:   0%|          | 0/5 [00:00<?, ?files/s]

Saving /tmp/APIDemo_GRBLC3/GRB 060729_WTCURVE.qdp
Saving /tmp/APIDemo_GRBLC3/GRB 060729_WTHR.qdp
Saving /tmp/APIDemo_GRBLC3/GRB 060729_PCCURVE_incbad.qdp
Saving /tmp/APIDemo_GRBLC3/GRB 060729_PCUL_incbad.qdp
Saving /tmp/APIDemo_GRBLC3/GRB 060729_PCHR_incbad.qdp
Getting GRB 080319B
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['Datasets', 'WT_incbadData', 'WTHard_incbadData', 'WTSoft_incbadData', 'WTHR_incbadData', 'PC_incbadData', 'PCHard_incbadData', 'PCSoft_incbadData', 'PCHR_incbadData', 'OK', 'Binning', 'TimeFormat', 'T0', 'APIVersion'])
Checking returned data for required content.

Downloading light curves:   0%|          | 0/4 [00:00<?, ?files/s]

Saving /tmp/APIDemo_GRBLC3/GRB 080319B_WTCURVE.qdp
Saving /tmp/APIDemo_GRBLC3/GRB 080319B_WTHR.qdp
Saving /tmp/APIDemo_GRBLC3/GRB 080319B_PCCURVE_incbad.qdp
Saving /tmp/APIDemo_GRBLC3/GRB 080319B_PCHR_incbad.qdp

I turned verbose on as well so that you could see what was happening: the data all got saved into the same directory, with the GRB name prepended to the files.

In these examples I've given the GRB name, but the targetID is an option if you happen to know it:

lcData = udg.getLightCurves(targetID=(221755, 306757),
                            destDir='/tmp/APIDemo_GRBLC4',
                            silent=False,
                            verbose=True
                            )
Making directory /tmp/APIDemo_GRBLC4
Getting 221755
Making directory /tmp/APIDemo_GRBLC4/221755
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['Datasets', 'WT_incbadData', 'WTHard_incbadData', 'WTSoft_incbadData', 'WTHR_incbadData', 'PC_incbadData', 'PCUL_incbadData', 'PCHard_incbadData', 'PCSoft_incbadData', 'PCHR_incbadData', 'OK', 'Binning', 'TimeFormat', 'T0', 'APIVersion'])
Checking returned data for required content.

Downloading light curves:   0%|          | 0/5 [00:00<?, ?files/s]

Saving /tmp/APIDemo_GRBLC4/221755/WTCURVE.qdp
Saving /tmp/APIDemo_GRBLC4/221755/WTHR.qdp
Saving /tmp/APIDemo_GRBLC4/221755/PCCURVE_incbad.qdp
Saving /tmp/APIDemo_GRBLC4/221755/PCUL_incbad.qdp
Saving /tmp/APIDemo_GRBLC4/221755/PCHR_incbad.qdp
Getting 306757
Making directory /tmp/APIDemo_GRBLC4/306757
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['Datasets', 'WT_incbadData', 'WTHard_incbadData', 'WTSoft_incbadData', 'WTHR_incbadData', 'PC_incbadData', 'PCHard_incbadData', 'PCSoft_incbadData', 'PCHR_incbadData', 'OK', 'Binning', 'TimeFormat', 'T0', 'APIVersion'])
Checking returned data for required content.

Downloading light curves:   0%|          | 0/4 [00:00<?, ?files/s]

Saving /tmp/APIDemo_GRBLC4/306757/WTCURVE.qdp
Saving /tmp/APIDemo_GRBLC4/306757/WTHR.qdp
Saving /tmp/APIDemo_GRBLC4/306757/PCCURVE_incbad.qdp
Saving /tmp/APIDemo_GRBLC4/306757/PCHR_incbad.qdp

I trust this hasn't surprised you.

As a final demonstration, let's illustrate the point that a tuple with one entry is still a tuple, and so the way the data are saved is set accordingly:

lcData = udg.getLightCurves(GRBName=("GRB 060729",),
                            destDir='/tmp/APIDemo_GRBLC5',
                            silent=False,
                            verbose=True
                            )
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['OK', 'targetID', 'APIVersion'])
Resolved `GRB 060729` as `221755`.
Making directory /tmp/APIDemo_GRBLC5
Getting GRB 060729
Making directory /tmp/APIDemo_GRBLC5/GRB 060729
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['Datasets', 'WT_incbadData', 'WTHard_incbadData', 'WTSoft_incbadData', 'WTHR_incbadData', 'PC_incbadData', 'PCUL_incbadData', 'PCHard_incbadData', 'PCSoft_incbadData', 'PCHR_incbadData', 'OK', 'Binning', 'TimeFormat', 'T0', 'APIVersion'])
Checking returned data for required content.

Downloading light curves:   0%|          | 0/5 [00:00<?, ?files/s]

Saving /tmp/APIDemo_GRBLC5/GRB 060729/WTCURVE.qdp
Saving /tmp/APIDemo_GRBLC5/GRB 060729/WTHR.qdp
Saving /tmp/APIDemo_GRBLC5/GRB 060729/PCCURVE_incbad.qdp
Saving /tmp/APIDemo_GRBLC5/GRB 060729/PCUL_incbad.qdp
Saving /tmp/APIDemo_GRBLC5/GRB 060729/PCHR_incbad.qdp

As you can see here, we only got a single GRB, but because we supplied its name as a tuple, not as a string, the data were saved into a subdirectory just as if we'd requested multiple GRBs.

Storing the light curves in variables

Let's move on now to the returnData=True case. As I told you earlier, this will return a dict containing the data. All light curves returned by anything in the swifttools.ukssdc module have a common structure which I call a "light curve dict", and you can read about this here.

There are no special parameters related to returning data, so let's jump straight in with some demonstrations similar to those above.

lcData = udg.getLightCurves(GRBName="GRB 220427A",
                            incbad="both",
                            nosys="both",
                            saveData=False,
                            returnData=True)

I don't recommend simply printing lcData straight: it's quite big. If you've read about the light curve dict then you may have an idea what to expect, but it's helpful to explore it anyway, so let's do that.

list(lcData.keys())
['WT',
 'WTHard',
 'WTSoft',
 'WTHR',
 'WT_incbad',
 'WTHard_incbad',
 'WTSoft_incbad',
 'WTHR_incbad',
 'WT_nosys',
 'WTHard_nosys',
 'WTSoft_nosys',
 'WTHR_nosys',
 'WT_nosys_incbad',
 'WTHard_nosys_incbad',
 'WTSoft_nosys_incbad',
 'WTHR_nosys_incbad',
 'PC',
 'PCHard',
 'PCSoft',
 'PCHR',
 'PC_incbad',
 'PCUL_incbad',
 'PCHard_incbad',
 'PCSoft_incbad',
 'PCHR_incbad',
 'PC_nosys',
 'PCHard_nosys',
 'PCSoft_nosys',
 'PCHR_nosys',
 'PC_nosys_incbad',
 'PCUL_nosys_incbad',
 'PCHard_nosys_incbad',
 'PCSoft_nosys_incbad',
 'PCHR_nosys_incbad',
 'Datasets',
 'Binning',
 'TimeFormat',
 'T0',
 'URLs']

(I used list() above because Jupyter renders it a bit more nicely than the dict_keys object). There is a lot to take in there, but most of those entries are just light curve data.

Let's first just check the keys that gave me some information about the light curve generically:

print(f"Binning: {lcData['Binning']}")
print(f"TimeFormat: {lcData['TimeFormat']}")
print(f"T0: {lcData['T0']}")
Binning: Counts
TimeFormat: MET
T0: 672786064

So we know that the light curves are binned by the number of counts per bin (no surprise, this is standard for GRBs), the time format is Swift Mission Elapsed Time, and the times are in seconds since MET 672786064 (again, see the light curve dict documentation for more info).
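
So, for example, if you want absolute MET values you can combine 'T0' with a curve's 'Time' column; a minimal sketch, assuming (as we'll see below) that each dataset is a pandas DataFrame:

# Times in the light curve are seconds since T0, and T0 is itself in MET
# seconds, so the absolute MET of each PC-mode bin is simply:
metTimes = lcData['T0'] + lcData['PC']['Time']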

The 'URLs' key lists the URLs of the individual files making up the light curve, by dataset. I'm not going to print it here because it's long (and boring) but you can explore it if you want.
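
For example, assuming (as the wording above suggests) that it is keyed by dataset name, you could peek at a single entry rather than dumping the whole thing:

# Hypothetical peek at one entry; this assumes 'URLs' is keyed by dataset name.
print(lcData['URLs']['PC'])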

The 'Datasets' key is really the crucial one for exploring the data: it's essentially an index of all the datasets we obtained, in the form of a list. Let's take a look:

lcData['Datasets']
['WT',
 'WTHard',
 'WTSoft',
 'WTHR',
 'WT_incbad',
 'WTHard_incbad',
 'WTSoft_incbad',
 'WTHR_incbad',
 'WT_nosys',
 'WTHard_nosys',
 'WTSoft_nosys',
 'WTHR_nosys',
 'WT_nosys_incbad',
 'WTHard_nosys_incbad',
 'WTSoft_nosys_incbad',
 'WTHR_nosys_incbad',
 'PC',
 'PCHard',
 'PCSoft',
 'PCHR',
 'PC_incbad',
 'PCUL_incbad',
 'PCHard_incbad',
 'PCSoft_incbad',
 'PCHR_incbad',
 'PC_nosys',
 'PCHard_nosys',
 'PCSoft_nosys',
 'PCHR_nosys',
 'PC_nosys_incbad',
 'PCUL_nosys_incbad',
 'PCHard_nosys_incbad',
 'PCSoft_nosys_incbad',
 'PCHR_nosys_incbad']

As a quick aside, you may wonder why we bother with this list: the reason is that we can't just step over the keys in lcData - 'Binning', for example, is not a light curve - and maybe in the future we'll want to add other things, so 'Datasets' is handy.
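
For instance, it lets you iterate over just the light curves safely; a quick sketch:

# Loop over only the actual datasets, skipping metadata keys like 'Binning':
for ds in lcData['Datasets']:
    print(f"{ds}: {len(lcData[ds])} rows")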

There are a lot of datasets in this example, because we set both incbad and nosys to "both", so we got all data with/out missing centroids and with/out WT-mode systematics (if you don't know what I'm talking about, see the light curve documentation).

The contents of the datasets were discussed in the light curve dict documentation (I'm sounding like a broken record, I know), so I'm not going to spend time on them here, except to show you one entry as an example:

lcData['PC']
Time TimePos TimeNeg Rate RatePos RateNeg FracExp BGrate BGerr CorrFact CtsInSrc BGInSrc Exposure Sigma SNR
0 231.329 9.456 -10.603 2.492634 0.547598 -0.547598 1.000000 0.006904 0.003088 2.396688 21.0 0.138491 20.058508 336.828549 4.551939
1 254.712 8.638 -13.928 2.458382 0.502403 -0.502403 1.000000 0.001227 0.001227 2.314148 24.0 0.027698 22.565840 865.481752 4.893247
2 272.685 10.724 -9.335 2.265870 0.510246 -0.510246 1.000000 0.006904 0.003088 2.288348 20.0 0.138491 20.058540 320.682615 4.440743
3 295.985 12.497 -12.577 1.890566 0.427545 -0.427545 1.000000 0.008838 0.003125 2.396673 20.0 0.221586 25.073120 252.461207 4.421909
4 318.394 10.146 -9.912 2.501020 0.546498 -0.546498 1.000000 0.001381 0.001381 2.392053 21.0 0.027698 20.058560 757.171533 4.576448
5 339.776 11.330 -11.236 2.218482 0.485411 -0.485411 1.000000 0.002455 0.001736 2.390206 21.0 0.055396 22.565840 534.694019 4.570320
6 358.720 9.937 -7.614 2.755213 0.617819 -0.617819 1.000000 0.003156 0.002232 2.424578 20.0 0.055396 17.551180 509.165055 4.459578
7 382.023 11.708 -13.365 2.049209 0.438572 -0.438572 1.000000 0.003314 0.001913 2.344316 22.0 0.083095 25.073160 456.842799 4.672456
8 414.364 14.469 -20.633 1.477339 0.316180 -0.316180 1.000000 0.002367 0.001367 2.366125 22.0 0.083095 35.102380 456.842799 4.672456
9 446.410 15.019 -17.577 1.561949 0.333009 -0.333009 1.000000 0.000000 0.000000 2.314176 22.0 0.000000 32.595080 1000.000000 4.690416
10 488.377 20.690 -26.949 1.088421 0.233243 -0.233243 1.000000 0.002326 0.001163 2.368805 22.0 0.110793 47.638980 395.137470 4.666469
11 542.713 21.515 -33.646 0.935390 0.201224 -0.201224 1.000000 0.003515 0.001329 2.366170 22.0 0.193888 55.160920 297.561957 4.648511
12 588.681 25.693 -24.453 0.972465 0.213350 -0.213350 1.000000 0.002209 0.001105 2.334482 21.0 0.110793 50.146260 377.085766 4.558066
13 641.368 25.660 -26.994 0.877843 0.198803 -0.198803 1.000000 0.004734 0.001578 2.340248 20.0 0.249284 52.653580 237.689376 4.415632
14 679.006 8.081 -11.978 2.458033 0.539997 -0.539997 1.000000 0.006904 0.003088 2.363423 21.0 0.138491 20.058540 336.828549 4.551939
15 720.041 22.207 -32.954 0.883485 0.193569 -0.193569 1.000000 0.001506 0.000870 2.329874 21.0 0.083095 55.160840 435.998488 4.564193
16 774.036 23.372 -31.789 0.824176 0.185594 -0.185594 1.000000 0.002511 0.001123 2.288964 20.0 0.138491 55.160900 320.682615 4.440743
17 821.775 18.257 -24.367 1.086019 0.244213 -0.244213 1.000000 0.002599 0.001300 2.327431 20.0 0.110793 42.624280 359.034063 4.447021
18 881.398 36.361 -41.366 0.596078 0.134801 -0.134801 1.000000 0.002851 0.001008 2.342510 20.0 0.221586 77.726680 252.461207 4.421909
19 960.261 40.240 -42.502 0.604748 0.133034 -0.133034 1.000000 0.002009 0.000820 2.401749 21.0 0.166189 82.741260 307.072742 4.545812
20 1038.422 44.819 -37.922 0.559252 0.126833 -0.126833 1.000000 0.003348 0.001059 2.346153 20.0 0.276982 82.741240 225.175713 4.409355
21 1130.873 32.603 -47.631 0.606471 0.133055 -0.133055 1.000000 0.001381 0.000690 2.329414 21.0 0.110793 80.233960 377.085766 4.558066
22 1195.819 22.817 -32.344 0.852082 0.192695 -0.192695 1.000000 0.004017 0.001420 2.376406 20.0 0.221586 55.160800 252.461207 4.421909
23 1252.063 34.271 -33.427 0.698409 0.156609 -0.156609 1.000000 0.000818 0.000579 2.370587 20.0 0.055396 67.697320 509.165055 4.459578
24 1332.788 33.780 -46.454 0.612633 0.134769 -0.134769 1.000000 0.002071 0.000846 2.359336 21.0 0.166189 80.233880 307.072742 4.545812
25 1417.909 38.922 -51.341 0.541106 0.119679 -0.119679 1.000000 0.003069 0.000970 2.356892 21.0 0.276982 90.263160 236.592612 4.521307
26 1501.599 45.495 -44.768 0.510294 0.115895 -0.115895 1.000000 0.003375 0.001018 2.338662 20.0 0.304680 90.263040 214.395107 4.403078
27 1628.792 58.711 -81.698 0.316147 0.072110 -0.072110 0.999953 0.002762 0.000738 2.263279 20.0 0.387775 140.402692 189.239167 4.384250
28 1734.370 38.382 -46.867 0.301100 0.068007 -0.068007 1.000000 0.002311 0.000874 1.296184 20.0 0.197032 85.248460 265.914145 4.427464
29 1825.855 84.798 -53.104 0.430621 0.095358 -0.095358 0.999997 0.002188 0.000660 2.868992 21.0 0.301715 137.901200 227.527350 4.515846
30 6244.828 121.386 -131.853 0.148955 0.033866 -0.033866 1.000000 0.001295 0.000254 1.917496 20.0 0.327916 253.238668 305.896115 4.398357
31 6479.412 112.460 -113.198 0.160183 0.036232 -0.036232 1.000000 0.001006 0.000237 1.828074 20.0 0.227019 225.658080 369.526994 4.421057
32 6732.499 152.728 -140.627 0.090189 0.023810 -0.023810 1.000000 0.001118 0.000219 1.803255 15.0 0.327916 293.355360 228.147331 3.787794
33 7026.168 164.951 -140.941 0.116608 0.026460 -0.026460 1.000000 0.000948 0.000198 1.809713 20.0 0.290080 305.891620 325.860125 4.406869
34 7308.908 137.957 -117.789 0.145418 0.032956 -0.032956 0.990174 0.001046 0.000228 1.865945 20.0 0.264856 253.232572 341.460864 4.412544
35 7607.183 195.718 -160.318 0.101916 0.023307 -0.023307 1.000000 0.001240 0.000210 1.855237 20.0 0.441426 356.036600 262.128016 4.372822
36 12416.800 356.855 -317.612 0.031317 0.008268 -0.008268 1.000000 0.000489 0.000071 1.439771 15.0 0.329649 674.466680 308.325562 3.787583
37 13083.407 449.956 -309.753 0.033220 0.008030 -0.008030 1.000000 0.000588 0.000073 1.437743 18.0 0.446400 759.708920 317.029134 4.137071
38 18498.142 396.336 -461.161 0.035760 0.008738 -0.008738 1.000000 0.000740 0.000076 1.765800 18.0 0.634600 857.496600 265.306944 4.092577

OK, so that's what returnData=True does. If we supply a list of GRBs then, as you should expect, we get a dict of light curve dicts; the top level is indexed either by GRB name, or targetID, depending on how you called the function, so:

lcData = udg.getLightCurves(GRBName=("GRB 220427A","GRB 070616"),
                            saveData=False,
                            returnData=True)
list(lcData.keys())
['GRB 220427A', 'GRB 070616']

I trust this doesn't come as a surprise! Nor should the fact that each of these in turn is a light curve dict similar to that above (although this time I left nosys and incbad at their defaults). I can prove this easily enough:

list(lcData['GRB 070616'].keys())
['WT_incbad',
 'WTHard_incbad',
 'WTSoft_incbad',
 'WTHR_incbad',
 'PC_incbad',
 'PCHard_incbad',
 'PCSoft_incbad',
 'PCHR_incbad',
 'Datasets',
 'Binning',
 'TimeFormat',
 'T0',
 'URLs']

(I'll let you explore the other GRB yourself).

If we had supplied targetID=(1104343, 282445) we would have got the same data, but indexed by these targetIDs, not the names. You can test it out if you don't trust me, but frankly, I'm expecting most people know GRBs by their names, not their Swift targetIDs!

Plotting light curves

If we've downloaded a light curve then we can make use of the module-level plotLightCurve() function to give us a quick plot. I'm not going to repeat the plotLightCurve() documentation here, but I will note that its first argument is a single light curve dict so if, as in our case here, we downloaded multiple GRB light curves, we have to provide one of them to the function, like this:

from swifttools.ukssdc import plotLightCurve
fig, ax = plotLightCurve(lcData['GRB 070616'],
                         whichCurves=('WT_incbad', 'PC_incbad'),
                         xlog=True,
                         ylog=True
                        )

[Image: light curve plot of GRB 070616 (WT_incbad and PC_incbad datasets) on logarithmic axes]

You'll note I captured the return values in variables whose names will be familiar to users of pyplot; everyone else can ignore them, since you need to be familiar with pyplot to take advantage of the fact that plotLightCurve() returns them for you.

The third way - save from a variable

We've covered two obvious ways of getting light curve data: downloading them straight to disk or to variables. But there is a third way (and it does not involve steep stairs or giant spiders): you pull the data into variables as above, and then save them to disk from there (OK, I guess maybe it's a two-and-a-halfth way, but that doesn't scan and can't be linked to Tolkien).

To do this we use the function saveLightCurves(), and the reason for providing this option as well as the saveData=True option above is that this gives you some more control over how the data are saved.

This function actually doesn't do very much itself; most of the work is done by another common function, saveLightCurveFromDict(), and most of the arguments you may pass to saveLightCurves() are just keyword arguments that get passed straight through. But there are a few things about saveLightCurves() to note here.

First, you may have spotted that the function name contains lightCurves plural, whereas the common function is singular; that is because saveLightCurves() allows you to save more than one light curve in a single command - if you retrieved more than one of course! This means it has two extra parameters for use when you supplied a list or tuple to getLightCurves(). We will discuss those parameters in a moment.

The second thing is that it will override the default values of the timeFormatInFname and binningInFname arguments of saveLightCurveFromDict(), setting them both to False unless you explicitly specify them. This is because GRB light curves are always in MET and binned by counts.
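
If you do want those components in the file names, just pass the arguments explicitly; a minimal sketch (the destination directory is purely illustrative):

# Re-enable the file-name components that saveLightCurves() switches off
# by default for GRBs:
udg.saveLightCurves(lcData,
                    destDir='/tmp/APIDemo_GRBLC_names',
                    timeFormatInFname=True,
                    binningInFname=True)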

So, the parameters for saveLightCurves() are:

  - lcData - the light curve data to save, i.e. the object returned by getLightCurves() with returnData=True.
  - whichGRBs - which of the GRBs in lcData to save; as with saveSpectra() (below), this can be 'all' or a list/tuple of keys in lcData.
  - subDirs - whether to save each GRB's data in its own subdirectory (default: True).
  - ...plus any arguments accepted by saveLightCurveFromDict(), which are passed straight through.

One quick note: if lcData is just a light curve dict - e.g. you called getLightCurves() with a single GRB name or targetID, not a list/tuple - then subDirs is ignored.

Right: a little less talk, a little more action is called for now. I'm going to give a new getLightCurves() call first, so this example is standalone.

lcData = udg.getLightCurves(GRBName=("GRB 220427A","GRB 070616", "GRB 080319B", "GRB 130925A"),
                            saveData=False,
                            returnData=True)

And now let's demonstrate saving things. I will only save two of these, and I'll also only save a couple of datasets. Oh and just to demonstrate that you can, I will set the column separator to some custom value.

udg.saveLightCurves(lcData,
                    destDir='/tmp/APIDemo_GRBLC6',
                    whichGRBs=('GRB 070616', 'GRB 080319B'),
                    whichCurves=('WTHR_incbad', 'PCHR_incbad'),
                    sep=';',
                    verbose=True,
                   )
Making directory /tmp/APIDemo_GRBLC6
Making directory /tmp/APIDemo_GRBLC6/GRB 070616
Writing file: `/tmp/APIDemo_GRBLC6/GRB 070616/WT_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 070616/WTHard_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 070616/WTSoft_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 070616/WTHR_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 070616/PC_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 070616/PCHard_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 070616/PCSoft_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 070616/PCHR_incbadNone`
Making directory /tmp/APIDemo_GRBLC6/GRB 080319B
Writing file: `/tmp/APIDemo_GRBLC6/GRB 080319B/WT_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 080319B/WTHard_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 080319B/WTSoft_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 080319B/WTHR_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 080319B/PC_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 080319B/PCHard_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 080319B/PCSoft_incbadNone`
Writing file: `/tmp/APIDemo_GRBLC6/GRB 080319B/PCHR_incbadNone`

If you are looking at this and thinking, "Where did those arguments, whichCurves, sep, and verbose come from?", the answer is of course that they are arguments taken by saveLightCurveFromDict(), as detailed in that function's documentation.


Rebinning light curves

Everything detailed above was about getting the automated GRB light curves. But, if you are an aficionado of our website then you will know that you can do more than just get the automated results from there; you can rebin them too. Wouldn't it be nice if you could rebin them via the Python API? Luckily for you, I'm nice (sometimes).

Actually, rebinning appears in a few places, so it is one of the common functions. This module (swifttools.ukssdc.data.GRB, in case you've forgotten) provides its own rebinLightCurve() function, which requires either the GRBName or targetID parameter (as does everything in this module); all the other arguments are passed straight to the common function.

One note before we give an example: for this function, GRBName and targetID can ONLY be single values, not lists (or tuples). This is because the function sends a job request to our servers, and we don't want you accidentally overloading them with 300 jobs (we don't want you deliberately doing it either).

Right, let's plunge in with a demo, rebinning a GRB by one bin per observation, and asking for MJD on the time axis.

JobID = udg.rebinLightCurve(GRBName="GRB 070616",
                            binMeth='obsid',
                            timeFormat='MJD')

That was easy enough. We'll unpack the arguments in a moment, but let's follow this example through to the end first. Note that the function returned an identifier, which we captured in the JobID variable. This really is critical, because it's the only way that we can actually access our rebin request.

Data are not rebinned instantaneously, so we need to see how it's getting on. There are a couple of ways we can do this:

udg.checkRebinStatus(JobID)
{'statusCode': 3, 'statusText': 'Running'}

This function returns a dict telling you how things are going. On my computer it's telling me that the status is 'Running', but if you're running this notebook yourself, you may have a different status. Of course, you may not care about the status and just want to know if it's complete. You can do this either by knowing that the 'statusCode' will be 4 (and the text 'Complete'), or bypass that and call rebinComplete(), which returns a simple bool (True meaning, "Yes, it's complete").

udg.rebinComplete(JobID)
True

If the above is not True, then give it 30 seconds and try again, rinse and repeat until it is True, because the next steps will fail if the job hasn't completed.
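
If you're scripting this, a simple poll-and-wait loop does the job:

import time

# Keep checking every 30 seconds until the rebin job has finished:
while not udg.rebinComplete(JobID):
    time.sleep(30)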

OK, complete now? Great!

To get the light curve we use the getRebinnedLightCurve() function. This again asks the common getLightCurve() function to do all the work, so if you didn't do so earlier (or have jumped straight to the rebin docs) you should read its documentation. If you've worked through this notebook to this point, the only thing really to point out is that we don't specify which GRB to get; instead we tell it which rebin request we want the results of, by passing in the JobID. In this example, I will grab the data in a variable, rather than saving to disk (remember, you can do both, they're not mutually exclusive).

lcData = udg.getRebinnedLightCurve(JobID,
                                   saveData=False,
                                   returnData=True)
list(lcData.keys())
['WT_incbad',
 'WTHard_incbad',
 'WTSoft_incbad',
 'WTHR_incbad',
 'PC_incbad',
 'PCUL_incbad',
 'PCHard_incbad',
 'PCSoft_incbad',
 'PCHR_incbad',
 'Datasets',
 'Binning',
 'TimeFormat',
 'T0',
 'URLs']

As you can see, this is a light curve dict just like earlier. I hope that doesn't surprise you. For the sake of sanity (or, as I write this, one last-minute test), let's confirm that the binning method and time format of this new light curve are what I asked for:

print(f"Binning: {lcData['Binning']}")
print(f"TimeFormat: {lcData['TimeFormat']}")
Binning: ObsID
TimeFormat: MJD

OK phew!

The only other thing to introduce here is the fact that you can cancel a rebin job if you change your mind or submitted it accidentally:

udg.cancelRebin(JobID)
False

This returns a bool telling you whether the job was successfully cancelled or not. If it failed, as in this example, this is usually because the job had already completed, which we can check as above:

udg.checkRebinStatus(JobID)
{'statusCode': 4, 'statusText': 'Complete'}

And a last note: the functions checkRebinStatus(), rebinComplete(), getRebinnedLightCurve() and cancelRebin() are really common functions, and technically should have been documented with the common functions, but it makes no sense to demonstrate rebinning here without them.

Right, that's it for light curves. Let's move on to spectra.


Spectra

We get spectra with the getSpectra() function. This is actually not a common function, even though the name is reused in other places -- the specifics of its use and arguments in each place are so different that the functions are not conflated.

I'll follow the same pattern in this section as I did for light curves, but don't worry if you haven't read that section, this one is intended to be standalone.

Saving directly to disk

As already discussed in the introduction (which I am assuming you've read), if you want to get a product by downloading files straight to disk, you use the saveData=True argument, and while this is the default, I think it's much more helpful to be explicit.

udg.getSpectra(GRBName="GRB 130925A",
               saveData=True,
               saveImages=True,
               destDir="/tmp/APIDemo_GRB_Spec1",
               extract=True,
               removeTar=True,
               silent=False,
               verbose=True
               )
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['OK', 'targetID', 'APIVersion'])
Resolved `GRB 130925A` as `571830`.
Making directory /tmp/APIDemo_GRB_Spec1
Getting GRB 130925A
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['T0', 'DeltaFitStat', 'rnames', 'interval0', 'late_time', 'OK', 'APIVersion'])
Saving `interval0` spectrum
Downloading file `/tmp/APIDemo_GRB_Spec1/interval0.tar.gz`
Saving file `/tmp/APIDemo_GRB_Spec1/interval0.tar.gz`
Extracting `/tmp/APIDemo_GRB_Spec1/interval0.tar.gz`
README.txt
interval0wtsource.pi
interval0wtback.pi
interval0pcsource.pi
interval0pcback.pi
interval0wt.pi
interval0wt.arf
interval0wt.rmf
interval0pc.pi
interval0pc.arf
interval0pc.rmf
interval0.areas
GRB_info.txt
models/interval0wt.xcm
models/interval0pc.xcm
interval0wt_fit.fit
interval0pc_fit.fit

Removing file /tmp/APIDemo_GRB_Spec1/interval0.tar.gz
Downloading file `/tmp/APIDemo_GRB_Spec1/interval0wt_plot.gif`
Saving file `/tmp/APIDemo_GRB_Spec1/interval0wt_plot.gif`
Downloading file `/tmp/APIDemo_GRB_Spec1/interval0pc_plot.gif`
Saving file `/tmp/APIDemo_GRB_Spec1/interval0pc_plot.gif`
Saving `late_time` spectrum
Downloading file `/tmp/APIDemo_GRB_Spec1/late_time.tar.gz`
Saving file `/tmp/APIDemo_GRB_Spec1/late_time.tar.gz`
Extracting `/tmp/APIDemo_GRB_Spec1/late_time.tar.gz`
README.txt
late_timepcsource.pi
late_timepcback.pi
late_timepc.pi
late_timepc.arf
late_timepc.rmf
late_time.areas
GRB_info.txt
models/late_timepc.xcm
late_timepc_fit.fit

Removing file /tmp/APIDemo_GRB_Spec1/late_time.tar.gz
Downloading file `/tmp/APIDemo_GRB_Spec1/late_timepc_plot.gif`
Saving file `/tmp/APIDemo_GRB_Spec1/late_timepc_plot.gif`

I turned on verbose mode to help you see what's going on here, but you may be wondering what all of those arguments were. GRBName and saveData were introduced in the, er, introduction, and silent and verbose have been introduced on the front page. The other parameters all belong to the common, module-level function, saveSpectrum(), which is documented here. You can probably guess what these did from the output above - the images of the spectra were downloaded, as were the tar archives of the actual spectral data. The latter were also extracted, and the tar files then removed.

I will demonstrate here one argument of this common function: the ability to choose which spectra get saved to disk. You may have noticed - and if you're familiar with the XRT GRB spectra it won't surprise you - that two spectra were downloaded, called 'interval0' and 'late_time', and we saved both of them. The common saveSpectrum() function has a spectra argument (default: 'all') which determines which spectra are saved. Since our udg.getSpectra() calls that common function behind the scenes, we can give it the spectra argument if we want, like this:

udg.getSpectra(GRBName="GRB 130925A",
               saveData=True,
               saveImages=True,
               spectra=('late_time',),
               destDir="/tmp/APIDemo_GRB_Spec2",
               extract=True,
               removeTar=True,
               silent=False,
               verbose=True
               )
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['OK', 'targetID', 'APIVersion'])
Resolved `GRB 130925A` as `571830`.
Making directory /tmp/APIDemo_GRB_Spec2
Getting GRB 130925A
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['T0', 'DeltaFitStat', 'rnames', 'interval0', 'late_time', 'OK', 'APIVersion'])
Saving `late_time` spectrum
Downloading file `/tmp/APIDemo_GRB_Spec2/late_time.tar.gz`
Saving file `/tmp/APIDemo_GRB_Spec2/late_time.tar.gz`
Extracting `/tmp/APIDemo_GRB_Spec2/late_time.tar.gz`
README.txt
late_timepcsource.pi
late_timepcback.pi
late_timepc.pi
late_timepc.arf
late_timepc.rmf
late_time.areas
GRB_info.txt
models/late_timepc.xcm
late_timepc_fit.fit

Removing file /tmp/APIDemo_GRB_Spec2/late_time.tar.gz
Downloading file `/tmp/APIDemo_GRB_Spec2/late_timepc_plot.gif`
Saving file `/tmp/APIDemo_GRB_Spec2/late_timepc_plot.gif`

Don't forget (see the introduction), you can supply targetID instead of GRBName if you want, and this can be a list/tuple if you want to get multiple objects. If you do this then the subDirs argument of getSpectra() becomes important: if True (the default) then each GRB's data will be saved into a subdirectory which will be the GRB name or targetID (depending on which you called the function with). If it is False then the name/targetID will be prepended to the file names.

Warning: there is one exception to the above. If you set subDirs=False and extract=True you will get an error. This is because the contents of the tar files are basically the same, so they have to be extracted into separate directories. Also, because of the way X-ray spectra work, various file names are embedded in other files, so we can't really rename them.

Let's do a quick demo of getting multiple spectra:

udg.getSpectra(GRBName=("GRB 130925A", "GRB 071020"),
               saveData=True,
               saveImages=True,
               destDir="/tmp/APIDemo_GRB_Spec3",
               extract=True,
               removeTar=True,
               silent=False,
               verbose=True
               )
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['OK', 'targetID', 'APIVersion'])
Resolved `GRB 130925A` as `571830`.
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['OK', 'targetID', 'APIVersion'])
Resolved `GRB 071020` as `294835`.
Making directory /tmp/APIDemo_GRB_Spec3
Getting GRB 130925A
Making directory /tmp/APIDemo_GRB_Spec3/GRB 130925A
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['T0', 'DeltaFitStat', 'rnames', 'interval0', 'late_time', 'OK', 'APIVersion'])
Saving `interval0` spectrum
Downloading file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/interval0.tar.gz`
Saving file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/interval0.tar.gz`
Extracting `/tmp/APIDemo_GRB_Spec3/GRB 130925A/interval0.tar.gz`
README.txt
interval0wtsource.pi
interval0wtback.pi
interval0pcsource.pi
interval0pcback.pi
interval0wt.pi
interval0wt.arf
interval0wt.rmf
interval0pc.pi
interval0pc.arf
interval0pc.rmf
interval0.areas
GRB_info.txt
models/interval0wt.xcm
models/interval0pc.xcm
interval0wt_fit.fit
interval0pc_fit.fit

Removing file /tmp/APIDemo_GRB_Spec3/GRB 130925A/interval0.tar.gz
Downloading file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/interval0wt_plot.gif`
Saving file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/interval0wt_plot.gif`
Downloading file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/interval0pc_plot.gif`
Saving file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/interval0pc_plot.gif`
Saving `late_time` spectrum
Downloading file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/late_time.tar.gz`
Saving file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/late_time.tar.gz`
Extracting `/tmp/APIDemo_GRB_Spec3/GRB 130925A/late_time.tar.gz`
README.txt
late_timepcsource.pi
late_timepcback.pi
late_timepc.pi
late_timepc.arf
late_timepc.rmf
late_time.areas
GRB_info.txt
models/late_timepc.xcm
late_timepc_fit.fit

Removing file /tmp/APIDemo_GRB_Spec3/GRB 130925A/late_time.tar.gz
Downloading file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/late_timepc_plot.gif`
Saving file `/tmp/APIDemo_GRB_Spec3/GRB 130925A/late_timepc_plot.gif`
Getting GRB 071020
Making directory /tmp/APIDemo_GRB_Spec3/GRB 071020
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['T0', 'DeltaFitStat', 'rnames', 'interval0', 'late_time', 'OK', 'APIVersion'])
Saving `interval0` spectrum
Downloading file `/tmp/APIDemo_GRB_Spec3/GRB 071020/interval0.tar.gz`
Saving file `/tmp/APIDemo_GRB_Spec3/GRB 071020/interval0.tar.gz`
Extracting `/tmp/APIDemo_GRB_Spec3/GRB 071020/interval0.tar.gz`
README.txt
interval0wtsource.pi
interval0wtback.pi
interval0pcsource.pi
interval0pcback.pi
interval0wt.pi
interval0wt.arf
interval0wt.rmf
interval0pc.pi
interval0pc.arf
interval0pc.rmf
interval0.areas
GRB_info.txt
models/interval0wt.xcm
models/interval0pc.xcm
interval0wt_fit.fit
interval0pc_fit.fit

Removing file /tmp/APIDemo_GRB_Spec3/GRB 071020/interval0.tar.gz
Downloading file `/tmp/APIDemo_GRB_Spec3/GRB 071020/interval0wt_plot.gif`
Saving file `/tmp/APIDemo_GRB_Spec3/GRB 071020/interval0wt_plot.gif`
Downloading file `/tmp/APIDemo_GRB_Spec3/GRB 071020/interval0pc_plot.gif`
Saving file `/tmp/APIDemo_GRB_Spec3/GRB 071020/interval0pc_plot.gif`
Saving `late_time` spectrum

If you look through the output above, you will see that I was not lying: the two GRBs' data were saved in their own subdirectories.

Storing the spectral data in variables

Let's move on now to the returnData=True case. As I told you earlier, this will return a dict containing the data. All spectra returned by anything in the swifttools.ukssdc module have a common structure which I call a "spectrum dict", and you can read about this here. A key thing to note about this structure is that, unlike the light curve dict, it does not give you the actual spectral data in some Python data structure; instead it gives you the results of the automated spectral fits. The rationale here is that X-ray spectral data cannot simply be manipulated numerically - they need handling through tools such as xspec - so they don't really make sense as Python variables and are not likely to be useful to you. Spectral fit results, on the other hand, are very likely to be useful to you, and are just numbers.

So, let's go straight to a demo. First, let's get the data for GRB 130925A again, but this time as a variable only.

specData = udg.getSpectra(GRBName="GRB 130925A",
                          saveData=False,
                          saveImages=False,
                          returnData=True
                          )

Our specData variable is now a spectrum dict. I'm not going to spend much time unpacking this because it's already well documented, but let's give you a bit of help. Generally, I imagine that what you are going to want to access are the spectral fit parameters for a specific spectrum, and if you don't fancy ploughing through the definition of this data structure then I'll be nice and save you some effort. Let's see what happened for the late-time spectrum fit to PC data. I know that a power-law will have been fitted to it, because that's all that GRBs are fitted with, so I can go straight to the right part of my variable:

specData['late_time']['PC']['PowerLaw']
{'GalacticNH': 1.74728e+20,
 'NH': 3.15504e+22,
 'NHPos': 3.276992999999999e+21,
 'NHNeg': -3.0701709200000014e+21,
 'Redshift_abs': 0.347,
 'Gamma': 2.68868,
 'GammaPos': 0.14820270499999966,
 'GammaNeg': -0.13902135400000004,
 'ObsFlux': 6.826529950876817e-11,
 'ObsFluxPos': 3.594728145422862e-12,
 'ObsFluxNeg': -3.3616900080457612e-12,
 'UnabsFlux': 2.852200160603111e-10,
 'UnabsFluxPos': 6.096089137778289e-11,
 'UnabsFluxNeg': -4.542858750653171e-11,
 'Cstat': 450.7716251,
 'Dof': 470,
 'FitChi': 458.30322,
 'Image': 'https://www.swift.ac.uk/xrt_spectra/00571830/late_timepc_plot.gif'}

And you see that what we had was a dict with all the fit parameters. Obviously, I can actually get at specific values as well:

specData['late_time']['PC']['PowerLaw']['Gamma']
2.68868

I am not going to explore specData further here, because of the much-mentioned dedicated documentation, but let's quickly explicitly see what happens if we ask for more than one spectrum:

specData = udg.getSpectra(GRBName=["GRB 060729", "GRB 070616", "GRB 130925A"],
                            returnData=True,
                            saveData=False,
                            saveImages=False,
                           )
specData.keys()
dict_keys(['GRB 060729', 'GRB 070616', 'GRB 130925A'])

As I trust you expected (if you read the introduction), we now have an extra layer tagged onto the front of our dict, and because I used the GRBName argument in my function call, the keys of this are the GRB names. How we go about accessing a specific spectral fit property should be obvious, but in case not:

specData['GRB 130925A']['late_time']['PC']['PowerLaw']['Gamma']
2.68868

The last thing to make explicit here is the point that the getSpectra() function doesn't count how many entries there are in the GRBName (or targetID) parameter, just whether it is a string/int, or a list/tuple. So if you supply a tuple (or list) with just one entry, you still get this extra layer in the dict, albeit only with one entry:

specData = udg.getSpectra(GRBName=("GRB 060729",),
                            returnData=True,
                            saveData=False,
                            saveImages=False,
                           )
specData.keys()
dict_keys(['GRB 060729'])

And if we'd done this with saveData=True then the data would have been saved in the "GRB 060729" subdirectory (or had "GRB 060729" prepended to the filename, if we said subDirs=False).

The third way - save from a variable

As with the light curves, there is a third way† to use the data: you can pull the data into a variable, and then use that to request the files be saved to disk. This option is provided in case you want to filter your set of GRBs before saving (e.g. maybe you wanted to get a dozen spectra, identify those where the intrinsic NH was < 10^21 cm^-2, and then save the spectral files for those).

We do this by calling getSpectra(returnData=True) to get the spectral fits, and then we use the saveSpectra() function to decide which to save. The arguments to this are essentially the same as when we called getSpectra(saveData=True), indeed the back-end is the module-level saveSpectrum() function, alluded to earlier and documented here. The one, very important, addition is the argument whichGRBs. This can either be 'all' (the default) or a list/tuple of which GRBs' spectra to save. The entries in this list/tuple should be valid keys in the specData variable we pass to the function.

This is all a bit abstract, but it will all become clear (I hope) with the following example:

(† For Cirith Ungol-related humour, you will have to read the light curve section.)

# Get the data for 3 GRBs in to the `specData` variable:

specData = udg.getSpectra(GRBName=["GRB 060729", "GRB 070616", "GRB 130925A"],
                            returnData=True,
                            saveData=False,
                            saveImages=False,
                           )

# In real code there would probably be some stuff here that leads us to deciding
# that we only want the interval0 spectra and only some of the above GRBs, but for this demo
# it's just hard coded.

udg.saveSpectra(specData,
                destDir='/tmp/APIDemo_GRBspec3',
                whichGRBs=('GRB 060729', 'GRB 130925A'),
                spectra=('interval0',),
                saveImages=True,
                verbose=True,
                clobber=True,
                extract=True,
                removeTar=True
               )
Making directory /tmp/APIDemo_GRBspec3
Making directory /tmp/APIDemo_GRBspec3/GRB 060729
Saving `interval0` spectrum
Downloading file `/tmp/APIDemo_GRBspec3/GRB 060729/interval0.tar.gz`
Saving file `/tmp/APIDemo_GRBspec3/GRB 060729/interval0.tar.gz`
Extracting `/tmp/APIDemo_GRBspec3/GRB 060729/interval0.tar.gz`
README.txt
interval0wtsource.pi
interval0wtback.pi
interval0pcsource.pi
interval0pcback.pi
interval0wt.pi
interval0wt.arf
interval0wt.rmf
interval0pc.pi
interval0pc.arf
interval0pc.rmf
interval0.areas
GRB_info.txt
models/interval0wt.xcm
models/interval0pc.xcm
interval0wt_fit.fit
interval0pc_fit.fit

Removing file /tmp/APIDemo_GRBspec3/GRB 060729/interval0.tar.gz
Downloading file `/tmp/APIDemo_GRBspec3/GRB 060729/interval0wt_plot.gif`
Saving file `/tmp/APIDemo_GRBspec3/GRB 060729/interval0wt_plot.gif`
Downloading file `/tmp/APIDemo_GRBspec3/GRB 060729/interval0pc_plot.gif`
Saving file `/tmp/APIDemo_GRBspec3/GRB 060729/interval0pc_plot.gif`
Making directory /tmp/APIDemo_GRBspec3/GRB 130925A
Saving `interval0` spectrum
Downloading file `/tmp/APIDemo_GRBspec3/GRB 130925A/interval0.tar.gz`
Saving file `/tmp/APIDemo_GRBspec3/GRB 130925A/interval0.tar.gz`
Extracting `/tmp/APIDemo_GRBspec3/GRB 130925A/interval0.tar.gz`
README.txt
interval0wtsource.pi
interval0wtback.pi
interval0pcsource.pi
interval0pcback.pi
interval0wt.pi
interval0wt.arf
interval0wt.rmf
interval0pc.pi
interval0pc.arf
interval0pc.rmf
interval0.areas
GRB_info.txt
models/interval0wt.xcm
models/interval0pc.xcm
interval0wt_fit.fit
interval0pc_fit.fit

Removing file /tmp/APIDemo_GRBspec3/GRB 130925A/interval0.tar.gz
Downloading file `/tmp/APIDemo_GRBspec3/GRB 130925A/interval0wt_plot.gif`
Saving file `/tmp/APIDemo_GRBspec3/GRB 130925A/interval0wt_plot.gif`
Downloading file `/tmp/APIDemo_GRBspec3/GRB 130925A/interval0pc_plot.gif`
Saving file `/tmp/APIDemo_GRBspec3/GRB 130925A/interval0pc_plot.gif`
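
Incidentally, in real code the whichGRBs tuple above would probably come from a filter over the fit results, like the NH example mentioned earlier. A minimal sketch, with a hypothetical threshold and membership checks (since the exact spectra and modes present can vary per GRB):

# Keep only GRBs whose interval0 PC-mode power-law fit gave a low intrinsic NH:
keep = [grb for grb, data in specData.items()
        if 'PC' in data.get('interval0', {})
        and data['interval0']['PC']['PowerLaw']['NH'] < 1e21]

udg.saveSpectra(specData,
                destDir='/tmp/APIDemo_GRBspec_filtered',  # illustrative path
                whichGRBs=keep,
                spectra=('interval0',))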

Time-slicing spectra

For GRBs you can request 'time-sliced' spectra, that is, spectra of the GRB created over a specific time interval, or set of intervals. This is essentially the same as submitting an XRTProductRequest with the GRB details and a set of time slices, except that you have no control over the model to be fitted; this is always the standard GRB model with Galactic absorption, intrinsic absorber (with redshift if available) and a power-law spectrum.

To request time-sliced spectra for a GRB we use the timesliceSpectrum() function. This requires the details of the time slices and the GRB identifier, and then has various other arguments you can add, such as which grades to use and redshift information, if what the system has stored for the GRB is not what you want.

The key parameter is slices, a dict defining the time slices. The keys of this dict are the names you want to give to your spectra, the values give the times, and can be in one of two formats:

  1. A list or tuple comprising the time interval(s) and which mode(s) to extract data for.
  2. A simple string giving the time interval(s) to extract data over (e.g. 100-400,500-700).

As, for example:

slices = {
    'early': ['100-800', 'WT'],
    'mixed': '100-300,500-1000',
}

I've deliberately shown both formats above (because you can mix and match). This would request a spectrum called 'early', which covers times 100-800 seconds since T0, and will only use WT-mode data. A second spectrum called 'mixed' will also be created, and that will be made of data collected between 100-300 seconds and 500-1000 seconds after T0.

You may have noticed that the second option, the string, doesn't give you the ability to request a specific mode (in fact, even using the first option the "mode" entry is optional); so what mode is used? The answer to that lies in the mode argument to timesliceSpectrum(); this can be 'PC', 'WT' or 'both' (default: 'both') and this is used when no mode is specified.
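
For example, a minimal sketch (using the same GRB as the demo below), in which neither slice names a mode, so both would be built from PC data only:

JobID = udg.timesliceSpectrum(targetID='00635887',
                              slices={'early': '100-800',
                                      'mixed': '100-300,500-1000'},
                              mode='PC')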

Let's do an actual demo to explore this properly. I think it reads a bit better if I define the slices variable outside the function call, so I will.

slices = {
    'early': ['100-800', 'WT'],
    'mixed': '100-300,500-1000',
}

JobID = udg.timesliceSpectrum(targetID='00635887', slices=slices, verbose=True)
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['OK', 'JobID', 'APIVersion'])
Checking returned data for required content.
Success: JobID=1518

As with the light-curve rebinning, this returns the JobID which we need to retain if we are to do anything. One of the things we can do is to cancel the job, which I won't do here (you can if you want, just uncomment the command) but I'll show you how:

#udg.cancelTimeslice(JobID)

And as for rebinning, this returns a bool telling whether it succeeded or not. We can also check the job status, which is a bit more useful:

udg.checkTimesliceStatus(JobID)
{'statusCode': 3, 'statusText': 'Running'}

Or just whether it is complete:

udg.timesliceComplete(JobID)
False

While the commands above - deliberately - look like those for rebinning light curves, time-slicing spectra takes a bit longer so you may need to go and make a drink, then come back and try the cell above again until it returns True.
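
Again, a simple poll-and-wait loop works if you're scripting this:

import time

# Same idea as for rebinning, with a longer interval since slicing is slower:
while not udg.timesliceComplete(JobID):
    time.sleep(60)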

Now that it is True, we can get at the data, and in this case we can actually use exactly the same function as above - getSpectra() - but instead of giving a GRBName or targetID we give a JobID. Beyond that, the function looks and behaves exactly as above - because it is the same function! I will call the function now and both return the data and save it to disk:

specData = udg.getSpectra(JobID = JobID,
                          returnData=True,
                          saveData=True,
                          saveImages=True,
                          destDir="/tmp/APIDemo_slice_spec",
                          extract=True,
                          removeTar=True,
                          silent=False,
                          verbose=True,
                    )
Making directory /tmp/APIDemo_slice_spec
Getting 1518
Uploading data to https://www.swift.ac.uk/API/main.php
Returned keys: dict_keys(['T0', 'DeltaFitStat', 'rnames', 'early', 'mixed', 'OK', 'APIVersion'])
Saving `early` spectrum
Downloading file `/tmp/APIDemo_slice_spec/early.tar.gz`
Saving file `/tmp/APIDemo_slice_spec/early.tar.gz`
Extracting `/tmp/APIDemo_slice_spec/early.tar.gz`
README.txt
earlywtsource.pi
earlywtback.pi
earlywt.pi
earlywt.arf
earlywt.rmf
early.areas
GRB_info.txt
models/earlywt.xcm
earlywt_fit.fit

Removing file /tmp/APIDemo_slice_spec/early.tar.gz
Downloading file `/tmp/APIDemo_slice_spec/earlywt_plot.gif`
Saving file `/tmp/APIDemo_slice_spec/earlywt_plot.gif`
Saving `mixed` spectrum
Downloading file `/tmp/APIDemo_slice_spec/mixed.tar.gz`
Saving file `/tmp/APIDemo_slice_spec/mixed.tar.gz`
Extracting `/tmp/APIDemo_slice_spec/mixed.tar.gz`
README.txt
mixedwtsource.pi
mixedwtback.pi
mixedpcsource.pi
mixedpcback.pi
mixedwt.pi
mixedwt.arf
mixedwt.rmf
mixedpc.pi
mixedpc.arf
mixedpc.rmf
mixed.areas
GRB_info.txt
models/mixedwt.xcm
models/mixedpc.xcm
mixedwt_fit.fit
mixedpc_fit.fit

Removing file /tmp/APIDemo_slice_spec/mixed.tar.gz
Downloading file `/tmp/APIDemo_slice_spec/mixedwt_plot.gif`
Saving file `/tmp/APIDemo_slice_spec/mixedwt_plot.gif`
Downloading file `/tmp/APIDemo_slice_spec/mixedpc_plot.gif`
Saving file `/tmp/APIDemo_slice_spec/mixedpc_plot.gif`

specData is a spectrum dict as before, but let's have a quick look at our newly-made spectrum:

specData['early']['WT']
{'Models': ['PowerLaw'],
 'PowerLaw': {'GalacticNH': 3.04902e+20,
  'NH': 7.049640000000001e+21,
  'NHPos': 5.380738330000001e+20,
  'NHNeg': -5.0929806100000086e+20,
  'Redshift_abs': 0.593,
  'Gamma': 2.3113,
  'GammaPos': 0.056347732999999955,
  'GammaNeg': -0.05441983200000022,
  'ObsFlux': 2.0003684675208915e-09,
  'ObsFluxPos': 5.852778910380283e-11,
  'ObsFluxNeg': -5.6770129607812125e-11,
  'UnabsFlux': 3.4302833868315383e-09,
  'UnabsFluxPos': 1.2663680630597594e-10,
  'UnabsFluxNeg': -1.163813803282715e-10,
  'Cstat': 525.5201359,
  'Dof': 613,
  'FitChi': 531.5606583,
  'Image': 'https://www.swift.ac.uk/xrt_spectra/tprods/sliceSpec_1518/earlywt_plot.gif'},
 'Exposure': 169.57439661026,
 'MeanTime': 194.560444951057}

As you can see, the data were fitted here with an absorber at redshift 0.593; this is because that redshift has been recorded for the GRB in our (UKSSDC) GRB system. Maybe you think this is wrong, or you want to fit without a redshifted absorber; you can do that by supplying the redshift parameter to timesliceSpectrum().

By default this is None (i.e. the Python entity None), which means "Use whatever you have stored in the GRB system". You can supply either a redshift value to use, or the string 'NONE', which means "Do not use a redshift", i.e.:

# Uncomment the line you want to try:

#JobID = udg.timesliceSpectrum(targetID='00635887', slices=slices, redshift=2.3)
#JobID = udg.timesliceSpectrum(targetID='00635887', slices=slices, redshift='NONE')

You can then check the status and get the spectrum as above, and you will find that either no redshift was applied to the absorber, or a redshift of 2.3, depending which one you tried.


Burst analyser

This API gives access to all the data in the burst analyser, with some enhancements to which I will return in a moment. First, a reminder that this webpage is documenting the API, not the burst analyser, so if you don't understand some of what I'm discussing, I advise you to look at the burst analyser paper and/or online documentation. Surprisingly enough, we get at burst analyser data with the getBurstAnalyser() function. The burst analyser is built on top of light curves, and I will in places refer to things in the light curve section, so it may be advisable to read that before this.

The burst analyser is a rather complex beast, because it has so many different datasets in it. We have 3 instruments, multiple energy bands, unabsorbed and observed fluxes, hardness ratios, inferred photon indices and energy conversion factors... oh yes, and a few different options for the BAT binning and multiple UVOT filters. All in all, it's complicated. On the website this is all managed through dividing the page into sections and giving various controls. For the API, it's handled by defining a data structure, the burst analyser dict, that contains everything, allowing you to explore it. This does mean that there are an awful lot of parameters available for us to consider when saving burst analyser data.

We will return to the burst analyser dict, and those parameters, in a minute, but first let's discuss the concept of saving data, because for the burst analyser this is a little more complicated than for the above products.

Conceptually, the situation is exactly the same as for light curves and spectra: you can either get the files from the website and save them straight to disk, or you can download them into a dict and, if you want to, write files to disk based on that. However, the way the files are organised for the website is optimised for online visualisation rather than access and manipulation, whereas the whole point of the API is access and manipulation. As a result, I decided to prioritise usefulness over uniformity, and give getBurstAnalyser() three options for what it does:

- returnData=True: the data are downloaded into a burst analyser dict, which is returned.
- saveData=True: the data are downloaded into a burst analyser dict, and a set of files is written to disk from that.
- downloadTar=True: the tar file offered on the website is downloaded directly.

Here saveData is effectively "the third way" defined for light curves and spectra, but automated: the data are downloaded into a burst analyser dict and then saved from that (that dict is, however, discarded unless returnData=True), but the files saved are, I think, much more helpful than those you would get just by grabbing the files from the website. downloadTar does let you pull the files straight from the web, and has accompanying boolean arguments extract and removeTar which let you, er, extract the data from the tar file and then remove said file. I haven't included an explicit demonstration of downloadTar=True here because it should be obvious what I mean, it's easy for you to test, and it creates a lot of files; and because I personally advocate saveData=True instead. Oh, and as with the other products, these three parameters are not mutually exclusive: they can all be True (or all False, but I still can't see why you would do that).

Before we plunge into some demos, though, I should elaborate briefly on the above: what are the 'enhancements' I referred to, and the difference between the files that you get with downloadTar and saveData? There are two parts to this.

First: most of the light curves in the burst analyser data actually consist of three time series: the flux (i.e. the light curve), the photon index and the energy conversion factor (ECF). On the website (and in the downloadable tar file) these are all in separate files, even though they share a common time axis, so if you want to do any manipulation you have to read multiple files and then join them on said time axis. With saveData=True this is done for you, so for each light curve you get one file that has columns for flux, photon index, ECF and so on.

Second: the issue of error propagation for the burst analyser is complicated (do see the burst analyser paper and/or online documentation if you want details). As the documentation explains, in the light curves online (and in the tar file) the errors on the flux values are derived solely from the errors on the underlying count-rate light curves; the uncertainty in the spectral shape, and hence in the flux conversion, is not propagated into those errors. The reasons for this are subtle (but important, and discussed in the documentation), and of course you can do this propagation yourself if you grab all the files. However, in the API, we do it for you! The downloaded data (which we will explore soon) contain two sets of errors, with and without propagation, and you can choose which to save to disk, as we shall demonstrate in a second.

Right, that's enough talk, let's get to work.

Getting the burst analyser data into a variable

I'm going to begin the burst analyser tutorial with the returnData=True case (unlike for the earlier products) because this introduces the data that we save to disk with saveData=True.

As with all previous products, this needs the GRBName or targetID arguments (see the introduction) which can be single values or lists/tuples, and it returns a dict. Unlike the light curve and spectral dicts, the burst analyser dict only appears for GRBs, and so while it is described in the data structure documentation it is only touched on lightly and I will give a full demonstration here.

The burst analyser dict is not too complicated, and in concept is intentionally reminiscent of the way the spectral and light curve dicts were built. The burst analyser dict has up to three layers:

- The instrument layer, with an entry for each instrument.
- The binning layer, with an entry for each way the data were binned.
- The data layer: essentially light curve dicts containing the actual time series.

For obvious reasons, the middle layer is only present for the BAT data, and the different instruments have slightly different contents, as we'll see.

You can see a detailed schematic in the data structure documentation, but let's instead explore interactively here. First, let's get a single GRB:

data = udg.getBurstAnalyser(GRBName="GRB 201013A",
                            returnData=True,
                            saveData=False)

Right, now we can explore data. The top level of this dict is all about the instruments:

data.keys()
dict_keys(['Instruments', 'BAT', 'BAT_NoEvolution', 'XRT', 'UVOT'])

If you've followed any of the other data structures you can probably guess what this means. The 'Instruments' entry is a list, telling us which instruments' data we have; the other entries are all the dicts containing those data, obviously indexed by the instrument, so:

data['Instruments']
['BAT', 'BAT_NoEvolution', 'XRT', 'UVOT']

should not be a surprise.

You will note that, as for the website, the BAT data and the BAT data without spectral evolution are kept separate. I spent a while looking into putting them both inside the same entry and then decided it was much more sensible to keep them apart. The details of the dict differ slightly for each instrument, so we'll explore these data one instrument at a time:

BAT data

As you can see from the description of the overall structure of the burst analyser dict, the BAT data should contain a level which divides the data according to how the BAT data were binned:

list(data['BAT'].keys())
['HRData',
 'SNR4',
 'SNR4_sinceT0',
 'SNR5',
 'SNR5_sinceT0',
 'SNR6',
 'SNR6_sinceT0',
 'SNR7',
 'SNR7_sinceT0',
 'TimeBins_4ms',
 'TimeBins_64ms',
 'TimeBins_1s',
 'TimeBins_10s',
 'Binning']

It may not be immediately obvious with all the keys, but again this is the same design as the higher level and in other contexts: we have the key 'Binning', which is a list of the different binning methods available, which themselves exist as keys in this dict. There is also an entry HRData which we will start off with.

The burst analyser works by taking a hardness ratio time series and using it to infer the spectral properties and ECF at a given time. The hardness ratio is created with a fixed binning (and then we interpolate), which is why HRData appears at this level. It is simply a DataFrame, like a light curve, and contains a lot of columns: the hardness ratio, then the various ECFs (to the different energy bands the burst analyser light curves are given in) and the photon index.

Let's look at it:

data['BAT']['HRData']
Time TimePos TimeNeg HR HRPos HRNeg ECF_XRTBand ECF_XRTBandPos ECF_XRTBandNeg ECF_BATBand ... ECF_BATBandNeg Gamma GammaPos GammaNeg ECF_Density ECF_DensityPos ECF_DensityNeg ECF_ObservedDensity ECF_ObservedDensityPos ECF_ObservedDensityNeg
0 -215.568 23.88 -23.88 1.457744 5.746180 -5.746180 1.469556e-07 2.045085e+23 -1.468156e-07 2.586909e-07 ... -2.372653e-07 1.255057 18.231662 -2.905123 0.004733 7538.397454 -0.004712 0.002379 -0.002379 -0.001220
1 -24.488 23.92 -23.92 0.975023 1.761095 -1.761095 1.005992e-06 2.045085e+23 -9.954903e-07 3.118125e-07 ... -1.571576e-07 1.971329 17.515390 -1.833497 0.011933 7538.390254 -0.011138 0.001894 0.000521 -0.001894
2 -0.428 0.14 -0.14 1.766082 0.400998 -0.400998 6.445159e-08 1.324292e-07 -3.691165e-08 2.281995e-07 ... -3.436470e-08 0.915177 0.456382 -0.362488 0.002881 0.002681 -0.001258 0.002502 0.000024 -0.000185
3 -0.188 0.10 -0.10 2.190595 0.413010 -0.413010 2.634057e-08 3.636942e-08 -1.343418e-08 1.919938e-07 ... -2.914643e-08 0.533546 0.370137 -0.307107 0.001572 0.001259 -0.000641 0.002524 -0.000019 -0.000074
4 -0.048 0.04 -0.04 2.421296 0.552121 -0.552121 1.742665e-08 3.339662e-08 -1.000845e-08 1.750320e-07 ... -3.388627e-08 0.355699 0.459029 -0.366748 0.001165 0.001303 -0.000558 0.002491 0.000030 -0.000148
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
83 7.812 0.10 -0.10 1.215590 0.292271 -0.292271 3.360225e-07 1.007671e-06 -2.093564e-07 2.847554e-07 ... -3.126703e-08 1.577677 0.491350 -0.382546 0.007320 0.006061 -0.002970 0.002189 0.000219 -0.000374
84 8.592 0.68 -0.68 1.118462 0.256073 -0.256073 5.016495e-07 1.449489e-06 -3.107691e-07 2.955954e-07 ... -2.813149e-08 1.726027 0.465669 -0.366711 0.008848 0.006551 -0.003378 0.002084 0.000240 -0.000370
85 29.992 20.72 -20.72 1.621385 0.722003 -0.722003 9.261551e-08 1.456033e-06 -7.266977e-08 2.420956e-07 ... -6.157395e-08 1.066538 1.049646 -0.652690 0.003611 0.010519 -0.002324 0.002458 0.000047 -0.000682
86 112.872 20.72 -20.72 0.326495 1.041229 -1.041229 1.120180e-03 2.045085e+23 -1.119985e-03 3.814297e-07 ... -1.132404e-07 3.977098 15.509622 -2.608968 0.084318 7538.317869 -0.078782 0.000530 0.001789 -0.000530
87 858.792 20.72 -20.72 0.421534 0.765571 -0.765571 1.837580e-04 2.045085e+23 -1.833819e-04 3.717537e-07 ... -8.383805e-08 3.499771 15.986948 -1.879888 0.056260 7538.345928 -0.048528 0.000763 0.001397 -0.000763

88 rows × 21 columns
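Since this is just a DataFrame, it is easy to play with. For example, because the hardness ratio has its own fixed binning, you may want the inferred photon index at arbitrary times; here is a minimal sketch using simple linear interpolation (just for illustration; this is not necessarily how the burst analyser itself interpolates):

import numpy as np

hr = data['BAT']['HRData']

# Interpolate the inferred photon index onto some times of our choosing:
myTimes = np.array([1.0, 5.0, 20.0])
myGamma = np.interp(myTimes, hr['Time'], hr['Gamma'])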

There's not much more to say really, so let's turn our attention to the 'Binning' list:

data['BAT']['Binning']
['SNR4',
 'SNR4_sinceT0',
 'SNR5',
 'SNR5_sinceT0',
 'SNR6',
 'SNR6_sinceT0',
 'SNR7',
 'SNR7_sinceT0',
 'TimeBins_4ms',
 'TimeBins_64ms',
 'TimeBins_1s',
 'TimeBins_10s']

I trust the contents of this aren't a particular shock. One thing you may be wondering is why, for the SNR (signal-to-noise ratio) binning, there are two sets, some ending "_sinceT0". This is just because you get somewhat different results with the SNR-binning approach if you consider only data taken at t>0 (necessary for plotting a logarithmic time axis) or all data, so both are available separately.

Each of these entries is itself a dict, taking us to the next level of the burst analyser dict, so let's pick one as an example:

data['BAT']['SNR4'].keys()
dict_keys(['ObservedFlux', 'Density', 'XRTBand', 'BATBand', 'Datasets'])

This is just a light curve dict (documented here); 'Datasets' will list the light curves present, and the other keys are those light curves.

There are no 'Binning' or 'TimeFormat' keys because for the burst analyser everything is in seconds since T0, and the binning was set by which dict we're in. Let's just check the contents:

data['BAT']['SNR4']['Datasets']
['ObservedFlux', 'Density', 'XRTBand', 'BATBand']
data['BAT']['SNR4']['Density']
Time TimePos TimeNeg Flux FluxPos FluxNeg FluxPosWithECFErr FluxNegWithECFErr Gamma GammaPos GammaNeg ECF ECFPos ECFNeg BadBin
0 -215.568 23.88 -23.88 0.000012 0.000010 -0.000010 19.557086 -0.000016 1.255057 18.231662 -2.905123 0.004733 7538.397454 -0.004712 True
1 -167.808 23.88 -23.88 -0.000009 0.000013 -0.000013 -11.310830 0.000015 1.434087 14.358402 -2.226665 0.005964 7186.458888 -0.004665 True
2 -120.048 23.88 -23.88 0.000001 0.000016 -0.000016 1.204314 -0.000016 1.613118 12.641135 -1.717817 0.007515 6438.842527 -0.005127 True
3 -72.288 23.88 -23.88 -0.000002 0.000020 -0.000020 -1.529345 0.000021 1.792149 13.903136 -1.555094 0.009469 5860.853417 -0.007034 True
4 -24.528 23.88 -23.88 0.000016 0.000026 -0.000026 9.799804 -0.000030 1.971179 17.511724 -1.833113 0.011931 7535.354362 -0.011133 True
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
215 692.772 20.74 -20.74 0.000108 0.000112 -0.000112 11.503893 -0.000136 3.606010 12.899198 -1.572613 0.061561 6528.752230 -0.043222 True
216 734.252 20.74 -20.74 -0.000155 0.000110 -0.000110 -17.407297 0.000158 3.579466 13.567159 -1.625472 0.060191 6778.430813 -0.044259 True
217 775.732 20.74 -20.74 -0.000023 0.000107 -0.000107 -2.707486 0.000108 3.552922 14.311350 -1.695630 0.058852 7032.106861 -0.045526 True
218 817.212 20.74 -20.74 -0.000052 0.000105 -0.000105 -6.541562 0.000113 3.526379 15.120521 -1.781044 0.057543 7286.151456 -0.046964 True
219 858.692 20.74 -20.74 0.000171 0.000102 -0.000102 22.942664 -0.000180 3.499835 15.984805 -1.879636 0.056263 7537.744214 -0.048524 True

220 rows × 15 columns

You will note what I said in the introduction above: the flux, photon index and ECF are all in this one table, and there are two sets of flux errors, with and without the ECF errors propagated. There is also a column, 'BadBin'. Some BAT data on the burst analyser are flagged as 'bad' ('unreliable' may be a better word, with hindsight); you can read the burst analyser docs to find out why. On the website a checkbox lets you decide whether or not to plot these; in this API, you get all the bins, and the 'BadBin' bool column tells you which ones were flagged.
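So, for example, if you want only the reliable bins, with the ECF uncertainty included in the flux errors, it's a couple of lines of standard pandas (a minimal sketch):

df = data['BAT']['SNR4']['Density']

# Drop the bins flagged as bad:
good = df[~df['BadBin']]

# Select the errors which include the propagated ECF uncertainty:
flux = good['Flux']
fluxErrPos = good['FluxPosWithECFErr']
fluxErrNeg = good['FluxNegWithECFErr']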

You may have noticed that the three different energy bands are in three different light curves. I could have combined these into one curve with lots of columns, but this way seems neater to me. If you really want to combine them yourself then DataFrame.merge() is your friend.
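A minimal sketch of such a merge (assuming, as appears to be the case above, that the different bands share identical time columns, so the merge keys line up):

# Join the XRT-band and BAT-band light curves on their common time axis,
# distinguishing the other columns by suffixes:
merged = data['BAT']['SNR4']['XRTBand'].merge(
    data['BAT']['SNR4']['BATBand'],
    on=['Time', 'TimePos', 'TimeNeg'],
    suffixes=('_XRTBand', '_BATBand'),
)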

Having covered the BAT in detail, we can pass through the other instruments a bit more rapidly.

BAT_NoEvolution data

The BAT data without spectral evolution are a bit simpler because, well, they don't account for spectral evolution. So there is no hardness ratio, and no photon index or ECF time series. Instead we have a single set of ECFs, taken from a spectral fit to the data over T90. So, let's have a quick look:

list(data['BAT_NoEvolution'].keys())
['ECFs',
 'SNR4',
 'SNR4_sinceT0',
 'SNR5',
 'SNR5_sinceT0',
 'SNR6',
 'SNR6_sinceT0',
 'SNR7',
 'SNR7_sinceT0',
 'TimeBins_4ms',
 'TimeBins_64ms',
 'TimeBins_1s',
 'TimeBins_10s',
 'Binning']

This looks rather like the BAT data, except that there is no 'HRData' entry, and there is an 'ECFs' one. The latter simply gives those ECFs from the T90 spectrum:

data['BAT_NoEvolution']['ECFs']
{'ObservedFlux': 0.00243424007095474,
 'Density': 0.00231776220799315,
 'XRTBand': 4.70354458026304e-08,
 'BATBand': 6.85253672780114e-07}

Beyond this, the BAT_NoEvolution data look just like the BAT data. i.e. if I pick a binning method, say SNR4 (again) and explore it:

list(data['BAT_NoEvolution']['SNR4'].keys())
['ObservedFlux', 'Density', 'XRTBand', 'BATBand', 'Datasets']
data['BAT_NoEvolution']['SNR4']['Datasets']
['ObservedFlux', 'Density', 'XRTBand', 'BATBand']
data['BAT_NoEvolution']['SNR4']['Density']
Time TimePos TimeNeg Flux FluxPos FluxNeg BadBin
0 -167.808 23.88 -23.88 -3.647946e-06 0.000005 -0.000005 True
1 -120.048 23.88 -23.88 4.335117e-07 0.000005 -0.000005 True
2 -72.288 23.88 -23.88 -6.048022e-07 0.000005 -0.000005 True
3 -24.528 23.88 -23.88 3.014273e-06 0.000005 -0.000005 True
4 -0.588 0.06 -0.06 5.243474e-04 0.000130 -0.000130 False
... ... ... ... ... ... ... ...
216 775.732 20.74 -20.74 -8.923795e-07 0.000004 -0.000004 True
217 817.212 20.74 -20.74 -2.080905e-06 0.000004 -0.000004 True
218 858.692 20.74 -20.74 7.054583e-06 0.000004 -0.000004 True
219 900.172 20.74 -20.74 -2.328557e-06 0.000004 -0.000004 True
220 941.772 20.86 -20.86 3.924220e-07 0.000004 -0.000004 True

221 rows × 7 columns

The only real difference here is that this DataFrame is much simpler, because we are not accounting for spectral evolution; there is also no propagated ECF error, since the ECF comes from the spectrum, not the hardness ratio. (This doesn't mean the ECF is without error, but the actual burst analyser processing never calculates the ECF error in this case. If I decide it should, then this part of the dict will of course be updated, but no existing contents will be changed.)
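One handy consequence: because each flux here is just the count rate multiplied by a single, constant ECF, you can move between the bands trivially. A minimal sketch (resting on that flux = count rate × ECF assumption described above):

noEvo = data['BAT_NoEvolution']

# Recover the band-independent count-rate curve from the Density band...
rate = noEvo['SNR4']['Density']['Flux'] / noEvo['ECFs']['Density']

# ...and rescale it into the BAT band:
batBandFlux = rate * noEvo['ECFs']['BATBand']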

XRT data

If you refer way back up this notebook to the burst analyser dict introduction, you'll remember that the 'binning' layer is only present for BAT, because this is the only instrument for which the burst analyser has multiple binning options. So, when we explore XRT data we should come straight into a light curve dict.

list(data['XRT'].keys())
['ObservedFlux_PC_incbad',
 'Density_PC_incbad',
 'XRTBand_PC_incbad',
 'BATBand_PC_incbad',
 'HRData_PC',
 'Datasets']

And we do! Although eagle-eyed readers will realise there is an extra "HRData_PC" entry. This is analogous to the BAT entry, giving the hardness ratio and its conversion to photon index and ECF. The HR data are separated out for the two XRT modes, although this GRB only has PC mode data. For completeness, let's have a quick look at things:

data['XRT']['HRData_PC']
Time TimePos TimeNeg HR HRPos HRNeg ECF_XRTBand ECF_XRTBandPos ECF_XRTBandNeg ECF_BATBand ... ECF_BATBandNeg Gamma GammaPos GammaNeg ECF_Density ECF_DensityPos ECF_DensityNeg ECF_ObservedDensity ECF_ObservedDensityPos ECF_ObservedDensityNeg
0 1299.585 6237.269 -1210.053 25.816354 5.46367 -5.46367 3.091866e-10 6.807572e-11 -2.764928e-11 1.679185e-10 ... -8.572934e-11 1.757906 0.340221 -0.269655 0.000005 0.000002 -0.000002 1.012730e-08 5.083110e-09 -2.895573e-09

1 rows × 21 columns

I must point out one little problem here: there should, really, be an 'HRData_incbad' key here, and there isn't, which is why there is only one bin in this hardness ratio. It turns out that this file doesn't exist (the _incbad hardness ratio exists and is used to create the burst analyser light curves, but this nice, combined file of everything isn't saved to disk). I will look into fixing this, and I guess I'll have to remake all the burst analyser data to create all the files(!), which may take a while; but the problem is in the burst analyser, not the API. When I've fixed it, the '_incbad' entries will appear.
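In the meantime, if you are writing code that should carry on working once the missing entries do appear, check for keys rather than assuming them. A minimal sketch (the exact '_incbad' key name here is my guess, based on the light curve names above):

# Prefer the _incbad hardness ratio if present, else fall back:
for key in ('HRData_PC_incbad', 'HRData_PC'):
    if key in data['XRT']:
        hrData = data['XRT'][key]
        break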

The other things are just the flux light curves, analogous to the BAT one:

data['XRT']['Density_PC_incbad']
Time TimePos TimeNeg Flux FluxPos FluxNeg FluxPosWithECFErr FluxNegWithECFErr Gamma GammaPos GammaNeg ECF ECFPos ECFNeg
0 98.141 11.448 -8.610 5.659597e-04 1.276641e-04 -1.276641e-04 6.047195e-04 -4.465586e-04 -0.709013 0.796733 -0.612578 1.421906e-04 1.485043e-04 -1.075101e-04
1 124.376 12.795 -14.786 3.277544e-04 7.201962e-05 -7.201962e-05 3.246673e-04 -2.430883e-04 -0.482790 0.747759 -0.576065 1.049657e-04 1.013866e-04 -7.435566e-05
2 151.556 15.702 -14.386 2.297554e-04 5.041549e-05 -5.041549e-05 2.130558e-04 -1.617686e-04 -0.294056 0.707369 -0.545949 8.148340e-05 7.341489e-05 -5.451438e-05
3 199.129 23.289 -31.871 8.787795e-05 1.933691e-05 -1.933691e-05 7.396474e-05 -5.744147e-05 -0.033366 0.652436 -0.504982 5.743304e-05 4.665880e-05 -3.535004e-05
4 245.091 27.474 -22.672 7.400815e-05 1.630769e-05 -1.630769e-05 5.757704e-05 -4.561260e-05 0.164947 0.611460 -0.474412 4.401511e-05 3.284077e-05 -2.533430e-05
5 293.881 23.816 -21.315 6.529621e-05 1.432802e-05 -1.432802e-05 4.722850e-05 -3.814375e-05 0.338307 0.576347 -0.448201 3.488048e-05 2.403989e-05 -1.888379e-05
6 336.711 31.132 -19.014 4.651303e-05 1.056935e-05 -1.056935e-05 3.191084e-05 -2.623635e-05 0.468223 0.550545 -0.428928 2.930073e-05 1.896747e-05 -1.512705e-05
7 385.434 17.511 -17.591 5.913151e-05 1.295729e-05 -1.295729e-05 3.813577e-05 -3.183823e-05 0.597275 0.525420 -0.410144 2.464212e-05 1.494703e-05 -1.211960e-05
8 436.679 21.427 -33.733 3.212099e-05 7.058155e-06 -7.058155e-06 1.961571e-05 -1.665841e-05 0.716475 0.502726 -0.393160 2.099996e-05 1.196534e-05 -9.864994e-06
9 493.437 32.366 -35.331 2.228694e-05 4.917785e-06 -4.917785e-06 1.289284e-05 -1.114431e-05 0.833163 0.481057 -0.376921 1.795655e-05 9.602384e-06 -8.057433e-06
10 566.799 26.702 -40.995 1.970442e-05 4.223285e-06 -4.223285e-06 1.065543e-05 -9.389811e-06 0.965523 0.457228 -0.359031 1.503470e-05 7.464342e-06 -6.398953e-06
11 632.505 41.230 -39.004 1.363840e-05 3.013630e-06 -3.013630e-06 7.052379e-06 -6.328031e-06 1.070261 0.439021 -0.345327 1.306359e-05 6.107327e-06 -5.329834e-06
12 719.643 61.906 -45.908 8.277664e-06 1.869939e-06 -1.869939e-06 4.059342e-06 -3.717332e-06 1.193509 0.418441 -0.329790 1.107246e-05 4.819479e-06 -4.297499e-06
13 831.681 35.117 -50.132 8.660232e-06 1.956362e-06 -1.956362e-06 3.993886e-06 -3.733246e-06 1.331679 0.396619 -0.313237 9.198785e-06 3.698451e-06 -3.377316e-06
14 933.210 46.417 -66.412 5.653324e-06 1.273362e-06 -1.273362e-06 2.489355e-06 -2.360530e-06 1.441667 0.380341 -0.300812 7.936678e-06 3.002971e-06 -2.790418e-06
15 1038.259 51.689 -58.632 5.059712e-06 1.137993e-06 -1.137993e-06 2.143696e-06 -2.055056e-06 1.543528 0.366250 -0.289980 6.922819e-06 2.485655e-06 -2.341313e-06
16 1156.741 81.139 -66.793 3.250238e-06 7.396556e-07 -7.396556e-07 1.336571e-06 -1.291628e-06 1.646717 0.353063 -0.279748 6.027721e-06 2.064580e-06 -1.963730e-06
17 1290.877 77.383 -52.997 3.247742e-06 7.304581e-07 -7.304581e-07 1.295156e-06 -1.255691e-06 1.751486 0.340921 -0.270210 5.237246e-06 1.724678e-06 -1.647037e-06
18 1418.981 59.600 -50.722 3.363178e-06 7.608633e-07 -7.608633e-07 1.321223e-06 -1.279625e-06 1.841837 0.331566 -0.262744 4.639317e-06 1.490002e-06 -1.419237e-06
19 1622.771 179.246 -144.190 1.474854e-06 2.712396e-07 -2.712396e-07 5.382698e-07 -5.137254e-07 1.969983 0.320254 -0.253487 3.906445e-06 1.231470e-06 -1.155583e-06
20 5128.211 61.811 -56.033 1.974105e-07 4.329651e-08 -4.329651e-08 1.072472e-07 -7.745461e-08 3.068727 0.337518 -0.254801 8.943915e-07 4.445406e-07 -2.909711e-07
21 5250.377 85.070 -60.355 1.445571e-07 3.273932e-08 -3.273932e-08 7.979772e-08 -5.761001e-08 3.091208 0.339920 -0.256351 8.678154e-07 4.368720e-07 -2.845733e-07
22 5418.039 82.890 -82.593 1.220509e-07 2.752421e-08 -2.752421e-08 6.828557e-08 -4.893154e-08 3.121225 0.343227 -0.258500 8.335586e-07 4.268000e-07 -2.763004e-07
23 5602.630 103.900 -101.700 9.424274e-08 2.123045e-08 -2.123045e-08 5.351901e-08 -3.807466e-08 3.153217 0.346875 -0.260889 7.985353e-07 4.162695e-07 -2.678043e-07
24 5791.243 90.798 -84.714 1.060140e-07 2.401006e-08 -2.401006e-08 6.115574e-08 -4.325063e-08 3.184835 0.350602 -0.263346 7.653672e-07 4.060630e-07 -2.597146e-07
25 5990.211 87.401 -108.170 6.824165e-08 1.779552e-08 -1.779552e-08 4.092729e-08 -2.942990e-08 3.217092 0.354525 -0.265949 7.329485e-07 3.958510e-07 -2.517581e-07
26 6159.463 91.154 -81.851 1.000091e-07 2.248157e-08 -2.248157e-08 5.923782e-08 -4.134833e-08 3.243698 0.357850 -0.268168 7.072441e-07 3.875768e-07 -2.454090e-07
27 6339.775 123.963 -89.158 7.741763e-08 1.755234e-08 -1.755234e-08 4.650555e-08 -3.232989e-08 3.271251 0.361374 -0.270531 6.815754e-07 3.791479e-07 -2.390277e-07
28 6540.716 106.055 -76.978 8.666849e-08 1.964973e-08 -1.964973e-08 5.278666e-08 -3.649125e-08 3.301048 0.365277 -0.273161 6.548639e-07 3.701897e-07 -2.323381e-07
29 6733.254 81.508 -86.482 9.162066e-08 2.066177e-08 -2.066177e-08 5.648038e-08 -3.881702e-08 3.328751 0.368988 -0.275672 6.309685e-07 3.620049e-07 -2.263062e-07
30 6919.921 97.933 -105.160 5.405230e-08 1.434027e-08 -1.434027e-08 3.455951e-08 -2.427585e-08 3.354864 0.372557 -0.278097 6.092440e-07 3.544157e-07 -2.207792e-07
31 7101.220 127.248 -83.366 6.809063e-08 1.543770e-08 -1.543770e-08 4.299557e-08 -2.931256e-08 3.379561 0.375993 -0.280440 5.893869e-07 3.473490e-07 -2.156877e-07
32 7385.907 150.947 -157.440 6.590947e-08 1.218800e-08 -1.218800e-08 4.144298e-08 -2.734975e-08 3.417095 0.381326 -0.284091 5.604390e-07 3.368125e-07 -2.081906e-07
33 11729.988 148.045 -177.904 2.110738e-08 5.306712e-09 -5.306712e-09 1.656238e-08 -1.076719e-08 3.858814 0.452379 -0.333919 3.098363e-07 2.303026e-07 -1.375225e-07
34 25208.251 4424.959 -2076.327 2.594379e-09 6.082105e-10 -6.082105e-10 2.643509e-09 -1.629126e-09 4.589345 0.591602 -0.434608 1.162630e-07 1.152866e-07 -6.772812e-08
35 40295.514 963.536 -844.227 1.193078e-09 2.358357e-10 -2.358357e-10 1.389799e-09 -8.379556e-10 5.037265 0.684334 -0.502629 6.374296e-08 7.317641e-08 -4.296007e-08
36 48622.227 4535.501 -3132.456 7.351397e-10 1.273647e-10 -1.273647e-10 8.994349e-10 -5.382675e-10 5.216637 0.722441 -0.530699 5.010832e-08 6.068917e-08 -3.564730e-08
37 114520.802 7400.553 -11229.443 4.673005e-10 1.263795e-10 -1.263795e-10 5.276442e-10 -3.261832e-10 4.889789 0.653380 -0.479872 7.769062e-08 8.516960e-08 -4.999349e-08
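As ever, this is just a DataFrame, so it is easy to work with directly. For example, a minimal sketch (assuming you have matplotlib installed) plotting this light curve on log axes:

import matplotlib.pyplot as plt

lc = data['XRT']['Density_PC_incbad']
plt.errorbar(lc['Time'], lc['Flux'],
             xerr=[-lc['TimeNeg'], lc['TimePos']],
             yerr=[-lc['FluxNeg'], lc['FluxPos']],
             fmt='.')
plt.xscale('log')
plt.yscale('log')
plt.xlabel('Time since trigger (s)')
plt.ylabel('Flux density')
plt.show()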

UVOT data

Lastly, let's check out the UVOT data. This is very simple and like XRT we get straight into a light curve dict:

data['UVOT'].keys()
dict_keys(['white', 'b', 'u', 'v', 'uvw1', 'uvw2', 'uvm2', 'Datasets'])

UVOT data only appear in the 'observed flux in their native bands' plot in the burst analyser, which is why we only have one entry per filter. In some cases there will be upper limits as well as detections, which would give datasets like 'b_UL'.
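So if you want to check programmatically whether any upper limits are present, you can just scan the 'Datasets' list; a trivial sketch:

# Find any upper-limit datasets among the UVOT entries:
ulSets = [d for d in data['UVOT']['Datasets'] if d.endswith('_UL')]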

The UVOT light curves should appear as you expect, but let's look:

data['UVOT']['uvm2']
Time TimePos TimeNeg FluxDensity FluxDensityPos FluxDensityNeg
0 650.323099 9.88220 -9.88220 0.000099 0.000032 -0.000032
1 1080.320059 9.88220 -9.88220 0.000127 0.000036 -0.000036
2 1601.299979 9.87665 -9.87665 0.000105 0.000032 -0.000032
3 6202.576450 99.89200 -99.89200 0.000076 0.000009 -0.000009
4 11718.445460 161.48350 -161.48350 0.000087 0.000008 -0.000008
5 23510.119850 374.35250 -374.35250 0.000081 0.000005 -0.000005

Yep, nothing surprising there.

Other arguments, and a note on memory usage

The above example was a very simple one, in which all of the data for GRB 201013A were retrieved. Sometimes you may not want all of the data (why waste time transferring things you don't need?) and indeed, there are cases where you can't get all of the data in one go, because it violates the memory limits of our web server (try replacing "GRB 201013A" with "GRB 130427A" in the example above and you'll hit this problem). So there are three arguments you can pass to udg.getBurstAnalyser() to request that only some of the data be retrieved; I strongly advocate using them, if only because I find it hard to believe that most users will actually want all of the data. Each of them can be the string 'all' (the default if unspecified), or a list/tuple of strings, and their contents are case insensitive. They let you restrict which parts of the burst analyser dict, as explored above, are fetched.
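A minimal sketch (the argument names instruments, BATBinning and bands here are my assumptions, mirroring the dict levels we just explored; do run help(udg.getBurstAnalyser) to confirm them before relying on this):

# Argument names assumed; check help(udg.getBurstAnalyser) before use.
data = udg.getBurstAnalyser(GRBName="GRB 201013A",
                            instruments=('BAT', 'XRT'),
                            BATBinning=('SNR4',),
                            bands=('Density',),
                            returnData=True,
                            saveData=False)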

Saving the data

Now we've dug through what the returned data look like, we can get to the point of writing them to disk. We can do this in two ways: either by calling udg.saveBurstAnalyser(data, **kwargs), where the data object is the one we created above; or by setting saveData=True in the call to getBurstAnalyser(). These two things act in exactly the same way behind the scenes, so can be considered identical.

As usual when saving data, you can specify the destDir, and the subDirs option is True by default, behaving exactly as it has done for light curves and spectra. That is (in case you've forgotten while wading through the dict above): if you requested a single GRB, it does nothing. If you requested multiple GRBs (or provided a list/tuple with one entry) then it will create a subdirectory per GRB, named either by the GRB name or the targetID, depending on which one you used to select the GRBs. If subDirs is False and you supplied a list/tuple of GRBs, the GRB name or targetID will instead be prepended to the file names.

For the burst analyser there are a few extra options you can specify when saving data: these are the same whichever of the two methods above you use, because behind the scenes they both call udg.saveSingleBurstAn() and just pass **kwargs to it. There are quite a few, and I'll discuss the key ones in a moment, or you can read all about them by executing the following cell.

help(udg.saveSingleBurstAn)
Help on function saveSingleBurstAn in module swifttools.ukssdc.data.GRB:

saveSingleBurstAn(data, destDir='burstAn', prefix='', instruments='all', asQDP=False, header=False, sep=',', suff=None, usePropagatedErrors=False, badBATBins=False, clobber=False, skipErrors=False, silent=True, verbose=False, **kwargs)
    Save downloaded burst analyser data to disk.

    This takes a data structure for a previously-downloaded burst
    analyser dataset, and saves it to disk.

    NOTE: This is for a **single** object, not a set of objects, i.e if
    you downloaded a set of objects, then you received a dict, with one
    entry per object; a single entry should be passed here as the data
    argument.

    Parameters
    ----------

    data : dict
        A dictionary of burst analyser data, previously downloaded.

    destDir : str
        The directory in which to save the data.

    prefix : str, optional
        A string to prepend to the filenames when saving them.

    instruments : str or list, optional
        Which instrument data to save. Must be 'all' or a list of
        instruments (from: BAT, XRT, UVOT) (default: 'all').

    asQDP : bool, optional
        Whether to save in qdp format. Overrides ``sep``
        (default: ``False``).

    header : bool, optional
        Whether to print a header row (default: ``False``).

    sep : str, optional
        Separator to use for columns in the file (default: ',').

    suff : str
        If specified, the file suffix to use (default: ``.dat``, or
        ``.qdp`` if ``asQDP=True``).

    usePropagatedErrors=False : bool, optional
        Whether the flux errors to write to the files are those which
        have had the uncertainty on the ECF propagated (default:
        ``False``) **only effective if ``asQDP=True``**

    badBATBins : bool, optional
        Whether to write out BAT bins flagged as 'bad' (default:
        ``False``).

    clobber : bool, optional
        Whether to overwrite files if they exist (default: ``False``).

    skipErrors : bool, optional
        Whether to continue if a problem occurs with one file
        (default: ``False``).

    silent : bool, optional
        Whether to suppress all output (default: ``True``).

    verbose : bool, optional
        Whether to write verbose output (default: ``False``).

    **kwargs : dict, optional
        Not needed, but stops errors due to the way this can be called.

Things like asQDP, incbad, and nosys you should already be familiar with from the light curves, but there are a few new ones, which are quite important, to draw your attention to: instruments lets you save only some instruments' data; usePropagatedErrors selects whether the flux errors written out are those with the ECF uncertainty propagated; and badBATBins controls whether BAT bins flagged as 'bad' are written out at all.

So, just for the sake of completeness, let's give an example. First, let's save the data we've just downloaded (uncomment the verbose line if you want to see where the files are saved):

udg.saveBurstAnalyser(data,
                      destDir='/tmp/APIDemo_burstAn1',
                      # verbose=True,
                      badBATBins=True)
Ignoring subDirs as only a single source was provided.

But of course, we could have done this without first pulling the data into a variable:

udg.getBurstAnalyser(GRBName="GRB 201013A",
                     saveData=True,
                     returnData=False,
                     destDir='/tmp/APIDemo_burstAn2',
                     badBATBins=True,
                     #verbose=True
                    )

Exercise for reader: confirm that these two have produced exactly the same files.
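If you want to cheat at that exercise, the standard library can do the comparison for you; a minimal sketch:

import filecmp

# Report identical, differing and unique files at the top level of the two
# directories (use .report_full_closure() to recurse into subdirectories):
filecmp.dircmp('/tmp/APIDemo_burstAn1', '/tmp/APIDemo_burstAn2').report()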

Saving the data and the tar file

I said I wasn't going to demonstrate getting the tar file, but I do want to make one little point. If you run this (and don't feel obliged to):

udg.getBurstAnalyser(GRBName="GRB 201013A",
                     saveData=False,
                     returnData=False,
                     downloadTar=True,
                     extract=True,
                     removeTar=True,
                     destDir='/tmp/APIDemo_burstAn3',
                     )

And then this:

udg.getBurstAnalyser(GRBName="GRB 201013A",
                     saveData=True,  ### This line has changed compared to the last cell
                     returnData=False,
                     downloadTar=True,
                     extract=True,
                     removeTar=True,
                     destDir='/tmp/APIDemo_burstAn4',
                     )

Then if you compare the two directories created by the last two cells, you will notice that the second one has a subdirectory fromTar, and the tar file has been extracted there. This is just because if you extract the tar file in the same place as you save the data directly, it's frankly a mess, so things are kept separate for you.

In all of these examples I've downloaded a single GRB and done so by name, but as with every other product, you can supply a list/tuple of names, or use targetIDs instead.
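For example, a sketch (the second GRB name here is an arbitrary choice; remember that very bright GRBs may hit the memory limits discussed above, so the data-limiting arguments may be needed):

data = udg.getBurstAnalyser(GRBName=['GRB 201013A', 'GRB 080319B'],
                            returnData=True,
                            saveData=False)

# As described in the introduction, the result gains a layer keyed by GRB:
list(data.keys())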

Positions

Positions are so much simpler than everything above! There are very few options to worry about, and a really simple return structure. Positions come with no files, just a simple dict of positions. Let's take a peek:

pos = udg.getPositions(GRBName='GRB 080319B')
pos
{'Best_RA': 217.92067,
 'Best_Decl': 36.30226,
 'Best_Err90': '1.4',
 'Enhanced_RA': 217.92067,
 'Enhanced_Decl': 36.30226,
 'Enhanced_Err90': 1.4,
 'Standard_RA': 217.9197187854,
 'Standard_Decl': 36.3024413265,
 'Standard_Err90': 3.5,
 'SPER_RA': None,
 'SPER_Decl': None,
 'SPER_Err90': None,
 'Onboard_RA': 217.9196,
 'Onboard_Decl': 36.3041,
 'Onboard_Err90': 4.7}

Well, that was easy. For GRBs there are a bunch of positions produced, and by default this function will return all of them, with None for anything missing. The only thing to note is that Best doesn't necessarily mean "has the smallest error"; rather, it is defined in terms of precedence: Enhanced positions are preferred; if there is none, the Standard position is given; failing that, the SPER and then the Onboard position.
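If you want to apply that precedence yourself, say because you requested several position types and want the first one that exists, a trivial sketch:

def bestPosition(pos):
    # Return (RA, Dec, Err90, type) for the highest-precedence position
    # present in a positions dict, or None if there are none at all.
    for posType in ('Enhanced', 'Standard', 'SPER', 'Onboard'):
        if pos.get(f'{posType}_RA') is not None:
            return (pos[f'{posType}_RA'], pos[f'{posType}_Decl'],
                    pos[f'{posType}_Err90'], posType)
    return None

bestPosition(udg.getPositions(GRBName='GRB 080319B'))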

The only extra argument this function has (apart from the usual silent and verbose) is positions. This defaults to 'all', but can instead be a list/tuple of which positions you want to get, of the set above. And of course, we can request multiple GRBs in one go if we want, as with the other products. To show these both in one go:

pos = udg.getPositions(GRBName=('GRB 080319B', 'GRB 101225A'),
                       positions=('Enhanced', 'SPER'))
pos
{'GRB 080319B': {'Enhanced_RA': 217.92067,
  'Enhanced_Decl': 36.30226,
  'Enhanced_Err90': 1.4,
  'SPER_RA': None,
  'SPER_Decl': None,
  'SPER_Err90': None},
 'GRB 101225A': {'Enhanced_RA': 0.19792,
  'Enhanced_Decl': 44.60034,
  'Enhanced_Err90': 1.4,
  'SPER_RA': None,
  'SPER_Decl': None,
  'SPER_Err90': None}}

And, of course, as with everything in here, we could have supplied targetID instead of GRBName if we wanted.

Obs Data

Phew, nearly at the end of this module. There is one last bit of functionality and we are going to deal with this one really, really quickly.

If you want to download all of the actual obs data for a GRB, we can do that using the function getObsData(). This is basically a wrapper around downloadObsData() in the parent module, which was described in the parent module documentation. It takes the GRBName or targetID parameter, exactly like everything else in this notebook; it takes verbose and silent, like everything else everywhere; and any other parameters are just passed as **kwargs to downloadObsData().

So, one example, only getting a little data to save time:

udg.getObsData(GRBName="GRB 201013A",
               instruments=['XRT',],
               destDir='/tmp/APIDemo_downloadGRB',
               silent=False,
               )
Resolved `GRB 201013A` as `999948`.
Have to get targetIDs: ['00999948']
Making directory /tmp/APIDemo_downloadGRB
Downloading 5 datasets
Making directory /tmp/APIDemo_downloadGRB/00999948000
Making directory /tmp/APIDemo_downloadGRB/00999948000/xrt/
Making directory /tmp/APIDemo_downloadGRB/00999948000/xrt/event/
Making directory /tmp/APIDemo_downloadGRB/00999948000/xrt/hk/
Making directory /tmp/APIDemo_downloadGRB/00999948000/auxil/

Downloading 00999948000:   0%|          | 0/30 [00:00<?, ?files/s]

Making directory /tmp/APIDemo_downloadGRB/00999948001
Making directory /tmp/APIDemo_downloadGRB/00999948001/xrt/
Making directory /tmp/APIDemo_downloadGRB/00999948001/xrt/event/
Making directory /tmp/APIDemo_downloadGRB/00999948001/xrt/hk/
Making directory /tmp/APIDemo_downloadGRB/00999948001/auxil/

Downloading 00999948001:   0%|          | 0/25 [00:00<?, ?files/s]

Making directory /tmp/APIDemo_downloadGRB/00999948002
Making directory /tmp/APIDemo_downloadGRB/00999948002/xrt/
Making directory /tmp/APIDemo_downloadGRB/00999948002/xrt/event/
Making directory /tmp/APIDemo_downloadGRB/00999948002/xrt/hk/
Making directory /tmp/APIDemo_downloadGRB/00999948002/auxil/

Downloading 00999948002:   0%|          | 0/25 [00:00<?, ?files/s]

Making directory /tmp/APIDemo_downloadGRB/00999948003
Making directory /tmp/APIDemo_downloadGRB/00999948003/xrt/
Making directory /tmp/APIDemo_downloadGRB/00999948003/xrt/event/
Making directory /tmp/APIDemo_downloadGRB/00999948003/xrt/hk/
Making directory /tmp/APIDemo_downloadGRB/00999948003/auxil/

Downloading 00999948003:   0%|          | 0/25 [00:00<?, ?files/s]

Making directory /tmp/APIDemo_downloadGRB/00999948004
Making directory /tmp/APIDemo_downloadGRB/00999948004/xrt/
Making directory /tmp/APIDemo_downloadGRB/00999948004/xrt/event/
Making directory /tmp/APIDemo_downloadGRB/00999948004/xrt/hk/
Making directory /tmp/APIDemo_downloadGRB/00999948004/auxil/

Downloading 00999948004:   0%|          | 0/25 [00:00<?, ?files/s]

Well done, you got to the end of a long tutorial. I hope you found reading it rather less trying than I found writing it, and more to the point, that it was helpful. Don't forget to remove all the /tmp/APIDemo* directories we've created!