The Basics of Hectospec Reductions with HSRED
1. The basics
- Some nomenclature:
- RERUN - This code has adopted the SDSS standard of using a rerun number to keep track of different reductions. If you do not set this keyword, rerun=0000 is used throughout the code. The rerun must be an integer of four digits or fewer. Throughout this example, I will use rerun=100.
- MAP FILE - The mapfiles are the _map files that are distributed with the Hectospec data. These files list the RA and DEC of each fiber in an observation. In a default reduction this file is used to set the RA and DEC information for each fiber. If more information is needed, a PLUGCAT must be supplied.
- PLUGCAT - This is very similar to a mapfile, but lists more information (such as the magnitude, object type, etc.) which can later be added to the reduction output. If the data are to be fluxed, this must be set (the procedure below outlines how to create this file).
- Initial Preprocessing:
- The preprocessing (creation of master biases, flats, etc.) requires input lists. You should start IDL in the directory containing all of your raw data for a given night. Create the initial lists as sketched below.
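- A minimal sketch of this step, assuming the list-creation routine is named hs_preplists (this name is an assumption - check your HSRED distribution if it differs):
- IDL> cd, '/path/to/raw/data'  ; hypothetical path - start in the night's raw-data directory
- IDL> hs_preplists  ; assumed routine name; builds the lists described below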
- This script creates the lists directory and the lists needed to combine each of the observations:
- bias.list
- Contains all of the bias observations to be used in the reductions. It is vital that you check this file by hand to be sure that no non-biases are included. Often non-biases are coded as biases, and you don't want to include these. Also, you don't want to include biases that are contaminated by light in the enclosure (these will have more counts than a typical bias). A quick way to spot-check the headers is sketched after this list.
- arc.list
- Lists all of the arc files in the directory. Again, these should be checked to ensure that none of the exposures are contaminated by light in the enclosure and that none of the arcs are under- or overexposed.
- dflat.list
- Contains the dome flat files. Be sure none are saturated.
- sflat.list
- Contains the twilight flat files. Again, check to be sure that these flats are not under- or overexposed.
- dark.list
- Contains the dark observations (darks are generally not needed; see below).
- cal.list
- Ignore this file for now - it is used for reductions in batch mode and is covered below.
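- A hedged sketch for spot-checking these lists against the image headers, assuming the IDL Astronomy User's Library is in your path and that your headers carry an IMAGETYP keyword (adjust the keyword name to match your data):
- IDL> readcol, 'lists/bias.list', names, format='A'
- IDL> for i=0, n_elements(names)-1 do print, names[i], '  ', sxpar(headfits(names[i]), 'IMAGETYP')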
- Once the lists are properly prepared, you can complete the initial processing of the calibration files. The proper task is hs_calibproc, which has several possible modes.
- If you want to process the biases, arcs, dome flats, and sky flats, you can use the command
- IDL> hs_calibproc, /doall, rerun=100
- If, however, you do not have all of these files to process (you may not have sky flats, for example), you can process everything else with:
- IDL> hs_calibproc, /dobias, /doarc, /dodome, /dowave, rerun=100
- Darks are not generally needed for reductions. If you find that you need to use the provided darks, you can combine the darks with
- IDL> hs_calibproc, /dodark, rerun=100
- This command makes the master version of each calibration type. If /doall or /dowave is set, the domeflats are traced, the trace solution is saved, and the initial wavelength solution is created. This takes up to an hour of run time. All of the calibration data are stored in the calibration/XXXX/ directory, where XXXX is the four-digit rerun number.
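- As a quick hedged check that this step completed (with rerun=100, the four-digit directory would be calibration/0100/):
- % ls calibration/0100/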
- Making the plugcat file (optional for general reductions - required for fluxed data or if sky fibers are hidden in the targets)
- The plugcat files are specific to each configuration, and thus you only need to create one per config.
- If you simply want to add some extra info to the output fiber info (other than the default RA, DEC), you will need to specify a catalog file. This need not be specific to the configuration you are working on, but can include all of the targets in the input catalog to XFITFIBS, for example.
- If you are hiding blank sky patches as targets, a second sky catalog will need to be input as well.
- The resultant plugcat files will be in the plugdir directory and will look like (where config is the root of the input map name):
- config_all, config_unused, config_sky, config_target
- The code hs_readcat can be used to generate the needed plugcat files. This program is a bit annoying, but it works. To run it, follow these steps:
- 1. Locate your catalog. Here is a small sample of one - sample.cat. NOTE - Any non-standard formatted columns MUST be commented out with a '#'. Store the filename in a variable for later use:
- IDL> cat = 'sample.cat'
- 2. Specify the names of the white-space delimited columns in the file. While you can name the columns anything you want, only certain column names will be recognized and passed to the plugcat. The recognized keywords are:
- RA - required, in DECIMAL HOURS
- DEC - required, in DECIMAL DEGREES
- TYPE - optional
- RANK - optional
- RMAG - optional
- RAPMAG - optional
- BCODE - optional
- ICODE - optional
- RCODE - optional
- It should be noted that while we originally had specific intentions for each of these keywords for our own purposes, the only required ones are marked.
- For the example file above, I would issue the following command to list the columns:
- IDL> cols = ['ra', 'dec', 'type', 'rank', 'rmag', 'rapmag', 'ncode', 'icode', 'rcode', 'etime', 'z', 'snr']
- 3. Specify the format of each column (A=string, D=double, F=float/single, I=integer, L=long):
- IDL> format = ['D', 'D', 'A', 'D', 'D', 'D', 'A', 'L', 'L', 'L', 'D', 'D']
- 4. If you are providing the locations of blank sky target fibers, then do the same for the sky fibers. Here is an example catalog - sample.cat_sky.
- IDL> scat = 'sample.cat_sky'
- IDL> scol = ['ra', 'dec', 'type', 'junk1', 'junk2', 'junk3', 'junk4']
- IDL> sformat = ['D', 'D', 'A', 'L', 'D', 'D', 'A']
- 5. Set the mapfile that you will match to. I typically choose the first of a series of exposures of one config, but this doesn't matter. Suppose I am matching the targets for the data in bootes040616_3.1131.fits. I want to match the map file associated with this, namely
- IDL> mapfile = 'bootes040616_3.1131_map'
- 6. Now you can run hs_readcat.
- If you do not have hidden skies:
- IDL> hs_readcat, cat, mapfile, cols, format
- If you DO have hidden skies:
- IDL> hs_readcat, cat, mapfile, cols, format, skyfile=scat, skycolumnnames=scol, skycolformat=sformat
- 7. Now you will want to check the resulting files in the plugdir directory. You should check:
- a. That config_all has EXACTLY 300 entries and that none are repeated (i.e., line XX should be for fiber XX).
- b. That config_sky has all of your skies listed properly (NOTE - if you are hiding skies, these will be coded as 'sdsssky' while others are 'sky'). A quick shell check is sketched below.
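- As a hedged sketch of the entry-count check from the shell (substitute your own config root for config):
- % wc -l plugdir/config_all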
- Now that you have the plugcat ready to go, there is one last initial step - ONLY REQUIRED IF YOU WANT TO FLUX YOUR DATA
- In order to flux your data, the list of standard stars distributed with HSRED, $HSRED_DIR/etc/standstar.dat, must be updated. To do this, create a list of standard stars (these are F-type stars in our case, which is all that is currently supported). The format of the file should be RA DEC u_psf g_psf r_psf i_psf z_psf reddening_r, where the x_psf are the SDSS psf magnitudes of each star and reddening_r is the galactic reddening along the line of sight toward the star in the r band. This reddening should be the value from the SFD dust maps. Suppose this information is in foo.cat; then
- % cat foo.cat >> $HSRED_DIR/etc/standstar.dat
- will update the needed file.
- Extraction:
- You are now ready to run the main extraction routine. A couple of bookkeeping notes: if you have an sflat.fits file in the calibration/XXXX/ directory, both the skyflat and the domeflats will be used for the calibration of your data. If sflat is not included, then only the domeflat is used. The full documentation for hs_extract, the workhorse for the extraction, can be found in the source code, but the general operations are outlined here. First, you should decide which observations to extract and coadd. These should all be of the same configuration, obviously.
- IDL> files = findfile('bootes040616_3*fits')
- I will outline several possible setups (NOTE - you can add /docosmic to any of these calls to enable cosmic ray rejection on EACH exposure):
- Standard extraction with no plugcat and no fluxing:
- IDL> hs_extract, files, /dostand, rerun=100, outname='spHect-bootes3.fits'
- Standard extraction with a plugcat but no fluxing (note: when specifying the plugcat, include only the config name, i.e., without the _all from the filename in the plugdir directory):
- IDL> hs_extract, files, plugfile='plugdir/bootes040616_3', /plugcat, /dostand, rerun=100, outname='spHect-bootes3.fits'
- The final step in the coaddition is to look at high signal-to-noise spectra and find a correction factor for exposure-to-exposure variations in the spectra. This step can fail if there are no high signal-to-noise spectra in the data. In that case, you can use either of the above calls with the /noftweak option.
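- For example, a sketch that simply adds this option to the standard call above:
- IDL> hs_extract, files, /dostand, rerun=100, outname='spHect-bootes3.fits', /noftweak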
- Extraction with flux and telluric correction:
- IDL> hs_extract, files, plugfile='plugdir/bootes040616_3', /plugcat, /uberextract, rerun=100, outname='spHect-bootes3.fits'
- A couple of other options:
- /qaplot - makes quality assurance plots (must have a plugcat with RMAG info)
- /dodark - use the master dark file
- All of these will create the reduction/XXXX/ directory (XXXX is again the four-digit rerun number), and the output data products will be saved there. There are a number of files, but those of most interest are:
- spObs files - the extracted spectra for each individual exposure (not fluxed)
- spHect files - the coadded spectra (fluxed if requested)
- Batch mode extraction:
- If you have a number of configs to be reduced on a given night, you might want to extract them in batch mode.
- Update the lists/cal.list file.
- This file should contain one line for each configuration. Each line should list all of the individual exposures for that config (separated by commas, not spaces) and then the plugcat file, separated by whitespace. If you are not using a plugcat, you should use a placeholder in its place (a single 'a' will work). Here is an example of a file WITHOUT a plugcat - cal.list. A sketch of the format follows.
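- As an illustrative sketch of cal.list lines (the second exposure number here is hypothetical), first with a plugcat and then with the 'a' placeholder:
- bootes040616_3.1131.fits,bootes040616_3.1132.fits plugdir/bootes040616_3
- bootes040616_3.1131.fits,bootes040616_3.1132.fits a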
- You can then use hs_batch_extract with the same parameters as hs_extract (except the outnames) to reduce the data. Examples:
- No fluxing:
- IDL> hs_batch_extract, rerun=100, /dostand
- Fluxing (with plugcat):
- IDL> hs_batch_extract, rerun=100, /uberextract, /plugcat
- No fluxing (with plugcat):
- IDL> hs_batch_extract, rerun=100, /dostand, /plugcat
- Data model:
- spObs:
- ext[0] - wavelength / Angstroms
- ext[1] - flux
- ext[2] - inverse variance
- ext[3] - mask
- ext[4] - OR mask
- ext[5] - plugmap - contains targeting information
- ext[6] - sky structure (CHIP 1)
- ext[7] - scales determined from skylines (CHIP 1)
- ext[8] - sky structure (CHIP 2)
- ext[9] - scales determined from skylines (CHIP 2)
- spHect:
- ext[0] - wavelength / Angstroms
- ext[1] - flux
- ext[2] - inverse variance
- ext[3] - mask
- ext[4] - OR mask
- ext[5] - plugmap - contains targeting information
- ext[6] - sky structure (CHIP 1)
- ext[7] - scales determined from skylines (CHIP 1)
- ext[8] - sky structure (CHIP 2)
- ext[9] - scales determined from skylines (CHIP 2)
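- A hedged sketch for reading these extensions in IDL, assuming the IDL Astronomy User's Library's mrdfits is in your path:
- IDL> lam = mrdfits('spHect-bootes3.fits', 0)
- IDL> flux = mrdfits('spHect-bootes3.fits', 1)
- IDL> ivar = mrdfits('spHect-bootes3.fits', 2)
- IDL> plugmap = mrdfits('spHect-bootes3.fits', 5)
- IDL> plot, lam[*, 0], flux[*, 0]  ; quick look at the first fiber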
- What's with this wavelength and flux being separate - IRAF can't plot these files:
- This was a decision made early on and is now baked into the code. If you would prefer a file that can be read and plotted with IRAF, the hs_toiraf program will give you that. Enter the reduction/XXXX/ directory.
- IDL> sphectfile = 'spHect-bootes3.fits'
- IDL> outfile = 'bootes3-iraf.fits'
- IDL> hs_toiraf, sphectfile, outfile
- The output file will be plottable with SPLOT in IRAF.
- 1D processing:
- This pipeline will also give redshifts for each of the spectra through a cross-correlation routine.
- For data that are already fluxed:
- IDL> files = findfile('spHect*fits')
- IDL> hs_reduce1d, files
- Non-fluxed data can be run as well, but an artificial fluxing must be applied. While the fluxing solution is OK (at least for finding redshifts), the telluric correction is not appropriate for all data, as the atmospheric absorption lines change - it is VERY important that you examine each redshift to be sure this isn't affecting the solution.
- IDL> hs_reduce1d, files, /pseudoflux
- The 1D reduction pipeline creates spZbest, spZall, and spZline files in the working directory. The data model for these can be found at the SDSS spectroscopic page.
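- A hedged sketch for inspecting the resulting redshifts (the spZbest filename here is hypothetical, and the Z tag is assumed to follow the SDSS data model):
- IDL> zbest = mrdfits('spZbest-bootes3.fits', 1)
- IDL> print, zbest.z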
- For more description of the code, look at the documentation in the file headers or the in-code documentation.