

7.3 Flux calibration

Flux calibration involves (1) fitting a slope to the 24 Hz voltage detector samples and (2) converting this slope to a flux. This is, of course, complicated by non-linearities in the system, by glitches, and by the limited accuracy to which the conversion from voltage/sec to flux is known. Parts 1 and 2 are handled separately, by Derive-SPD and Auto-Analysis (AA) respectively.

At constant illumination the output of the SWS detectors can be approximated as a voltage changing linearly with time;


\begin{displaymath}
V(t) = S \cdot t + O
\end{displaymath} (7.1)

The rate of increase of this voltage, i.e. the slope S, depends on the radiation falling onto the detector, which is the physical quantity of interest. In Derive-SPD a slope is derived from the 24 Hz data for each reset interval. See section 8.4 for a discussion of this and the errors on it.

In normal data frames all usable samples in a reset interval are used. For a 1 second integration the effective integration time is 17/24 seconds: the first 7 samples are thrown away as being affected by the reset, leaving 17 samples that can be used. For a 2 second integration the time is (17+23)/24 seconds, as 1 sample is thrown away in the last second due to the reset pulse. For an integration lasting K seconds the effective integration time is $\sim(K-2)+(17+23)/24$ seconds.
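As an illustration, the effective integration time can be computed from the reset interval length as in the following Python sketch. The function name is hypothetical; the constants simply restate the numbers quoted above (24 Hz sampling, 7 samples discarded after a reset, 1 sample lost to the reset pulse at the end of an interval).

\begin{verbatim}
def effective_integration_time(k_seconds):
    """Effective integration time (seconds) for a reset interval of
    k_seconds, assuming 24 Hz sampling, 7 samples discarded after each
    reset and 1 sample lost to the reset pulse at the end."""
    if k_seconds == 1:
        return 17.0 / 24.0               # 24 - 7 = 17 usable samples
    # first second contributes 17 samples, last second 23 samples,
    # any intermediate seconds contribute all 24 samples
    return (k_seconds - 2) + (17.0 + 23.0) / 24.0

# effective_integration_time(1) -> 0.71, (2) -> 1.67, (4) -> 3.67
\end{verbatim}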

The accuracy is estimated directly from the fit residuals, which allow the computation of the standard deviation of the derived photo-current. Obviously, the accuracy depends not only on the intensity of the source (I), but also on how well the ramps have been linearized beforehand, and therefore on the measurement error of the RC time constants. A statistical weight is computed which is inversely proportional to the error on I and proportional to the number of measurements between two detector resets. This weight is used by Auto-Analysis to compute the average photo-current for each ramp. It is expected that this error will dominate all previously described ones.
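The sketch below illustrates, under simplifying assumptions (a clean, already linearized ramp with equally spaced 24 Hz samples), how a slope, its formal error from the fit residuals and a weight of the kind described above could be computed. The function name is hypothetical; this is not the actual Derive-SPD implementation.

\begin{verbatim}
import numpy as np

def fit_ramp(voltages, rate_hz=24.0):
    """Fit V(t) = S*t + O to the usable samples of one reset interval;
    return the slope S, its standard deviation and a statistical weight
    proportional to the number of samples and inversely proportional to
    the error."""
    t = np.arange(len(voltages)) / rate_hz
    (slope, offset), cov = np.polyfit(t, voltages, 1, cov=True)
    sigma_slope = np.sqrt(cov[0, 0])   # formal error from fit residuals
    weight = len(voltages) / sigma_slope
    return slope, sigma_slope, weight
\end{verbatim}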

If a glitch (or any other anomaly) is detected within a reset interval, processing of that ramp is stopped. Processing is subsequently resumed after the glitch and continued until a reset pulse (or another glitch) is detected. If glitches have occurred within a reset interval, the slopes S of the separate glitch-free parts of that interval are averaged together, weighted by the standard deviations $\sigma_S$ of those slopes.
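A minimal sketch of such a weighted average is given below; it assumes each glitch-free segment has already been fitted separately (e.g. with a routine like fit_ramp above) and combines the segment slopes with the usual inverse-variance weights. The exact weighting used by the pipeline may differ.

\begin{verbatim}
import numpy as np

def combine_segment_slopes(slopes, sigmas):
    """Average the slopes of the glitch-free parts of one reset
    interval, weighting each part by 1/sigma_S**2."""
    slopes = np.asarray(slopes, dtype=float)
    weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean_slope = np.sum(weights * slopes) / np.sum(weights)
    sigma_mean = 1.0 / np.sqrt(np.sum(weights))
    return mean_slope, sigma_mean
\end{verbatim}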

Currently the SWS flux calibration as performed in AA rests on the assumption that the measured slope $S(t,\lambda)$ (in $\mu$V/sec) depends linearly on the source flux $I(\lambda)$ (in Jy), through the instrumental gain $G(t,\lambda)$ (in $\mu$V/sec/Jy) and the dark current D(t) (in $\mu$V/sec);


\begin{displaymath}
S(t,\lambda) = G(t,\lambda) \cdot I(\lambda) + D(t)
\end{displaymath} (7.2)

Note that in this equation it is implicitly assumed that all memory effects (see section 5.8) can be neglected or have been removed. A full treatment of these effects would result in some sort of convolution integral for the right hand side of eqn. 7.2.
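Schematically, such a treatment would replace the instantaneous product in eqn. 7.2 by a convolution with a detector response kernel h; the form below is purely illustrative and is not part of the OLP processing:

\begin{displaymath}
S(t,\lambda) = \int_{-\infty}^{t} h(t-t') \, G(t',\lambda) \, I(\lambda) \, dt' + D(t)
\end{displaymath}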

Following this equation, the source flux $I(\lambda)$ is reconstructed by first subtracting the dark current from the measured slopes and then dividing the result by the instrumental gain.
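In code this inversion of eqn. 7.2 amounts to the sketch below; the names are hypothetical and, for simplicity, the gain is passed as a single array rather than as the product of factors discussed next.

\begin{verbatim}
import numpy as np

def reconstruct_flux(slopes, dark_current, gain):
    """Invert eqn. 7.2: I(lambda) = (S(t,lambda) - D(t)) / G(t,lambda).

    slopes       : measured slopes S(t,lambda) in micro-V/sec
    dark_current : dark current D(t) in micro-V/sec
    gain         : instrumental gain G(t,lambda) in micro-V/sec/Jy
    Returns the source flux I(lambda) in Jy."""
    return (np.asarray(slopes) - np.asarray(dark_current)) / np.asarray(gain)
\end{verbatim}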

The instrumental gain is split into several (hopefully) orthogonal components;


\begin{displaymath}
G(t,\lambda) = G(t) \cdot G_0 \cdot G(\lambda)
\end{displaymath} (7.3)

Here G(t) contains all the gain variations occurring on the timescale of the observation. G(t) is derived from the observation itself, from up-down scan data (see section 4.6.1). In principle G(t) should be unity; in practice it varies around unity during an observation, and in OLP V6 it is set to 1.

The factor G0 is used to account for long term variations in the responsivity of the instrument, i.e. variations between different SWS observations. It is determined by comparing the instrument response measured with the internal calibrator switched on to the value expected for that response from calibration observations.

The conversion from $\mu$V/sec to Jy is contained in $G(\lambda)$. It is taken from a calibration table (one for each AOT band) which in turn is derived from special calibration observations (section 8.3.6).
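Putting the pieces together, the total gain of eqn. 7.3 could be assembled as in the sketch below, with G(t) set to unity as in OLP V6, G0 taken from the internal-calibrator comparison and $G(\lambda)$ read from the per-band calibration table. The names and the way the table is passed in are hypothetical.

\begin{verbatim}
import numpy as np

def total_gain(g_lambda, g0, g_t=1.0):
    """Combine the gain factors of eqn. 7.3:
    G(t,lambda) = G(t) * G0 * G(lambda).

    g_lambda : spectral response G(lambda) from the per-band
               calibration table, in micro-V/sec/Jy
    g0       : long-term responsivity correction from the internal
               calibrator
    g_t      : within-observation gain variation G(t), unity in OLP V6"""
    return g_t * g0 * np.asarray(g_lambda)
\end{verbatim}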

