The SKA is an interferometer in which each baseline (each pair of stations) measures the visibility of a single UV mode. The whole procedure can be divided into 3 steps: the first step is to generate the UV map distribution according to the station layout; secondly one should sample the noise amplitude on each UV mode; thirdly, one should convert the UV-space observations into real-space noise images using different weightings. I’ll introduce these steps one by one.

UV map generation

To be finished later.

Sampling the noise magnitude

First we assume all the baselines are identical. Each time a baseline measures, it records the true 21 cm visibility \(V_{\rm{21\ cm}}\) plus a noise visibility \(V_N\) with standard deviation \(\sigma_{N}\):

\[\begin{equation} \sigma_{N} = \frac{\lambda^2T_{\rm sys}}{\epsilon \Omega A_{\rm Dish}\sqrt{2\Delta \nu t_{\rm int}N_d}}, \label{eqn:singnoi} \end{equation}\]

where \(\Omega\approx \theta^2\) is the beam area, \(\lambda\) is the signal wavelength, \(T_{\rm sys}\) is the system temperature of the telescope, \(\Delta \nu\) is the frequency resolution, \(t_{\rm int}\) is the integration time, \(A_{\rm Dish}\) is the physical area of a station, and \(\epsilon\) is an efficiency factor that follows

\[\begin{align} \begin{split} \epsilon= \left \{ \begin{array}{ll} 1, &\nu<\nu_{\rm c}\\ \left ( \frac{\nu_{\rm c}}{\nu}\right )^2, & \nu\ge\nu_{\rm c}. \end{array} \right. \end{split} \end{align}\]
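As a concrete illustration, here is a minimal Python sketch of the per-baseline noise rms above. The function name, the default \(\nu_{\rm c}\) value, and the \(\Omega\approx(\lambda/\sqrt{A_{\rm Dish}})^2\) approximation for the beam area are my own assumptions, and \(N_d\) is passed through exactly as it appears in the formula.

```python
import numpy as np

def sigma_noise(nu, T_sys, A_dish, delta_nu, t_int, N_d, nu_c=110e6):
    """Per-baseline noise rms on a single visibility (illustrative sketch).

    nu       : observing frequency [Hz]
    T_sys    : system temperature [K]
    A_dish   : physical station area [m^2]
    delta_nu : frequency resolution [Hz]
    t_int    : integration time [s]
    N_d      : the N_d factor in the formula above
    nu_c     : critical frequency of the efficiency factor (assumed value)
    """
    c = 299792458.0                   # speed of light [m/s]
    lam = c / nu                      # signal wavelength
    theta = lam / np.sqrt(A_dish)     # beam width ~ lambda / D (assumption)
    Omega = theta**2                  # beam area ~ theta^2
    eps = 1.0 if nu < nu_c else (nu_c / nu)**2
    return lam**2 * T_sys / (eps * Omega * A_dish * np.sqrt(2.0 * delta_nu * t_int * N_d))
```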

However, what the UV map tells us is how many times each UV mode is measured, each time with a noise \(V_{N_i}\). So on any UV mode \(\tilde{\mathcal{x}}\), the outcome visibility is the sum of all \(N_{uv}\) independent observations (this is called the natural weighting):

\[\begin{equation} V_{\rm{obs}}(\tilde{\mathcal{x}}) = N_{uv}(\tilde{\mathcal{x}})\times V_{\rm{21\ cm}}(\tilde{\mathcal{x}}) + \sum_{i=1}^{N_{uv}(\tilde{\mathcal{x}})}V_{N_i} \label{eqn:vis} \end{equation}\]

Our goal is to sample the noise amplitude, and there are two ways to do it:

  • We can simply sample \(\sum_{i=1}^{N_{uv}(\tilde{\mathcal{x}})}V_{N_i}\) directly. This is a sum of \(N_{uv}\) Gaussian variables, which is equivalent to sampling a single Gaussian variable with std \(\sigma_N^{\rm pixel} = \sigma_{N}\sqrt{N_{uv}}\). In this case the sampled noise corresponds to the final noise map, and no additional procedure is needed.

  • Or we can rewrite equation \eqref{eqn:vis} as \(V_{\rm{obs}}(\tilde{\mathcal{x}}) = N_{uv}(\tilde{\mathcal{x}})\left( V_{\rm{21\ cm}}(\tilde{\mathcal{x}}) + 1/N_{uv}(\tilde{\mathcal{x}})\sum_{i=1}^{N_{uv}(\tilde{\mathcal{x}})}V_{N_i}\right)\). In this case we need to sample a Gaussian variable with std \(\sigma_N^{\rm pixel} = \sigma_{N}/\sqrt{N_{uv}}\), and a weighting is required in the next step if not using the natural weighting.

Personally I prefer the second way: then I can write the final product of the “observation step” as \(V_{\rm final}(\tilde{\mathcal{x}}) = V_{\rm{21\ cm}}(\tilde{\mathcal{x}}) + 1/N_{uv}(\tilde{\mathcal{x}})\sum_{i=1}^{N_{uv}(\tilde{\mathcal{x}})}V_{N_i}\) and leave all the weighting business to the next section, because, first, the “averaging” is something that should be performed automatically in my mind; and second, this quantity is closest to the original \(V_{\rm{21\ cm}}\).
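To make this concrete, here is a minimal sampling sketch showing both ways. I assume \(\sigma_N\) is the std of the real and imaginary parts separately and that UV cells with \(N_{uv}=0\) are simply masked; the array names and sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholders: N_uv is the hit-count map from the UV-map step,
# sigma_N is the per-baseline noise rms from the previous sketch.
N_uv = rng.integers(0, 50, size=(256, 256)).astype(float)
sigma_N = 1.0
safe_N = np.where(N_uv > 0, N_uv, 1.0)   # avoid dividing by zero on empty cells

def complex_normal(shape):
    # Unit complex Gaussian: real and imaginary parts each with unit std.
    return rng.normal(size=shape) + 1j * rng.normal(size=shape)

# Way 1: noise of the *summed* visibility, std = sigma_N * sqrt(N_uv)
noise_sum = complex_normal(N_uv.shape) * sigma_N * np.sqrt(N_uv)

# Way 2: noise of the *averaged* visibility, std = sigma_N / sqrt(N_uv)
noise_avg = np.where(N_uv > 0, complex_normal(N_uv.shape) * sigma_N / np.sqrt(safe_N), 0.0)

# With the second convention, the product of the observation step is
# V_final = V_21cm + noise_avg on every sampled UV mode.
```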

Weighting

After we have obtained \(V_{\rm final}\):

\[\begin{equation} V_{\rm final}(\tilde{\mathcal{x}}) = V_{\rm{21\ cm}}(\tilde{\mathcal{x}}) + 1/N_{uv}(\tilde{\mathcal{x}})\sum_{i=1}^{N_{uv}(\tilde{\mathcal{x}})}V_{N_i} \end{equation}\]

The final step is to transform these visibilities (in \(uv\)-space, i.e. Fourier space) into the final image. The simplest way to do this is just an inverse Fourier transform. This approach provides the best angular resolution (you will see why when we discuss the natural weighting later), but also the largest noise: in the outer regions of \(uv\)-space (far from the origin), few baselines hit those modes, and few hits result in a high noise amplitude (a larger \(1/N_{uv}(\tilde{\mathcal{x}})\sum_{i=1}^{N_{uv}(\tilde{\mathcal{x}})}V_{N_i}\) term in \(V_{\rm final}\)). This leads to very high noise amplitudes oscillating on very small scales. One way to mitigate this problem is to adopt the “natural weighting”.
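For reference, a minimal sketch of this direct, unweighted transform might look like the following; the FFT-shift bookkeeping assumes the \(uv\) grid is centred on the origin, and masking unsampled cells to zero is my own assumption.

```python
import numpy as np

def dirty_image_unweighted(V_final, N_uv):
    """Plain inverse FFT of the gridded visibilities, no weighting applied."""
    V = np.where(N_uv > 0, V_final, 0.0)          # unsampled UV cells carry no data
    image = np.fft.ifft2(np.fft.ifftshift(V))     # assumes the grid is centred on u = v = 0
    return np.real(np.fft.fftshift(image))
```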

The idea behind natural weighting is that, since some modes are hit less often and some more often, we simply weight each mode by its number of hits. This is a natural way of doing the weighting, because \(N_{uv}(\tilde{\mathcal{x}})\) is the only thing we have in the above equations. In this case, the frequently measured modes become much more important, while the rarely hit modes contribute almost nothing. The consequence is that the noise amplitude of the long-baseline (small-scale) modes is suppressed, but their signal is suppressed as well. Thus the result has significantly lower noise, but also lacks small-scale information.
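A minimal sketch of natural weighting under the averaged-visibility convention above: each UV cell is weighted by its hit count \(N_{uv}\) before transforming (which recovers the summed visibilities). Normalising the weights to sum to one is my own assumption.

```python
import numpy as np

def dirty_image_natural(V_final, N_uv):
    """Inverse FFT with natural weighting: weight each UV cell by its hit count."""
    W = N_uv / N_uv.sum()                         # natural weights, normalised to sum to 1
    image = np.fft.ifft2(np.fft.ifftshift(W * V_final))
    return np.real(np.fft.fftshift(image))
```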