How to acquire a raw datacube?

With a working slit spectrometer, we are now one step from being able to make a raw datacube.

A hyperspectral datacube is an image with two spatial dimensions (x, y) and one spectral dimension (λ). The slit spectrometer delivers a full slit spectrum (x, λ), and the full datacube is acquired through a line scan along the direction normal to the slit (y).

We do this by employing a motorized rotation stage placed under the camera.

rotation stage 1

rotation stage 2

The rotation speed of the motorized stage is chosen so that the pixels come out square: the pixel size along the slit (x direction) should equal the pixel size along the scanning direction (y direction).
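As a rough sketch, the matching scan speed can be worked out from the angular size of one pixel (its instantaneous field of view). The 25 mm focal length matches the Kowa lens mentioned later in this blog, but the 100 fps frame rate is just an assumed value for illustration:

```python
import math

def scan_rate_for_square_pixels(pixel_pitch_um, focal_length_mm, frame_rate_hz):
    """Angular scan speed (deg/s) giving square pixels.

    One pixel's IFOV along the slit is pixel_pitch / focal_length; rotating
    by exactly one IFOV per frame makes the along-scan pixel size equal to
    the along-slit one.
    """
    ifov_rad = (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)
    return math.degrees(ifov_rad) * frame_rate_hz

# 15 um pixels behind a 25 mm lens, at an assumed 100 fps:
speed = scan_rate_for_square_pixels(15, 25, 100)
print(f"{speed:.2f} deg/s")
```

Doubling the frame rate doubles the allowed rotation speed, so the scan time trades off directly against exposure time per line.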

While the rotation stage moves, each spectrum is recorded in turn, and the camera image looks like this:

On the image above, the vertical axis corresponds to the spatial direction along the slit (x axis), and the horizontal axis represents the spectral direction (λ). The system is a “spatial scanning” device, and the scan produced by the rotation stage could be replaced in the future by the scanning motion of an airborne platform.

There are 3 obvious dark vertical bands, which correspond to the 3 absorption bands of water, with the large one on the right centred on 1.4 micron.

Once the scan is completed, the datacube can be sliced in the (x,y) plane and the image reconstructed, as shown on the following image:

image car

This image is taken at the wavelength offering the best contrast and the best atmospheric transmission, at around 1 micron. The spectrum of every pixel within the image is accessible in post-processing using a Python GUI.
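In code, both operations are just slices of the same array. A minimal sketch, assuming the cube is stored as a NumPy array ordered (scan line, slit pixel, band), with made-up dimensions:

```python
import numpy as np

# Hypothetical raw datacube: 200 scan lines x 640 slit pixels x 512 bands.
rng = np.random.default_rng(0)
cube = rng.random((200, 640, 512)).astype(np.float32)

def pixel_spectrum(cube, y, x):
    """Full spectrum recorded at one spatial pixel (what the GUI plots)."""
    return cube[y, x, :]

def band_image(cube, band):
    """Monochromatic (x, y) image at a single wavelength band."""
    return cube[:, :, band]

spec = pixel_spectrum(cube, 100, 320)  # one 512-sample spectrum
img = band_image(cube, 256)            # one 200 x 640 image
```

Because slicing is just a view into the array, browsing bands or spectra interactively in the GUI costs no copying.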

data cube

The next stage will be to perform the system spectral calibration. We will remove the effect of the camera optics and sensor from the measured image by acquiring a dark frame and a white reference frame, which will subsequently be used to correct the datacube. The dark frame calibration will remove the sensor response produced by the camera electronics (which depends on temperature and exposure time). The white reference frame will allow us to remove the illumination spectral signature from the scene. With this calibration, it will be possible to ultimately derive the scene reflectance, which is the absolute spectral reflectivity revealing the physical or chemical nature of the sample.
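The standard dark/white correction described above amounts to one line of arithmetic per pixel and band: reflectance = (raw − dark) / (white − dark). A sketch of how that calibration step might look, with synthetic frames standing in for real acquisitions:

```python
import numpy as np

def to_reflectance(raw, dark, white, eps=1e-6):
    """Dark-subtract and flat-field a raw frame into reflectance.

    dark  : frame acquired with the light blocked (electronics offset)
    white : frame of a uniform reference target (illumination x optics x QE)
    """
    num = raw.astype(np.float32) - dark
    den = np.maximum(white.astype(np.float32) - dark, eps)  # avoid divide-by-zero
    return np.clip(num / den, 0.0, 1.0)

# Synthetic 4x4 frames: 10-count offset, 110-count white level, 60-count scene.
dark = np.full((4, 4), 10.0)
white = np.full((4, 4), 110.0)
raw = np.full((4, 4), 60.0)
refl = to_reflectance(raw, dark, white)  # (60-10)/(110-10) = 0.5 everywhere
```

The same function applies unchanged to a whole datacube, since NumPy broadcasts the per-pixel arithmetic over every band.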

First atmospheric spectrum!

Today, I acquired my first atmospheric spectrum, taken from my office window. The system was looking at an autumnal cloud (so a pretty grey one!). A simple Python GUI has recently been written to control the camera via the CL2USB3 framegrabber, and is shown on the image below:


The GUI displays the camera sensor image in the main window (in a grey-scale colormap) with two associated profiles, on the bottom and on the right-hand side of the sensor image. These profiles give cross-sections along the spatial (horizontal) and spectral (vertical) directions. The exposure and frame rate of the camera can be controlled separately, but I have used the ALC (Automatic Light Control) function to optimize the InGaAs sensor's dynamic range.

The horizontal profile shows the intensity distribution of the cloud across the slit, and we can see that it is composed of two brighter features. The vertical black lines therefore correspond to darker areas within the cloud. The spectral profile is taken on the brightest part of the cloud. The horizontal lines are the interesting features: they represent the main atmospheric absorption bands.
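The two GUI profiles are simple reductions of the sensor frame. A minimal sketch, assuming the frame is oriented as in the screenshot (columns running along the slit, rows along wavelength); the real GUI's axis convention may differ:

```python
import numpy as np

def gui_profiles(frame):
    """Cross-sections like the GUI's two side plots.

    Assumes columns = spatial position along the slit, rows = wavelength.
    """
    spatial = frame.mean(axis=0)        # horizontal profile: intensity along the slit
    brightest = int(spatial.argmax())   # brightest slit position
    spectral = frame[:, brightest]      # vertical profile: spectrum at that position
    return spatial, spectral

# Synthetic 512 x 640 frame with a bright feature at slit position 7:
frame = np.ones((512, 640))
frame[:, 7] = 5.0
spatial, spectral = gui_profiles(frame)
```

Taking the spectral cut at the brightest slit position mirrors the post's choice of plotting the spectrum on the brightest part of the cloud.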

The image below shows the light path from the scene (i.e. the cloud observed through my window – in fact this is not the cloud used for the spectra shown here above!!!) up to the camera sensor. The scene is imaged onto the spectrograph slit using an objective lens (shown in the previous post). The light passes through the slit and is then dispersed by the freeform grating, producing a spectrum (vertically) for every spatial point across the slit.


The atmospheric transmission in the infrared is composed of low-absorption spectral windows, or bands, each designated by a letter. Ground-based astronomical IR instrumentation, which looks at the light from stars and galaxies through the atmosphere, has been optimized to deliver the best possible image and contrast in these bands. The image below shows the main bands in the spectral range of the OWL 640 Mini camera:

spectrum 3

There are 3 main bands, centred at around 1020 nm, 1220 nm and 1630 nm. The strongest water absorption is between the J and H bands, at around 1400 nm.

So the above spectrum, acquired by our system, is the spectrum of the Sun (a black-body spectrum with radiance peaking in the visible range, around 500 nm) observed through the atmospheric transmission windows, finally reaching the camera sensor, which in turn has a maximum sensitivity between 1000 nm and 1500 nm.
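The ~500 nm solar peak quoted above follows directly from Wien's displacement law, λ_max = b / T, with b ≈ 2.898 × 10⁻³ m·K and T ≈ 5778 K for the Sun:

```python
# Wien's displacement law gives the peak wavelength of a black-body spectrum.
WIEN_B = 2.898e-3  # Wien's displacement constant, in m.K

def peak_wavelength_nm(temp_k):
    """Black-body radiance peak in nanometres for a temperature in kelvin."""
    return WIEN_B / temp_k * 1e9

solar_peak = peak_wavelength_nm(5778)  # ~501 nm: in the visible, as stated
```

So at the camera's 1000–1500 nm band we are well onto the long-wavelength tail of the solar curve, which is why the measured spectrum falls off towards the red end of the frame.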

The image below shows the position of the different IR bands on a spectrum acquired with the system.

spectrum 5.PNG

Next time, I will look at calibrating the system and at assessing its performance by measuring the smallest spatial and spectral features that it can image.

OWL 640 Mini VIS-SWIR camera

I have received my new OWL 640 Mini VIS-SWIR camera with its IR KOWA 1″ 25mm/F1.4 lens. The camera is optimized for the 0.6 to 1.7 micron waveband and was selected because its SWIR waveband extends into the visible. On the image below, the quantum efficiency (the rate of conversion of received photons into electrons) of the InGaAs (indium gallium arsenide) camera sensor is displayed; it spans from the visible spectrum up to the mid-SWIR (where thermal effects due to the heating of the electronics start to kick in).


The camera is small and could possibly be used in the airborne hyperspectral imaging tests that we will do later in this project. The sensor resolution is 640 x 512 (VGA) and the pixel dimension is 15 microns.

raptor photonics

The Camera Link connection directly interfaces to a PCI-e board in the PC. I will not be using the camera with this setup because I want to run it either from a laptop or from a Raspberry Pi 4. As a result, I will be using a CAMPORT Euresys CL2USB3 (see image below) to control it from a USB3 port.


The control interface will be developed in Python, and camera functions such as exposure time, gain and binning will be accessed via the camera's serial port.
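A first sketch of what that control layer might look like. The transport would in practice be a pyserial `serial.Serial` instance opened on the framegrabber's virtual COM port; the command strings here are placeholders, since the real command set must come from the camera's manual. A fake loopback transport lets the wrapper be exercised without hardware:

```python
class CameraControl:
    """Sketch of the planned serial control layer (placeholder commands)."""

    def __init__(self, transport):
        # `transport` is anything with write()/readline(), e.g. serial.Serial.
        self.transport = transport

    def send(self, command):
        """Frame a command with a carriage return and read one reply line."""
        self.transport.write((command + "\r").encode("ascii"))
        return self.transport.readline().decode("ascii").strip()

    def set_exposure_ms(self, ms):
        return self.send(f"EXP {ms}")    # placeholder command name

    def set_gain(self, gain):
        return self.send(f"GAIN {gain}") # placeholder command name


class FakeTransport:
    """Loopback transport for testing the wrapper without a camera."""

    def __init__(self):
        self.sent = []

    def write(self, data):
        self.sent.append(data)

    def readline(self):
        return b"ACK\n"


cam = CameraControl(FakeTransport())
reply = cam.set_exposure_ms(10)  # fake transport always answers "ACK"
```

Keeping the transport injectable means the same wrapper runs unchanged on the laptop, the Raspberry Pi 4, or in offline tests.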

Here is a first infrared picture of me.


On the IR picture below, I am holding a glass of water (hum… yes it is) in front of my eyes. Due to water absorption in a wavelength band centred on 1450 nm, the glass appears to be opaque!

We are used to imaging in the visible, where the glass and the water are totally transparent. The spectrograph, described in the previous post, will image and quantify the water absorption in the vicinity of this absorption band, in order to extract the moisture content.
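Quantifying that absorption typically goes through the Beer–Lambert law: the absorbance A = −log₁₀(I/I₀) of a pixel relative to a dry reference scales with the amount of absorbing water in the light path. A minimal sketch (the intensities here are made up):

```python
import numpy as np

def absorbance(sample, reference, eps=1e-9):
    """Beer-Lambert absorbance A = -log10(I / I0).

    sample    : intensity measured through the absorbing material
    reference : intensity without the absorber (e.g. a dry pixel)
    """
    ratio = np.clip(np.asarray(sample, dtype=float) / np.maximum(reference, eps),
                    eps, None)  # clip guards against zeros and negatives
    return -np.log10(ratio)

# A pixel transmitting 10% of the reference intensity near 1450 nm:
a = absorbance(0.1, 1.0)  # one decade of attenuation -> A = 1.0
```

Because absorbance is linear in the absorber amount, maps of A over the scene relate directly to relative moisture content, which is the quantity the spectrograph is after.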



Collaboration with Newcastle University – training day on 2 Headwall hyperspectral sensors at Nafferton Farm

On the 27th and 28th of June, I was at Newcastle University’s Nafferton Farm in Northumberland where 2 Headwall hyperspectral sensors (VNIR and SWIR) were installed and tested for the first time.


The 2 sensors are:

  • a Micro-Hyperspec VNIR E-Series (1600 pixels in the spatial dimension), and
  • a Micro-Hyperspec SWIR 384, equipped with a Stirling-cooled MCT sensor (384 pixels in the spatial dimension).

They will soon be used to measure the development and phenotype of potatoes. Their wide spectral ranges and versatility will allow Newcastle to use them in a broad range of experimental activities, either in the laboratory or airborne.


Dr Ankush Prashar, the Newcastle University scientific lead, is keen on applying this technology to analyse 300 varieties of potatoes, which are currently cultivated both organically and conventionally at the farm.


These amazing sensors will be a great benchmark for comparing optical performance with Durham's prototype, currently being built at the UK Remote Sensing Technology Centre in NetPark.

The team, from left to right: Chris Holder from Durham University, specialised in deep learning; Dr Ankush Prashar from Newcastle University, Lecturer in Crop Science; myself; Isaac Gilbert from Analytik Ltd; Francesco Beccari from Headwall; Pigeon from Newcastle University, future PhD student working on the project; and Dr Hiran Vegad from Analytik Ltd.


This is the start of a productive collaboration between Newcastle and Durham! Please keep following the blog for updates!

Machining and Assembly

We have now completed the machining of an aluminium prototype. The primary and tertiary mirrors have been diamond-turned as one single surface on a Moore Nanotech 250 machine, and the M2 freeform grating has been ruled on the 4-axis Nanotech 350 FG.

The resulting grating looks very nice:

M2 convex grating

When looking at the reflected light, the image through the grating appears multiple times, each copy corresponding to a different diffraction order.

The grating's lines, with their 10 micron period, have a nice triangular (blazed) profile to maximize the throughput into the first diffraction order:

blazed grating
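As a sanity check on the 10 micron period, the grating equation predicts where each order lands. This sketch assumes normal incidence for simplicity; the real freeform convex grating works off-axis, so the actual angles differ:

```python
import math

def diffraction_angle_deg(wavelength_um, period_um=10.0, order=1):
    """Grating equation at normal incidence: sin(theta_m) = m * lambda / d."""
    s = order * wavelength_um / period_um
    if abs(s) >= 1.0:
        raise ValueError("this order is evanescent at this wavelength")
    return math.degrees(math.asin(s))

# First-order angles across the camera's 0.6-1.7 micron band:
angles = {wl: round(diffraction_angle_deg(wl), 2) for wl in (0.6, 1.0, 1.7)}
```

The first order spans only a few degrees over the whole band, which is consistent with the overlapping orders visible when looking through the grating.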

The 3 mirrors have been assembled on the breadboard, together with the gold-coated prismatic folding mirror and the adjustable slit. This slit will allow us to adjust and optimize the spectral resolution while maximizing the system's throughput.

MAIT system 2

MAIT system

The system has then been tested with white light from a halogen bulb. Looking at the different orders, we note a pretty good match between the positions and order of appearance of the spectral lines and those predicted in Zemax.

first spectrum

The next stage will be to image the first spectrum on the InGaAs sensor of the Raptor Photonics OWL 640 Mini VIS-SWIR!