
Enhanced face recognition tech effective in adverse environments

Two Australian Defence scientists have enhanced a facial recognition algorithm, improving the odds of identifying someone in adverse environments, such as across a distant carpark or in a dark alley.

According to a recent press release, the scientists, both members of the Defence Science and Technology biometrics team, explained that while iris and fingerprint biometrics are the most accurate, comparison of facial characteristics is the most common technique because it is reasonably accurate and CCTV footage is commonplace.

About the initiative

The aim of this research was to determine whether face recognition algorithms could be used in adverse environments.

Adverse environments include long distances of up to 250 metres and very dark conditions, such as an alley on a moonless night.

Enhancements were made to an in-house facial recognition algorithm.

They then conducted trials involving long lenses across fields on bright, sunny days, as well as in a dark tunnel facility where the scientists say they could barely see their hands once the lights were turned off.

The results of the trials showed that face recognition with the new algorithms is effective in these environments.

Literature review pushed them in the right direction

According to Sau Yee Yiu, one of the scientists, a literature review in the early stages helped direct the team's efforts.

She then came up with a model of how heat propagates through the atmosphere, which turns out to be similar to the way noise from atmospheric turbulence distorts images over long distances.

The atmosphere moves and shifts around and the image gets sheared and blurry. Applying her heat dispersal model gets rid of that turbulence and brings it back closer to a focused, sharp image.
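The article does not give the model's equations, but the analogy can be illustrated in one dimension: a step of the discrete heat equation smears a sharp edge much as turbulence smears an image, and stepping the same equation backwards (an unsharp-mask-style inverse diffusion) partially restores it. The functions below are an invented sketch of that idea, not the scientists' actual algorithm.

```python
def heat_step(signal, alpha=0.25):
    """One explicit finite-difference step of the heat equation (diffusion/blur)."""
    out = signal[:]
    for i in range(1, len(signal) - 1):
        laplacian = signal[i - 1] - 2 * signal[i] + signal[i + 1]
        out[i] = signal[i] + alpha * laplacian
    return out

def sharpen_step(signal, alpha=0.25):
    """Approximate inverse: step the heat equation backwards to re-sharpen."""
    out = signal[:]
    for i in range(1, len(signal) - 1):
        laplacian = signal[i - 1] - 2 * signal[i] + signal[i + 1]
        out[i] = signal[i] - alpha * laplacian
    return out

edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # a crisp edge in a 1-D "image"
blurred = heat_step(edge)                # turbulence-like smearing
restored = sharpen_step(blurred)         # the edge transition becomes steeper again
```

Real turbulence correction works on 2-D images and a far richer distortion model, but the diffuse-then-invert structure is the same.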

The low-light enhancement uses various filter passes to remove graininess from images.
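The article does not name the filters, so as a stand-in this sketch uses repeated median-filter passes, a standard way to suppress the isolated bright and dark specks typical of low-light grain; the pass count and neighbourhood radius here are illustrative only.

```python
from statistics import median

def median_pass(signal, radius=1):
    """One median-filter pass: each sample becomes the median of its neighbourhood."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(median(signal[lo:hi]))
    return out

def denoise(signal, passes=2):
    """Stack several filter passes, as the low-light enhancement does."""
    for _ in range(passes):
        signal = median_pass(signal)
    return signal

noisy = [10, 10, 90, 10, 10, 10, 0, 10, 10]   # grain: spikes at 90 and 0
clean = denoise(noisy)                         # spikes removed, flat signal kept
```

A median filter is chosen over simple averaging in sketches like this because it removes impulsive noise without blurring genuine edges, which matters when the goal is face recognition.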

The algorithm can be tweaked interactively, using an interface that allows several parameters to be controlled by sliders.

This controls the deconvolutions applied to the images. As the user moves the sliders, the output updates in real time, allowing the algorithm to be tailored for the best results in a particular environment.
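The slider-driven loop can be sketched as a callback that writes the new value into a parameter set and immediately re-runs the pipeline. The parameter names and the toy pipeline below are invented for illustration; the paper's actual controls are not listed in the article.

```python
class TuningInterface:
    """Minimal sketch of slider-driven parameter tuning with live re-processing."""

    def __init__(self, image, pipeline):
        self.image = image
        self.pipeline = pipeline          # function: (image, params) -> image
        # Hypothetical deconvolution parameters, one per slider.
        self.params = {"deconv_strength": 0.5, "deconv_iterations": 10}
        self.output = pipeline(image, self.params)

    def on_slider_change(self, name, value):
        """Called whenever the user moves a slider: update and re-process at once."""
        self.params[name] = value
        self.output = self.pipeline(self.image, self.params)
        return self.output

# Toy pipeline: scale every pixel by the deconvolution strength.
def toy_pipeline(image, params):
    return [p * params["deconv_strength"] for p in image]

ui = TuningInterface([2, 4, 6], toy_pipeline)
ui.on_slider_change("deconv_strength", 1.0)   # output refreshes immediately
```

A real interface would bind `on_slider_change` to GUI widgets and run the pipeline off the UI thread, but the update-and-reprocess pattern is the same.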

The team presented its results at the 2018 Digital Image Computing: Techniques and Applications (DICTA) conference.

In the paper, the pair demonstrate the improvements in recognition and face matching delivered by the enhanced algorithm.

A further algorithm was used to calculate a metric for the overall quality of facial images.
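The article does not specify which quality metric was used; a common proxy for image quality under blur is gradient energy (the mean squared difference between neighbouring pixels), which is higher for sharp images than for smeared ones. The function below is that generic proxy, assumed here purely for illustration.

```python
def gradient_energy(signal):
    """Sharpness proxy: mean squared difference between neighbouring pixels."""
    diffs = [(b - a) ** 2 for a, b in zip(signal, signal[1:])]
    return sum(diffs) / len(diffs)

sharp   = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # crisp edge
smeared = [0.0, 0.1, 0.3, 0.7, 0.9, 1.0]   # same edge after blur

# The crisp edge concentrates its change in one step, so it scores higher.
score_sharp, score_smeared = gradient_energy(sharp), gradient_energy(smeared)
```

Scoring both an original and a processed image with a metric like this gives the kind of objective before/after comparison the paper pairs with visual checks.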

Effective results

This revealed that images processed with the modified algorithm were all of superior quality to the originals, agreeing with the visual checks.

Dmitri Kamenetsky, the other scientist on the project, said they are very happy with the results, which will benefit stand-off surveillance.

A description of the algorithm has been released so that other researchers can implement it and make further improvements.

Interestingly, while most of the research presented at DICTA used deep learning in some way, theirs is a relatively simple yet effective mathematical approach.
