

Journal article:

M. Hörhan, H. Eidenberger:
"Gestalt descriptions for deep image understanding";
Pattern Analysis and Applications (Springer), July 2020.



Abstract (English):
In this work, we present a novel visual perception-inspired local description approach as a preprocessing step for deep learning.
With the ongoing growth of visual data, efficient image descriptor methods are becoming more and more important.
Several local point-based description methods were defined in the past decades before the highly accurate and popular deep
learning methods such as convolutional neural networks (CNNs) emerged. The method presented in this work combines a
novel local description approach inspired by the Gestalt laws with deep learning, and thereby, it benefits from both worlds.
To test our method, we conducted several experiments on different datasets of various forensic application domains, e.g.,
makeup-robust face recognition. Our results show that the proposed approach is robust against overfitting and that only a small amount of
image information is necessary to classify the image content with high accuracy. Furthermore, we compared our experimental
results to state-of-the-art description methods and found that our method is highly competitive. For example, it outperforms
a conventional CNN in terms of accuracy in the domain of makeup-robust face recognition.
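The pipeline the abstract describes, local descriptors computed first and then fed to a learned classifier, can be illustrated with a minimal sketch. The descriptor below is a generic gradient-orientation histogram, not the paper's actual Gestalt-based description (which is not specified here); the interest points and all names are hypothetical.

```python
import numpy as np

def local_orientation_histogram(img, y, x, size=8, bins=8):
    """Gradient-orientation histogram of a (size x size) patch at (y, x).

    A generic local descriptor used as a stand-in for the Gestalt-inspired
    descriptions of the paper; only the overall preprocessing idea is shown.
    """
    patch = img[y:y + size, x:x + size].astype(float)
    gy, gx = np.gradient(patch)            # per-pixel gradients
    mag = np.hypot(gx, gy)                 # gradient magnitudes
    ang = np.arctan2(gy, gx)               # orientations in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)      # normalize the histogram

def describe(img, points, size=8, bins=8):
    """Concatenate local descriptors at the given interest points into one
    feature vector for a downstream classifier (e.g., a CNN or an MLP)."""
    return np.concatenate([local_orientation_histogram(img, y, x, size, bins)
                           for (y, x) in points])

# Toy image and three hypothetical interest points.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
features = describe(img, [(0, 0), (8, 8), (16, 16)])
print(features.shape)  # (24,) = 3 points x 8 orientation bins
```

Because only the patches around the interest points enter the feature vector, most of the image is discarded, which is consistent with the abstract's claim that little image information suffices for accurate classification.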

Keywords:
Image analysis, Deep learning-based methods, Gestalt descriptors, Image classification, Face recognition, Person identification


"Official" electronic version of the publication (according to its Digital Object Identifier - DOI):
http://dx.doi.org/10.1007/s10044-020-00904-6

Electronic version of the publication:
https://publik.tuwien.ac.at/files/publik_293716.pdf


Created from the publication database of Technische Universität Wien.