My Profile

He graduated from the Department of Mathematics of the Aristotle University of Thessaloniki in 2001. He continued his studies at the School of Medicine of the same university, where he obtained an M.Sc. in Medical Informatics in 2003. In 2008, he obtained a Ph.D. in Informatics, entitled "Digital Processing Techniques in Speech Emotion Recognition", from the Computer Science faculty of the same university. He was awarded an ERCIM fellowship for 2009-2011. In 2009, he was with the VTT Technical Research Centre of Finland, working on Alzheimer's disease and Neurally Adjusted Ventilatory Assist (NAVA). In 2010-2011, he was with the Fraunhofer IAIS institute in Bonn, working on speech analysis. Since 2012, he has been a researcher and software developer at the Centre for Research and Technology Hellas (CERTH). Over the 15 years of his professional career, he has gained experience in signal processing and statistical pattern recognition with Python and Matlab, Android development, JavaScript/PHP development for the WordPress, Joomla, and Three.js frameworks, augmented reality with the Layar and Wikitude frameworks, virtual reality with Unity3D, dance recognition with Kinect, and gesture recognition with Myo.

Thursday, February 9, 2017

DigiArt project


An EU-funded project under the H2020 framework (€3M, 2015-2018). This project is about making a VR game without knowing gaming technologies. It targets archaeologists who own 3D models and want to create a VR tour but do not know how. We use web technologies to update the game remotely and desktop technologies to compile it. My role is to integrate the code contributions of several people into a solid product.

Project main site: http://digiart-project.eu
Product release site: http://digiart.mklab.iti.gr

DigiArt seeks to provide a new, cost-efficient solution to the capture, processing, and display of cultural artefacts. It offers innovative 3D capture systems and methodologies, including aerial capture via drones, automatic registration and modelling techniques to speed up post-capture processing (which is a major bottleneck), semantic image analysis to extract features from digital 3D representations, a "story telling engine" offering a pathway to a deeper understanding of art, and augmented/virtual reality technologies offering advanced abilities for viewing, or interacting with, the 3D models.

The 3D data captured by the scanners and drones, using techniques such as laser detection and ranging (LIDAR), are processed through robust features that cope with imperfect data. Semantic analysis by automatic feature extraction is used to form hyper-links between artefacts. These links are employed to connect the artefacts in what the project terms "the internet of historical things", available anywhere, at any time, on any web-enabled device.

The contextual view of art is very much enhanced by the "story telling engine" developed within the project. The system presents the artefact, linked to its context, in an immersive display with virtual and/or augmented reality. Linkages and information are superimposed over the view of the item itself.

The major output of the project is the toolset that will be used by museums to create such a revolutionary way of viewing and experiencing artefacts. These tools leverage the interdisciplinary skill sets of the partners to cover the complete process, namely data capture, data processing, story building, 3D visualization, and 3D interaction, offering new pathways to a deeper understanding of European culture. Via its three demonstration activities, the project establishes the viability of the approach in three different museum settings, offering a range of artefacts that pose different challenges to the system.