Projects funded by the DFG

Trilateral
Augmented Synopsis of Surveillance Videos in Adaptive Camera Networks
Duration: 1.10.2015 - 30.9.2017
funded by the Deutsche Forschungsgemeinschaft DFG
Partners: The Hebrew University of Jerusalem
Department of Computer Science and Information Technology, College of Science & Technology, Palestine
In this project we will develop a novel system which will pave the way for a new kind of smart video surveillance network.
The video streams from surveillance cameras will feed into servers which will perform real-time object detection and recognition.
_________________ ________________________________________________________________________
SOTAG
SynergiesOfTrackingAndGait
Duration: 1.3.2015 - 28.2.2018
funded by the Deutsche Forschungsgemeinschaft DFG
In this project, synergies between two research areas will be exploited to obtain entirely new insights as well as new algorithms and methods for identifying a person in realistic surveillance scenarios. The research areas underlying this project are gait-based person identification and other methods for tracking people in video data. The combination of these two areas and the synergies between them shall be exploited to enable new security systems.


Projects funded by the government

Data-Driven Value Creation from Multimedia Content
Duration: 01.01.2017 - 30.06.2018
Funded by the Bavarian State Ministry of Economic Affairs and Media, Energy and Technology
Partners: ProSiebenSat.1 Media SE and munich media intelligence
The central goal of the project is the development of methods for data-driven processing of multimedia content. This is achieved by researching, applying, and adapting methods from artificial intelligence and machine learning so that they can be used for data-driven applications in the multimedia domain. Many central algorithms in the field of multimodal image and video processing are currently at an early stage of maturity, so a preceding industrial research phase is indispensable before they can be deployed in a commercial context. The methods to be developed can, however, potentially support diverse use cases, all of which rest on automated identification, metadata extraction, and algorithmic exploitation of big-data multimedia content.


EU Projects

SAFEE
SAFEE, Security of Aircraft in the Future European Environment
EU Sixth Framework Programme
Finished: 30.4.2008
The integrated project Security of Aircraft in the Future European Environment (SAFEE) within the Sixth Framework Programme is designed to restore full confidence in the air transport industry.

iHEARu
iHEARu: Intelligent systems' Holistic Evolving Analysis of Real-life Universal speaker characteristics
FP7 ERC Starting Grant
Duration: 01.01.2014 - 31.12.2018

The iHEARu project aims to push the limits of intelligent systems for computational paralinguistics by considering Holistic analysis of multiple speaker attributes at once, Evolving and self-learning, and deeper Analysis of acoustic parameters, all on Realistic data on a large scale, ultimately progressing from individual analysis tasks towards universal speaker characteristics analysis, which can easily learn about and adapt to new, previously unexplored characteristics.
_________________ ________________________________________________________________________

HOL-I-WOOD
HOL-I-WOOD PR: Holonic Integration of Cognition, Communication and Control for a Wood Patching Robot
EU FP7 STREP
Duration: 01.01.2012 - 31.12.2014
Partners: Microtec S.R.L, Italy; Lulea Tekniska Universitet, Sweden; TU Wien, Austria; Springer Maschinenfabrik AG, Austria; Lipbled, Slovenia; TTTech Computertechnik AG, Austria
Repairing and patching resin galls and loose dead knots is a costly and disruptive step of inline production in the timber industry. The human workforce involved in these production tasks is hard to replace with a machine. Another task that calls for human recognition and decision-making capabilities, occurring at an earlier stage of the production line, is the detection and classification of significant artefacts in wooden surfaces. This project proposes a holonic concept that subsumes automated visual inspection and quality/artefact classification by a skilled robot, visually guided and controlled by non-linear approaches that combine manipulation with energy saving in trajectory planning under real-time conditions, enabling the required scalability for a wide range of applications. The interaction of these holonic sub-systems is implemented with agent technology based on a real-time communication concept, fusing multi-sensor data and information at different spatial positions along the production line. The feasibility of inter-linking independent autonomous processes, i.e. agents for inspection, wood processing, transport (conveying) and repair by a patching robot, is demonstrated by a pilot in a glue lam factory. A mobile HMI concept makes interaction with the machine park easy to control, reliable and efficient, while at the same time increasing worker safety in the potentially dangerous working environments of glue lam factories and saw mills.
_________________ ________________________________________________________________________

ASC_Inclusion
ASC-Inclusion: Integrated Internet-based Environment for Social Inclusion of Children with Autism Spectrum Conditions (ASC)
EU FP7 STREP
Duration: 01.11.2011 - 31.12.2014
Role: Coordinator, Coauthor Proposal, Project Steering Board Member, Workpackage Leader
Partners: TUM, University of Cambridge, Bar Ilan University, Compedia, University of Genoa, Karolinska Institutet, Autism Europe
The ASC-Inclusion project is developing interactive software to help children with autism understand and express emotions through facial expressions, tone of voice and body gestures. The project aims to create such an internet-based platform and evaluate its effectiveness, aimed at children with ASC (and other groups such as children with ADHD and socially neglected children) and those interested in their inclusion. The platform will combine several state-of-the-art technologies in one comprehensive virtual world, including analysis of users' gestures and facial and vocal expressions using a standard microphone and webcam, training through games, text communication with peers and smart agents, animation, and video and audio clips. The user's environment will be personalized according to individual profile and sensory requirements, and designed to be motivating. Carers will be offered their own supportive environment, including professional information, reports on the child's progress and use of the system, and forums for parents and therapists.
_________________ ________________________________________________________________________

ALIAS
ALIAS: Adaptable Ambient Living Assistant
AAL-2009-2-049
Duration: 01.07.2010 - 30.06.2013
Partners: Cognesys, Aachen; EURECOM, Sophia-Antipolis, France; Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.; Guger Technologies, Graz, Austria; MetraLabs, Ilmenau; PME Familienservice GmbH, Berlin; Technische Universität Ilmenau; YOUSE GmbH, Berlin
The objective of the project Adaptable Ambient LIving ASsistant (ALIAS) is the product development of a mobile robot system that interacts with elderly users, provides cognitive assistance in daily life, and promotes social inclusion by creating connections to people and events in the wider world. ALIAS is embodied by a mobile robot platform with the capacity to monitor, interact with and access information from on-line services, without manipulation capabilities. The function of ALIAS is to keep the user linked to the wide society and in this way to improve her/his quality of life by combating loneliness and increasing cognitively stimulating activities.
_________________ ________________________________________________________________________

CustomPacker
CustomPacker: Highly Customizable and Flexible Packaging Station for mid- to upper sized Electronic Consumer Goods using Industrial Robots
EU FP7 - ICT
Duration: 01.07.2010 - 30.06.2013
Partners: FerRobotics Compliant Robot Technology GmbH, Austria; Loewe AG, Kronach; MRK-Systeme GmbH, Augsburg; PROFACTOR GmbH, Steyr-Gleink, Austria; Tekniker, Eibar, Gipuzkoa, Spain; VTT, Finland
The project Highly Customizable and Flexible Packaging Station for mid- to upper sized Electronic Consumer Goods using Industrial Robots (CustomPacker) aims at developing and integrating a scalable and flexible packaging assistant that aids human workers while packaging mid- to upper sized and mostly heavy goods. Productivity will be increased by exploiting direct human-robot cooperation, overcoming the constraints imposed by current safety regulations. The main goal of CustomPacker is to design and assemble a packaging workstation mostly using standard hardware components, resulting in a universal handling system for different products.
_________________ ________________________________________________________________________

Prometheus
PROMETHEUS: Prediction and inteRpretatiOn of huMan bEhaviour based on probabilisTic structures and HeterogEneoUs sensorS
EU Seventh Framework Programme, Theme ICT-1.2.1
Duration: 01.01.2008 - 31.12.2010
The overall goal of the project is the development of principled methods to link fundamental sensing tasks using multiple modalities, and automated cognition regarding the understanding of human behaviour in complex indoor environments, at both individual and collective levels. Given the two above principles, the consortium will conduct research on three core scientific and technological objectives:
- sensor modelling and information fusion from multiple, heterogeneous perceptual modalities,
- modelling, localization, and tracking of multiple people,
- modelling, recognition, and short-term prediction of continuous complex human behaviour.
_________________ ________________________________________________________________________

Semaine
SEMAINE: Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression
EU FP7 STREP, Duration: 01.01.2008 - 31.12.2010
Partners: DFKI, Queen's University Belfast (QUB), Imperial College of Science, Technology and Medicine London, University of Twente, University Paris VIII, CNRS-ENST.
The SEMAINE project aims to build a Sensitive Artificial Listener (SAL), a multimodal dialogue system which can interact with humans via a virtual character, sustain an interaction with a user for some time, and react appropriately to the user's non-verbal behaviour. In the end, the SAL system will be released, to a large extent, as an open-source research tool for the community.
_________________ ________________________________________________________________________
AMIDA

AMIDA, Augmented Multi-party Interaction - Distance Access
EU IST Programme
_________________ ________________________________________________________________________
AMI


AMI, Augmented Multi-party Interaction
EU IST Programme
_________________ ________________________________________________________________________
FGNet


FGNet, European face and gesture recognition working group
EU IST Programme
_________________ ________________________________________________________________________
m4

m4, Multi-Modal Meeting Manager
EU IST Programme
_________________ ________________________________________________________________________


Excellence Initiative


COTESYS
The CoTeSys cluster of excellence (which stands for "COgnition for TEchnical SYstems") investigates cognition for technical systems such as vehicles, robots, and factories. Cognitive technical systems are equipped with artificial sensors and actuators, integrated and embedded into physical systems, and act in a physical world. They differ from other technical systems in that they perform cognitive control and have cognitive capabilities.
Projects:
RealEYE (1.08-12.09), ACIPE (11.06-12.09), JAHIR (11.06-12.10).
_________________ ________________________________________________________________________
Graduate School of Systemic Neurosciences
In the framework of the German excellence initiative (supported by the Wissenschaftsrat and the Deutsche Forschungsgemeinschaft, DFG), the Ludwig-Maximilians-Universität Munich (LMU) has founded a new Graduate School of Systemic Neurosciences (GSN-LMU). The school offers clearly structured training programs for PhD and MD/PhD candidates. Tight links exist to the Elite Network Bavaria Master Program Neurosciences.


Industrial Projects


ISPA
Intelligent Support for Prospective Action
Part of the framework programme CAR@TUM between BMW Forschung und Technik GmbH, TU München and the Universität Regensburg
Runtime: 1.4.2008 – 31.3.2010
Current display elements in cars range from the central instrument cluster and the navigation display to the Head-Up Display (HUD). These large display areas can be used for immediate, short- and long-term assistance for the driver. The introduction of the HUD raises the question of which information can be sensibly displayed on it. The possibilities of a digital instrument cluster and alternative display technologies such as LED strips are also being investigated. Together with data provided by Car2X communication, an assistance system shall be developed to support anticipatory driving.
_________________ ________________________________________________________________________
TCVC
Talking Car and Virtual Companion
Cooperation with Continental Automotive GmbH
Runtime: 01.06.2008 - 30.11.2008
TCVC provides an expertise on emotion in the car with respect to a requirement analysis, potential and near-future use-cases, technology assessment and a user acceptance study.
_________________ ________________________________________________________________________
ICRI
In-Car Real Internet
Cooperation with Continental Automotive GmbH
Runtime: 01.06.2008 - 30.11.2008
ICRI aims at benchmarking internet browsers on embedded platforms as well as developing an integrated multimodal demonstrator for internet in the car. Investigated modalities include handwriting, touch gestures, and natural speech, in addition to conventional GUI interaction. The focus lies on MMI development with an embedded realisation.
_________________ ________________________________________________________________________
cUSER2
cUSER 2
Cooperation with Toyota
Runtime: 08.01.2007 - 31.07.2007
The aim of the cUSER follow-up project is to establish a system that interprets human interest through combined speech and facial expression analysis based on multiple input analyses. Besides aiming for the highest possible accuracy through subject adaptation, class-balancing strategies, and fully automatic segmentation by individual audio and video stream analysis, cUSER 2 focuses on real-time application through cost-sensitive feature-space optimization, use of graphics processing power, and high-performance programming methods. Furthermore, feasibility and real recognition scenarios will be evaluated.
_________________ ________________________________________________________________________
cUSER
cUSER
Cooperation with Toyota
Runtime: 01.08.2005 - 30.09.2006
The aim of this project was an audiovisual approach to the recognition of spontaneous human interest.
_________________ ________________________________________________________________________
NaDia
NaDia, Natural Dialogs for Complex In-Vehicle Information Services
Cooperation with the BMW Group and CLT Sprachtechnologie GmbH.
Runtime: 1.9.2001 - 31.5.2003 and 1.4.2004 - 30.9.2005
_________________ ________________________________________________________________________

FERMUS
FERMUS, Error-Robust Multimodal Speech Dialogs
Cooperation with the BMW AG, the DaimlerChrysler AG and SiemensVDO Automotive.
Duration: 01.03.2000-30.06.2003
The primary intention of the FERMUS project was to localize and evaluate various strategies for a dedicated analysis of potential error patterns during human-machine interaction with information and communication systems in upper-class cars. To reach this goal, we employed a broad set of advanced, mainly recognition-based input modalities, such as interfaces for natural speech and dynamic gestural input. In particular, emotional patterns of the driver have been integrated to generate context-adequate dialog structures.
_________________ ________________________________________________________________________
ADVIA
ADVIA, Adaptive In-Car Dialogs
Cooperation with the BMW Group.
Runtime: 01.07.1998-31.12.2000
_________________ ________________________________________________________________________
SOMMIA
SOMMIA, Speech-Oriented Man-Machine Interface in the Automobile
Cooperation with SiemensVDO Automotive.
Runtime: 01.06.2000-30.09.2001
The SOMMIA project focused on the design and evaluation of an ergonomic and generic operation concept for a speech-based MMI integrated in a car MP3 player or comparable automotive applications. In addition, the system was subject to several economic and geometric boundary conditions: a two-line, 16-character display with a small set of LEDs, and a speaker-independent full-word recognizer with an active vocabulary of 30 to 50 words. Nevertheless, the interface had to meet high technical requirements: its handling should be easy to learn, comfortable and, above all, intuitive and interactively explorable.


Other Projects

AGMA
AGMA, Automatic Generation of Audio-Visual Metadata in the MPEG-7 Framework (BMBF, subcontractor of FHG-IMK)
_________________ ________________________________________________________________________
MUGSHOT
MUGSHOT, Face profile recognition using hybrid pattern recognition techniques (DFG)
_________________ ________________________________________________________________________
Robust Speech Recognition
Robust analysis, recognition and interpretation of speech based on a single-stage stochastic decoder (DFG)
Duration: 01.04.2001-31.05.2004