Topic: Joint Segmentation and Tracking of Targets in Video Using Deep Learning
Type: Master's thesis, research internship (Forschungspraxis)
Supervisor: Maryam Babaee, M.Sc.
Tel.: +49 (0)89 289-28543
E-Mail: maryam.babaee@tum.de
Subject area: Computer Vision
Description: In some video surveillance applications, such as activity recognition, objects in a video must be both segmented and tracked. Segmentation and tracking of multiple targets in video are both challenging problems in computer vision. Joint segmentation-and-tracking approaches exploit much more detailed information, at the level of pixels or superpixels, than approaches based on detection bounding boxes. To track people in a video, the mapping between observations in consecutive frames can be formulated as a probabilistic graphical model such as a Conditional Random Field (CRF). The CRF is a powerful framework for solving discrete optimization problems such as tracking and segmentation.
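To make this slightly more concrete, here is a sketch under standard assumptions (the exact potentials for this project are part of the thesis work): a CRF over per-pixel labels x = (x_1, ..., x_N) defines an energy

    E(x) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j),

where the unary potentials \psi_u(x_i) come from a per-pixel classifier and the pairwise potentials \psi_p(x_i, x_j) encourage nearby, similar-looking pixels (or, in the tracking case, corresponding observations in consecutive frames) to receive the same label; inference amounts to finding the labelling x that minimizes E(x).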
Based on research work on semantic image segmentation [1], a CRF model can be cast as a Recurrent Neural Network (RNN). The goal of this project is to extend this deep learning technique to the joint segmentation and multi-person tracking problem. To do this, the two problems are first formulated as a unified CRF model; a deep RNN is then developed that mimics inference in the proposed CRF.
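To illustrate the CRF-to-RNN idea, the following is a minimal NumPy sketch (function names are illustrative; a single pairwise kernel and fixed weights are assumed, whereas the model in [1] learns kernel weights and label compatibilities end-to-end) of mean-field inference unrolled as a recurrent computation, where each iteration plays the role of one RNN time step:

    import numpy as np

    def softmax(x, axis=0):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def mean_field_step(Q, unary, kernel, compat):
        # Q:      (L, N) current per-pixel label marginals
        # unary:  (L, N) negative unary energies (classifier scores)
        # kernel: (N, N) pairwise pixel affinities (e.g. Gaussian in position/colour)
        # compat: (L, L) label compatibility matrix (e.g. Potts)
        msg = Q @ kernel                          # message passing: filter the marginals
        pairwise = compat @ msg                   # compatibility transform across labels
        return softmax(unary - pairwise, axis=0)  # local update and renormalisation

    def crf_as_rnn(unary, kernel, compat, T=5):
        Q = softmax(unary, axis=0)   # initialise from the unary classifier
        for _ in range(T):           # each mean-field iteration = one RNN time step
            Q = mean_field_step(Q, unary, kernel, compat)
        return Q

Here unary has shape (num_labels, num_pixels) and kernel is a dense (num_pixels, num_pixels) affinity matrix; in practice [1] replaces the dense matrix product with efficient Gaussian filtering and trains the unrolled iterations jointly with the underlying CNN by backpropagation.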
[Figure: three frames of a video sequence captured at different times, together with their corresponding segmentations.]
References:
[1] S. Zheng et al., "Conditional Random Fields as Recurrent Neural Networks," ICCV 2015. Demo: www.robots.ox.ac.uk/~szheng/crfasrnndemo
Prerequisites: Basic knowledge of probabilistic graphical models and neural networks, as well as solid programming skills, is required. If you have any questions, write me an email.
Application: If you are interested in this topic, we welcome applications via the email address above. Please set the email subject to "<Type of application> application for topic 'XYZ'", e.g. "Master's thesis application for topic 'XYZ'", and clearly state in the body of the message why you are interested in the topic. Also make sure to attach your most recent CV (if you have one) and your grade report.