Semantic segmentation for visual indoor localization

dc.contributor.author Kaminskyi, Yurii
dc.date.accessioned 2019-02-19T10:19:06Z
dc.date.available 2019-02-19T10:19:06Z
dc.date.issued 2019
dc.identifier.citation Kaminskyi, Yurii. Semantic segmentation for visual indoor localization : Master Thesis : manuscript / Yurii Kaminskyi ; Supervisor Jiri Sedlar, Ph. D. ; Ukrainian Catholic University, Department of Computer Sciences. – Lviv : [s.n.], 2019. – 30 p. : ill. uk
dc.identifier.uri http://er.ucu.edu.ua/handle/1/1329
dc.language.iso en uk
dc.subject semantic segmentation uk
dc.subject indoor localization uk
dc.subject Mask R-CNN uk
dc.title Semantic segmentation for visual indoor localization uk
dc.type Preprint uk
dc.status Published for the first time uk
dc.description.abstracten The problem of visual localization and navigation in a 3D environment is key to solving a wide variety of practical tasks. One example is robotics, where a machine must locate itself on a 3D map and steer to a specific location. Another is a personal assistant, in the form of a mobile phone or smart glasses, that uses augmented reality techniques to navigate the user seamlessly through large indoor spaces such as airports, hospitals, shopping malls, or office buildings. The purpose of this work was to improve the performance of the InLoc localization pipeline, which gives state-of-the-art results for the indoor visual localization problem, by developing relevant semantic features. Specifically, we introduce a variety of features produced by two different segmentation models: Mask R-CNN and CSAIL. We evaluate the quality of the generated features and add the features of the better-performing model into the InLoc localization pipeline. With the introduced features we improved the performance of the InLoc localization pipeline and proposed approaches for further research. uk
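
As an illustration of the segmentation step described in the abstract, the sketch below shows how instance masks can be obtained from an off-the-shelf Mask R-CNN, here using torchvision's pretrained COCO model. It is not the thesis code: the pretrained weights, the score threshold, and the image path query.jpg are assumptions made for this example, and the thesis feeds features derived from such masks (or from the CSAIL segmentation model) into the InLoc pipeline rather than using raw COCO detections directly.

    # Minimal, illustrative sketch: instance segmentation masks from a
    # pretrained Mask R-CNN, as a building block for semantic features
    # in an indoor localization pipeline such as InLoc.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Pretrained COCO weights; the exact `weights` argument depends on the
    # installed torchvision version.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open("query.jpg").convert("RGB")  # hypothetical query image
    with torch.no_grad():
        prediction = model([to_tensor(image)])[0]

    # Keep confident detections; each mask is a soft per-pixel probability map.
    keep = prediction["scores"] > 0.5
    masks = prediction["masks"][keep]        # (N, 1, H, W) float tensor
    labels = prediction["labels"][keep]      # COCO category indices
    binary_masks = (masks > 0.5).squeeze(1)  # binarized masks for matching
    print(f"{binary_masks.shape[0]} instances segmented")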

