Lund University will, assisted by Ericsson and Katam, develop high-level algorithms for model building, semantic SLAM, navigation, recognition and interpretation, based on different sensor modalities, but primarily on image data. One focus will be on developing semantic algorithms for SLAM using representation learning. Representation learning covers a wide variety of classification problems that can be used within SLAM to classify real-world objects from sensor data, and the resulting classifications can in turn be used to build geometric models of the environment. Specifically, using semi-supervised algorithms, e.g. neural networks, clustering, and dictionary learning, we will create semantic maps within a SLAM framework. Learning can be used for object classification, understanding object properties, depth perception, and more. Exploiting semantic map information can improve SLAM performance, for example by estimating drone trajectories with increased accuracy and robustness.
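
As a purely illustrative sketch of this idea (not the project's implementation), the Python snippet below shows one way a semantic label produced by a learned classifier could be attached to SLAM landmarks and used to gate data association; all names, classes and thresholds here are hypothetical.

    import numpy as np
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class SemanticLandmark:
        position: np.ndarray   # estimated 3D position in the map frame
        label: str             # semantic class from a learned classifier, e.g. "tree"
        confidence: float      # classifier confidence for that label

    def associate(obs_pos: np.ndarray, obs_label: str,
                  landmarks: List[SemanticLandmark],
                  max_dist: float = 1.0) -> Optional[int]:
        """Nearest-neighbour data association gated by both distance and semantic
        label, so observations are never matched to inconsistent landmarks."""
        best_idx, best_dist = None, max_dist
        for i, lm in enumerate(landmarks):
            dist = float(np.linalg.norm(lm.position - obs_pos))
            if lm.label == obs_label and dist < best_dist:
                best_idx, best_dist = i, dist
        return best_idx

    # Minimal usage: a "tree" observation is matched only against "tree" landmarks.
    landmarks = [SemanticLandmark(np.array([2.0, 0.0, 1.0]), "tree", 0.9),
                 SemanticLandmark(np.array([2.1, 0.0, 1.0]), "building", 0.8)]
    print(associate(np.array([2.05, 0.05, 1.0]), "tree", landmarks))  # -> 0

Gating the association step on semantic consistency is one simple example of how map semantics could feed back into the estimation problem and improve robustness.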