Map construction, or mapping, plays an important role in robotic applications. Mapping relies on inherently noisy sensor measurements to construct an accurate representation of the surrounding environment. Individual sensors generally suffer from performance degradation under certain environmental conditions. Sensor fusion makes it possible to obtain statistically more accurate perception and to cope with such degradation by combining data from multiple sensors of different modalities. This article reviews modern sensor fusion methods for map construction based on optical sensors, such as cameras and laser range finders. State-of-the-art mapping solutions built upon different mathematical theories and concepts, such as machine learning, are considered.
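The claim that fusion yields statistically more accurate perception can be illustrated with a minimal sketch (not taken from the article, and the sensor names and numbers are illustrative): inverse-variance weighted fusion of two independent Gaussian measurements of the same quantity, the standard building block behind Kalman-filter-style fusion.

```python
def fuse(z1, var1, z2, var2):
    """Fuse two independent Gaussian measurements of the same quantity.

    Returns the maximum-likelihood estimate and its variance. The fused
    variance is always smaller than either input variance, which is the
    sense in which fusion is "statistically more accurate".
    """
    w1 = 1.0 / var1  # weight each measurement by its inverse variance
    w2 = 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return z, var

# Hypothetical example: a camera-based estimate reads 10.2 m (variance 0.5),
# a laser range finder reads 10.0 m (variance 0.05).
z, var = fuse(10.2, 0.5, 10.0, 0.05)
# The fused estimate lies close to the more precise laser reading, and its
# variance is lower than that of either sensor alone.
```

The same weighting generalizes to multiple sensors and to vector-valued states (where variances become covariance matrices), which is where the modality-specific degradation of each sensor enters through its noise model.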