Despite the scale-related advantages of online crowdsourcing, human computation systems are prone to human error. Fortunately, machines can complement humans by detecting and correcting those errors, either autonomously or by enlisting human help at key steps. Here we consider the relevant case of OpenStreetMap, a leading platform for collecting user-contributed map data. Little is known about its contributors in terms of their skills, knowledge, or patterns of data collection. Furthermore, OpenStreetMap has loose coordination and no top-down quality assurance process, which leaves the crowdsourced map data vulnerable to errors and gaps that must be corrected before the resulting map is navigable. In this work we identify errors in OpenStreetMap data, using a metropolitan area in Punjab as a test dataset for finding inconsistencies. We conclude that the test data contains many such errors and is not yet mature enough to support practical use, and we describe open-source algorithms that could be applied remedially to address these shortcomings.