Abstract—The discovery and removal of duplicate records is one of the central problems in the broad area of data cleaning and data quality in data warehouses. In this paper, we attempt to identify similar records within a set of data records. Each record is assigned a similarity score with respect to other records based on the similarity between their tokens. Records whose mutual similarity score exceeds a threshold form one or more groups of records. In the proposed system, a key is created for each record in the database, as described in the suggested algorithms; this key is input to a Q-grams similarity algorithm that computes the percentage of similarity between each pair of keys. We set the similarity threshold to 0.68. If the similarity between two key values exceeds this threshold, the pair is passed to a neural network algorithm that operates in two phases, training and testing. The suggested approach is evaluated on several different data warehouses to assess its efficiency. The accuracy obtained across multiple data warehouses is 96.94%.
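The Q-gram comparison step described above can be sketched as follows. This is a minimal illustration only: the abstract does not specify which Q-gram similarity formula the paper uses, so a common Dice-style variant over character bigrams is assumed here; only the 0.68 threshold is taken from the text, and all function names are hypothetical.

```python
# Illustrative sketch of a Q-gram similarity check between record keys.
# Assumption: Dice-style similarity over character bigrams (q = 2);
# the paper's exact formula may differ. The 0.68 threshold is the
# value reported in the abstract.

def qgrams(s, q=2):
    """Return the list of q-grams of a string, padded with '#'."""
    padded = "#" * (q - 1) + s.lower() + "#" * (q - 1)
    return [padded[i:i + q] for i in range(len(padded) - q + 1)]

def qgram_similarity(a, b, q=2):
    """Dice-style score: 2 * |shared q-grams| / (|A| + |B|)."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    pool = list(gb)          # multiset matching: consume each gram once
    common = 0
    for g in ga:
        if g in pool:
            pool.remove(g)
            common += 1
    return 2 * common / (len(ga) + len(gb))

THRESHOLD = 0.68  # similarity threshold stated in the abstract

def is_candidate_duplicate(key_a, key_b):
    """Pairs above the threshold would be passed on to the classifier."""
    return qgram_similarity(key_a, key_b) >= THRESHOLD
```

In the described pipeline, pairs that pass this check would then be handed to the neural network for the training/testing classification step.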