Visual data retrieval and management have become significant research areas owing to the rapid growth of image data on the internet and the rise of multimedia technologies. Many professionals, from journalists and engineers to urban planners, meteorologists, and art historians, share the need to find a particular image in a large collection. The limitations of text-based image retrieval have motivated researchers to develop new ways of indexing visual content, and Content-Based Image Retrieval (CBIR) has become a focus for overcoming these issues and improving the performance of image retrieval models. Using the visual characteristics of an image, such as colour, shape, and texture, CBIR searches massive databases for images that match a given query image. CBIR extracts an image's visual features automatically, without human intervention; in this research, the feature extraction algorithm is used to extract the key points of an image. A CBIR system can be viewed as a set of modules that work together to retrieve database images in response to a specific query. This research proposes an Interlinked Feature Query-based Image Retrieval Model (IFQ-IRM) for accurate image retrieval from an image database. The system transforms image queries into feature vectors; the extracted features of the query are then compared against those of the images in the database, and an indexing strategy is used to retrieve the matching images. The results show that the proposed model retrieves images more accurately than traditional models.
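The query-to-feature-vector pipeline described above can be sketched in a minimal, generic form. This is not the paper's IFQ-IRM algorithm; it is an illustrative stand-in that assumes a toy 4-bin intensity histogram as the feature vector and Euclidean distance as the similarity measure, with the `histogram_features` and `retrieve` names chosen here for clarity:

```python
import math

def histogram_features(image, bins=4):
    """Reduce a 2-D grayscale image (pixel values 0-255) to a normalised
    intensity histogram -- a toy stand-in for colour/shape/texture features."""
    counts = [0] * bins
    pixels = [p for row in image for p in row]
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    return [c / len(pixels) for c in counts]

def euclidean(a, b):
    """Distance between two feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_image, database, top_k=2):
    """Rank database images by feature-vector distance to the query
    and return the names of the top_k closest matches."""
    q = histogram_features(query_image)
    scored = [(name, euclidean(q, histogram_features(img)))
              for name, img in database.items()]
    return [name for name, _ in sorted(scored, key=lambda t: t[1])][:top_k]

# Hypothetical 2x2 "images" used only to exercise the pipeline.
db = {
    "dark":  [[10, 20], [30, 40]],
    "light": [[200, 210], [220, 230]],
    "mid":   [[120, 130], [140, 150]],
}
print(retrieve([[15, 25], [35, 45]], db, top_k=1))  # -> ['dark']
```

A real CBIR system would replace the histogram with richer descriptors and the linear scan with an indexing structure so the database is not compared exhaustively, but the query-vector-compare-rank flow is the same.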