Low power, high speed, and high flexibility are today's requirements for SoC (System on Chip) designers. Because the average bandwidth available at the main-memory side is crucial for system performance, our research focuses on the development of digital architectures for low-level, very high-throughput data processing. Based on an associative computing paradigm, this paper presents the implementation of a scalable associative processor dedicated to textual retrieval in very large databases by means of approximate matching techniques. It describes the internal architecture of the system and shows an efficient use of pipelining within the scalable and highly parallel processing core. As a key feature of the architecture, the hardware implementation of sorting and merging algorithms based on comparator networks yields very short ranking times. Moreover, it keeps the final processing speed high enough to sustain the maximum peripheral data throughput.
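The comparator-network idea behind the hardware sorter can be sketched in software. The following minimal example (a generic 4-input sorting network, not the paper's actual design; all names are illustrative) shows how a fixed sequence of compare-exchange elements sorts its inputs, which is what makes such networks attractive for pipelined hardware: the comparator positions are data-independent, so each stage can be a dedicated circuit.

```python
def compare_exchange(v, i, j):
    """One comparator: route the smaller value to index i, larger to j."""
    if v[i] > v[j]:
        v[i], v[j] = v[j], v[i]

# A fixed 5-comparator sorting network for 4 inputs (illustrative example).
# The comparator sequence is data-independent, so in hardware each pair
# can become a dedicated compare-exchange stage in a pipeline.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def sort4(values):
    v = list(values)
    for i, j in NETWORK_4:
        compare_exchange(v, i, j)
    return v
```

For example, `sort4([3, 1, 4, 2])` returns `[1, 2, 3, 4]`. Larger networks (e.g. Batcher's odd-even merge) follow the same principle with O(n log^2 n) comparators arranged in parallel stages.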