2018
DOI: 10.1049/el.2018.0592

Precharge free dynamic content addressable memory

Abstract: A precharge free dynamic content addressable memory (DCAM) is introduced for low-power and high-speed search applications. Eliminating the precharge prior to each search allows the hardware engine to perform more searches within the stipulated time. The proposed DCAM cell not only removes precharging of the matchline (ML) but also decouples the bitline and searchline so that unwanted capacitive coupling at the charge storage nodes is minimised. A 512 bit array of the proposed scheme is implemented using 45 nm CMOS …
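As a rough, language-level illustration of the search operation such a CAM targets (not of the precharge-free circuit itself), the sketch below stores words and returns every address whose word equals the search key. The class name `CamModel`, the 8 bit word width, and the sequential comparison loop are assumptions of this sketch only.

```python
# Behavioural sketch of a content addressable memory (CAM) search.
# Illustrative only: a real CAM compares all rows in parallel and signals a
# match on each row's matchline; here that is modelled with a simple loop.

class CamModel:
    def __init__(self, width=8):
        self.width = width
        self.rows = []                      # stored words, one per CAM row

    def write(self, word):
        """Store a word in the next free row."""
        self.rows.append(word & ((1 << self.width) - 1))

    def search(self, key):
        """Return the addresses whose stored word matches the search key."""
        return [addr for addr, word in enumerate(self.rows) if word == key]

if __name__ == "__main__":
    cam = CamModel(width=8)
    for w in (0x3A, 0x7F, 0x3A, 0x05):
        cam.write(w)
    print(cam.search(0x3A))                 # -> [0, 2]
```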

Cited by 12 publications (7 citation statements). References 4 publications.
“…Segmented ML technique proved to be the best to handle all the performance metrics. Table 2 Performance comparison summary of ML-sensing techniques: Precharge high [6,14,15]; Current-race scheme [22,33]; Precharge free [29–32]; Low swing [11,34–36]; Segmented [37–45]; Sele. precharge [46–48]…”
Section: Results (mentioning; confidence: 99%)
“…Most efforts have been made in reducing the ML switching power either by lowering the charging current or ML swing and few on enhancing the speed of ML evaluation [28]. Later designs have also introduced pre-charge free structures to invoke an increased number of searches in the same bandwidth [29][30][31][32]. Segmenting the MLs into multiple sections and performing/blocking of evaluation in the partitions separately with different ML structures can save ML power significantly.…”
Section: Sensing Techniques (mentioning; confidence: 99%)
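As a loose illustration of the segmented-ML idea mentioned in this statement, the sketch below checks the first segment of every row and evaluates the remaining segment only for rows that are still matching, so mismatching rows stop contributing evaluations early. The segment split, the data values, and the function name are assumptions of this sketch, not details taken from the cited designs.

```python
# Illustrative model of segmented matchline (ML) evaluation: evaluate the
# first segment for all rows, then evaluate later segments only for rows
# that still match, counting segment evaluations as a proxy for ML activity.

def segmented_search(rows, key, width=8, seg_bits=(2, 6)):
    evaluations = 0
    candidates = list(range(len(rows)))     # rows still under evaluation
    offset = width
    for bits in seg_bits:
        offset -= bits
        mask = ((1 << bits) - 1) << offset  # bits covered by this segment
        survivors = []
        for addr in candidates:
            evaluations += 1                # one segment comparison
            if (rows[addr] ^ key) & mask == 0:
                survivors.append(addr)      # segment matched, keep the row
        candidates = survivors              # mismatching rows are blocked
        if not candidates:
            break
    return candidates, evaluations

if __name__ == "__main__":
    rows = [0x3A, 0x7F, 0xC5, 0x8E]
    matches, evals = segmented_search(rows, key=0x3A)
    # Only row 0 passes the 2 bit leading segment, so only its 6 bit tail is
    # checked: 5 evaluations instead of 8 with an unsegmented comparison.
    print(matches, evals)                   # -> [0] 5
```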