Deep learning has become ubiquitous, touching daily lives across the globe. Today, traditional computer architectures are strained to their limits by the growing complexity of the data and models they must execute efficiently. Compute-in-memory (CIM) can play an important role in developing efficient hardware solutions that reduce the data movement between compute units and memory that constitutes the von Neumann bottleneck. At its heart is a cross-bar architecture with non-volatile-memory elements at each node that performs analog multiply-and-accumulate operations, enabling the matrix-vector multiplications used repeatedly in all neural network workloads. The choice of memory material can significantly influence system-level characteristics and chip performance, including speed, power, and classification accuracy. Taking an over-arching co-design viewpoint, this review assesses the use of cross-bar-based CIM for neural networks, connecting material properties and the associated design constraints and demands to application, architecture, and performance. Both digital and analog memories are considered; their status for training and inference is assessed, and metrics are provided for the collective set of properties that non-volatile-memory materials will need to demonstrate for a successful CIM technology.
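To make the analog multiply-and-accumulate operation concrete, the following is a minimal sketch of an idealized crossbar in Python: row voltages encode the input vector, each cell passes a current I = V·G by Ohm's law, and Kirchhoff's current law sums the currents along each column, so one matrix-vector product is produced in a single read. The function name `crossbar_mvm`, the differential G+/G- weight mapping, and the conductance window `g_min`/`g_max` are illustrative assumptions for this sketch, not a specific device or architecture from the review, and all device non-idealities (noise, drift, line resistance) are ignored.

```python
import numpy as np

def crossbar_mvm(weights, inputs, g_min=1e-6, g_max=1e-4):
    """Idealized analog matrix-vector multiply on a crossbar.

    Each signed weight is mapped to a pair of non-negative device
    conductances (G+, G-) so that the differential column readout
    G+ - G- represents the weight. Row voltages encode the input
    vector; by Ohm's law each cell contributes I = V * G, and
    Kirchhoff's current law sums currents along a column, yielding
    one dot product per column in a single analog step.
    """
    w = np.asarray(weights, dtype=float)
    v = np.asarray(inputs, dtype=float)

    # Map weight magnitudes into the available conductance window.
    scale = (g_max - g_min) / max(np.abs(w).max(), 1e-12)
    g_pos = g_min + scale * np.clip(w, 0, None)   # positive parts
    g_neg = g_min + scale * np.clip(-w, 0, None)  # negative parts

    # Column currents: one analog MAC per column (differential readout;
    # the g_min offset cancels between the two arrays).
    i_out = v @ g_pos - v @ g_neg

    return i_out / scale  # rescale currents back to weight units


# Example: a 3x2 weight matrix applied to a 3-element input vector.
W = np.array([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]])
x = np.array([1.0, 0.5, -1.0])
print(crossbar_mvm(W, x))  # matches the digital reference below
print(x @ W)
```

In this ideal model the two outputs agree exactly; in a real crossbar the material properties discussed in the review (conductance range, linearity, variability, retention) determine how far the analog result deviates from the digital reference.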