The growing data volume and complexity of Deep Neural Networks (DNNs) require new architectures that surpass the von Neumann bottleneck, and Computing-in-Memory (CIM) is a promising direction for implementing energy-efficient neural networks. However, CIM's peripheral sensing circuits are typically power- and area-hungry components. We propose a time-multiplexing Computing-in-Memory architecture (TM-CIM) based on memristive analog computing that shares the peripheral circuits and processes one column at a time. The memristor array is arranged column-wise, which avoids wasting power/energy on unselected columns. In addition, the power and energy efficiency of the DACs (digital-to-analog converters), which turn out to be an even greater overhead than the ADCs (analog-to-digital converters), can be fine-tuned in TM-CIM for significant improvement. For a 256×256 crossbar array with a typical setting, TM-CIM saves 18.4× in energy with an efficiency of 0.136 pJ/MAC, and reduces area by 19.9× in the 1T1R case and 15.9× in the 2T2R case. Performance estimation on VGG-16 indicates that TM-CIM can save over 16× in area. A trade-off among chip area, peak power, and latency is also presented, together with a proposed scheme that further reduces the latency on VGG-16 without significantly increasing chip area or peak power.