This work presents an energy-efficient SRAM with embedded dot-product computation capability for binary-weight convolutional neural networks. A 10T bit-cell based SRAM array stores the 1-b filter weights. The array implements the dot product as a weighted average of the bit-line voltages, which are proportional to the digital input values, and local integrating ADCs compute the digital convolution output corresponding to each filter. We have successfully demonstrated functionality (> 98% accuracy) on the 10,000 test images of the MNIST handwritten-digit recognition dataset, using 6-b inputs and outputs. Compared to conventional all-digital implementations at similarly small bit-widths, we achieve similar or better energy efficiency by reducing data transfer, owing to the highly parallel in-memory analog computation.
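The computation described above can be illustrated with a small behavioral sketch (not the authors' implementation): binary weights sign the bit-line contributions, the array averages them in the analog domain, and a local ADC re-digitizes the average to the output bit-width. The function name, the specific quantization mapping, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def binary_weight_dot_product(x, w, in_bits=6, out_bits=6):
    """Behavioral sketch of the in-SRAM dot product (hypothetical model).

    x : digital inputs, each quantized to in_bits (unsigned).
    w : 1-b filter weights stored in the array, encoded as +1/-1.

    The analog array forms a weighted average of bit-line voltages
    proportional to x, signed by the stored binary weights; the local
    integrating ADC then quantizes this average to out_bits levels.
    """
    assert np.all((x >= 0) & (x < 2**in_bits)), "inputs must fit in in_bits"
    assert np.all(np.isin(w, (-1, 1))), "weights must be binary (+1/-1)"
    # Analog averaging: weighted mean of bit-line "voltages".
    avg = np.dot(w, x) / len(x)  # in [-(2**in_bits - 1), 2**in_bits - 1]
    # Integrating ADC: map the signed average onto an out_bits code
    # (assumed linear mapping; the real transfer curve is circuit-specific).
    full_scale = 2**in_bits - 1
    code = np.round((avg + full_scale) / (2 * full_scale) * (2**out_bits - 1))
    return int(code)
```

For example, an all-ones weight vector applied to full-scale inputs yields the maximum 6-b code, while an all-negative weight vector yields the minimum code, mirroring the two extremes of the bit-line average.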