Modern data center solid state drives (SSDs) integrate multiple general-purpose embedded cores to manage the flash translation layer, garbage collection, wear-leveling, and other firmware tasks, improving SSD performance and reliability. As the performance of these cores steadily improves, there are opportunities to repurpose them to perform application-driven computations on stored data, with the aim of reducing communication between the host processor and the SSD. Reducing host-SSD bandwidth demand cuts I/O time, which is a bottleneck for many applications operating on large data sets. However, embedded core performance is still significantly lower than that of the host processor, since SSDs generally use wimpy embedded cores for cost-effectiveness. There is therefore a trade-off between the computation overhead of near-SSD processing and the reduction in communication overhead to the host system. In this work, we design a set of application programming interfaces (APIs) that a host application can use to offload a data-intensive task to the SSD processor. We describe how these APIs can be implemented through simple modifications to the existing Non-Volatile Memory Express (NVMe) command interface between the host and the SSD processor. We then quantify the computation versus communication trade-offs of near-storage computing using applications from two important domains, namely data analytics and data integration. Using a fully functional SSD evaluation platform, we perform a design space exploration of our proposed approach by varying the bandwidth and computation capabilities of the SSD processor. We evaluate static and dynamic approaches for dividing the work between the host and SSD processor, and

* Gunjae and Kiran contributed equally to the paper.