Real-time deep video analytics at the edge is an enabling technology for emerging applications, such as vulnerable road user detection for autonomous driving, which require highly accurate model inference results at low latency. In this paper, we investigate the accuracy-latency trade-off in the design and implementation of real-time deep video analytics at the edge. Without loss of generality, we select the widely used YOLO-based object detection and WebRTC-based video streaming for a case study. Here, the latency comprises both the networking latency caused by video streaming and the processing latency of video encoding/decoding and model inference. We conduct extensive measurements to determine how dynamically changing video streaming settings affect the achieved latency, the video quality, and, in turn, the accuracy of model inference. Based on these findings, we propose a mechanism that adapts the video streaming settings (i.e., bitrate and resolution) online to optimize the accuracy of video analytics within latency constraints. Evaluation in a simulated setup shows that the mechanism efficiently searches for the optimal settings.
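The core idea of the adaptation mechanism can be sketched as a constrained search: among candidate (bitrate, resolution) settings, choose the one that maximizes estimated inference accuracy while keeping total latency (networking plus processing) within the budget. The candidate values and the accuracy/latency estimators below are illustrative placeholders, not measurements or models from the paper.

```python
# Hypothetical sketch: pick the (bitrate, resolution) pair that maximizes
# estimated accuracy subject to a latency budget. The estimator functions
# stand in for the measurement-derived profiles described in the paper.

def choose_settings(candidates, est_accuracy, est_latency, latency_budget_ms):
    """Return the feasible candidate setting with the highest estimated accuracy."""
    best, best_acc = None, -1.0
    for setting in candidates:
        if est_latency(setting) > latency_budget_ms:
            continue  # violates the latency constraint
        acc = est_accuracy(setting)
        if acc > best_acc:
            best, best_acc = setting, acc
    return best

# Toy candidate grid: bitrate in kbps, resolution as vertical pixels.
candidates = [(bitrate, res)
              for bitrate in (500, 1000, 2000, 4000)
              for res in (360, 480, 720, 1080)]

# Made-up monotone models: higher bitrate/resolution raises both
# accuracy (capped at 1.0) and latency.
est_accuracy = lambda s: min(1.0, 0.3 + 0.1 * (s[0] / 1000) + 0.0005 * s[1])
est_latency = lambda s: 20 + 0.01 * s[0] + 0.05 * s[1]  # ms

best = choose_settings(candidates, est_accuracy, est_latency, latency_budget_ms=100)
```

In practice this search would run online, re-profiling the accuracy and latency estimates as network conditions change, rather than over a fixed toy grid.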