Microservice applications consist of a set of smaller services interacting in a graph structure to deliver the full application. Jobs traverse this graph along different paths, depending both on the type of job and on the current load of the different service replicas. Different paths incur different scenario-specific costs, depending on, e.g., the deployment and the underlying cloud system. In this paper, we demonstrate how automatic differentiation over data-driven fluid models can be used to optimize a running microservice application, by designing a load balancer that minimizes a holistic cost function under response-time constraints. First, a fluid model describing the load in each service is learned by parsing tracing data from the application. We then introduce a cost function based on performance metrics such as mean queue length and response-time percentiles, all retrieved from the fluid model. Applying automatic differentiation to this cost function yields the gradient of the cost with respect to the load-balancing parameters. This enables us to update these parameters, using, e.g., gradient descent, in a manner that steers the application towards a lower-cost operating point. In an experimental evaluation on a small microservice application running in the Ericsson Research Datacenter, we show that the method quickly steps towards optimal parameter values while supporting complicated cost functions, such as those defined via the solution of a system of ordinary differential equations.
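The optimization loop described above can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: it uses a toy steady-state queueing model in place of the learned fluid model, a hand-rolled forward-mode automatic differentiation via dual numbers in place of a full AD framework, and all rates and parameter names (`LAM`, `MU1`, `MU2`, the split `p`) are illustrative assumptions.

```python
# Sketch: gradient descent on a load-balancing weight, with the gradient
# obtained by automatic differentiation through a toy performance model.

class Dual:
    """Number carrying a value and a derivative (forward-mode AD)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __rsub__(self, o):  # o is a plain number here
        return Dual(o - self.val, -self.der)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val / o.val,
                    (self.der * o.val - self.val * o.der) / (o.val ** 2))
    def __rtruediv__(self, o):
        return Dual(o) / self

LAM = 8.0            # total arrival rate (jobs/s), assumed
MU1, MU2 = 6.0, 10.0 # service rates of the two replicas, assumed

def cost(p):
    """Sum of steady-state mean queue lengths of two M/M/1 replicas,
    with a fraction p of the traffic routed to replica 1."""
    rho1 = p * LAM / MU1
    rho2 = (1 - p) * LAM / MU2
    return rho1 / (1 - rho1) + rho2 / (1 - rho2)

p = 0.5
for _ in range(200):
    g = cost(Dual(p, 1.0)).der              # d(cost)/dp via AD
    p = min(0.95, max(0.05, p - 0.01 * g))  # projected gradient step

print(f"optimized split p = {p:.3f}")
```

The same pattern carries over to the paper's setting: `cost` is replaced by a function of the learned fluid model (possibly the solution of a system of ODEs), and `p` by the full vector of load-balancing parameters, with the AD framework supplying the gradient.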