Wearable augmented reality (AR) can assist a user's task by overlaying virtual objects on the user's view of the real world. To save power in the mobile unit, rendering should be offloaded to the backbone as much as possible; however, because of low-latency requirements, images for mobile AR cannot be rendered entirely in the backbone. We have developed a system that achieves end-to-end latencies of 10 ms, built around a seamlessly integrated dynamic level-of-detail framework that extends the VRML and Inventor languages and builds on current trends in QoS handling. In this paper we outline the structure and components of our system and discuss a demo application that projects a statue onto the campus.
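As background to the level-of-detail framework mentioned above: standard VRML97 already provides an LOD node that switches between representations by viewer distance, which a dynamic extension of the kind described here would build upon. The sketch below is an illustrative standard LOD node with hypothetical placeholder geometry, not an excerpt from the system itself:

```vrml
#VRML V2.0 utf8
# Standard VRML97 LOD node: the browser renders level[i] when the
# viewer's distance to 'center' lies between range[i-1] and range[i].
LOD {
  center 0 0 0
  range [ 10, 50 ]           # two thresholds (metres) select three levels
  level [
    Shape {                  # near: most detailed representation
      geometry Sphere { radius 1 }
    }
    Shape {                  # mid-range: coarser stand-in
      geometry Box { size 2 2 2 }
    }
    Shape {                  # far: cheapest representation
      geometry Cone { height 2 bottomRadius 1 }
    }
  ]
}
```

A dynamic framework could additionally vary the active level with resource state (e.g. available bandwidth or rendering budget) rather than distance alone, which is presumably what the extension of VRML and Inventor addresses.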