We consider congestion control in peer-to-peer distributed systems. The problem can be reduced to the following scenario: Consider a set V of n peers (called clients in this paper) that want to send messages to a fixed common peer (called server in this paper). We assume that each client v ∈ V sends a message with probability p(v) ∈ [0, 1) and that the server has a capacity of σ ∈ N, i.e., it can receive at most σ messages per round; excess messages are dropped. The server can modify these probabilities when clients send messages. Ideally, we wish to converge to a state with Σ_{v∈V} p(v) = σ and p(v) = p(w) for all v, w ∈ V.

We propose a loosely self-stabilizing protocol with a slightly relaxed legitimate state. Our protocol lets the system converge from any initial state to such a relaxed legitimate state. This property is then maintained for Ω(n^c) rounds in expectation. In particular, the initial client probabilities and server variables are not necessarily well-defined, i.e., they may have arbitrary values. Our protocol uses only O(W + log n) bits of memory, where W is the length of the node identifiers, making it very lightweight. Finally, we state a lower bound on the convergence time and see that our protocol is asymptotically optimal (up to a polylogarithmic factor).

* This is an extended version of a paper which will appear in SSS 2019. This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Center On-The-Fly Computing (GZ: SFB 901/3) under the project number 160364472.

Consider a set of n nodes (called clients in this paper) that want to continuously send messages to a fixed node (called server) with a certain probability in each round. The server is not aware of its connections and has limited capabilities with regard to the number of messages it is able to receive in each round and its internal memory.
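The round model above can be made concrete with a minimal simulation sketch. This is not the paper's protocol, only the communication model: each client v sends with probability p(v), and the server accepts at most σ messages per round. The assumption that drops hit a uniformly random subset of the excess messages is ours; the model only states that excess messages are dropped.

```python
import random

def simulate_round(p, sigma, rng=random):
    """Simulate one round of the model: each client v sends a message
    with probability p[v]; the server accepts at most sigma messages
    and drops the rest. Which messages are dropped is not specified by
    the model, so we drop a uniformly random subset (an assumption)."""
    senders = [v for v in p if rng.random() < p[v]]
    rng.shuffle(senders)               # arbitrary drop order (assumption)
    received = senders[:sigma]
    dropped = senders[sigma:]
    return received, dropped

# Ideal state: all probabilities equal and summing to sigma,
# i.e., p(v) = sigma/n for every client v.
n, sigma = 100, 4
p = {v: sigma / n for v in range(n)}
received, dropped = simulate_round(p, sigma)
```

In the ideal state the expected number of senders per round is exactly σ, so drops are rare and the server's capacity is well utilized.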
The task for the server is to use a congestion control protocol to modify the client probabilities such that the server receives only a constant number of messages in each round (in expectation). As the client probabilities may be arbitrary at the beginning, we further require the protocol to be self-stabilizing, i.e., it should be able to reach its goal starting from any arbitrary initial state. Self-stabilization comes with the advantage that the protocol is able to recover automatically from transient faults like message loss or blackouts of processes. As the system grows larger, these kinds of faults occur more often, which makes self-stabilization as a concept very desirable.

At first glance, one may think that this setting only applies to client/server architectures. However, we believe that solving this problem is quite important for distributed systems where nodes constantly have to communicate with their neighbors. Also, there are distributed systems where nodes are not aware of their incoming connections, e.g., in rooted trees, random graphs [MS06], or linearized de Bruijn networks [RSS11]. On these networks one is able to effectively perform many important techniques relevant to ...