Enhancing speech processed by lossy codecs can significantly improve the resulting signal quality, providing a richer listening experience while reducing listening fatigue. Since there is a multitude of codecs, each supporting several bitrates, deep-learning-based solutions typically train networks in a codec-specific manner and use multi-condition training for each codec-specific network to improve generalizability across the different bitrates. In contrast, we propose a bitrate-informed model for improving the inter-bitrate generalizability of coded speech enhancement. The well-known Convolutional Recurrent U-Net Speech Enhancement (CRUSE) encoder-decoder model is selected for the enhancement; however, we propose modifying only the initial few layers of the encoder to introduce bitrate dependency, while the rest of the network is shared across all bitrates. Evaluation is carried out on two contemporary codecs: the Bluetooth Low Complexity Communication Codec plus (LC3plus) and the 3GPP Adaptive Multi-Rate Wideband (AMR-WB) codec. The experimental study shows that using bitrate-informed layers improves generalization capability. More importantly, this causes only a small increase (< 1%) in model footprint and no increase in computational cost. Further, to provide better insight into where such bitrate-informed layers can be useful, we propose using the histogram of the ideal training target masks. The radically different histograms at the different bitrates of the LC3plus codec indicate a stronger benefit from the bitrate-informed model, which is also reflected in the instrumental metrics. This paper lays the foundation for further work on bitrate- and codec-informed models, allowing for the development of a single, universal model for coded speech enhancement.
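The footprint claim follows from simple parameter counting: if only the first encoder layer is duplicated per bitrate, the per-bitrate overhead is tiny relative to the shared recurrent core. The sketch below illustrates this arithmetic with hypothetical layer sizes (the actual CRUSE channel counts and layer parameter counts are assumptions, not taken from this abstract):

```python
# Illustrative parameter counts for a CRUSE-like encoder-decoder.
# These numbers are hypothetical; only the *structure* of the argument
# (shared core + small bitrate-specific first layer) is from the abstract.
SHARED_LAYERS = {            # layer name -> parameter count (assumed)
    "enc_conv2": 18_464,
    "enc_conv3": 73_856,
    "gru":       1_600_000,  # shared recurrent bottleneck dominates
    "dec_conv3": 73_792,
    "dec_conv2": 18_448,
    "dec_conv1": 1_153,
}
FIRST_LAYER_PARAMS = 1_184   # enc_conv1, duplicated once per bitrate

def footprint(num_bitrates: int) -> int:
    """Total parameters when only the first encoder layer is bitrate-specific."""
    return sum(SHARED_LAYERS.values()) + num_bitrates * FIRST_LAYER_PARAMS

base = footprint(1)          # single-bitrate baseline
multi = footprint(4)         # e.g. four supported bitrates
overhead = (multi - base) / base
print(f"relative footprint increase: {overhead:.3%}")
```

At inference time only the first layer matching the active bitrate is executed, so the computational cost is unchanged, consistent with the abstract's claim.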
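The proposed diagnostic, a histogram of the ideal training target masks, can be sketched as follows. The abstract does not specify the mask definition, so a clipped magnitude-ratio mask over short-time Fourier frames is assumed here:

```python
import numpy as np

def ideal_mask_histogram(clean, coded, n_fft=512, hop=256, bins=50):
    """Histogram of the ideal training target mask for one utterance.

    `clean` and `coded` are time-aligned waveforms (clean reference and
    codec output). The mask definition -- a magnitude-ratio mask clipped
    to [0, 2] -- is an assumption; the abstract only refers to "ideal
    training target masks".
    """
    def stft_mag(x):
        # Frame the signal, apply a Hann window, take magnitude spectra.
        n_frames = 1 + (len(x) - n_fft) // hop
        frames = np.stack([x[i * hop:i * hop + n_fft] for i in range(n_frames)])
        return np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=-1))

    m_clean, m_coded = stft_mag(clean), stft_mag(coded)
    mask = np.clip(m_clean / (m_coded + 1e-8), 0.0, 2.0)
    hist, edges = np.histogram(mask, bins=bins, range=(0.0, 2.0), density=True)
    return hist, edges
```

Comparing such histograms across bitrates of a codec would show how strongly the enhancement target depends on the bitrate, which is the motivation the abstract gives for the bitrate-informed layers.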