This paper presents a novel adaptive vector quantisation scheme based on the self-organising feature map (SOFM) neural network. All adaptation is performed directly from the quantised image, with no explicit adaptation information transmitted or stored; thus the network learns an input distribution it has never actually seen. Training sets are generated from the received image by scaling the image to approximate the statistics of the original image and by selecting blocks so as to capture edges and other image features. This data is fed to a SOFM neural network to update the codebook. A new method is also presented for ensuring that all neurons are well used, by estimating directly from the quantised image how much distortion each neuron introduces. The ability of this scheme to adapt successfully is verified by simulation.
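For readers unfamiliar with the underlying learning rule, the following is a minimal sketch of a classical Kohonen SOFM codebook update of the kind such a scheme builds on, assuming a one-dimensional neuron lattice, a Gaussian neighbourhood, and image blocks flattened into training vectors. All function names, parameters, and decay schedules here are illustrative, not taken from the paper.

```python
import numpy as np

def sofm_update(codebook, x, winner, t, lr0=0.5, sigma0=2.0, tau=1000.0):
    """One Kohonen SOFM step: pull the winning codevector and its lattice
    neighbours toward the training vector x, with decaying learning rate
    and shrinking neighbourhood width (illustrative schedules)."""
    lr = lr0 * np.exp(-t / tau)            # decaying learning rate
    sigma = sigma0 * np.exp(-t / tau)      # shrinking neighbourhood width
    idx = np.arange(len(codebook))
    h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))  # neighbourhood weights
    return codebook + lr * h[:, None] * (x - codebook)

def train_codebook(codebook, blocks, epochs=5):
    """Feed training blocks through the SOFM, updating the codebook
    toward the (here, synthetic) input distribution."""
    t = 0
    for _ in range(epochs):
        for x in blocks:
            # winner = codevector nearest to x in Euclidean distance
            winner = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
            codebook = sofm_update(codebook, x, winner, t)
            t += 1
    return codebook
```

In the paper's scheme the training blocks would come from the scaled, quantised image at the receiver rather than from the original, so encoder and decoder can run the same update without transmitting adaptation information.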