Audio classification and restoration are among the major downstream tasks in audio signal processing. However, restoration benefits far less from pretrained models than classification, where pretrained models have been overwhelmingly successful. This imbalance has spurred interest in improving the performance of pretrained models on restoration tasks such as speech enhancement (SE). Previous works have shown that features extracted by pretrained audio encoders are effective for SE, but these speech-specific, encoder-only models usually require extra decoders to become compatible with SE and involve complicated pretraining procedures or complex data augmentation. Therefore, in pursuit of a universal audio model, this paper extends the audio masked autoencoder (MAE), whose backbone is the autoencoder of Vision Transformers (ViT-AE), from audio classification toward restoration tasks. During pretraining, ViT-AE naturally learns a mel-to-mel mapping that is compatible with restoration tasks. Among the many restoration tasks, SE is chosen because of its well-established evaluation metrics and test data. We propose variations of ViT-AE to improve SE performance: the mel-to-mel variations yield high scores on non-intrusive metrics, while the STFT-oriented variation is effective on standard intrusive metrics such as PESQ. Different variations can be selected depending on the scenario. Comprehensive evaluations and ablation studies show that MAE pretraining is also beneficial to SE and helps ViT-AE generalize better to out-of-domain distortions. We further find that large-scale noisy data of general audio sources, rather than clean speech, is sufficiently effective for pretraining.
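As a rough illustration of the pretraining idea referred to above (not the implementation used in this paper), the following PyTorch sketch shows masked-autoencoder pretraining on mel-spectrogram patches: a fraction of patches is masked, the encoder sees only the visible patches, and the decoder reconstructs the full mel input, so the model learns a mel-to-mel mapping. The class name, layer sizes, masking ratio, and the omission of positional embeddings are all illustrative assumptions.

# Minimal, assumption-laden sketch of MAE-style pretraining on mel spectrograms.
import torch
import torch.nn as nn

class TinyMelMAE(nn.Module):
    def __init__(self, n_mels=80, patch=4, dim=192, mask_ratio=0.75):
        super().__init__()
        self.patch = patch
        self.mask_ratio = mask_ratio
        patch_dim = n_mels * patch                      # one patch = `patch` consecutive mel frames
        self.embed = nn.Linear(patch_dim, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=4)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, patch_dim)           # predict mel patches from decoder tokens
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, mel):                             # mel: (batch, frames, n_mels)
        b, t, f = mel.shape
        patches = mel.reshape(b, t // self.patch, self.patch * f)
        tokens = self.embed(patches)
        # Randomly keep a subset of patches; only visible patches go through the encoder.
        n, dim = tokens.size(1), tokens.size(2)
        keep = int(n * (1 - self.mask_ratio))
        idx = torch.rand(b, n).argsort(dim=1)
        vis_idx = idx[:, :keep, None].expand(-1, -1, dim)
        visible = torch.gather(tokens, 1, vis_idx)
        encoded = self.encoder(visible)
        # Re-insert learned mask tokens at masked positions, then decode to mel patches.
        full = self.mask_token.expand(b, n, -1).clone()
        full.scatter_(1, vis_idx, encoded)
        recon = self.head(self.decoder(full))
        return recon, patches

model = TinyMelMAE()
mel = torch.randn(2, 64, 80)                            # dummy log-mel batch
recon, target = model(mel)
loss = nn.functional.mse_loss(recon, target)            # mel-to-mel reconstruction objective
loss.backward()

In the original MAE recipe the reconstruction loss is typically computed only on the masked patches; the sketch above computes it on all patches purely for brevity.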