Topic modeling algorithms can better understand data by extracting meaningful words from text collections, but their results are often inconsistent and, consequently, difficult to interpret. Enriching the model with more contextual knowledge can improve coherence. Recently, neural topic models have emerged, and the development of neural models in general has been driven by BERT-based representations. In this paper, we propose a model named AraBERTopic to extract news topics from Facebook pages. Our model combines the pre-trained BERT transformer model for the Arabic language (AraBERT) with the neural topic model ProdLDA. Compared with standard LDA, pre-trained BERT sentence embeddings produce more meaningful and coherent topics across different embedding models. Results show that our AraBERTopic model achieves a topic coherence score of 0.579.
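To make the described pipeline concrete, the following is a minimal sketch, not the authors' released code, of how AraBERT sentence embeddings can be combined with a ProdLDA-based neural topic model using the open-source contextualized-topic-models package; the AraBERT checkpoint name, the corpus file, and the hyperparameters are illustrative assumptions.

```python
# Sketch: AraBERT sentence embeddings + a ProdLDA-based neural topic model,
# via the open-source `contextualized-topic-models` package. The checkpoint
# name, input file, and topic count are assumptions for illustration.
from contextualized_topic_models.models.ctm import CombinedTM
from contextualized_topic_models.utils.data_preparation import TopicModelDataPreparation

# Assumed input: one preprocessed Arabic Facebook post per line.
with open("facebook_posts.txt", encoding="utf-8") as f:
    documents = [line.strip() for line in f if line.strip()]

# Embed each post with AraBERT (loaded through sentence-transformers, which
# adds mean pooling to a plain BERT checkpoint) and build the bag-of-words
# representation consumed by the ProdLDA decoder.
tp = TopicModelDataPreparation("aubmindlab/bert-base-arabertv02")
training_dataset = tp.fit(text_for_contextual=documents, text_for_bow=documents)

# CombinedTM concatenates the contextual embedding with the BoW vector and
# trains a ProdLDA-style variational topic model on top of it.
ctm = CombinedTM(bow_size=len(tp.vocab), contextual_size=768, n_components=10)
ctm.fit(training_dataset)

# Top-10 words per topic; coherence (e.g., NPMI) can then be computed on these.
print(ctm.get_topic_lists(10))
```

The choice of CombinedTM here is one plausible way to fuse contextual embeddings with ProdLDA; the paper's exact fusion strategy, preprocessing, and coherence measure should be taken from the full text rather than this sketch.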