Named entity recognition (NER) is a topic that attracts considerable attention from natural language processing (NLP) researchers. Most NER research uses English datasets, and far less work targets other languages, such as Bahasa Indonesia. NER is nevertheless essential for recognizing entities in a corpus, which can improve the performance of downstream NLP tasks. Deep learning approaches are currently a trend, including in NER research. In this article, we propose TWCAM (Transformer-Word2Vec-CNN-Attention Model). Combining these models improves NER performance in Bahasa Indonesia by obtaining better vector representations of words, extracting features from sentences, and attending to the surrounding context. Our Bahasa Indonesia dataset comes from several online news sites and is annotated with the BIOLU scheme (Begin, Inside, Outside, Last, Unit). Using a learning-rate finder, the maximum F1-score TWCAM achieved in our tests was 0.8178, while the BiLSTM (bidirectional long short-term memory) baseline reached only 0.7200. A follow-up research opportunity is reducing computational complexity without decreasing overall NER performance.
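As a brief illustration of the BIOLU annotation scheme mentioned above, the following sketch shows how a tokenized Indonesian sentence could be tagged. The example sentence, entity types (PER, LOC), and tags are hypothetical and not taken from the paper's dataset:

```python
# Hypothetical BIOLU-tagged example (not from the paper's corpus):
# B- begins a multi-token entity, I- continues it, L- ends it,
# U- marks a single-token entity, and O marks non-entity tokens.
tokens = ["Presiden", "Joko", "Widodo", "berkunjung", "ke", "Surabaya", "."]
tags   = ["O", "B-PER", "L-PER", "O", "O", "U-LOC", "O"]

# Print each token with its tag, one pair per line.
for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
```

Here "Joko Widodo" is a two-token person entity (B-PER, L-PER), while "Surabaya" is a single-token location entity (U-LOC); the I- tag would appear only in entities of three or more tokens.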