2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00347

DG-Labeler and DGL-MOTS Dataset: Boost the Autonomous Driving Perception

Cited by 34 publications (27 citation statements)
References 36 publications

“…In this paper, YOLOv5SCB is trained using the public dataset KITTI [11] . The effectiveness of the proposed module is demonstrated using ablation experiments.…”
Section: Experimental Results and Analysis (mentioning, confidence: 99%)
“…Additionally, a light gradient boosting machine (LGBM) is also widely used with outstanding performance [48,49] due to the advantages of fast training speed and high efficiency, accuracy, and capability to handle large-scale data [49,50]. Recently, deep learning (DL) algorithms (e.g., convolutional neural network (CNN)) have demonstrated superior prediction performance in the field of bioinformatics, such as in the prediction of modification sites on DNA, RNA, and proteins [26,27,31,51,52,53] and many domains of social relevance [54,55,56,57,58,59,60,61,62]…”
Section: Results (mentioning, confidence: 99%)
“…Moreover, incorporating external knowledge bases has been found to enhance the model's ability to generate factually accurate responses [13,14,15,16]. The use of multimodal data, integrating text with images or videos, has further been shown to improve the contextual understanding of those models [17,18,19]. Techniques for optimizing model architecture, such as attention mechanisms and transformer layers, have been pivotal in increasing the efficiency and accuracy of LLMs [20,8,21,22].…”
Section: Enhancing LLM Accuracy (mentioning, confidence: 99%)
“…Adaptive learning rates and fine-tuning strategies have also been identified as crucial for tailoring models to specific tasks or domains, thereby improving their performance [8,14,23,24]. Lastly, the implementation of robust evaluation metrics enables a more nuanced assessment of model accuracy, guiding iterative improvements [17,25,14,26,16].…”
Section: Enhancing LLM Accuracy (mentioning, confidence: 99%)