2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC)
DOI: 10.1109/icivc47709.2019.8981065
A Large Benchmark for Fabric Image Retrieval


Cited by 19 publications (22 citation statements)
References 16 publications
“…The dataset newly constructed by [34] consisted of 46,656 fabric images of 972 subjects, with multiple images from the front and back sides. The dataset was tested using BoF and three types of deep learning methods, namely LeNet, AlexNet, and VGGNet.…”
Section: Feature Extraction Based on CNN Methods
confidence: 99%
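The BoF baseline mentioned in the statement above can be illustrated with a short sketch. The design choices below (ORB local descriptors, a k-means visual vocabulary, L2-normalised histograms) and the file paths are assumptions for illustration, not the configuration used in [34].

```python
# A minimal Bag-of-Features (BoF) retrieval sketch under assumed design
# choices (ORB descriptors, k-means codebook, L2-normalised histograms);
# not the exact pipeline from [34].
import cv2
import numpy as np
from sklearn.cluster import KMeans

def local_descriptors(path, max_keypoints=500):
    """Detect ORB keypoints in one grayscale image and return their descriptors."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=max_keypoints)
    _, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return np.empty((0, 32), dtype=np.uint8)
    return descriptors

def build_codebook(training_paths, vocabulary_size=256):
    """Cluster descriptors from a training subset into a visual vocabulary."""
    stacked = np.vstack([local_descriptors(p) for p in training_paths])
    return KMeans(n_clusters=vocabulary_size, n_init=4,
                  random_state=0).fit(stacked.astype(np.float32))

def bof_histogram(path, codebook):
    """Quantise one image into an L2-normalised visual-word histogram."""
    descriptors = local_descriptors(path).astype(np.float32)
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    hist = hist.astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-8)

# Retrieval then reduces to ranking gallery histograms by their distance
# (e.g. Euclidean or cosine) to the query histogram.
```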
“…CNNs extract features globally and require large amounts of data; training them on large datasets therefore provides a sufficient knowledge base for identifying objects. CNNs are more sensitive to color features, and their main advantage is that they identify essential features automatically, without human supervision [34]. CNNs can also reduce the semantic gap in image representation [44].…”
Section: Feature Extraction Based on CNN Methods
confidence: 99%
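As an illustration of the CNN-based global feature extraction discussed above, the sketch below assumes a torchvision VGG16 pretrained on ImageNet and takes the penultimate fully connected layer as a 4096-D image descriptor; this is an assumed stand-in, not the LeNet, AlexNet, or VGGNet models trained in [34].

```python
# A minimal sketch of CNN-based global feature extraction, assuming a
# torchvision VGG16 pretrained on ImageNet (torchvision >= 0.13).
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.VGG16_Weights.IMAGENET1K_V1
backbone = models.vgg16(weights=weights).eval()
# Drop the final classification layer so the network outputs a 4096-D
# descriptor from the penultimate fully connected layer.
backbone.classifier = torch.nn.Sequential(*list(backbone.classifier.children())[:-1])

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def cnn_feature(path):
    """Return an L2-normalised global descriptor for one fabric image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    feature = backbone(x).squeeze(0)
    return feature / feature.norm()
```

Gallery images can then be ranked by cosine similarity between their descriptors and the query descriptor.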
“…To validate the proposed GiT method's superiority, it is compared with multiple state-of-the-art vehicle re-identification approaches on three large-scale datasets, namely VeRi776 [32], VehicleID [33], and VeRi-Wild [34]. The rank-1 identification rate (R1) [32,33,54] and mean average precision (mAP) [12,55-57] are used to assess retrieval accuracy.…”
Section: Experiments and Analysis
confidence: 99%
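The two metrics named in this statement, rank-1 identification rate (R1) and mean average precision (mAP), can be computed from a query-gallery distance matrix as sketched below. The inputs are hypothetical, and the camera-ID filtering commonly applied in vehicle re-identification benchmarks is deliberately omitted for brevity.

```python
# A minimal sketch of the rank-1 identification rate (R1) and mean average
# precision (mAP) from a query-gallery distance matrix; inputs are
# hypothetical and camera-ID filtering is omitted.
import numpy as np

def rank1_and_map(dist, query_ids, gallery_ids):
    """dist has shape (num_queries, num_gallery); smaller means more similar."""
    rank1_hits, average_precisions = [], []
    for i, query_id in enumerate(query_ids):
        order = np.argsort(dist[i])                  # gallery sorted by distance
        matches = gallery_ids[order] == query_id     # relevance flag at each rank
        if not matches.any():
            continue                                 # query has no ground truth
        rank1_hits.append(float(matches[0]))         # is the top-1 result correct?
        ranks = np.where(matches)[0] + 1             # 1-based ranks of true matches
        precisions = np.arange(1, len(ranks) + 1) / ranks
        average_precisions.append(precisions.mean()) # AP for this query
    return float(np.mean(rank1_hits)), float(np.mean(average_precisions))

# Toy usage:
# dist = np.random.rand(10, 100)
# q_ids = np.random.randint(0, 5, size=10)
# g_ids = np.random.randint(0, 5, size=100)
# r1, mean_ap = rank1_and_map(dist, q_ids, g_ids)
```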