2023
DOI: 10.1109/lsens.2023.3290209
Synthetic Sensor Data Generation Exploiting Deep Learning Techniques and Multimodal Information

Cited by 5 publications (2 citation statements)
References 12 publications
“…In the following, several popular image processing methods and cell models are presented. Optical-based force sensors generally use a light source (e.g., a light-emitting diode [LED], laser, or halogen lamp) to illuminate a load-sensitive medium (e.g., a microcantilever or grating) [4]. A photodetector (e.g., a photodiode or CCD camera) is adopted to measure the level of illumination, refractive index, or spectrum of the light reflected from the load-sensitive medium.…”
Section: Robotics for Cell Manipulation and Characterization
confidence: 99%
“…They can be trained on unlabeled data to learn meaningful features and then optimized for a particular supervised task, such as image classification or sentiment analysis. This process, known as pre-training, enables the model to leverage the learned representations and potentially improve performance on the supervised task [53,54].…”
Section: Autoencoders
confidence: 99%
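The two-stage workflow described in that citation statement — unsupervised pre-training of an autoencoder, then supervised fine-tuning on the learned representation — can be sketched with a toy linear autoencoder in NumPy. Everything below is illustrative and not from the cited paper: the data, dimensions, learning rates, and variable names are made-up assumptions, and a real model would use a deep nonlinear autoencoder, but the structure (train encoder/decoder on unlabeled data, then fit a supervised head on the bottleneck codes) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 "unlabeled" 8-dimensional samples,
# plus labels for a downstream binary task (illustrative only).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# --- Stage 1: unsupervised pre-training of a linear autoencoder ---
d, k = 8, 3                          # input dim, bottleneck dim
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))
lr = 0.01

def recon_loss(X, W_enc, W_dec):
    """Mean squared reconstruction error of the autoencoder."""
    return np.mean((X - X @ W_enc @ W_dec) ** 2)

loss_before = recon_loss(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                    # encode
    err = Z @ W_dec - X              # decode and compare, shape (n, d)
    # Gradients of the reconstruction error (up to a constant factor)
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_after = recon_loss(X, W_enc, W_dec)

# --- Stage 2: supervised fine-tuning on the learned features ---
# Reuse the pre-trained encoder as a feature extractor and fit a
# logistic-regression head on the bottleneck codes.
Z = X @ W_enc
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))   # sigmoid predictions
    g = p - y                                # log-loss gradient term
    w -= 0.1 * Z.T @ g / len(Z)
    b -= 0.1 * g.mean()
acc = np.mean(((Z @ w + b) > 0) == (y > 0.5))
```

After pre-training, `loss_after` is lower than `loss_before`, and the supervised head trains on 3-dimensional codes instead of the raw 8-dimensional inputs — the point made in the quoted passage: representations learned without labels can be reused to help a downstream supervised task.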