Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology 2017
DOI: 10.1145/3126594.3126635
AirCode

Abstract: Figure 1. (a) The AirCode tagging tool takes as inputs a mesh, a user-specified region, and embedded data (goo.gl/1ph6g2). (b) It first determines the air pocket parameters, including the depth d and thickness h, for a fabrication material. (c) An AirCode tag is embedded inside the object, without changing its geometry or appearance. (d) The fabricated tag is invisible under environmental light…
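The caption above outlines the pipeline: a mesh, a tagging region, and a payload go in, and material-dependent air pocket parameters (depth d below the surface, pocket thickness h) come out. A minimal, purely illustrative sketch of that interface — the material presets and function names below are assumptions, not the paper's actual optimization:

```python
# Hypothetical sketch of the AirCode parameter-selection step from Figure 1(b):
# given a fabrication material, pick the air-pocket depth d and thickness h.
# The preset values here are invented for illustration only.

# Assumed per-material presets: (depth_mm, thickness_mm).
MATERIAL_PRESETS = {
    "white_resin": (2.0, 1.0),
    "nylon": (1.5, 0.8),
}

def pocket_parameters(material: str) -> tuple:
    """Return (d, h) in millimetres for a supported fabrication material."""
    try:
        return MATERIAL_PRESETS[material]
    except KeyError:
        raise ValueError("no calibration for material %r" % material)

d, h = pocket_parameters("white_resin")
print(d, h)  # -> 2.0 1.0
```

In the real system these parameters come from an optimization over the material's subsurface scattering, not a lookup table; the table stands in for that step.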

Cited by 45 publications (4 citation statements)
References 41 publications
“…Existing works using dielectric materials involve at least two materials that have different properties such as scattering or absorbance. For instance, Li et al. present an AirCode scheme to embed information, such as a QR code, inside a 3D printed object, using a group of air pockets as the information material with sophisticated designs under the surface [Li et al 2017]. The air pockets are unnoticeable to human eyes, while readable using an off-the-shelf projector as a patterned light source, and a monochrome linear camera at 700 nm wavelength (i.e., red light).…”
Section: Information Embedding and Extraction Methods
confidence: 99%
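The mechanism quoted above reduces to: encode the payload as a QR-like bit grid, then carve an air pocket under the surface wherever a bit is set. A hedged sketch under that assumption — the grid geometry, cell pitch, and helper names are invented for illustration and are not the paper's implementation:

```python
# Illustrative only: map a byte payload onto a grid of air-pocket cells.
# A pocket is placed under the surface at every cell whose bit is 1.

def payload_to_bits(payload: bytes) -> list:
    """Flatten a byte payload into a list of bits, most significant bit first."""
    return [(b >> (7 - i)) & 1 for b in payload for i in range(8)]

def pocket_layout(payload: bytes, cols: int, pitch_mm: float) -> list:
    """Return (x, y) offsets in mm of grid cells that should hold an air pocket."""
    bits = payload_to_bits(payload)
    return [(i % cols * pitch_mm, i // cols * pitch_mm)
            for i, bit in enumerate(bits) if bit]

layout = pocket_layout(b"A", cols=4, pitch_mm=2.0)
print(layout)  # -> [(2.0, 0.0), (6.0, 2.0)]
```

Readout then works in reverse: under patterned (projector) light at ~700 nm, pockets scatter differently from solid material, so the camera image can be thresholded back into the same bit grid.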
“…), it is crucial that systems are able to understand the context of one's environment and the information that is available to users. One way to obtain such an understanding is to directly retrieve information that is embedded in barcodes, fiducial markers [22], human faces [2], or objects during fabrication processes [19,20,38]. Researchers have also explored retrieving "raw" information such as visible text [54,66], physical objects [23,51], multimodal scenes [65], human speech (e.g., Google API 4 ), and music (e.g., Shazam).…”
Section: Multimodal Information-based Interaction Techniques
confidence: 99%
“…Besides, objects can also be tagged with a QR code to describe and provide useful information. For example, AirCode method [LNNZ17] embeds the user-defined information by placing structured air pockets under the surface of the item. The resulting code of the method is only readable by computers, and the process takes minutes to provide results.…”
Section: Previous Work
confidence: 99%