2013
DOI: 10.1145/2508363.2508371

Inverse image editing

Abstract: Figure 1: Given a source image and an edited copy (left), our system automatically recovers a semantic editing history (middle), which can be used for various applications, such as re-editing (right). In this case, the second editing step of the recovered history, involving hue modification, is altered to change the berries to a different color. Image courtesy of Andrea Lein. We study the problem of inverse image editing, which recovers a semantically-meaningful editing history…

Cited by 19 publications (3 citation statements)
References 37 publications
“…However, multiple segmentations are generated in [10], and scale-invariant feature transform (SIFT) features cannot be extracted from these segmentations. Hu et al. [11] introduced PatchNets, a compact, hierarchical representation that describes the structural and appearance characteristics of image regions, for use in image editing. A low-dimensional representation of an image that preserves the inherent information of the original image space was learned in [12]; however, this approach cannot find the object region and cannot be combined with the bag-of-words (BoW) framework.…”
Section: Related Work
confidence: 99%
“…It is important to note that filter operations cannot be defined by a bijective function and, for this reason, are not considered here: for example, applying a blur filter would generate a delta that consists of all the pixels that are not black. Hu et al. [30] can detect this kind of transformation as well as many others, but their approach requires at least 3 min to process an image of 512 × 512 pixels and so is not feasible for use in version control. When looking at both the original (A) and the modified (B) images in Figure 9, it is easy to see that a 180° clockwise rotation has been performed.…”
Section: Image Processing Techniques
confidence: 99%
“…Hu et al [30] introduced an approach to recover a semantically meaningful editing history from a source image and an edited one. Their technique supports the detection of global and local linear and nonlinear color changes, the insertion and removal of objects, and cropping.…”
Section: Related Work
confidence: 99%