Abstract: The non-stationary nature of image characteristics calls for adaptive processing, based on the local image content. We propose a simple and flexible method to learn local tuning of parameters in adaptive image processing: we extract simple local features from an image and learn the relation between these features and the optimal filtering parameters. Learning is performed by optimizing a user-defined cost function (any image quality metric) on a training set. We apply our method to three classical problems (de…
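The idea described in the abstract can be illustrated with a minimal 1-D sketch. All specifics below are assumptions for illustration: the feature is a local noise estimate from first differences, the filtering parameter is the window size of a moving-average denoiser, the quality metric is MSE against a clean training signal, and the learned relation is a degree-1 least-squares fit. The original method is not tied to any of these choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_filter(x, k):
    """Denoise a 1-D signal with an odd-width moving average (the 'filtering parameter' is k)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def make_signal(n=256):
    """Synthetic training pair: piecewise-constant clean signal plus noise of random strength."""
    clean = np.repeat(rng.normal(size=8), n // 8)
    sigma = rng.uniform(0.05, 0.5)
    return clean, clean + rng.normal(scale=sigma, size=n)

windows = [1, 3, 5, 7, 9, 11, 13]
feats, best = [], []
for _ in range(200):
    clean, noisy = make_signal()
    # simple local feature: noise level estimated from first differences
    feats.append(np.std(np.diff(noisy)) / np.sqrt(2))
    # 'optimal' parameter on the training set = the one minimizing the chosen quality metric (MSE)
    errs = [np.mean((box_filter(noisy, k) - clean) ** 2) for k in windows]
    best.append(windows[int(np.argmin(errs))])

# learn the feature -> parameter relation by least squares
coeffs = np.polyfit(feats, best, deg=1)

def predict_window(noisy):
    """At test time: extract the feature, map it to a filter parameter, clip to the valid range."""
    k = int(round(np.polyval(coeffs, np.std(np.diff(noisy)) / np.sqrt(2))))
    k = max(1, min(13, k))
    return k if k % 2 == 1 else k + 1
```

On such data the fitted slope is positive: noisier signals are mapped to wider smoothing windows, which is the adaptive behavior the method aims to learn.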
“…Qingang et al. [11] proposed a decoupled learning methodology that dynamically fits the weights of a deep network, as most existing trained models rely on the configuration of a single parameter. Jinming et al. [12] proposed a simple method for learning local parameter tuning in adaptive image processing by extracting local characteristics from an image and learning the relationship between them and the optimal filtering parameters, optimizing any metric that defines the image's quality.…”
Optimizing image processing parameters is often a time-consuming and unreliable task that requires manual adjustment. In this paper, we present a novel approach that uses a multi-agent system with Hysteretic Q-learning to optimize these parameters automatically, providing a more efficient solution. We conducted an empirical study focused on extracting objects of interest from textural images to validate our approach. Experimental results demonstrate that our multi-agent approach outperforms the traditional single-agent approach, quickly finding optimal parameter values and producing satisfactory results. The key innovation of our approach is that it enables agents to cooperate and optimize their behavior for the task at hand through a multi-agent system, which distinguishes it from previous work that used only a single agent. By applying reinforcement learning techniques in a multi-agent context, our approach provides a scalable and effective solution to parameter optimization in image processing.
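The core mechanism named in this abstract, Hysteretic Q-learning, can be sketched in a few lines. It uses two learning rates: a fast rate alpha for positive temporal-difference errors and a slow rate beta for negative ones, so a cooperating agent does not quickly unlearn a good action that failed only because a teammate was exploring. The toy payoff matrix below is a hypothetical stand-in for the image-processing reward; the paper's actual task, states, and reward are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

class HystereticQAgent:
    """Stateless hysteretic Q-learner: optimistic updates via two learning rates."""
    def __init__(self, n_actions, alpha=0.5, beta=0.05, eps=0.1):
        self.q = np.zeros(n_actions)
        self.alpha, self.beta, self.eps = alpha, beta, eps

    def act(self):
        # epsilon-greedy action selection
        if rng.random() < self.eps:
            return int(rng.integers(len(self.q)))
        return int(np.argmax(self.q))

    def update(self, a, r):
        # single-state case: the TD target is just the reward
        delta = r - self.q[a]
        lr = self.alpha if delta >= 0 else self.beta  # fast up, slow down
        self.q[a] += lr * delta

# Hypothetical cooperative game: two agents each pick a parameter value;
# the joint reward is highest only when both pick action 0.
payoff = np.array([[1.0, -0.3],
                   [-0.3, 0.5]])
a1, a2 = HystereticQAgent(2), HystereticQAgent(2)
for _ in range(2000):
    i, j = a1.act(), a2.act()
    a1.update(i, payoff[i, j])
    a2.update(j, payoff[i, j])
```

Because negative errors are applied with the small rate beta, each agent's estimate for the jointly optimal action stays close to its best observed reward, and the two independent learners coordinate on the (0, 0) optimum despite the miscoordination penalties incurred during exploration.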