The chromosphere is a thin layer of the solar atmosphere that bridges the relatively cool photosphere and the intensely heated transition region and corona. Compressible and incompressible waves propagating through the chromosphere can supply significant amounts of energy to the interface region and corona. In recent years, an abundance of high-resolution observations from state-of-the-art facilities has provided new and exciting ways of disentangling the characteristics of oscillatory phenomena propagating through the dynamic chromosphere. Coupled with rapid advancements in magnetohydrodynamic wave theory, we are now in an ideal position to thoroughly investigate the role waves play in supplying the energy needed to sustain chromospheric and coronal heating. Here, we review the recent progress made in characterising, categorising and interpreting oscillations manifesting in the solar chromosphere, with an impetus placed on their intrinsic energetics.
We present observational evidence of compressible magnetohydrodynamic wave modes propagating from the solar photosphere through to the base of the transition region in a solar magnetic pore. High-cadence images were obtained simultaneously across four wavelength bands using the Dunn Solar Telescope. Employing Fourier and wavelet techniques, sausage-mode oscillations displaying significant power were detected in both intensity and area fluctuations. The intensity and area fluctuations exhibit a range of periods from 181–412 s, with an average period of ∼290 s, consistent with the global p-mode spectrum. Intensity and area oscillations present in adjacent bandpasses were found to be out of phase with one another, displaying phase angles of 6.12°, 5.82° and 15.97° between the 4170 Å continuum – G-band, G-band – Na I D1 and Na I D1 – Ca II K heights, respectively, reiterating the presence of upwardly propagating sausage-mode waves. A phase relationship of ∼0° between same-bandpass emission and area perturbations of the pore best categorises the waves as belonging to the 'slow' regime of a dispersion diagram. Theoretical calculations reveal that the waves are surface modes, with initial photospheric energies in excess of 35,000 W m⁻². The wave energetics indicate a substantial decrease in energy with atmospheric height, confirming that magnetic pores are able to transport waves that exhibit appreciable energy damping, which may release considerable energy into the local chromospheric plasma.
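Inter-bandpass phase angles of the kind quoted above can be estimated from co-spatial light curves via the Fourier cross-spectrum. The sketch below is purely illustrative and is not the authors' analysis pipeline; the function name, cadence and synthetic signals are assumptions.

```python
import numpy as np

def phase_lag_deg(sig_lower, sig_upper, cadence_s):
    """Phase angle (degrees) between two co-spatial time series at the
    frequency of peak cross-power, via the Fourier cross-spectrum.
    A positive angle means the upper-atmosphere signal lags the lower
    one, consistent with an upwardly propagating wave."""
    a = np.fft.rfft(sig_lower - np.mean(sig_lower))
    b = np.fft.rfft(sig_upper - np.mean(sig_upper))
    cross = a * np.conj(b)
    k = np.argmax(np.abs(cross[1:])) + 1          # skip the DC component
    freq = np.fft.rfftfreq(len(sig_lower), d=cadence_s)[k]
    return np.degrees(np.angle(cross[k])), freq
```

In practice one would restrict attention to frequencies where coherence and cross-power are statistically significant, as wavelet-based studies do.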
Decomposition-based methods are often cited as the solution to multi-objective nonconvex optimization problems with an increased number of objectives. These methods employ a scalarizing function to reduce the multi-objective problem to a set of single-objective problems, which upon solution yield a good approximation of the set of optimal solutions, commonly referred to as the Pareto front. In this work we explore the implications of using decomposition-based methods rather than Pareto-based methods on algorithm convergence, from a probabilistic point of view. Namely, we investigate whether there is an advantage to using a decomposition-based method, for example one using the Chebyshev scalarizing function, over Pareto-based methods. We find that, under mild conditions on the objective function, the Chebyshev scalarizing function has an almost identical effect to Pareto-dominance relations when we consider the probability of finding superior solutions for algorithms that follow a balanced trajectory. We propose the hypothesis that this result, which seemingly contradicts currently available empirical evidence, signals that the disparity in performance between Pareto-based and decomposition-based methods is due to the inability of the former class of algorithms to follow a balanced trajectory. We also link generalized decomposition to the results in this work and show how to obtain optimal scalarizing functions for a given problem, subject to prior assumptions on the Pareto front geometry.
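The scalarizing step referenced above can be made concrete with the weighted Chebyshev function. This is a minimal sketch; the weight vectors and objective values in the example are hypothetical, not drawn from the paper.

```python
import numpy as np

def chebyshev(f, w, z_star):
    """Weighted Chebyshev scalarizing function:
    g(f | w, z*) = max_i w_i * |f_i - z*_i|,
    where f is an objective vector, w a weight vector, and z* an ideal
    (reference) point. Minimising g over the decision space for varying
    w traces out points on the Pareto front, including nonconvex regions."""
    f, w, z_star = map(np.asarray, (f, w, z_star))
    return float(np.max(w * np.abs(f - z_star)))
```

Each weight vector defines one single-objective subproblem; solving the whole set of subproblems yields the approximation of the Pareto front that decomposition-based methods return.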
The set of available multi-objective optimisation algorithms continues to grow. This fact can be partially attributed to their widespread use and applicability. However, this increase also suggests several issues remain to be addressed satisfactorily. One such issue is the diversity and the number of solutions available to the decision maker (DM). Even for algorithms very well suited to a particular problem, it is difficult, mainly due to the computational cost, to use a population large enough to ensure a reasonable likelihood of obtaining a solution close to the DM's preferences. In this paper we present a novel methodology that produces additional Pareto optimal solutions from a Pareto optimal set obtained at the end of a run of any multi-objective optimisation algorithm, for two-objective and three-objective problem instances.
Monograph: Giagkiozis, I., Purshouse, R.C. and Fleming, P.J. (2012). Abstract: Decomposition-based algorithms for multi-objective optimization problems have increased in popularity in the past decade. Although their convergence to the Pareto optimal front (PF) is in several instances superior to that of Pareto-based algorithms, the problem of selecting a way to distribute or guide these solutions in a high-dimensional space has not been explored. In this work, we introduce a novel concept which we call generalized decomposition. Generalized decomposition provides a framework with which the decision maker (DM) can guide the underlying evolutionary algorithm toward specific regions of interest, or toward the entire Pareto front with a desired distribution of Pareto optimal solutions. Additionally, it is shown that generalized decomposition simplifies many-objective problems by unifying the three performance objectives of multi-objective evolutionary algorithms (convergence to the PF, evenly distributed Pareto optimal solutions, and coverage of the entire front) into only one: convergence.
A framework based on generalized decomposition and an estimation of distribution algorithm (EDA) built on low-order statistics, namely the cross-entropy method (CE), are combined to illustrate the benefits of the proposed concept for many-objective problems. This choice of EDA also enables a test of the hypothesis that EDAs based on low-order statistics can perform comparably to more elaborate EDAs.
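A continuous cross-entropy optimiser of the kind described, which fits only a diagonal Gaussian (mean and standard deviation, i.e. low-order statistics) to the elite samples each generation, can be sketched as follows. All parameter values and the demonstration subproblem are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def cross_entropy_min(objective, dim, iters=50, pop=100, elite_frac=0.2, seed=0):
    """Minimal continuous cross-entropy method. Each generation: sample
    from a diagonal Gaussian, rank by objective value, and refit the
    Gaussian's mean and std to the best (elite) fraction of samples."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(elite_frac * pop))
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.apply_along_axis(objective, 1, x)
        elite = x[np.argsort(scores)[:n_elite]]
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-12   # floor keeps sampling alive
    return mu
```

Pairing one such optimiser with a Chebyshev-scalarized subproblem per weight vector gives a bare-bones decomposition-based EDA in the spirit of the framework described above.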