Background: Recent years have witnessed a dramatic increase in consumer online health information seeking. The quality of online health information, however, remains questionable. The issue of information evaluation has become a hot topic, leading to the development of guidelines and checklists for designing high-quality online health information. However, little attention has been devoted to how consumers, in particular people with low health literacy, evaluate online health information.

Objective: The main aim of this study was to review existing evidence on the association between low health literacy and (1) people's ability to evaluate online health information, (2) perceived quality of online health information, (3) trust in online health information, and (4) use of evaluation criteria for online health information.

Methods: Five academic databases (MEDLINE, PsycINFO, Web of Science, CINAHL, and Communication and Mass-media Complete) were systematically searched. We included peer-reviewed publications investigating differences in the evaluation of online information between people with different health literacy levels.

Results: After abstract and full-text screening, 38 articles were included in the review. Only four studies investigated the specific role of low health literacy in the evaluation of online health information. The other studies examined the association between educational level, or other skills-based proxies for health literacy such as general literacy, and the outcomes of interest. The results indicate that low health literacy (and related skills) is negatively related to the ability to evaluate online health information and to trust in online health information. Evidence on the association with perceived quality of online health information and use of evaluation criteria is inconclusive.

Conclusions: The findings indicate that low health literacy (and related skills) plays a role in the evaluation of online health information. This topic is therefore worth more scholarly attention.
Based on the results of this review, future research in this field should (1) specifically focus on health literacy, (2) devote more attention to the identification of the different criteria people use to evaluate online health information, (3) develop shared definitions and measures for the most commonly used outcomes in the field of evaluation of online health information, and (4) assess the relationship between the different evaluative dimensions and the role played by health literacy in shaping their interplay.
Additional information: Use policy
The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-profit purposes provided that:
• a full bibliographic reference is made to the original source
• a link is made to the metadata record in DRO
• the full-text is not changed in any way
The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.

Abstract. In this paper we introduce the hp-version discontinuous Galerkin composite finite element method for the discretization of second-order elliptic partial differential equations. This class of methods allows for the approximation of problems posed on computational domains which may contain a huge number of local geometrical features, or microstructures. While standard numerical methods can be devised for such problems, the computational effort may be extremely high, as the minimal number of elements needed to represent the underlying domain can be very large. In contrast, the minimal dimension of the underlying composite finite element space is independent of the number of geometric features. The key idea in the construction of this latter class of methods is that the computational domain Ω is no longer resolved by the mesh; instead, the finite element basis (or shape) functions are adapted to the geometric details present in Ω. In this paper, we extend these ideas to the discontinuous Galerkin setting, based on employing the hp-version of the finite element method. Numerical experiments highlighting the practical application of the proposed numerical scheme will be presented.
The numerical approximation of partial differential equations (PDEs) posed on complicated geometries, which include a large number of small geometrical features or microstructures, represents a challenging computational problem. Indeed, the use of standard mesh generators, employing simplices or tensor product elements, for example, naturally leads to very fine finite element meshes, and hence the computational effort required to numerically approximate the underlying PDE problem may be prohibitively expensive. As an alternative approach, in this article we present a review of composite/agglomerated discontinuous Galerkin finite element methods (DGFEMs) which employ general polytopic elements. Here, the elements are typically constructed as the union of standard element shapes; in this way, the minimal dimension of the underlying composite finite element space is independent of the number of geometrical features. In particular, we provide an overview of hp-version inverse estimates and approximation results for general polytopic elements, which are sharp with respect to element facet degeneration. On the basis of these results, a priori error bounds for the hp-DGFEM approximation of both second-order elliptic and first-order hyperbolic PDEs will be derived. Finally, we present numerical experiments which highlight the practical application of DGFEMs on meshes consisting of general polytopic elements.
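The central point above, that the dimension of the composite finite element space is decoupled from the number of geometric features, can be illustrated with a minimal counting sketch. This is not the authors' implementation; the 1-D surrogate, function names, and numbers below are all illustrative assumptions:

```python
# Minimal sketch (not from the paper): a geometry-resolving fine mesh must
# refine around every micro-feature, so its cell count grows with the number
# of features. A composite DG space lives on a fixed agglomerated partition,
# so its dimension does not. All names and numbers here are illustrative.

def fine_mesh_with_features(n_features, cells_per_feature=20):
    """1-D surrogate for a geometry-resolving mesh: each micro-feature
    forces extra local refinement, inflating the fine cell count."""
    return n_features * cells_per_feature  # number of fine cells

def composite_dg_dim(n_composite_elements, p):
    """Dimension of a discontinuous piecewise-polynomial space of degree p
    on the agglomerated (composite) mesh: (p+1) modes per element in 1-D."""
    return n_composite_elements * (p + 1)

for n_features in (10, 1000, 100000):
    n_fine = fine_mesh_with_features(n_features)
    # Agglomerate the fine cells into a fixed coarse partition of, say,
    # 8 composite elements; the DG basis lives on the coarse elements.
    dim = composite_dg_dim(n_composite_elements=8, p=2)
    print(f"{n_features:>6} features -> {n_fine:>8} fine cells, "
          f"composite DG dim = {dim}")
```

The fine cell count explodes with the number of features while the composite space dimension stays fixed, which is exactly the motivation for agglomerated/polytopic elements stated above.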
Abstract. We prove the convergence of an adaptive linear finite element method for computing eigenvalues and eigenfunctions of second-order symmetric elliptic partial differential operators. The weak form is assumed to yield a bilinear form which is bounded and coercive in H^1. Each step of the adaptive procedure refines elements in which a standard a posteriori error estimator is large and also refines elements in which the computed eigenfunction has high oscillation. The error analysis extends the theory of convergence of adaptive methods for linear elliptic source problems to elliptic eigenvalue problems, and in particular deals with various complications which arise essentially from the nonlinearity of the eigenvalue problem. Because of this nonlinearity, the convergence result holds under the assumption that the initial finite element mesh is sufficiently fine.

Key words. second-order elliptic problems, eigenvalues, adaptive finite element methods, convergence

AMS subject classifications. 65N12, 65N25, 65N30, 65N50

DOI. 10.1137/070697264

1. Introduction. In the last decades, mesh adaptivity has been widely used to improve the accuracy of numerical solutions to many scientific problems. The basic idea is to refine the mesh only where the error is high, with the aim of achieving an accurate solution using an optimal number of degrees of freedom.
There is a large amount of numerical analysis literature on adaptivity, in particular on reliable and efficient a posteriori error estimates (e.g., [1]). Recently, the question of convergence of adaptive methods has received intensive interest, and a number of convergence results for the adaptive solution of boundary value problems have appeared (e.g., [8, 18, 19, 7, 6, 23]).

We prove here the convergence of an adaptive linear finite element algorithm for computing eigenvalues and eigenvectors of scalar symmetric elliptic partial differential operators in bounded polygonal or polyhedral domains, subject to Dirichlet boundary data. Such problems arise in many applications, e.g., resonance problems, nuclear reactor criticality, and the modelling of photonic band gap materials, to name but three.

Our refinement procedure is based on two locally defined quantities: firstly, a standard a posteriori error estimator, and secondly, a measure of the variability (or "oscillation") of the computed eigenfunction. (Measures of "data oscillation" appear in the theory of adaptivity for boundary value problems, e.g., [18]. In the eigen...
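The solve-estimate-mark-refine loop described above can be sketched for a 1-D model eigenvalue problem (-u'' = λu on (0,1) with Dirichlet boundary conditions, exact smallest eigenvalue π²). This is a simplified illustration, not the algorithm analysed in the paper: it uses a basic residual-type indicator with Dörfler marking and omits the eigenfunction-oscillation term the authors introduce:

```python
# Simplified adaptive loop for -u'' = lam*u on (0,1), u(0)=u(1)=0, with
# linear finite elements. Illustrative only: residual indicator and Doerfler
# marking, without the oscillation measure used in the paper's analysis.
import numpy as np
from scipy.linalg import eigh

def solve_eig(x):
    """Smallest discrete eigenpair on the mesh with nodes x (linear FEM)."""
    n = len(x) - 1
    A = np.zeros((n + 1, n + 1)); M = np.zeros((n + 1, n + 1))
    for i in range(n):
        h = x[i + 1] - x[i]
        A[i:i+2, i:i+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        M[i:i+2, i:i+2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    I = slice(1, n)  # interior nodes (homogeneous Dirichlet BCs)
    lams, vecs = eigh(A[I, I], M[I, I])  # generalized symmetric eigenproblem
    u = np.zeros(n + 1); u[1:n] = vecs[:, 0]
    return lams[0], u

def estimate(x, lam, u):
    """Elementwise residual indicator: eta_K^2 = h_K^2*||lam*u_h||_K^2
    plus flux-jump contributions at the element endpoints."""
    n = len(x) - 1
    h = np.diff(x)
    du = np.diff(u) / h                  # elementwise slope u_h'
    jumps = np.zeros(n + 1)
    jumps[1:n] = du[1:] - du[:-1]        # [u_h'] at interior nodes
    eta2 = np.empty(n)
    for i in range(n):
        um = 0.5 * (u[i] + u[i + 1])
        # ||u_h||^2 on K by Simpson's rule (exact, u_h^2 is quadratic)
        l2 = h[i] / 6.0 * (u[i]**2 + 4.0 * um**2 + u[i + 1]**2)
        eta2[i] = h[i]**2 * lam**2 * l2 \
                  + 0.5 * h[i] * (jumps[i]**2 + jumps[i + 1]**2)
    return eta2

def adapt(x, theta=0.5, steps=14):
    """Solve -> estimate -> Doerfler-mark -> bisect loop."""
    for _ in range(steps):
        lam, u = solve_eig(x)
        eta2 = estimate(x, lam, u)
        order = np.argsort(eta2)[::-1]
        cum = np.cumsum(eta2[order])
        k = np.searchsorted(cum, theta * cum[-1]) + 1  # smallest marked set
        marked = order[:k]
        mids = 0.5 * (x[marked] + x[marked + 1])       # bisect marked elements
        x = np.sort(np.concatenate([x, mids]))
    return solve_eig(x)[0], x

lam, x = adapt(np.linspace(0.0, 1.0, 5))
print(f"lambda_h = {lam:.6f}  (pi^2 = {np.pi**2:.6f}),  {len(x)-1} elements")
```

Since linear conforming FEM is a Rayleigh-Ritz method, the discrete eigenvalue approaches π² from above as the loop refines; the mesh stays sufficiently fine after the initial solves, consistent with the fineness assumption stated in the abstract.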
Recent Global Positioning System observations of major earthquakes such as the 2014 Chile megathrust show a slow preslip phase releasing a significant portion of the total moment (Ruiz et al., 2014, https://doi.org/10.1126/science.1256074). Despite advances from theoretical stability analysis (Rubin & Ampuero, 2005, https://doi.org/10.1029/2005JB003686; Ruina, 1983, https://doi.org/10.1029/jb088ib12p10359) and modeling (Kaneko et al., 2017, https://doi.org/10.1002/2016GL071569), it is not fully understood what controls the prevalence and the amount of slip in the nucleation process. Here we present laboratory observations of slow slip preceding dynamic rupture, where we observe a dependence of nucleation size and position on the loading rate (laboratory equivalent of tectonic loading rate). The setup is composed of two polycarbonate plates under direct shear with a 30‐cm long slip interface. The results of our laboratory experiments are in agreement with the preslip model outlined by Ellsworth and Beroza (1995, https://doi.org/10.1126/science.268.5212.851) and observed in laboratory experiments (Latour et al., 2013, https://doi.org/10.1002/grl.50974; Nielsen et al., 2010, https://doi.org/10.1111/j.1365-246x.2009.04444.x; Ohnaka & Kuwahara, 1990, https://doi.org/10.1016/0040-1951(90)90138-X), which show a slow slip followed by an acceleration up to dynamic rupture velocity. However, further complexity arises from the effect of (1) rate of shear loading and (2) inhomogeneities on the fault surface. In particular, we show that when the loading rate is increased from 10⁻² to 6 MPa/s, the nucleation length can shrink by a factor of 3, and the rupture nucleates consistently on higher shear stress areas. The nucleation lengths measured fall within the range of the theoretical limits Lb and L∞ derived by Rubin and Ampuero (2005, https://doi.org/10.1029/2005JB003686) for rate‐and‐state friction laws.
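The theoretical limits Lb and L∞ mentioned above have closed-form expressions under rate-and-state friction: Lb = G·Dc/(b·σ) and L∞ = G·b·Dc/(π·(b−a)²·σ) in the mode-III case. The sketch below simply evaluates them; the parameter values are illustrative order-of-magnitude assumptions, not the experimental values from the study:

```python
# Back-of-the-envelope sketch (not the authors' code): the two nucleation
# length scales of Rubin & Ampuero (2005) for rate-and-state friction.
# All parameter values below are illustrative, not the experimental ones.
import math

def nucleation_lengths(G, sigma_n, a, b, Dc):
    """Return (Lb, Linf) in the same length units as Dc.
    Lb   = G*Dc / (b*sigma_n)                      localization scale
    Linf = G*b*Dc / (pi*(b - a)**2 * sigma_n)      upper bound as a/b -> 1"""
    Lb = G * Dc / (b * sigma_n)
    Linf = G * b * Dc / (math.pi * (b - a) ** 2 * sigma_n)
    return Lb, Linf

# Illustrative rate-and-state parameters (order of magnitude only):
G = 1.0e9        # shear modulus, Pa (a glassy polymer is of order GPa)
sigma_n = 5.0e6  # normal stress, Pa
a, b = 0.010, 0.015
Dc = 5.0e-6      # characteristic slip distance, m

Lb, Linf = nucleation_lengths(G, sigma_n, a, b, Dc)
print(f"Lb = {Lb*100:.1f} cm, Linf = {Linf*100:.1f} cm")
```

With these (assumed) values both lengths are in the centimetre-to-decimetre range, i.e., comparable to the 30-cm interface, which is why a measured nucleation length can plausibly fall between the two bounds.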