Likert‐type scales are often used in survey instruments, and practitioners and researchers need to clearly understand when it is appropriate to include a midpoint in these scales. The authors of this article review research studies from various disciplines to show that there are circumstances in which a midpoint should be included and others in which it should not. They provide tables summarizing the benefits and problems in each case, as well as evidence‐based strategies to employ.
Purpose-The purpose of this study was to investigate factors that influence informal learning in the workplace and the types of informal learning activities people engage in at work. More specifically, the research examined (1) the relationship between informal learning engagement and the presence of learning organization characteristics, and (2) perceived factors that affect informal learning engagement. Methodology-Workplace learning and performance improvement professionals were invited to respond to an anonymous online survey, and 125 professionals volunteered to participate in the study. Findings-This study did not find a significant correlation between informal learning engagement and the presence of learning organization characteristics. Although neither age nor education level significantly predicted the degree of informal learning engagement, older workers tended to engage in informal learning more often, and certain types of informal learning activities stood out as those in which workers were most likely to engage. The findings also include rank-ordered lists of personal and environmental factors that workers perceived to influence their engagement in informal learning. Practical implications-The rank-ordered lists of factors that influence informal learning engagement are likely to be useful to practitioners for prioritizing informal learning interventions. The results of this study also suggest that the degree of engagement in informal learning alone is not a sufficient construct for predicting the presence of learning organization characteristics. Originality/value of paper-Very little empirical research has attempted to connect individual learning to the learning organization concept. This research addresses that gap by examining the relationship between individual informal learning engagement and the presence of learning organization characteristics.
A close examination of the literature on including positively and negatively worded items in structured survey questionnaires revealed that, contrary to traditional wisdom, it is better not to use a mix of positively and negatively worded items, because doing so can threaten the validity and reliability of the survey instrument. If a mix is used, it is recommended to apply research-derived strategies to improve the quality of the data and the instrument's validity and reliability.
Competency-based instruction can be applied to a military setting, an academic program, or a corporate environment with a focus on producing performance-based learning outcomes. In this article, the authors provide theoretical and practical information about underlying characteristics of competencies and explain how the Department of Instructional & Performance Technology at Boise State University developed a set of competencies and has been modifying its curriculum on the basis of these competencies. The department's curriculum architecture flowchart illustrates the process of developing and applying competencies to curriculum design for producing performance-based learning outcomes. Detailed steps taken in developing a competency-based course are described.
When practitioners and researchers develop structured surveys, they may use Likert‐type discrete rating scales or continuous rating scales. When administering surveys via the web, it is important to assess the value of using continuous rating scales such as visual analog scales (VASs) or sliders. Our close examination of the literature on the effectiveness of the two types of rating scales showed both benefits and drawbacks. Many studies recommended against using sliders due to functional difficulties that cause low response rates.
Training professionals have long acknowledged the necessity of conducting behavior-based (Level 3) and results-based (Level 4) evaluations, yet organizations seldom conduct them. This research examined training professionals' perceptions of the utility of Level 3 and Level 4 evaluations and the factors that facilitate or obstruct their attempts to perform them. The research was conducted using Brinkerhoff's Success Case Method and Gilbert's Behavior Engineering Model as its frameworks. The three key factors identified by study participants as affecting their ability to conduct Level 3 and Level 4 evaluations were the availability of resources such as time and personnel, managerial support (organizational), and expertise in evaluative methodology (individual). The research findings indicated a need to further explore how training professionals interpret Level 3 and Level 4 and how they can better develop their evaluative expertise, which in turn may increase effectiveness in gaining organizational support for evaluation efforts.
Evaluation is one of the critical steps in the process of performance improvement, feeding evidence-based information back into the next cycle of improvement. However, organizations often neglect to conduct comprehensive evaluations of their programs due to environmental barriers or a lack of evaluation expertise among practitioners. This article presents some of the foundational evaluation-related concepts and procedures that can help human performance improvement practitioners conduct comprehensive, systematic, and systemic evaluations of the interventions implemented in organizations: (a) evaluation versus research, (b) front-end evaluation versus back-end evaluation, (c) definition of program evaluation, (d) types of program stakeholders, (e) development of program logic models, (f) formative evaluation versus summative evaluation, (g) merit versus worth, and (h) development of evaluation dimensions. Such foundational knowledge is one of the first steps toward preventing evaluations from being neglected or mistaken for simple measurements administered through instruments such as smiley sheets.