This paper investigates the relationship between software development methodologies and usability. The point of departure is the assumption that two important disciplines in software development, software development methods (SDMs) and usability work, are not integrated in industrial software projects. Building on previous research, we investigate two questions: (1) Do software companies generally acknowledge the importance of usability but fail to prioritise it in industrial projects? and (2) To what degree do practitioners perceive software development methods and usability as being integrated? To this end, a survey of the Norwegian IT industry was conducted. From a sample of 259 companies, we received responses from 78. In response to our first research question, our findings show that although there is a positive bias towards usability, the importance of usability testing is perceived to be much lower than that of usability requirements. Given the strong time and cost pressures in the software industry, we believe these results highlight a gap between intention and reality. Regarding our second research question, the survey revealed that companies perceive usability and software development methods to be integrated. This contrasts with earlier research, which, somewhat pessimistically, has argued for the existence of two different cultures, one of software development and one of usability. The findings give hope for the future, in particular because the general use of software development methods is pragmatic and adaptable.
Abstract-Dominated by delay-sensitive and massive data applications, radio resource management in 5G access networks is expected to satisfy very stringent delay and packet-loss requirements. In this context, the packet scheduler plays a central role by allocating user data packets in the frequency domain at each predefined time interval. Standard scheduling rules are known to be limited in satisfying higher Quality of Service (QoS) demands under unpredictable network conditions and dynamic traffic circumstances. This paper proposes an innovative scheduling framework able to select different scheduling rules according to the instantaneous scheduler state, in order to minimize packet delays and packet drop rates for applications with strict QoS requirements. To deal with real-time scheduling, Reinforcement Learning (RL) principles are used to map the scheduling rules to each state and to learn when to apply each. Additionally, neural networks are used as function approximators to cope with the complexity of RL and the very large scheduler state space. Simulation results demonstrate that the proposed framework outperforms conventional scheduling strategies in terms of delay and packet drop rate requirements.
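The core idea of the abstract above, learning which scheduling rule to apply in each scheduler state, can be illustrated with a minimal tabular sketch. This is not the paper's method (which uses neural-network function approximation over a large state space); it is a simplified Q-learning analogue, and the rule names, state labels, and reward convention are all invented for illustration.

```python
import random

# Hypothetical scheduling rules; names are illustrative, not from the paper.
RULES = ["proportional_fair", "earliest_deadline_first", "max_throughput"]

class RuleSelector:
    """Tabular Q-learning sketch: maps a discretised scheduler state to the
    scheduling rule whose learned value (e.g. low delay/drop penalty) is highest.
    The paper replaces this table with a neural network approximator."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}  # (state, rule) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self, state):
        # Epsilon-greedy: occasionally explore a random rule.
        if random.random() < self.epsilon:
            return random.choice(RULES)
        return max(RULES, key=lambda r: self.q.get((state, r), 0.0))

    def update(self, state, rule, reward, next_state):
        # Standard Q-learning backup toward reward plus discounted best next value.
        best_next = max(self.q.get((next_state, r), 0.0) for r in RULES)
        old = self.q.get((state, rule), 0.0)
        self.q[(state, rule)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

At each transmission time interval the scheduler would observe its state, call `select`, apply the chosen rule, and feed back a reward derived from observed delay and drop statistics via `update`.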
Abstract-Deliberate practice is important in many areas, including learning to program computers. However, beliefs about the nature of personal traits, known as mindsets, can have a profound impact on such practice. Previous research has shown that those with a fixed mindset believe their traits cannot change and tend to reduce their level of practice when they encounter difficulty. In contrast, those with a growth mindset believe their traits are flexible and tend to maintain regular practice regardless of the level of difficulty. However, treating mindset as a single construct centred on intelligence may not be appropriate in the field of computer programming. To explore this notion, a self-belief survey was distributed to undergraduate software engineering students. It revealed that beliefs about intelligence and programming aptitude formed two distinct constructs. Furthermore, the mindset for programming aptitude had greater utility in predicting software development practice, and a follow-up survey showed that it became more fixed over the course of instruction. Thus, educators should consider the role of programming-specific beliefs in the design and evaluation of introductory courses in software engineering. In particular, they need to situate and contextualise the growth messages that motivate students who experience early setbacks.
Governments around the globe are striving to provide e-government, online products and services to all the citizens of their respective countries. This has meant a shift in the conventional mode of public service delivery from face-to-face and telephone contact to electronic means. However, not all citizens are making use of these changes, and one demographic group currently attracting immense interest in relation to welfare, health and similar issues is older people. Against this background, the aim of this exploratory and explanatory research is to understand e-government initiatives in the UK, more specifically in London. To conduct this research, a mixed qualitative and quantitative approach was pursued. It was concluded that the benefits of the Internet to many users are relative, depending on the age, perceptions and level of innovativeness of the user. It was learnt that, in terms of quality, local authority websites do contain useful and relevant information for the elderly. However, this information is difficult to access, mainly due to a lack of knowledge or skills in the use of computers or the Internet. From this research, it is expected that a contribution to academia will emerge in the form of a better understanding of issues related to e-government, the digital divide and older citizens. For industry, the contributions of this research are the identification and understanding of issues relating online products and services to the older citizen. For policymakers, this research proffers an understanding of issues related to the demand for and supply of the online products and services that governments currently provide.
It has been widely acknowledged that future networks need to provide significantly more capacity than today's networks in order to deal with the increasing traffic demands of users. Particularly in regions where optical fibre is unlikely to be deployed due to economic constraints, this is a huge challenge. One option to address this issue is to complement existing narrow-band terrestrial networks with additional satellite connections. Satellites cover huge areas, and recent developments have considerably increased the available capacity while costs are decreasing. However, geostationary satellite links have significantly different link characteristics from most terrestrial links, mainly due to the higher signal propagation time, which often renders them unsuitable for delay-intolerant traffic. This article surveys the current state of the art of satellite and terrestrial network convergence. We mainly focus on scenarios in which satellite networks complement existing terrestrial infrastructures, i.e. parallel satellite and terrestrial links exist, in order to provide high-bandwidth connections while ideally achieving an end-user Quality of Experience similar to that of high-bandwidth terrestrial networks. We identify the technical challenges associated with the convergence of satellite and terrestrial networks and analyze the related work. Based on this, we identify four key functional building blocks that are essential to distribute traffic optimally between the terrestrial and satellite networks: the Traffic Requirement Identification function, the Link Characteristics Identification function, the Traffic Engineering function and the Execution function. We then survey current network architectures with respect to these key functional building blocks and perform a gap analysis, which shows that all analyzed network architectures require adaptations to effectively support converged satellite and terrestrial networks.
Hence, we conclude by formulating several open research questions with respect to satellite and terrestrial network convergence.
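The four functional building blocks named in the abstract can be sketched as a simple decision pipeline. The thresholds, link figures and flow attributes below are invented for illustration; the survey itself defines the blocks at an architectural level, not as concrete code.

```python
# Illustrative sketch of the four building blocks; all numbers are assumptions.

def identify_traffic_requirement(flow):
    """Traffic Requirement Identification: classify a flow's delay sensitivity.
    A flow demanding a round trip under 300 ms (assumed cut-off) cannot
    tolerate the GEO satellite propagation delay."""
    return "delay_sensitive" if flow.get("max_delay_ms", 1e9) < 300 else "delay_tolerant"

def identify_link_characteristics():
    """Link Characteristics Identification: measured or assumed link figures.
    GEO round-trip times are on the order of hundreds of milliseconds."""
    return {
        "terrestrial": {"rtt_ms": 40,  "bandwidth_mbps": 10},
        "satellite":   {"rtt_ms": 600, "bandwidth_mbps": 100},
    }

def traffic_engineering(requirement, links):
    """Traffic Engineering: choose the link that best fits the requirement."""
    if requirement == "delay_sensitive":
        return min(links, key=lambda name: links[name]["rtt_ms"])
    return max(links, key=lambda name: links[name]["bandwidth_mbps"])

def execute(flow):
    """Execution: apply the decision, here simply returning the chosen link."""
    requirement = identify_traffic_requirement(flow)
    links = identify_link_characteristics()
    return traffic_engineering(requirement, links)
```

Under these assumptions, a latency-critical flow is steered onto the terrestrial link while bulk traffic is offloaded to the higher-capacity satellite link, which is the traffic-distribution behaviour the building blocks are meant to enable.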
In our study, we explore the human side of the multimedia experience. We propose a model that assesses quality variation from three distinct levels: the network, media and content levels; and from two views: the technical and the user perspective. By facilitating parameter variation at each of the quality levels and from each of the perspectives, we were able to examine their impact on user quality perception. Results show that: a significant reduction in frame rate does not proportionally reduce the user's understanding of the presentation, independent of technical parameters; the type of video clip significantly impacts user information assimilation, user level of enjoyment and user perception of quality; and the display type impacts user information assimilation and user perception of quality. Finally, to ensure transfer of informational content, network parameter variation should be adapted; to maintain user enjoyment, video content variation should be adapted.

Keywords: Quality of Perception, Distributed Multimedia, Quality, User Perspective

Defining Multimedia Quality

Distributed multimedia quality is not defined by a "single monotone dimension"; it is judged instead using numerous factors, which have been shown to influence user criteria concerning presentation excellence, e.g. delay or loss of frames, audio clarity, lip synchronisation during speech, as well as the general relationship between visual and auditory components [2]. As a result, considerable work has been done looking at different aspects of distributed multimedia video quality at many different levels. Due to these multiple influences, the comparable examination of perceived quality becomes complex. To aid this comparison, this paper extends a quality definition model first used by Wikstrand [33] that segregates quality into three discrete levels: the network level, the media level and the content level.
Wikstrand showed that all factors influencing distributed multimedia quality can be categorised by their level of information abstraction. The network-level concerns the transfer of data and all quality issues related to the flow of data around the network. The media-level concerns quality issues relating to the methods used to convert network data into perceptible media information, i.e. the video and audio media. The content-level concerns quality factors that influence how media information is perceived and understood by the end user.

• The network-level is concerned with how data is communicated over the network and includes variation and measurement of parameters including bandwidth, delay, jitter and loss.

• The media-level is concerned with how the media is coded for the transport of information over the network and/or whether the user perceives the video as being of good or bad quality. Media-level parameters include frame rate, bit rate, screen resolution, colour depth and compression techniques.

• The content-level is concerned with how media information is perceived and understood by the end user.
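The three-level taxonomy above can be expressed as a small lookup structure, which makes the parameter groupings concrete. The groupings follow the text; the helper function and its name are purely illustrative.

```python
# Minimal sketch of the three-level quality model; parameter groupings
# follow the text above, the lookup helper is an illustrative addition.
QUALITY_LEVELS = {
    "network": ["bandwidth", "delay", "jitter", "loss"],
    "media":   ["frame_rate", "bit_rate", "screen_resolution",
                "colour_depth", "compression"],
    "content": ["information_assimilation", "enjoyment",
                "perceived_quality"],
}

def level_of(parameter):
    """Return the quality level a given parameter belongs to."""
    for level, params in QUALITY_LEVELS.items():
        if parameter in params:
            return level
    raise KeyError(f"unknown parameter: {parameter}")
```

Such a mapping is useful when comparing studies, since it makes explicit at which information abstraction a manipulated parameter (say, frame rate versus jitter) sits.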