Abstract—The preservation of temporal dependencies among different media data, such as text, still images, video, and audio, originating from simultaneous distributed sources, is an open research area and an important issue for emerging distributed multimedia systems, such as tele-immersion, telemedicine, and IPTV. Although several works aim to satisfy temporal dependencies in distributed multimedia systems, they are far from resolving the problem. In this paper we propose a logical synchronization model able to specify at runtime any kind of temporal relationship among the distributed multimedia data involved in a temporal scenario. The synchronization model is based on a new concept that we call logical mapping. In general terms, a logical mapping translates a temporal relation based on a timeline so that it is expressed according to its causal dependencies. Logical mappings allow us to avoid global references, such as a wall clock or shared memory. We note that the proposed intermedia synchronization model does not require previous knowledge of when, nor for how long, the media involved in a temporal scenario are executed. Finally, in order to show the viability of the proposed model, a synchronization approach is presented.
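The abstract does not spell out the logical-mapping formalism itself. As a rough illustration of the underlying idea of replacing wall-clock references with causal dependencies, the following minimal sketch uses vector clocks; this is our choice for illustration, not necessarily the paper's mechanism:

```python
# Minimal sketch (not the paper's exact model): re-expressing a timeline
# relation ("frame v1 plays together with audio unit a1") as a causal
# dependency, using vector clocks instead of a wall clock or shared memory.

class MediaSource:
    """One distributed media source; ticks a vector clock per media unit."""
    def __init__(self, pid, n_sources):
        self.pid = pid
        self.clock = [0] * n_sources

    def emit(self, payload):
        self.clock[self.pid] += 1               # local causal event
        return {"payload": payload, "vc": list(self.clock)}

    def deliver(self, unit):
        # merge the causal knowledge carried by the incoming unit
        self.clock = [max(a, b) for a, b in zip(self.clock, unit["vc"])]

def happened_before(u, v):
    """u -> v iff u's vector clock is component-wise <= v's and not equal."""
    return all(a <= b for a, b in zip(u["vc"], v["vc"])) and u["vc"] != v["vc"]

# A "simultaneous" timeline relation becomes a playout rule: render v1 only
# after every unit it causally depends on (here a1) has been delivered.
audio = MediaSource(0, 2)
video = MediaSource(1, 2)
a1 = audio.emit("a1")
video.deliver(a1)            # video frame is produced after seeing a1
v1 = video.emit("v1")
assert happened_before(a1, v1)
```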
Companies, organizations, and individuals use Web services to build complex business functionality. Web services must operate properly over the unreliable Internet infrastructure, even in the presence of failures. To increase system dependability, organizations, including service providers, adapt their systems to the autonomic computing paradigm. Strategies can include one or all of the self-CHOP features (self-configuration, self-healing, self-optimization, and self-protection). Regarding self-healing, a well-established technique is communication-induced checkpointing (CiC), where a checkpoint contains the state (heap, registers, stack, kernel state) of each process in the system. CiC is based on quasi-synchronous checkpointing, in which processes take checkpoints relying on control information piggybacked inside application messages; to avoid dangerous patterns such as Z-paths and Z-cycles, the system takes forced checkpoints and thus prevents inconsistent states. Unlike other tools, CiC does not degrade system performance; our proposal does not incur high overhead (as the results show), and it has the advantage of being scalable. As we have shown in previous work, CiC can be used to address dependability problems when dealing with Web services, since CiC mechanisms work in a distributed and efficient manner. Therefore, in this work we propose an adaptable and dynamic generation of checkpoints to support fault tolerance. We present an alternative that considers Quality of Service (QoS) criteria and the different impact applications have on it, taking checkpoints dynamically in case of failure or QoS degradation. Experimental results show that our approach significantly reduces the number of checkpoints generated compared with several well-known tools in the literature.
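To make the piggybacking-and-forced-checkpoint mechanism concrete, here is a hedged sketch in the spirit of classic index-based CiC protocols: each process piggybacks its checkpoint index on every message, and a receiver takes a forced checkpoint before delivery whenever the piggybacked index exceeds its own, which is one standard way to break Z-cycles. The paper's exact rule may differ:

```python
# Sketch of index-based communication-induced checkpointing (illustrative,
# not the paper's specific algorithm).

class Process:
    def __init__(self, name):
        self.name = name
        self.ckpt_index = 0

    def take_checkpoint(self, forced=False):
        self.ckpt_index += 1
        kind = "forced" if forced else "basic"
        print(f"{self.name}: {kind} checkpoint #{self.ckpt_index}")
        # a real checkpoint would persist heap/registers/stack/kernel state

    def send(self, payload):
        # piggyback control information on the application message
        return {"payload": payload, "ckpt_index": self.ckpt_index}

    def receive(self, msg):
        if msg["ckpt_index"] > self.ckpt_index:
            self.take_checkpoint(forced=True)   # break a dangerous Z-pattern
        # ...deliver msg["payload"] to the application...

p, q = Process("P"), Process("Q")
p.take_checkpoint()           # P takes a basic (local) checkpoint
q.receive(p.send("request"))  # Q is forced to checkpoint before delivery
```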
Abstract: Complex business processes are built on Web services composition (WSC). Services composition dramatically reduces the cost and risk of building new business applications. Although Web services composition has been widely researched, several issues related to dependability still need to be addressed; one primary concern is providing fault handling mechanisms. Over the last decade, diverse works tackling fault tolerance for WSC have appeared, most of them based on the checkpointing paradigm. Some of these works are oriented towards orchestration and others towards choreography. In this work, we present a study of the different checkpointing techniques and their applicability to WSC, considering the different types of faults (e.g., transient, intermittent, and permanent) and modes of recovery (e.g., local vs. global). We introduce a novel taxonomy for fault tolerance mechanisms that groups existing works according to integration approach, fault type, and mode of recovery. We present a study of these works, illustrating their advantages and drawbacks. Finally, the paper presents a discussion and outlines several open challenges regarding fault tolerance for WSC.
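For readers who want the taxonomy's axes in concrete form, the sketch below encodes the three classification dimensions named in the abstract as data. The sample entries are hypothetical placeholders, not works actually surveyed by the paper:

```python
# Illustrative encoding of the survey's taxonomy axes (integration approach,
# fault type, recovery mode); entries are made-up placeholders.
from enum import Enum
from dataclasses import dataclass

class Integration(Enum):
    ORCHESTRATION = "orchestration"
    CHOREOGRAPHY = "choreography"

class FaultType(Enum):
    TRANSIENT = "transient"
    INTERMITTENT = "intermittent"
    PERMANENT = "permanent"

class Recovery(Enum):
    LOCAL = "local"
    GLOBAL = "global"

@dataclass
class Work:
    name: str
    integration: Integration
    faults: set
    recovery: Recovery

works = [
    Work("hypothetical-A", Integration.ORCHESTRATION,
         {FaultType.TRANSIENT}, Recovery.LOCAL),
    Work("hypothetical-B", Integration.CHOREOGRAPHY,
         {FaultType.PERMANENT}, Recovery.GLOBAL),
]

# group existing works along two of the taxonomy axes
by_axis = {}
for w in works:
    by_axis.setdefault((w.integration, w.recovery), []).append(w.name)
print(by_axis)
```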
Logging system activities is required to provide credibility and confidence in the systems used by an organization. Logs in computer systems must be secured even from the root user so that they remain true and fair. This paper introduces RootLogChain, a blockchain-based audit mechanism built upon a security protocol that creates both a root user in a blockchain network and the first log; from there, all root events are stored as logs within a standard blockchain mechanism. RootLogChain provides security constructs so that it can be deployed in a distributed context over a hostile environment, such as the Internet. We have developed a prototype based on a microservice architecture and validated it by executing different stress tests in two scenarios: one with compliant agents and the other without. In these scenarios, several compliant and non-compliant agents try to become root and register events within the blockchain; non-compliant agents simulate eavesdropping entities that do not follow the rules of the protocol. Our experiments show that the mechanism guarantees the creation of one and only one root user and the integrity and authenticity of the transactions, and that it stores all events generated by the root within the blockchain. In addition, for audit purposes, the root can consult the traceability of the transaction logs.
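As a minimal sketch of the underlying idea (not RootLogChain's actual protocol or microservice design): a genesis block establishes one and only one root identity, and every later root event is appended as a hash-chained entry, so tampering with any entry breaks the chain and is detectable on audit. The key name below is a hypothetical placeholder:

```python
# Append-only, hash-chained root-event log (illustrative sketch only).
import hashlib, json, time

def _digest(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class RootLog:
    def __init__(self, root_pubkey):
        # genesis block: the one and only root-creation event
        genesis = {"index": 0, "prev": "0" * 64,
                   "event": {"type": "root-created", "root": root_pubkey},
                   "ts": time.time()}
        self.chain = [dict(genesis, hash=_digest(genesis))]

    def append(self, event):
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1, "prev": prev["hash"],
                 "event": event, "ts": time.time()}
        self.chain.append(dict(block, hash=_digest(block)))

    def verify(self):
        # any modified entry breaks the prev-hash linkage
        for prev, cur in zip(self.chain, self.chain[1:]):
            body = {k: cur[k] for k in ("index", "prev", "event", "ts")}
            if cur["prev"] != prev["hash"] or cur["hash"] != _digest(body):
                return False
        return True

log = RootLog(root_pubkey="hypothetical-key")
log.append({"type": "login", "user": "root"})
assert log.verify()
```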
The electrocardiogram records the heart’s electrical activity and generates a significant amount of data. Analyzing these data helps us detect diseases and disorders through the classification of heart bio-signal abnormalities. In unbalanced-data contexts, where the classes are not equally represented, optimizing and configuring the classification models is highly complex and computationally expensive. Moreover, the performance of electrocardiogram classification depends on the approach and on the parameter estimation used to generate a model with high accuracy, sensitivity, and precision. Previous works have proposed hybrid approaches, but only a few implemented parameter optimization; instead, they generally applied empirical parameter tuning at the data level or the algorithm level. Hence, a scheme that improves sensitivity together with precision and accuracy deserves special attention. In this article, a metaheuristic optimization approach for parameter estimation in arrhythmia classification from unbalanced data is presented. We selected an unbalanced subset of standard electrocardiogram databases to classify eight types of arrhythmia. It is important to highlight that we combined undersampling based on a clustering method (data level) with a feature selection method (algorithm level) to tackle the unbalanced-class problem. To explore parameter estimation and improve the classification of our model, we compared two metaheuristic approaches based on differential evolution and particle swarm optimization. The final results showed an accuracy of 99.95%, an F1 score of 99.88%, a sensitivity of 99.87%, a precision of 99.89%, and a specificity of 99.99%, which are high even in the presence of unbalanced data.
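The following is an illustrative sketch of the two-level strategy, not the paper's exact pipeline: cluster-based undersampling of the majority class (data level) followed by differential evolution searching a classifier's hyperparameters (algorithm level). The SVM and the synthetic dataset are stand-ins; the paper classifies eight arrhythmia types from real ECG data:

```python
# Cluster-based undersampling + differential-evolution parameter search
# (illustrative; SVM and synthetic data are assumptions, not the paper's).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# data level: replace the majority class by k cluster centroids
maj, mino = X[y == 0], X[y == 1]
k = len(mino)
centroids = KMeans(n_clusters=k, n_init=10, random_state=0).fit(maj).cluster_centers_
Xb = np.vstack([centroids, mino])
yb = np.array([0] * k + [1] * len(mino))

# algorithm level: DE minimizes (1 - F1) over log10(C) and log10(gamma)
def objective(params):
    C, gamma = 10 ** params[0], 10 ** params[1]
    clf = SVC(C=C, gamma=gamma)
    return 1 - cross_val_score(clf, Xb, yb, cv=3, scoring="f1_macro").mean()

result = differential_evolution(objective, bounds=[(-2, 3), (-4, 1)],
                                maxiter=10, seed=0)
print("best log10(C), log10(gamma):", result.x, "F1:", 1 - result.fun)
```

Particle swarm optimization, the second metaheuristic compared in the article, would slot into the same objective function in place of differential evolution.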
Cryptography has become one of the vital disciplines for information technologies such as the IoT (Internet of Things), IIoT (Industrial Internet of Things), I4.0 (Industry 4.0), and automotive applications. Some fundamental characteristics required by these applications are confidentiality, authentication, integrity, and non-repudiation, which can be achieved using hash functions. A cryptographic hash function that provides a high level of security is SHA-3. However, in real, modern applications, FPGA-based hardware implementations of hash functions are prone to errors caused by noise and radiation: because of the avalanche effect (diffusion), a change in the state of a single bit can produce a completely different hash output than the expected one, so it is vital to detect and correct any error during algorithm execution. Current hardware solutions mainly seek to detect errors but not to correct them (e.g., using parity checking or scrambling). To the best of our knowledge, there are no solutions that both detect and correct errors for SHA-3 hardware implementations. This article presents the design and a comparative analysis of four FPGA architectures: two without fault tolerance and two with fault tolerance, the latter employing Hamming codes to detect and correct faults for SHA-3 using an encoder and a decoder at the step-mapping function level. Results show that the two fault-tolerant hardware architectures can detect up to a maximum of 120 and 240 errors, respectively, for every run of KECCAK-p, which is considered the worst case. Additionally, the paper provides a comparative analysis of these architectures against other works in the literature in terms of experimental results such as frequency, resources, throughput, and efficiency.
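As a toy illustration of the detect-and-correct principle, here is a Hamming(7,4) encoder/decoder in software. The actual architectures apply Hamming codes to the KECCAK-p state at the step-mapping level in hardware; this sketch only shows how a single flipped bit is located and flipped back:

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits (positions 1,2,4).
def hamming74_encode(d):        # d: 4 data bits as a list
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4           # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4           # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4           # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(c):       # c: 7-bit codeword; corrects one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # position of the erroneous bit
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1              # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]       # recovered data bits

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                              # inject a single-bit fault
assert hamming74_correct(code) == word    # fault detected and corrected
```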
Currently, cryptographic algorithms are widely applied in communication systems to guarantee data security. For instance, in an emerging automotive environment where connectivity is a core part of autonomous and connected cars, it is essential to guarantee secure communications both inside and outside the vehicle. The AES algorithm has been widely applied to protect communications in onboard networks and outside the vehicle. Hardware implementations use techniques such as iterative, parallel, unrolled, and pipelined architectures. Nevertheless, using AES does not by itself guarantee secure communication, because previous works have shown that hardware implementations of secret-key cryptosystems such as AES are sensitive to differential fault analysis. Moreover, it has been demonstrated that even a single fault during encryption or decryption can cause a large number of errors in the encrypted or decrypted data. Although techniques such as iterative and parallel architectures have been explored for fault detection to protect AES encryption and decryption, other techniques such as pipelining remain to be explored. Furthermore, balancing high throughput, low power consumption, and reduced hardware resource usage in a pipelined design is a great challenge, and it becomes harder when fault detection and correction are considered. In this research, we propose a novel hybrid pipeline hardware architecture focused on error and fault detection for the AES cryptographic algorithm. The architecture is hybrid because it combines hardware and time redundancy through a pipeline structure, analyzing and balancing the critical path and distributing the processing elements within each stage. The main contribution is a pipeline structure that ciphers the same data block five times and implements a voting module to determine whether an error occurred or the output carries correct cipher data, optimizing the process with a decision tree to reduce the complexity of evaluating all required combinations. The architecture is analyzed and implemented on several FPGA technologies, and it reports a throughput of 0.479 Gbps and an efficiency of 0.336 Mbps/LUT on a Virtex-7.
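Below is a software model of the redundancy-plus-voting idea only: the same block is encrypted five times, a single-bit fault is injected into one copy, and a majority vote selects the correct ciphertext. The paper's contribution is a hardware pipeline that computes the copies in balanced stages and uses a decision tree in the voter, which this sequential sketch does not capture:

```python
# Five-fold redundant AES encryption with majority voting (illustrative).
from collections import Counter
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, block = os.urandom(16), os.urandom(16)

def aes_encrypt(key, block, fault=False):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ct = enc.update(block) + enc.finalize()
    if fault:                                   # inject a single-bit fault
        ct = bytes([ct[0] ^ 1]) + ct[1:]
    return ct

# five redundant encryptions of the same block; one copy is faulty
copies = [aes_encrypt(key, block, fault=(i == 2)) for i in range(5)]
winner, votes = Counter(copies).most_common(1)[0]
assert votes >= 3, "no majority: uncorrectable fault pattern"
print("voted ciphertext accepted with", votes, "of 5 votes")
```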