An experimental study, involving measurements with an optical microscope, a profilometer, and a scanning electron microscope, of the surface profile of x-ray tube anodes is presented. The islands of the "mud-flat" surface pattern are separated by cracks approximately 8 μm deep. The surface roughness on the islands is typically below 1 μm, and the ratio of crack area to total surface area is higher in the more extensively used regions (anode aging). A simple model is proposed to calculate the spectrum modification introduced by the rough surface; for the roughest surface at a small emission angle, it predicts a 4% loss of x-ray intensity.
The application of Artificial Intelligence (AI) in the Software Engineering (SE) field tends to lag somewhat behind state-of-the-art research results. While the Generative Pre-trained Transformer (GPT-2) model was published in 2019, only a few recent works have applied it to SE tasks. One such task is Automated Program Repair (APR), where the applied technique should find a fix for software bugs without human intervention. One problem emerges here: creating proper training data is resource intensive and requires several hours of additional work from researchers, because training a model to repair programs automatically requires both the buggy program and the fixed one, at large scale and presumably in an already pre-processed form. Few such databases currently exist, so training and fine-tuning models is not an easy task. In this work we investigate how the GPT-2 model performs when it is not fine-tuned for the APR task, compared to when it is fine-tuned. From previous work we already know that the GPT-2 model can automatically generate patches for buggy programs, although the literature lacks studies where no fine-tuning has taken place. For the experiment we evaluated the GPT-2 model both out of the box and after fine-tuning on 1559 JavaScript code snippets. Based on our results we conclude that although the fine-tuned model learned to write syntactically correct source code on almost every attempt, the non-fine-tuned model lacked some of these positive features.
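The abstract notes that APR fine-tuning needs buggy and fixed programs as paired training data in a pre-processed form. As a minimal sketch of what such preprocessing might look like for a causal language model like GPT-2, the snippet below serializes a buggy/fixed pair into a single training string with a separator token. The `<FIX>` separator, function names, and the toy JavaScript pair are all illustrative assumptions, not the paper's actual pipeline:

```python
# Hedged sketch: one plausible way to serialize buggy/fixed program pairs
# for causal-LM fine-tuning. The <FIX> separator and helper names are
# assumptions for illustration, not the preprocessing used in the paper.

def make_training_example(buggy: str, fixed: str, sep: str = "<FIX>") -> str:
    """Concatenate a buggy snippet and its fix into one training string.

    At inference time the model would be prompted with `buggy + sep` and
    asked to generate the continuation, which is read back as the patch.
    """
    return f"{buggy.strip()}\n{sep}\n{fixed.strip()}"


def extract_patch(generated: str, sep: str = "<FIX>") -> str:
    """Recover the proposed fix from text generated after the separator."""
    _, _, patch = generated.partition(sep)
    return patch.strip()


# Toy JavaScript pair (hypothetical, not from the paper's evaluation set).
buggy = "function add(a, b) { return a - b; }"
fixed = "function add(a, b) { return a + b; }"
example = make_training_example(buggy, fixed)
```

In this scheme, the same `extract_patch` helper would serve both the fine-tuned and the out-of-the-box model, since only the prompt-plus-continuation format differs between them.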