As relics of history, ancient copper inscriptions are found in many countries. The images and letterforms on these inscriptions carry very high historical value. Age and environmental factors have damaged the inscription surfaces and degraded the appearance of the images and letters. In this paper, we describe a novel segmentation methodology based on multi-texture features for severely damaged ancient copper inscriptions. Letter segmentation on ancient copper inscriptions using the proposed method achieves an average accuracy of 90%. Based on these results, the proposed method is suitable for letter segmentation of ancient copper inscriptions.
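The abstract does not specify which texture features the method combines, so the following is only a minimal single-texture sketch: local variance over a sliding window is one common texture cue, and thresholding it separates high-texture letter strokes from a flat, damaged background. The window size `k` and threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def local_variance(img, k=5):
    """Local variance in a k x k window — one simple texture feature."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mean = windows.mean(axis=(-1, -2))
    sq_mean = (windows ** 2).mean(axis=(-1, -2))
    return sq_mean - mean ** 2

def segment_letters(img, thresh):
    """Binary mask: high-variance (textured) regions taken as letter strokes."""
    return local_variance(img) > thresh
```

A multi-texture method in the paper's sense would compute several such feature maps and combine them before thresholding; this sketch shows only the basic mechanism.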
ABSTRACT
Recently, artificial intelligence has become feasible to apply in robotics, specifically for controlling robot movements based on image processing. This research develops a mobile robot equipped with a catadioptric camera that provides a 360° field of view. The captured images are converted from RGB to HSV and then refined with morphological operations. The relation between the distance read by the camera (in pixels) and the actual distance (in cm) is computed using the Euclidean distance. These values serve as the distance features on which the system is trained. The system built in this study runs 1,000,000 training iterations, with a linearity of R² = 0.9982 and a prediction accuracy of 99.03%.
Keywords: Robot, HSV, Euclidean Distance, Catadioptric Camera, Artificial Neural Network
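The pipeline above maps a Euclidean pixel distance (from the image centre of the catadioptric camera to the detected object) onto a real-world distance in cm. The paper trains an artificial neural network for this mapping; as a minimal sketch, a least-squares line stands in for the network, and the calibration pairs below are hypothetical, not the paper's data.

```python
import numpy as np

def pixel_distance(center, obj):
    """Euclidean distance (pixels) from the image centre to the object centroid."""
    return float(np.hypot(obj[0] - center[0], obj[1] - center[1]))

# Hypothetical calibration pairs: (pixel distance, measured distance in cm).
pixels = np.array([40.0, 80.0, 120.0, 160.0, 200.0])
cm = np.array([25.0, 50.0, 75.0, 100.0, 125.0])

# The paper trains an ANN on such pairs; a least-squares fit is the
# simplest stand-in for the learned pixel-to-cm mapping.
slope, intercept = np.polyfit(pixels, cm, 1)

def predict_cm(px):
    """Predict real-world distance (cm) from a pixel distance."""
    return slope * px + intercept
```

With real calibration data, the reported R² = 0.9982 suggests the mapping is close to linear, which is why even this simple fit illustrates the idea.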
<p class="Abstrak">Deteksi lokasi diri atau lokalisasi diri adalah salah satu kemampuan yang harus dimiliki oleh <em>mobile robot</em>. Kemampuan lokalisasi diri digunakan untuk menentukan posisi robot di suatu daerah dan sebagai referensi untuk menentukan arah perjalanan selanjutnya. Dalam penelitian ini, lokalisasi robot didasarkan pada data citra yang ditangkap oleh kamera <em>omnidirectional</em> tipe <em>catadioptric</em>. Jumlah fitur terdekat antara citra 360<sup>o</sup> yang ditangkap oleh kamera Omni dan citra referensi menjadi dasar untuk menentukan prediksi lokasi. Ekstraksi fitur gambar menggunakan metode Speeded-Up Robust Features (SURF). Kontribusi pertama dari penelitian ini adalah optimasi akurasi deteksi dengan memilih nilai <em>Hessian Threshold</em> dan jarak maksimum fitur yang tepat. Kontribusi kedua optimasi waktu deteksi menggunakan metode yang diusulkan. Metode ini hanya menggunakan fitur 3 gambar referensi berdasarkan hasil deteksi sebelumnya. Optimasi waktu deteksi, untuk lintasan dengan 28 gambar referensi, dapat mempersingkat waktu deteksi sebesar 8,72 kali. Pengujian metode yang diusulkan dilakukan menggunakan <em>omnidirectional mobile robot</em> yang berjalan di suatu daerah. Pengujian dilakukan dengan menggunakan metode <em>recall</em>, presisi, akurasi, <em>F-measure</em>, <em>G-measure</em>, dan waktu deteksi. Pengujian deteksi lokasi juga dilakukan berdasarkan metode SIFT untuk dibandingkan dengan metode yang diusulkan. Berdasarkan pengujian, kinerja metode yang diusulkan lebih baik daripada SIFT untuk pengukuran dengan recall 89,67%, akurasi 99,59%, <em>F-measure</em> 93,58%, <em>G-measure</em> 93,87%, dan waktu deteksi 0,365 detik. Metode SIFT hanya lebih baik pada presisi 98,74%.</p><p class="Abstrak"> </p><p class="Abstrak"><em><strong>Abstract</strong></em></p><p class="Abstract"><em>Self-location detection or self-localization is one of the capabilities that must be possessed by the mobile robot. 
The self-localization ability is used to determine the robot position in an area and as a reference to determine the next trip direction. In this research, robot localization was by vision-data based, which was captured by catadioptric-types omnidirectional cameras. The number of closest features between the 360<sup>o</sup> image captured by the Omni camera and the reference image was the basis for determining location predictions. Image feature extraction uses the Speeded-Up Robust Features (SURF) method. The first contribution of this research is the optimization of detection accuracy by selecting the Hessian Threshold value and the maximum distance of the right features. The second contribution is the optimization of detection time using the proposed method. This method uses only the features of 3 reference images based on the previous detection results. Optimization of detection time, for trajectories with 28 reference images, can shorten the detection time by 8.72 times. Testing the proposed method was done using an omnidirectional mobile robot that walks in an area. Tests carried out using the method of recall, precision, accuracy, F-measure, G-measure, and detection time. Location detection testing was also done based on the SIFT method to be compared with the proposed method. Based on testing, the proposed method performance is better than SIFT for measurements with recall 89.67%, accuracy 99.59%, F-measure 93.58%, G-measure 93.87%, and detection time 0.365 seconds. The SIFT method is only better at precision 98.74%.</em></p><p class="Abstrak"><em><strong><br /></strong></em></p>
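The speed-up described above comes from matching the query image's features against only the 3 reference images around the previous detection, instead of all 28. A minimal sketch of that search, with plain nearest-neighbour descriptor matching standing in for SURF matching (the function names and the `max_dist` value are illustrative assumptions):

```python
import numpy as np

def count_matches(query, ref, max_dist):
    """Count query descriptors whose nearest ref descriptor lies within max_dist."""
    # Pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(query[:, None, :] - ref[None, :, :], axis=-1)
    return int((d.min(axis=1) < max_dist).sum())

def predict_location(query, refs, prev, max_dist=0.3):
    """Predicted location = reference image with the most feature matches,
    searching only the 3 references around the previous location."""
    candidates = [i % len(refs) for i in range(prev - 1, prev + 2)]
    scores = {i: count_matches(query, refs[i], max_dist) for i in candidates}
    return max(scores, key=scores.get)
```

Restricting the search this way cuts the per-frame matching cost roughly in proportion to the number of references skipped, which is consistent with the reported 8.72x speed-up on a 28-image trajectory.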
Safety should be the top priority for any automaker, because traffic accidents kill roughly 1.4 million people worldwide each year, ranking tenth on the World Health Organization's list of leading causes of death. Two decades ago, the focus was on passive safety, which helps vehicle occupants survive a crash. However, the frontier in safety innovation has moved beyond airbags and side-impact protection: today it is active safety, which prevents collisions before they occur. In the Euro NCAP 2025 Roadmap, this active safety frontier falls under primary safety and has become one of the overall safety rating initiatives toward safer cars. Primary safety features four technologies to be assessed: driver monitoring (2020), automatic emergency steering (2020, 2022), autonomous emergency braking (2020, 2022), and V2X (2024). This initiative is only partially encapsulated in the ASEAN NCAP Roadmap 2021-2025, under the 'Safety Assist' technological feature; in the new roadmap, ASEAN NCAP focuses only on Auto Emergency Braking (AEB) technology. AEB is a feature that alerts drivers to an imminent crash and helps them use the car's maximum braking capacity. Therefore, benchmarked against Euro NCAP, this paper comprehensively reviews AES demand, assessments, control, and testing methodology, which can be further developed to consolidate the ASEAN NCAP safety rating schemes.
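AEB systems of the kind discussed above typically stage their response on an estimated time-to-collision (TTC = gap / closing speed): warn the driver first, then brake autonomously if the threat persists. The sketch below illustrates that staging only; the threshold values are illustrative assumptions, not figures from any NCAP protocol.

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Time-to-collision in seconds; infinite when the gap is not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def aeb_stage(gap_m, closing_speed_mps, warn_ttc=2.6, brake_ttc=1.4):
    """Staged AEB response: warn the driver first, then brake autonomously."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc <= brake_ttc:
        return "brake"
    if ttc <= warn_ttc:
        return "warn"
    return "ok"
```

Real systems fuse radar/camera tracks and account for driver reaction time and road friction, but the warn-then-brake escalation is the core control idea.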
The Boerka goat is a crossbreed of the Australian Boer goat and the Indonesian local Kacang goat. The Boerka goat produces meat better than the Kacang goat and is better suited to the Indonesian climate than the Boer goat. To develop an optimal Boerka population, the crossbreeding process between successive generations has to be observed very carefully. To support that purpose, a registration system for newborn livestock and a pedigree data management system are needed; these help the breeder find a suitable mate for each goat. Each newborn goatling is fitted with an RFID-integrated electronic ear tag, and this unique ID number is sent to an online database server together with the necessary data about the kid. The data are managed online through a website operated by the breeders. In addition to livestock data, the website also provides information about the goat farm, the owner, and the migration or transaction of livestock from one farm to another or out of the goat-farming community.
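The pedigree records described above make it possible to check whether two candidate animals share a recorded ancestor before mating them. The abstract does not describe the database schema, so the sketch below is only a hypothetical in-memory model keyed by the RFID ear-tag number.

```python
def register(herd, rfid, sire=None, dam=None, farm=None):
    """Add a newborn's record, keyed by its RFID ear-tag number."""
    herd[rfid] = {"sire": sire, "dam": dam, "farm": farm}

def ancestors(herd, rfid):
    """Set of all recorded ancestor RFIDs, walked recursively."""
    out = set()
    rec = herd.get(rfid)
    if rec:
        for parent in (rec["sire"], rec["dam"]):
            if parent:
                out.add(parent)
                out |= ancestors(herd, parent)
    return out

def related(herd, a, b):
    """True if the two animals share any recorded ancestor (or one descends from the other)."""
    return bool((ancestors(herd, a) | {a}) & (ancestors(herd, b) | {b}))
```

In the described system the same lookup would run against the online database rather than a local dictionary.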
Traffic accidents are one of the negative impacts of technological progress in transportation. The human factor has one of the highest shares in the rise of traffic accidents, and one example of a human factor is fatigue while driving. Driving fatigue causes drowsiness, so to minimize fatigue-related accidents, a system is needed that uses an alarm to detect drowsy eyes in real time. One previous study detected drowsy eyes using a color segmentation method. In this research, a drowsy-eye detection system is built based on facial landmark detection using the Regression Trees method, implemented on a Raspberry Pi 3 Model B. The input of the system is video recorded by a PiCamera in real time, and the output is a buzzer used as an alarm to warn the driver when drowsiness is detected. The system built in this study detects drowsy eyes well: based on the test results, it detects the eyes with 93.3% accuracy, eye blinks with 96.7% accuracy, and the face tilt angle with 95% accuracy.
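Facial-landmark drowsiness detectors commonly derive an eye aspect ratio (EAR) from the six eye landmarks and flag drowsiness when the EAR stays below a threshold for many consecutive frames. The abstract does not state the exact decision rule, so the formula and thresholds below are the commonly used ones, given here as an assumed sketch.

```python
import math

def ear(eye):
    """Eye aspect ratio from 6 eye landmarks (p1..p6, dlib-style ordering):
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Drops toward 0 as the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def is_drowsy(ear_values, thresh=0.25, min_frames=15):
    """Flag drowsiness when EAR stays below thresh for min_frames consecutive frames."""
    run = 0
    for v in ear_values:
        run = run + 1 if v < thresh else 0
        if run >= min_frames:
            return True
    return False
```

In the described system the per-frame landmarks would come from the regression-trees detector running on the PiCamera stream, and a positive `is_drowsy` result would drive the buzzer.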