ISSN: 2088-8708 Medication dispenser with touch screen and consumption … (Santiago Linder Rubiños Jimenez) 2635

“…combining and organizing previously annotated image data with the you only look once v4 tiny (YOLOv4-tiny) framework, obtaining a mean average precision (mAP) of 98.33%, or that of [8], which used three object detection models to identify pills, namely RetinaNet, single-shot multi-box detector (SSD), and YOLOv3, obtaining mAP values of 82.89%, 82.71%, and 80.69%, respectively. Other pillboxes address the authentication of drug consumption, as in [9], which ran object detection locally on a Raspberry Pi using a YOLOv3 model applied to the captured images and trained to detect human hands; the bounding-box coordinates of the hand in the image frame were used to authenticate drug consumption, combined with internet of things (IoT) technologies to develop cloud-based monitoring systems [10]. However, most dispensers focus only on the recognition of medications, supporting the patient in selecting the correct container or pill at the time of consumption, but leave aside the authentication of medication consumption. This means that, as in the first antecedent, it is not possible to know whether the patient has actually consumed the medication, or, as in the second antecedent, consumption is determined only from the position of the hand, which can yield low accuracy, since several other activities can be confused with medication intake, such as yawning, stretching, or waving [11]; nor is there a system for sending photos to the caregiver or family member so that they can verify whether the intake was performed correctly.…”
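The hand-based authentication idea attributed to [9] can be sketched as follows. This is a hypothetical illustration, not the cited implementation: it assumes a per-frame hand bounding box (e.g., from a YOLOv3 detector) and a fixed mouth region in the frame, and flags consumption only when the hand overlaps the mouth region for several consecutive frames, which helps reject gestures such as waving. The function names, thresholds, and frame counts are illustrative assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def authenticate_intake(hand_boxes, mouth_box, min_frames=5, iou_thresh=0.3):
    """Flag intake only after the detected hand overlaps the mouth region
    for `min_frames` consecutive frames (hypothetical parameters).

    hand_boxes: per-frame hand bounding box, or None when no hand is detected.
    """
    streak = 0
    for box in hand_boxes:
        if box is not None and iou(box, mouth_box) >= iou_thresh:
            streak += 1
            if streak >= min_frames:
                return True
        else:
            streak = 0
    return False

# A hand held at the side of the frame (e.g., waving) never triggers
# authentication, while a sustained hand-to-mouth overlap does.
mouth = (100, 50, 160, 110)
near = (110, 60, 170, 120)   # overlaps the mouth region
far = (300, 200, 360, 260)   # far from the mouth
print(authenticate_intake([near] * 6, mouth))  # True
print(authenticate_intake([far] * 6, mouth))   # False
```

Requiring a sustained overlap rather than a single-frame detection is one simple way to reduce the confusion with transient gestures noted in [11], though the cited systems may use different criteria.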