It provides Ar-Core with access to the robot's subunits (embedded software, image processing, speech, and sensing). By bridging the embedded software and Ar-Core, it enables synchronized management and control of all limbs simultaneously. Its image processing algorithms run concurrently on the graphics card, producing real-time results faster than human reaction time. They perform human detection, recognition, and tracking, as well as object detection, recognition, and tracking. In this way, the robot recognizes people and stores those it meets in a database that it shares with other robots. The robot also forms the concepts of dark and light by measuring the ambient light intensity, and it processes map data of the environment with image processing algorithms to give directions in real time. This eliminates the need for humans to mark prohibited areas and walking paths on the map manually; the robot can do so within seconds.
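The formation of dark and light concepts from a light-intensity reading could be sketched as a simple threshold classifier; the threshold value, normalization range, and function name below are illustrative assumptions, not details from the source.

```python
# Hypothetical sketch: mapping a measured ambient light intensity to the
# "dark"/"light" concepts the text describes. The 0.35 cutoff and the
# normalized [0, 1] input range are assumptions for illustration.

DARK_THRESHOLD = 0.35  # assumed cutoff on a normalized intensity scale


def classify_light(intensity: float) -> str:
    """Map a normalized light reading in [0, 1] to a concept label."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be normalized to [0, 1]")
    return "dark" if intensity < DARK_THRESHOLD else "light"
```

In a real system the reading would come from the sensing subunit and the threshold would likely be calibrated per environment rather than fixed.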
Thanks to speech recognition and speech synthesis algorithms, it can hold a dialogue with people. Sounds perceived from the external environment are first converted into text by the speech recognition algorithm. The resulting text is then processed to identify concepts such as command, question, subject, object, place, and direction in the sentence, and these are transmitted to Ar-Core. The system passes the text produced by Ar-Core through speech synthesis algorithms to the sound card, allowing the robot to speak.
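The concept-extraction step between recognition and Ar-Core could be sketched as below; the keyword lists, function name, and naive object heuristic are illustrative assumptions, not the system's actual method.

```python
# Minimal sketch of the middle stage of the dialogue pipeline: recognized
# speech arrives as text, and concepts (command, direction, object) are
# extracted before being handed to Ar-Core. The vocabularies and the
# "last leftover word is the object" rule are assumptions for illustration.

COMMANDS = {"go", "bring", "stop", "turn"}
DIRECTIONS = {"left", "right", "forward", "back"}


def extract_concepts(text: str) -> dict:
    """Pull command/direction/object candidates out of an utterance."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    concepts = {"command": None, "direction": None, "object": None}
    for w in words:
        if w in COMMANDS and concepts["command"] is None:
            concepts["command"] = w
        elif w in DIRECTIONS and concepts["direction"] is None:
            concepts["direction"] = w
    # Naive heuristic: treat the last non-keyword word as the object.
    leftovers = [w for w in words if w not in COMMANDS | DIRECTIONS]
    if leftovers:
        concepts["object"] = leftovers[-1]
    return concepts
```

A production parser would use proper part-of-speech tagging or a grammar rather than keyword matching, but the data flow (text in, labeled concepts out to the core) follows the description above.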