GRANT-SUPPORTED RESEARCH PROJECTS

Monitoring Vocal Tract and Face Movements for 3D Modeling and Animation for Turkish

Obtaining accurate data on the movements and shapes of the vocal tract and the articulators during speech is very important for phonetic studies, and such data forms the basis for the treatment of speech disorders. In recent years, MRI has become an important tool in research on speech production: dynamic information about the vocal tract and the movement of the articulators can be collected with MRI while a person speaks. However, the acoustic noise generated during MRI acquisition makes it difficult to record the acoustic signal (the speaker's voice) and to synchronize it with the MR images. In addition, MRI recordings capture only the vocal tract and the articulators, i.e., structures whose movements are not externally visible during speech. In this study, an MR-compatible microphone, a microphone-camera synchronization set, and an MR-compatible audio system will be used to acquire the acoustic signal simultaneously with the images while suppressing the acoustic noise. By mounting two MR bore cameras on the MRI device, movements of the mouth and the surrounding region will also be recorded simultaneously during speech. In addition, markers will be placed on the subject's face, and facial movements during speech will be recorded under motion capture (MOCAP) cameras. Target Turkish sounds and words will be recorded with MRI and MOCAP while the resulting motion and shape changes are observed. The data from the MRI and MOCAP recordings will then be processed for 3D modeling and animation of the vocal tract and the articulators. The planned 3D modeling and animation of Turkish speech sounds, together with the movement and shape changes revealed during speech, will be a valuable resource on the production of Turkish speech sounds, and the project will serve as a foundation for studies on this and related topics.
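
As a rough illustration of the multimodal alignment step such a pipeline requires, the sketch below (hypothetical sampling rates and a placeholder marker; not the project's actual tooling) resamples a MOCAP marker trajectory onto the timestamps of the synchronized MRI/camera frames:

    import numpy as np

    def resample_to_frames(marker_xyz, mocap_times, frame_times):
        """Linearly resample one MOCAP marker trajectory (N x 3)
        onto the MRI/camera frame timestamps (M,); returns (M, 3)."""
        return np.stack(
            [np.interp(frame_times, mocap_times, marker_xyz[:, k]) for k in range(3)],
            axis=1,
        )

    # Hypothetical rates: MOCAP at 120 Hz, MRI frames at 10 fps, 5 s utterance.
    mocap_times = np.arange(0.0, 5.0, 1.0 / 120.0)   # (600,) marker timestamps
    frame_times = np.arange(0.0, 5.0, 1.0 / 10.0)    # (50,) frame timestamps
    marker = np.random.rand(mocap_times.size, 3)     # placeholder lip-corner marker
    aligned = resample_to_frames(marker, mocap_times, frame_times)  # (50, 3)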

TÜBİTAK 1001

Grant No: 117E183

Principal Investigator: Asst. Prof. Dr. Maviş Emel KULAK KAYIKÇI

Co-Investigators: Prof. Dr. Haşmet GÜRÇAY, Dr. Serdar ARITAN, Dr. Ayça KARAOSMANOĞLU

Enhancing the User Experience of 3D Displayed Virtual Scenes

3D computer graphics has reached a high level of visual quality, and the improvement of 3D graphical image quality continues to be an area of research receiving much attention. Recently, developments in 3D-capable displays and 3D televisions, 3D digital cinema, 3D games, and other 3D applications have significantly increased the emphasis on the creation and processing of stereoscopic 3D content. In parallel with these developments, new techniques to improve the perceived quality of 3D scenes are in high demand.

The core issue in 3D content creation is determining the range of depth that the user can perceive comfortably and maximizing the depth presented within those limits. Recent research has made progress in controlling the perceived depth range in the post-production pipeline. However, unlike the improvements that can be executed during offline production, an interactive environment, where the position and rotation of the camera change dynamically based on user input, calls for scalable stereo camera control systems that run in real time in order to keep the user's perceived depth within the comfortable target range.
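
To make the underlying control problem concrete, the following minimal sketch (a simplification, assuming an off-axis stereo camera model with parallel sensors converging at distance c; the disparity budget and names are illustrative, not the project's method) solves for the interaxial separation and convergence distance that map the scene's current depth range onto a target on-screen disparity range:

    import math

    def comfort_stereo_params(z_near, z_far, d_near, d_far, hfov_deg):
        """Off-axis stereo model: normalized disparity of a point at depth z is
        d(z) = u * (1/c - 1/z), with u = b / (2 * tan(hfov/2)). Disparity is a
        fraction of screen width; d < 0 appears in front of the screen, d > 0
        behind it. Solve d(z_near) = d_near and d(z_far) = d_far for b and c."""
        x = (d_near / z_far - d_far / z_near) / (d_near - d_far)  # x = 1/c
        c = 1.0 / x                                               # convergence distance
        u = d_far / (x - 1.0 / z_far)
        b = 2.0 * u * math.tan(math.radians(hfov_deg) / 2.0)      # interaxial separation
        return b, c

    # Example: scene depth 2..40 m, disparity budget of -1% (near, crossed)
    # to +2% (far, uncrossed) of screen width, 60-degree horizontal FOV.
    b, c = comfort_stereo_params(2.0, 40.0, -0.01, 0.02, 60.0)

In an interactive setting, a controller of this kind would be re-evaluated each frame as the visible depth range changes, which is what makes real-time, scalable operation essential.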

The foremost example of an interactive setting is a 3D game environment, where the stereoscopic output is prone to change very dynamically. Significant difficulties persist in presenting users of these 3D interactive environments with a comfortable and realistic 3D experience. Today, eye strain and headaches are still among the common complaints after extended sessions with 3D games. Accordingly, the expected jump in demand for 3D games has not yet materialized. Here, the most prominent challenge is to adequately apply the principles and limitations of human 3D perception to the display of stereoscopic 3D content.

This project is proposed to advance perceived 3D image quality, enhance depth perception, increase visual comfort, and, consequently, improve the overall user experience in the display of virtual 3D scenes. To this end, it aims, in both interactive and non-interactive environments, to maximize the sense of perceived depth without causing visual discomfort, to identify the sources of visual discomfort and minimize their effects when presenting 3D content on displays of varying scales, and to detect and counteract the triggering mechanisms that lead to virtual reality sickness.

TÜBİTAK 1001

Grant No: 116E280

Principal Investigator: Dr. Ufuk ÇELİKCAN

Co-Investigators: Prof. Dr. Tolga K. ÇAPIN, Prof. Dr. Haşmet GÜRÇAY

Evaluation of the Effectiveness of a Virtual Reality-Based Approach in Hand Hygiene Training Practices

Proper handwashing is the basis of personal hygiene and has become much more vital with the COVID-19 pandemic. In this project, we hypothesize that hand hygiene training supported with a VR-based approach will increase the effectiveness of such training in adult education. To this end, a VR-based training environment will be created, and VR gloves will be used during the training. Exercise- and practice-based learning of all 11 steps recommended by the World Health Organization (WHO) for proper hand hygiene will be carried out. The proposed approach will be examined, in comparison with training based on classical approaches, on a comprehensive sample of professionals working in mass catering systems, and its effectiveness will be evaluated by measuring the participants' bacterial load with microbiological analyses.

TÜBİTAK 1002

Grant No: 220S240

Principal Investigator: Dr. Derya DİKMEN

Co-Investigator: Dr. Ufuk ÇELİKCAN

Using Synthetic Data for Deep Person Re-Identification

Person re-identification is one of the major research topics in computer vision and has seen significant progress with the advent of deep learning. The automation of this task is of utmost importance for visual surveillance systems, where the goal is simply to identify the person in a given query image within a large gallery of images captured across multiple cameras. The rising interest in this topic stems not merely from the fact that it poses a challenging real-world problem but also from the recent introduction, in line with the aforementioned trend, of much larger person re-identification benchmark datasets. While this progress has been significant, the datasets available in the literature still lack the size and variety needed to train deep models effectively.
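
To make the retrieval setting concrete, the minimal sketch below (illustrative only; in practice the embeddings come from a trained deep network rather than random vectors) ranks a gallery by cosine similarity to a query embedding:

    import numpy as np

    def rank_gallery(query_emb, gallery_embs):
        """Rank gallery images by cosine similarity to the query embedding.
        query_emb: (D,), gallery_embs: (N, D). Returns indices, best match first."""
        q = query_emb / np.linalg.norm(query_emb)
        g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
        sims = g @ q                  # (N,) cosine similarities
        return np.argsort(-sims)      # gallery indices in descending similarity

    # Illustrative stand-ins for network outputs (e.g., 512-D embeddings).
    rng = np.random.default_rng(0)
    query = rng.normal(size=512)
    gallery = rng.normal(size=(1000, 512))
    ranking = rank_gallery(query, gallery)  # most similar gallery image first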

The aim of the proposed project is to produce new large-scale datasets that are much larger and more comprehensive than the existing ones and that will allow the construction of more powerful deep models for person re-identification. For this purpose, instead of manually labeling images taken from cameras placed at different locations under specific scenarios, we will investigate novel synthetic data generation methods.

TÜBİTAK 1001

Grant No: 217E029

Principal Investigator: Dr. Erkut ERDEM

Co-Investigators: Dr. Ufuk ÇELİKCAN, Dr. Aykut ERDEM