Predictive & Analytic AI for Visual Data
Through our "WeKare" product, we process the precise position and gaze point of a user's eyes, head, and body from relatively low-resolution video. This data is then used to estimate the risk of a large battery of chronic illnesses and ailments, primarily those affecting the neurological system.
Another application of our visual AI is the recognition of handwritten characters in the structured formats found on logistics transport slips. Our current technology focuses on import/export address slips, automatically converting handwritten addresses to digital form.
Our predictive health solution is built on precise tracking of eye position and movement. The technology maintains very high accuracy even with comparatively low-resolution video, which lets us bring precise eye tracking to everyday devices such as smartphones and web cameras.
Low-cost gaze-point tracking is our key value proposition and is being used in several ongoing trials.
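To make the idea concrete, here is a minimal, hypothetical sketch of one piece of low-cost gaze tracking: fitting a per-axis linear calibration that maps detected pupil offsets to on-screen gaze points. The linear model and sample values are illustrative assumptions, not GoQba's actual algorithm.

```python
# Hypothetical sketch: map pupil-center offsets (from low-res video frames)
# to screen coordinates via a per-axis least-squares calibration.
# The linear model and calibration data are illustrative, not GoQba's method.

def fit_axis(offsets, screen_coords):
    """Fit screen = a * offset + b by ordinary least squares."""
    n = len(offsets)
    mean_o = sum(offsets) / n
    mean_s = sum(screen_coords) / n
    cov = sum((o - mean_o) * (s - mean_s) for o, s in zip(offsets, screen_coords))
    var = sum((o - mean_o) ** 2 for o in offsets)
    a = cov / var
    b = mean_s - a * mean_o
    return a, b

def calibrate(samples):
    """samples: list of ((pupil_dx, pupil_dy), (screen_x, screen_y)) pairs."""
    ax, bx = fit_axis([p[0] for p, _ in samples], [s[0] for _, s in samples])
    ay, by = fit_axis([p[1] for p, _ in samples], [s[1] for _, s in samples])
    return lambda dx, dy: (ax * dx + bx, ay * dy + by)

# Example: the user looks at three known calibration targets on a 1920x1080 screen.
gaze = calibrate([((-0.2, -0.1), (0, 0)),
                  ((0.0, 0.0), (960, 540)),
                  ((0.2, 0.1), (1920, 1080))])
```

In practice a calibrated mapping like this would sit downstream of a learned pupil detector; the sketch only shows the final offset-to-screen step.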
GoQba was formed through a collaboration with the Korea National University Medical Center Neurological Research Team, where we created an initial prototype algorithm for fast, high-accuracy object detection using cutting-edge deep learning. Soon after, GoQba entered Google's Developer Launchpad and presented the technology at major exhibitions such as Web Summit and TechCrunch Disrupt.
Now, we are focused on bringing our AI technology to other areas such as logistics (ICR / OCR) and healthcare (gaze-point tracking / OCR).
Video is becoming ubiquitous in the modern era. Our vision is to build and train AI that creates actionable insights directly from visual data. The big problems we focus on:
▶ Using gaze-point tracking to identify neurological problems early
▶ Automating the conversion of handwritten text to data with cutting-edge OCR and ICR AI
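The second problem above can be pictured as a simple pipeline: segment the address field into character cells, classify each cell, and join the results. The sketch below uses a stand-in lookup table where a trained ICR model would go; all names and the toy "image" encoding are hypothetical.

```python
# Hypothetical sketch of a handwritten-address pipeline: segment, classify,
# join. The dictionary "model" is a stub for a trained ICR network.

def segment_field(field_image, cell_width):
    """Split the address field into fixed-width character cells."""
    return [field_image[i:i + cell_width]
            for i in range(0, len(field_image), cell_width)]

def classify_cell(cell, model):
    """Look up the cell in a stub model; a real ICR net would score it."""
    return model.get(cell, "?")   # "?" flags low-confidence cells for review

def read_address(field_image, cell_width, model):
    cells = segment_field(field_image, cell_width)
    return "".join(classify_cell(c, model) for c in cells)

# Toy example: a "field image" encoded as a string of glyph codes.
stub_model = {"g1": "1", "g2": "2", "g3": "A"}
print(read_address("g1g2g3", 2, stub_model))   # → "12A"
```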
We build AI that uses convolutional neural networks and deep learning to extract insights, predictions, and usable data from comparatively low-resolution video.
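The core operation behind such convolutional networks can be sketched in a few lines: slide a small kernel over a grayscale frame to produce a feature map. Real models stack many learned kernels; this minimal example hand-codes a single vertical-edge detector and is not GoQba's production model.

```python
# Minimal sketch of the convolution at the heart of a CNN: slide a 3x3
# kernel over a 2D grayscale frame to produce a feature map (valid mode,
# no padding, stride 1, as is conventional in CNN literature).

def conv2d(frame, kernel):
    """Valid-mode 2D convolution of a 2D list by a 3x3 kernel."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            acc = sum(frame[y + i][x + j] * kernel[i][j]
                      for i in range(3) for j in range(3))
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds where intensity changes left to right.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# A 4x4 frame: dark on the left, bright on the right.
frame = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]

feature_map = conv2d(frame, edge_kernel)   # strong response everywhere
```

A trained network learns kernels like this automatically, stacking dozens of them per layer, which is what lets it stay accurate even on low-resolution input.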
In some cases we also apply multi-modal data for more accurate outcomes: our predictive "WeKare" product combines baseline body data such as heart rate, blood sugar level, and blood pressure with precise head, eye, and body tracking to produce superior predictions and diagnostics.
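A common pattern for this kind of multi-modal fusion is to normalize each signal to a comparable scale, concatenate the results into one feature vector, and score it. The sketch below follows that pattern with a placeholder linear model; the feature names, physiological ranges, and weights are illustrative assumptions, not WeKare's actual clinical model.

```python
# Hypothetical sketch of multi-modal fusion: normalize vitals and tracking
# features, concatenate them, and score with a placeholder linear model.
# Ranges and weights are illustrative, not WeKare's clinical parameters.

def normalize(value, low, high):
    """Scale a raw reading into [0, 1] given its expected range."""
    return (value - low) / (high - low)

def fuse(vitals, tracking):
    """Concatenate normalized vitals with gaze/head tracking features."""
    return [
        normalize(vitals["heart_rate"], 40, 180),      # bpm
        normalize(vitals["blood_sugar"], 70, 200),     # mg/dL
        normalize(vitals["systolic_bp"], 90, 180),     # mmHg
        tracking["gaze_jitter"],                       # already in [0, 1]
        tracking["head_sway"],                         # already in [0, 1]
    ]

def risk_score(features, weights):
    """Placeholder linear risk model: weighted sum of fused features."""
    return sum(f * w for f, w in zip(features, weights))

vec = fuse({"heart_rate": 110, "blood_sugar": 135, "systolic_bp": 135},
           {"gaze_jitter": 0.4, "head_sway": 0.2})
score = risk_score(vec, [0.2, 0.2, 0.2, 0.2, 0.2])
```

The design point is that each modality contributes on an equal footing after normalization, so a learned model downstream can weigh vitals against tracking signals rather than being dominated by whichever has the largest raw units.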