SEOUL — South Korea's LG Group and Seoul National University have opened a joint artificial intelligence research center to accelerate their study of large-scale multimodal AI systems.
A multimodal AI system uses processing algorithms to learn and understand the connections between different types of data, including text, voice, images, and other digital data. Such a system can also mimic the way humans think, drawing on multiple sources of information to understand its environment.
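To illustrate the idea in the simplest possible terms, the toy sketch below shows one common pattern behind multimodal systems: each input type is encoded into a numeric vector, and the vectors are fused into a single joint representation. Everything here (the function names, the trivial "encoders", the concatenation fusion) is a hypothetical illustration, not a description of LG's actual technology.

```python
# Toy sketch of multimodal fusion. Real systems use large learned encoders
# (e.g. transformers for text, vision models for images); these stand-ins
# only show the overall data flow: encode each modality, then combine.

def embed_text(text: str) -> list[float]:
    # Hypothetical text encoder: character count and word count as "features".
    return [float(len(text)), float(text.count(" ") + 1)]

def embed_image(pixels: list[int]) -> list[float]:
    # Hypothetical image encoder: mean and peak brightness as "features".
    return [sum(pixels) / len(pixels), float(max(pixels))]

def fuse(text: str, pixels: list[int]) -> list[float]:
    # Simplest fusion strategy: concatenate the per-modality vectors into
    # one joint vector that a downstream model could reason over.
    return embed_text(text) + embed_image(pixels)

joint = fuse("a photo of a cat", [0, 128, 255, 64])
print(joint)  # one vector carrying features from both modalities
```

The fused vector is what lets a single model relate, say, a caption to the image it describes; production systems replace the toy encoders with networks trained on large paired datasets.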
LG AI Research and Seoul National University (SNU) jointly opened the "SNU-LG AI Research Center" to research large-scale multimodal AI capable of creating three-dimensional bodies in virtual space and processing two-dimensional data such as text and images. "Through this joint research center, we have laid the foundation for researching large-scale commercial AI technologies," SNU Professor Choi Hae-cheon said in a statement on April 26.
The joint AI research center will focus on eight core AI technology projects, including an AI system capable of studying human language and answering questions based on self-learned data, a multimodal AI system capable of understanding emotional cues as well as linguistic and image data, and an AI algorithm designed not to learn biased information about people's race, age, or sex.
LG AI Research unveiled "EXAONE", a hyper-scale AI, in December 2021. The lab also launched an alliance called the "Expert AI Alliances" with Google and other companies to create an ecosystem of hyper-scale AI that can mimic the way humans think. The Expert AI Alliances brings together 13 founding members: LG AI Research, Google, Elsevier, Shutterstock, VA Corporation, EBS, Woori Bank, two South Korean university hospitals, and four LG Group units.
© Aju Business Daily & www.ajunews.com Copyright: Nothing on this site may be reproduced, distributed, transmitted, displayed, published or broadcast without the permission of Aju News Corporation.