MediaTek, the Taiwanese chipmaker, has partnered with Meta Platforms Inc. to bring generative AI applications to a range of edge devices, including smartphones, IoT devices, smart home products, and vehicles. The collaboration aims to build a comprehensive edge computing ecosystem that accelerates the development of AI applications on these devices.
MediaTek, whose chips power more than 2 billion edge devices annually, plans to pair Meta's Llama 2 large language model (LLM) with its AI processing units (APUs) and NeuroPilot AI platform. By integrating these technologies, MediaTek aims to run generative AI applications directly on-device, offering advantages such as seamless performance, stronger privacy and security, lower latency, and the ability to operate in areas with limited connectivity.
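To make the on-device idea concrete, the sketch below shows how a quantized Llama 2 model can be loaded and queried entirely from local storage using the open-source llama-cpp-python bindings. This is only an illustration under stated assumptions: MediaTek's NeuroPilot and APU software stack is not shown here, the model file path is a placeholder, and the generation parameters are arbitrary.

```python
# Minimal sketch: local, offline text generation with a quantized Llama 2 model.
# llama-cpp-python stands in for a vendor runtime such as NeuroPilot, whose APIs
# are not part of this example.
from llama_cpp import Llama

# Load the model from local storage; no network or cloud call is made.
llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local checkpoint
    n_ctx=2048,   # context window size
    n_threads=4,  # CPU threads available on the device
)

# Run inference directly on the device.
result = llm(
    "Summarize the benefits of on-device AI in one sentence.",
    max_tokens=64,
    temperature=0.7,
)

print(result["choices"][0]["text"])
```

Because the model weights and computation stay on the device, this style of deployment avoids round trips to a cloud service, which is the latency and privacy argument behind the partnership.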
Today, most generative AI processing happens in the cloud. With MediaTek's adoption of Llama 2 models, these applications will be able to run directly on-device, removing the dependence on cloud resources. The chipmaker plans to build smartphone AI applications based on Llama 2, powered by its next-generation flagship processor. The updated APU adds transformer backbone acceleration and reduced footprint access, and the new chipset, optimized for Llama 2, is expected to be available by the end of 2023. According to JC Hsu, Corporate Senior Vice President and General Manager of MediaTek's Wireless Communications Business Unit, generative AI is vital to the digital transformation, and MediaTek has joined forces with Meta to support the Llama 2 development community.