The Facebook parent said that it is working on a new AI-optimized data center design and the second phase of its 16,000-GPU supercomputer for AI research.

Facebook parent company Meta has revealed plans to develop its own custom chip for running artificial intelligence models, along with a new data center architecture for AI workloads.

“We are executing on an ambitious plan to build the next generation of Meta’s AI infrastructure and today, we’re sharing some details on our progress. This includes our first custom silicon chip for running AI models, a new AI-optimized data center design and the second phase of our 16,000 GPU supercomputer for AI research,” Santosh Janardhan, head of infrastructure at Meta, wrote in a blog post Thursday.

Meta’s custom chip for running AI models, called the Meta Training and Inference Accelerator (MTIA), is designed to provide greater compute power and efficiency than CPUs on the market today, according to Janardhan. MTIA is customized for internal workloads such as content understanding, feeds, generative AI, and ad ranking, the company said, adding that the first version of the chip was designed in 2020.

Meta’s announcement of the strides it is making toward its own custom AI chips comes at a time when other large technology companies, driven by the proliferation of large language models and generative AI, are either working on or have already launched their own chips for AI workloads. Earlier this month, news reports claimed that Microsoft was working with chipmaker AMD to develop its own chip for running AI workloads. AWS has also released its own chips for running AI workloads.

For its part, Meta also said Thursday that its new data center design will be optimized for training AI models, a process that improves their performance as they ingest more data.
“This new data center will be an AI-optimized design, supporting liquid-cooled AI hardware and a high-performance AI network connecting thousands of AI chips together for data center-scale AI training clusters,” Janardhan wrote, adding that the new data center will be faster and more cost-effective to build than earlier facilities.

In addition to the new data center design, the company said it is working on AI supercomputers that will support the training of next-generation AI models, power augmented reality tools, and support real-time translation technology.