November 10, 2022

NeuReality lands $35M to bring AI accelerator chips to market

The growing demand for AI, particularly generative AI (i.e., AI that generates images, text and more), is supercharging the AI inferencing chip market. Inferencing chips accelerate AI inference, the process by which AI systems generate outputs (e.g., text, images, audio) based on what they learned while “training” on a specific set of data. AI inferencing chips can be — and have been — used to yield faster generations from systems such as Stable Diffusion, which translates text prompts into artwork, and OpenAI’s GPT-3, which extends a few lines of prose into full-length poems, essays and more.
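
To make the training/inference split concrete, here is a minimal sketch of an inference call against a pretrained Stable Diffusion model using Hugging Face's diffusers library. The checkpoint name and GPU assumption are illustrative; the forward pass at the end is the step that dedicated inferencing chips aim to speed up.

```python
# Minimal sketch: running inference (not training) on a pretrained
# text-to-image model. Assumes the Hugging Face `diffusers` library,
# PyTorch and a CUDA GPU; the checkpoint name is illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Load weights produced by training; inference only reads them.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a dedicated accelerator would take over here

# This forward pass is the "inferencing" that accelerator chips target.
image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```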

A number of vendors — both startups and well-established players — are actively developing and selling access to AI inferencing chips. There’s Hailo, Mythic and Flex Logix, to name a few upstarts. And on the incumbent side, Google’s competing for dominance with its tensor processing units (TPUs) while Amazon’s betting on Inferentia. But the competition, while fierce, hasn’t scared away firms like NeuReality, which occupy the AI inferencing chip market but aim to differentiate themselves by offering a suite of software and services to support their hardware.

On the subject, NeuReality today announced that it raised $35 million in a Series A funding round led by Samsung Ventures, Cardumen Capital, Varana Capital, OurCrowd and XT Hi-Tech with participation from SK Hynix, Cleveland Avenue, Korean Investment Partners, StoneBridge, and Glory Ventures. Co-founder and CEO Moshe Tanach tells TechCrunch that the tranche will be put toward finalizing the design of NeuReality’s flagship AI inferencing chip in early 2023 and shipping it to customers.

“NeuReality was founded with the vision to build a new generation of AI inferencing solutions that are unleashed from traditional CPU-centric architectures and deliver high performance and low latency, with the best possible efficiency in cost and power consumption,” Tanach told TechCrunch via email. “Most companies that can leverage AI don’t have the funds nor the huge R&D that Amazon, Meta and other huge companies investing in AI have. NeuReality will bring AI tech to anyone who wants to deploy easily and affordably.”

NeuReality was co-founded in 2019 by Tzvika Shmueli, Yossi Kasus and Tanach, who previously served as a director of engineering at Marvell and Intel. Shmueli was formerly the VP of back-end infrastructure at Mellanox Technologies and the VP of engineering at Habana Labs. As for Kasus, he held a senior director of engineering role at Mellanox and was the head of integrations at semiconductor company EZchip.

From the start, NeuReality focused on bringing to market AI hardware for cloud data centers and “edge” computers, or machines that run on-premises and do most of their data processing offline. Tanach says that the startup’s current-generation product lineup, the Network Attached Processing Unit (NAPU), is optimized for AI inference applications, including computer vision (think algorithms that recognize objects in photos), natural language processing (text-generating and classifying systems) and recommendation engines (like the type that suggest products on e-commerce sites).

NeuReality’s NAPU is essentially a hybrid of multiple types of processors. It can perform functions like AI inferencing load balancing, job scheduling and queue management, tasks that have traditionally been handled in software, and not always efficiently.
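
For context, the host-side plumbing being offloaded often looks like the sketch below: CPU software that queues incoming requests and dispatches them to whichever accelerator is free. This is a generic, hypothetical illustration of software-based queue management and load balancing, not NeuReality's implementation; all names are invented.

```python
# Hypothetical sketch of CPU-side inference queueing and load balancing,
# the kind of work a NAPU-style design claims to move into hardware.
import queue
import threading

NUM_ACCELERATORS = 4  # pretend devices; stand-ins for real hardware
request_queue = queue.Queue()

def run_on_accelerator(device_id, request):
    # Stand-in for a real forward pass on accelerator `device_id`.
    return f"result({request}) from device {device_id}"

def worker(device_id):
    # Each worker drains the shared queue; load balancing falls out of
    # whichever device happens to be free, all coordinated in host software.
    while True:
        request = request_queue.get()
        if request is None:  # shutdown sentinel
            request_queue.task_done()
            break
        print(run_on_accelerator(device_id, request))
        request_queue.task_done()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_ACCELERATORS)]
for t in threads:
    t.start()

for i in range(12):
    request_queue.put(f"inference-request-{i}")
for _ in range(NUM_ACCELERATORS):
    request_queue.put(None)  # one sentinel per worker

for t in threads:
    t.join()
```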

NeuReality’s NR1, an FPGA-based SKU within the NAPU family, is a network-attached “server on a chip” with an embedded AI inferencing accelerator along with networking and virtualization capabilities. NeuReality also offers the NR1-M module, a PCIe card containing an NR1 that functions as a network-attached inference server, and a separate module, the NR1-S, which combines several NR1-Ms into a single inference system.

On the software side, NeuReality delivers a set of tools, including a software development kit for cloud and local workloads, a deployment manager to help with runtime issues and a monitoring dashboard.

“The software for AI inference [and] the tools for heterogeneous compute and automated flow of compilation and deployment … is the magic that supports our innovative hardware approach,” Tanach said. “The first beneficiaries of the NAPU technology are enterprises and cloud solution providers that need infrastructure to support their chatbots, voice bots, automatic transcriptions and sentiment analysis as well as computer vision use cases for document scans, defect detection, etc. … While the world was focusing on the deep learning processor improvements, NeuReality focused on optimizing the system around it and the software layers above it to provide higher efficiency and a much easier flow to deploy inference.”

NeuReality, it must be noted, has yet to back up some of its performance claims with empirical evidence. It told ZDNet in a recent article that it estimates its hardware will deliver a 15x improvement in performance per dollar compared with the GPUs and ASICs currently offered by deep learning accelerator vendors, but it hasn’t released benchmark data to validate that figure. The startup also hasn’t detailed its proprietary networking protocol, which it has previously claimed outperforms existing solutions.

Those items aside, delivering hardware at massive scale isn’t easy — particularly when it involves custom AI inferencing chips. But Tanach argues that NeuReality has laid the necessary groundwork, partnering with AMD-owned semiconductor manufacturer Xilinx for production and inking a partnership with IBM to work on hardware requirements for the NR1. (IBM, which is also a NeuReality design partner, previously said it’s “evaluating” the startup’s products for use in the IBM cloud.) NeuReality has been shipping prototypes to partners since May 2021, Tanach says.

According to Tanach, beyond IBM, NeuReality is working with Lenovo, AMD and unnamed cloud solution providers, system integrators, deep learning accelerator vendors and “inference-consuming” enterprises on deployments. Tanach declined, however, to reveal how many customers the startup currently has or roughly what it’s projecting in terms of revenue.

“We see that the pandemic is slowing companies down and pushing for consolidation between the many deep learning vendors. However, for us it doesn’t change anything, since late next year or sometime through 2024 inference deployment is expected to explode — and our technology is exactly the enabler and driver of that growth,” Tanach said. “The NAPU will bring AI for a broader set of less technical companies. It is also set to allow large-scale users such as ‘hyperscalers’ and next-wave data center customers to support their growing scale of AI usage.”

Ori Kirshner, the head of Samsung Ventures in Israel, added in an emailed statement: “We see substantial and immediate need for higher efficiency and easy-to-deploy inference solutions for data centers and on-premises use cases, and this is why we are investing in NeuReality. The company’s innovative disaggregation, data movement and processing technologies improve computation flows, compute-storage flows, and in-storage compute — all of which are critical for the ability to adopt and grow AI solutions.”

NeuReality, which currently has 40 employees, plans to hire 20 more over the next two fiscal quarters. To date, it’s raised $38 million in venture capital.
