Esperanto Technologies Introduces First Generative AI Appliance Based on RISC-V, Enabling Developers to Easily Create and Deploy Purpose-Built Vertical Applications

Enables Fast and Secure Deployment of Fine-Tuned Business Applications Including Summarization, Coding, Query and Image Generation Based on the Latest Open-Source Generative AI Models

MOUNTAIN VIEW, Calif. -- Esperanto Technologies™, the leading developer of high-performance, energy-efficient artificial intelligence (AI) and high-performance computing (HPC) solutions based on the RISC-V instruction set, today announced the industry’s first Generative AI Appliance based on RISC-V technology. Esperanto’s Data Science team contributed heavily to its design, which targets customers who want to develop and deploy business applications quickly using the latest open-source Generative AI foundation models. Esperanto’s Generative AI Appliance is an integrated software/hardware solution that can be installed in private datacenters or at the enterprise edge in an industry-standard server form factor. Because it is preloaded and self-contained, it delivers a high level of data privacy and a lower total cost of ownership (TCO) while eliminating the need for developers to constantly download, port and tune the latest Large Language Models (LLMs) and Diffusion Models on expensive GPU-based hardware.

Esperanto’s new appliance is ideal for organizations that want to leverage Generative AI technology to create custom applications, initially around information summarization, organizational data and knowledge query, computer code generation and translation, and image generation. Esperanto’s Data Science and Software teams designed it to support a variety of application user interfaces and to output text, computer programs and images, and the company is continually expanding the set of supported LLMs and Diffusion models as new ones are made public. Examples of industries that can benefit from Esperanto’s new solution include the healthcare and legal professions, which require quick and accurate summaries of complex descriptions while maintaining data privacy, and the financial industry, which can translate its legacy code base to more modern and maintainable programming languages.
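As a rough illustration of the summarization workloads described above, the sketch below prompts an open-source chat model of the kind named later in this release through the open-source Hugging Face transformers library. This is not Esperanto’s appliance interface, which is not detailed in this announcement; the model identifier, prompt and generation settings are illustrative assumptions only.

```python
# Minimal sketch of an open-source-LLM summarization call (illustrative only;
# this uses the generic Hugging Face "transformers" API, not Esperanto's appliance API).
from transformers import pipeline

# "lmsys/vicuna-7b-v1.5" is one publicly available Vicuna checkpoint,
# used here purely as an example identifier.
generator = pipeline("text-generation", model="lmsys/vicuna-7b-v1.5")

prompt = (
    "Summarize the following case notes in two sentences, preserving key facts:\n\n"
    "<confidential document text would go here>"
)

# Deterministic decoding keeps summaries repeatable; parameter values are illustrative.
output = generator(prompt, max_new_tokens=128, do_sample=False)
print(output[0]["generated_text"])
```

Running such a model on-premises, rather than through a hosted API, is what keeps the source documents inside the organization’s own infrastructure.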

“Generative AI is revolutionizing the way we create and summarize content, generate and translate computer code, and produce visual and video content. However, creating and deploying LLM-based applications typically requires large teams of data scientists, long development times and expensive, hard-to-obtain GPU-based platforms. This can make Generative AI strategies impractical for most organizations today,” said Art Swift, president and CEO at Esperanto Technologies. “Esperanto recognizes these challenges and has developed its new Generative AI Appliance, which combines its advanced RISC-V hardware with pretrained LLMs to deliver high accuracy, much faster development and strong data privacy.”

Esperanto’s Generative AI Appliance is currently running the latest LLMs and image generation models such as LLaMA 2, Vicuna, StarCoder, OpenJourney and Stable Diffusion, and the company's strategy is to continuously update the system with the latest versions of popular open-source models as soon as they are released.
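For context on the image-generation side, the short sketch below shows how Stable Diffusion, one of the models listed above, is commonly invoked with the open-source diffusers library. The checkpoint name, prompt and output path are illustrative, and nothing here reflects the appliance’s own serving software.

```python
# Minimal Stable Diffusion sketch using the open-source "diffusers" library
# (illustrative only; not the appliance's serving stack).
from diffusers import StableDiffusionPipeline

# "stabilityai/stable-diffusion-2-1" is a public checkpoint, used here as an example.
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
# On the appliance, execution would target Esperanto's accelerators; here the
# pipeline simply runs on whatever default device the library selects.

image = pipe("an architectural rendering of a modern data center at dusk").images[0]
image.save("example.png")
```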

“We are in the early stages of a multi-year super cycle for merchant ASICs, driven by the adoption of Generative AI, an increase in AI training, significant growth of AI inferencing, and HPC workflows,” said Ben Bajarin, CEO and principal analyst at Creative Strategies, Inc. “We are forecasting an Enterprise Edge infrastructure refresh as companies look to run more AI and HPC workloads on-prem for cost, privacy, and data sovereignty reasons. In addition, energy efficiency is a growing priority, so offerings like Esperanto’s that have a strong dollar-per-watt value are well positioned.”

“The market is trending toward smaller LLM and diffusion models – 30 billion parameters and below – driven by the need to reduce the high cost of inference on very large models,” said Karl Freund, founder and principal analyst at Cambrian-AI Research. “These models are trained to be highly accurate with much lower training and inference costs. There is a lot of money to be made in this space, and inference solutions like Esperanto’s Generative AI Appliance should save customers significant costs versus GPU-based systems.”

Esperanto’s Generative AI Appliance, built on the company’s currently shipping ET-SoC-1 AI accelerator chips, can run up to four LLMs simultaneously. It is delivered in a standard 2U rack-mounted chassis and is available now directly from Esperanto.

To request additional details and pricing, please visit www.esperanto.ai/contact.

About Esperanto Technologies:

Esperanto Technologies develops massively parallel, high-performance, energy-efficient computing solutions for Generative AI, other AI, and massively parallel HPC workloads, based on the open standard RISC-V instruction set architecture. Esperanto is headquartered in Mountain View, California with additional engineering sites in Portland, Oregon; Austin, Texas; Barcelona, Spain; and Belgrade, Serbia. For more information, please visit https://www.esperanto.ai/

Contacts

Craig Cochran
Phone: (408)507-1816
Email: newsroom@esperantotech.com
