India’s startup Bud Ecosystem pioneers generative AI without GPUs

Trivandrum: To address the rising cost and sustainability challenges associated with generative AI, India’s GenAI research lab Bud Ecosystem has introduced a new product, Bud Runtime. Bud Runtime simplifies the deployment of generative AI applications and can run on CPU-based infrastructure, giving organizations a more affordable, scalable, and sustainable way to leverage the power of generative AI.
Since late 2023, Bud Ecosystem has been working with companies such as Intel, Microsoft and Infosys to commoditize generative AI and make it accessible to organizations around the world. Bud Runtime significantly reduces capital and operating expenses for organizations adopting generative AI without compromising application performance, enabling developers, startups, enterprises and research institutions to launch their generative AI applications in production for $200 a month.
In addition to CPU-based inference, Bud Runtime supports accelerators from major vendors such as Nvidia, Intel, AMD and Huawei, including GPUs, HPUs, TPUs and NPUs. One of Bud Runtime’s key innovations is its support for heterogeneous cluster parallelism: organizations can pool a mix of their existing hardware, including CPUs, GPUs, HPUs and other architectures, to deploy generative AI workloads, and can scale out easily as more compute resources become available. This enables organizations to work around GPU shortages and reduces the cost of running generative AI applications. Bud Runtime is currently the only platform on the market that provides this level of heterogeneous hardware parallelism and clustering.
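Bud Runtime’s interfaces are not described in this article, but the general idea behind heterogeneous cluster parallelism can be sketched in a few lines: route inference requests across a mixed pool of devices, weighting each worker by its measured throughput. The worker names and classes below are hypothetical illustrations of the technique, not Bud Runtime’s actual API.

```python
# A minimal, hypothetical sketch of heterogeneous cluster parallelism:
# inference requests are routed across mixed hardware, with each worker
# weighted by its measured throughput. Names are illustrative only and
# are not Bud Runtime's actual API.
import random
from dataclasses import dataclass


@dataclass
class Worker:
    name: str              # e.g. "xeon-0", "gaudi-0" (hypothetical node names)
    device: str            # "cpu", "hpu", "gpu", "npu", ...
    tokens_per_sec: float  # measured throughput for the served model


class HeterogeneousRouter:
    def __init__(self, workers: list[Worker]):
        self.workers = workers

    def pick(self) -> Worker:
        # Weight workers by throughput: slower CPU nodes still contribute,
        # while faster accelerators absorb proportionally more traffic.
        weights = [w.tokens_per_sec for w in self.workers]
        return random.choices(self.workers, weights=weights, k=1)[0]


cluster = HeterogeneousRouter([
    Worker("xeon-0", "cpu", tokens_per_sec=40.0),
    Worker("gaudi-0", "hpu", tokens_per_sec=220.0),
    Worker("a100-0", "gpu", tokens_per_sec=300.0),
])

for request_id in range(5):
    w = cluster.pick()
    print(f"request {request_id} -> {w.name} ({w.device})")
```

A production scheduler would also account for queue depth, memory headroom and model placement, but even this throughput-weighted split shows how idle CPUs can absorb load alongside whatever accelerators are on hand, easing GPU scarcity.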
“We started our GenAI journey in early 2023 and quickly ran into the high cost of GPUs. To address this, we built the first version of Bud Runtime to run smaller models on existing infrastructure. Since then, we have evolved it to support mid-size models on CPUs and to be compatible with hardware from Nvidia, AMD, Intel, Huawei and more. We saw others facing similar barriers, and we decided to productize the technology to help startups, enterprises and researchers adopt GenAI more effectively,” said Jithin VG, CEO of Bud Ecosystem.
Bud Ecosystem focuses on foundational AI research, especially efficient transformer architectures for low-resource scenarios, decentralized models, hybrid inference and AI inference optimization. The company has published several research papers and released more than 20 open-source models. Bud was also the first in India to build a large language model with performance on par with GPT-3.5 at the time.
For the past 18 months, Bud Ecosystem has been working with Intel to make GenAI inference production-ready on CPUs, especially Intel’s Xeon lineup. The collaboration was later expanded to support Intel Gaudi accelerators. Beyond this partnership, the research lab has teamed up with global technology companies such as Microsoft, LTIM and Infosys to help organizations around the world adopt generative AI in a cost-efficient and scalable way.
“Our mission is to democratize GenAI through commoditization, which is possible only if we can use commodity hardware at scale for GenAI. To achieve this, we need to further improve inference technology and develop better model architectures that require less parallel compute and memory bandwidth. Most of our research and engineering work is focused on these tasks, and we plan to make all of our products open, starting next month,” said Linson Joseph, CSO of Bud Ecosystem.
Generative AI has been one of the fastest-rising technologies of recent years, yet it remains very expensive for companies to adopt; currently only large companies can afford to adopt and experiment with it. Furthermore, a persistent scarcity of GPUs further limits accessibility. Even among those who do adopt it, projects often stall at the minimum viable product (MVP) stage and rarely advance to full production deployment. This is where Bud Runtime proves its value to enterprises seeking cost-effective generative AI adoption.