GigaIO, a startup developing high-speed interconnect hardware for computing clusters, today announced that it raised $14.7 million in a funding round led by Impact Ventures. The Carlsbad, California-based company says that it'll put the proceeds toward expanding its marketing, sales, partner, and channel teams as well as supporting its product R&D efforts.
It's often difficult to plan IT infrastructure for and around changing AI workflows. For example, training algorithms can require specialized hardware like accelerators, which span GPUs, field programmable gate arrays (FPGAs), and custom chips. Embracing the cloud is one way to achieve scalability, but potentially at a high cost. The other is to make on-premises hardware more flexible by adopting software-defined infrastructure, specifically composable compute, which makes network, storage, and compute resources available over a network.
GigaIO, which Joey Maitra founded in 2012, offers datacenter "fabric" hardware (interconnect switches) that enables composable compute. Maitra, formerly head of engineering at a firm making computer servers and equipment for the military, prototyped GigaIO's first product with an external "box of slots" and a PCI Express (PCIe) connection with cabling to link to it.
"One of the main ways we democratize access to AI and machine learning is by [delivering] the same infrastructure to be used by different teams and departments to run different types of workflows," Maitra told VentureBeat via email. "Infrastructure requirements are very different for [the] data ingest phase versus the data training phase. Without GigaIO, individual systems are [designed] to handle each phase. The data then moves from system to system for data ingest, cleaning and tagging, training, and inference. Resources are idle up to 85% of the time for GPUs ... and require a significant footprint."
GigaIO's fabric product can repurpose one set of datacenter infrastructure "at will" for an AI workflow using the right combination of hardware for the job, according to Maitra. For example, companies using Google's TensorFlow framework for predictive analytics, which requires a specific CPU-GPU ratio and specific types of GPUs, can leverage GigaIO's technology to optimize the allocation of these on-premises resources.
"Our main 'competitor' is the cloud," Maitra said. "The pandemic slowed initial testing that most customers require. However, since the start of the year, we have seen activity increase dramatically, and customer engagement is at record levels."
Maitra declined to name customers. However, GigaIO earlier this year announced that its fabric will make up a part of the forthcoming Prototype National Research Platform (NRP), a computing platform designed by the University of California, San Diego for scientific research. The NRP will be underwritten by a $5 million five-year grant from the National Science Foundation, with matched funding provided for systems operation.
"Complex computational and data workflows underpin many of the scientific research challenges we hope to address with NRP," Dr. Frank Würthwein, interim director of the San Diego Supercomputer Center, said in a statement. "In areas as diverse as public health, high energy physics, and wildfire response, this research requires that we aggregate disparate computational elements, such as FPGAs, GPUs, x86 processors, and storage systems into highly usable and reconfigurable systems. GigaIO's ... technology makes it possible to dynamically bring these elements together in a very low-latency, high-performance interconnect while allowing for distinct, non-interfering workflows to co-exist on the same infrastructure."
To date, 30-employee GigaIO has raised $22.5 million in venture equity financing.