Tachyum Integrates IP From World’s Leading Vendors for Tape-Out In 2022
Tachyum™ today announced new IP suppliers that have provided the company with critical IP components needed to bring Prodigy™ to commercial markets in 2022. They include Alphawave and Rambus.
Alphawave is a global leader in high-speed connectivity for the world’s technology infrastructure. Its IP solutions meet the needs of global tier-one customers in data centers, compute, networking, AI, 5G, autonomous vehicles and storage. Tachyum will leverage Alphawave’s AlphaCORE Long-Reach (LR) Multi-Standard SerDes (MSS) IP, a high-performance, low-power, DSP-based PHY supporting speeds up to 112 Gbps. Alphawave also delivers complete Ethernet and PCIe IP subsystems.
Rambus is a premier provider of chips and silicon intellectual property (SIP), specializing in high-speed interconnects that support multi-gigabit rates (2.5G, 5G, 8G, 16G, 25G, 32G, 64G) and protocols such as PCI Express® (PCIe®), CXL™ and CCIX™. Its industry-proven solutions are highly configurable and flexible, and Tachyum expects its long-term partnership with Rambus to scale to PCIe 6.0 and beyond.
“It’s been said that you are often judged by the company that you keep, so it is vitally important that we only surround ourselves with the best in order to be the best,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “We work with partners like Alphawave and Rambus because of their industry-proven solutions, smooth IP integration, and the ability to execute on schedule and target in a much shorter time. By leveraging their IP as part of our Prodigy solution, we will be able to deliver the world’s first universal processor to market faster so that we can best address the growing needs of AI, HPC and hyperscale data centers.”
Tachyum’s Prodigy processor can run HPC applications, convolutional AI, explainable AI, general AI, bio AI, and spiking neural networks, plus normal data center workloads, on a single homogeneous processor platform, using existing standard programming models. Without Prodigy, hyperscale data centers must use a combination of disparate CPU, GPU and TPU hardware for these different workloads, creating inefficiency, expense, and the complexity of separate supply and maintenance infrastructures. Using specific hardware dedicated to each type of workload (e.g., data center, AI, HPC) results in underutilization of hardware resources and more challenging programming, support and maintenance. Prodigy’s ability to seamlessly switch among these various workloads dramatically changes the competitive landscape and the economics of data centers.