NinjaTech AI Teams Up With AWS to Launch the Next Generation of AI Agents Trained Using Amazon’s AI Chips

  • NinjaTech AI, an SRI-backed generative AI spinout powered by custom Ninja LLMs, has created a multi-agent personal AI that can plan and execute real-world tasks asynchronously on its users’ behalf, such as researching complex, multi-step topics and scheduling meetings.

  • AWS’s Trainium and Inferentia2 machine learning chips enable NinjaTech AI to train and serve AI agents quickly, on demand, and sustainably, delivering the power of generative AI to help everyone be more productive.

NinjaTech AI, a Silicon Valley-based generative AI company on a mission to make everyone more productive by taking care of time-consuming tasks, announced the launch of its new personal AI, Ninja, an evolution beyond AI assistants and co-pilots toward autonomous agents. NinjaTech AI is leveraging Amazon Web Services’ (AWS) purpose-built machine learning (ML) chips, Trainium and Inferentia2, together with Amazon SageMaker, a cloud-based machine learning service, to build, train, and scale custom AI agents that handle complex tasks autonomously, such as conducting research and scheduling meetings. These AI agents bring the power of generative AI to everyday workflows, saving every user time and money. Using AWS’s cloud capabilities, Ninja can run multiple tasks simultaneously, so users can assign new tasks without waiting for existing ones to finish.

“Working with AWS’s Annapurna Labs has been a genuine game-changer for NinjaTech AI. The power and flexibility of Trainium & Inferentia2 chips for our reinforcement-learning AI agents far exceeded our expectations: They integrate easily and can elastically scale to thousands of nodes via Amazon SageMaker,” stated Babak Pahlavan, founder and CEO of NinjaTech AI. “These next-generation AWS-designed chips natively support the larger 70B variants of the latest popular open-source models like Llama 3, while saving us up to 80% in total costs and giving us 60% more energy efficiency compared to similar GPUs. In addition to the technology itself, the collaborative technical support from the AWS team has made an enormous difference as we build deep tech.”


AI agents operate using highly customized large language models (LLMs) that are refined with a variety of techniques, such as reinforcement learning, to deliver accuracy and speed. Successful development of AI agents requires affordable, elastic chips tuned specifically for reinforcement learning, a difficult and costly challenge for startups given today’s GPU scarcity, GPU inelasticity, and premium compute costs. AWS has addressed this challenge for the AI agent ecosystem with its chip technology, which enables rapid training bursts that scale to thousands of nodes as required per training cycle. Combined with Amazon SageMaker, which makes it easy to leverage open-source models, training AI agents is now fast, flexible, and affordable.

“AI agents are rapidly emerging as the next generation of productivity tools that will transform how we work, collaborate, and learn. NinjaTech AI has enabled fast, accurate, and cost-effective agents that customers can quickly scale using AWS Trainium and Inferentia2 AI chips,” said Gadi Hutt, senior director, Annapurna Labs at AWS. “We’re excited to help the NinjaTech AI team bring autonomous agents to the market, while also advancing AWS’s commitment to empower open-source ML and popular frameworks like PyTorch and JAX.”

NinjaTech AI has trained its models with AWS Trainium (Amazon EC2 Trn1 instances) and is serving them using AWS Inferentia2 (Amazon EC2 Inf2 instances). Trainium powers high-performance compute clusters on AWS for training LLMs faster and at a lower cost, while using less energy. The Inferentia2 chip enables models to generate inferences faster and at a much lower cost, with up to 40% better price performance.
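The train-on-Trn1, serve-on-Inf2 split described above can be sketched as a small phase-to-instance-type mapping. The EC2 instance type names below are real AWS offerings (Trn1 for Trainium, Inf2 for Inferentia2), but the `job_config` helper and its config shape are purely illustrative assumptions, not NinjaTech AI's actual code:

```python
# Illustrative sketch of picking an AWS accelerator instance type per
# workload phase. Instance type names are real EC2/SageMaker types; the
# helper itself is a hypothetical example, not NinjaTech AI's code.

# Map each phase to the chip family described in the article.
PHASE_TO_INSTANCE = {
    "train": "ml.trn1.32xlarge",  # AWS Trainium (Amazon EC2 Trn1)
    "serve": "ml.inf2.48xlarge",  # AWS Inferentia2 (Amazon EC2 Inf2)
}

def job_config(phase: str, node_count: int) -> dict:
    """Build a minimal, hypothetical job description for a given phase."""
    if phase not in PHASE_TO_INSTANCE:
        raise ValueError(f"unknown phase: {phase!r}")
    if node_count < 1:
        raise ValueError("node_count must be at least 1")
    return {
        "instance_type": PHASE_TO_INSTANCE[phase],
        "instance_count": node_count,
    }

if __name__ == "__main__":
    # A burst training cluster vs. a small always-on serving fleet.
    print(job_config("train", 16))
    print(job_config("serve", 2))
```

In a real deployment these values would feed something like a SageMaker training-job or endpoint definition; the point here is simply that training bursts and steady-state inference target different chip families.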


“Our collaboration with AWS has been critical to accelerating our ability to develop a truly novel generative AI-based planner and action engine, which are vital to building state-of-the-art AI agents. Because we needed the most elastic and highest-performing chips with incredible accuracy and speed, our decision to train and deploy Ninja on Trainium and Inferentia2 chips made perfect sense,” added Pahlavan. “Every generative AI company should be considering AWS if they want access to on-demand AI chips with incredible flexibility and speed.”

Users can access Ninja by visiting myninja.ai. Starting today, Ninja offers four conversational AI agents capable of multi-step, real-time web research, scheduling meetings with internal and external parties via email, helping with coding tasks, and drafting emails and offering advice. Ninja also offers easy access to side-by-side comparisons of results from world-class models by companies such as OpenAI, Anthropic, and Google. Finally, Ninja’s asynchronous infrastructure lets users tackle a nearly unlimited number of tasks at once. Ninja will improve as customers use it, making them more productive in their day-to-day lives.

