PassiveLogic Advances Differentiable Swift Compiler to Deliver Record-Setting AI Speeds

Latest optimizations to Differentiable Swift enable lightning-fast AI, unlocking true autonomy for any automated system at the edge.

SALT LAKE CITY — Sept. 17, 2024 — PassiveLogic, creator of the first generative autonomy platform for autonomous infrastructural robots, announced today that it has once again shattered industry benchmarks with its groundbreaking advancements in the Differentiable Swift toolchain, delivering unprecedented AI performance speeds that pave the way for true autonomy. PassiveLogic’s Differentiable Swift compiler is hundreds of times faster than Google’s TensorFlow and thousands of times faster than Meta’s PyTorch. When paired with NVIDIA’s Jetson Orin processor, it functions as the fastest general-purpose AI compiler toolchain available. Together, Differentiable Swift and the NVIDIA Orin deliver powerful compute and training at the edge, enabling real-time processing on resource-constrained devices like smartphones, IoT systems, and personal computers.

PassiveLogic’s extensive work to advance Differentiable Swift has set a new speed precedent for AI compute, outperforming Google’s TensorFlow by 683x and Meta’s PyTorch by 3,610x. This builds on PassiveLogic’s previously announced work on Differentiable Swift, yielding a 740 percent speed increase over its 2023 release. Details regarding the benchmark are available in this article, and PassiveLogic’s open-source Differentiable Swift documentation can be found on GitHub.

Differentiable Swift delivers the fastest AI model training performance available. For the first time, software can use automatic differentiation at runtime, not just during training, which dramatically improves computational efficiency and speed. Fast differentiation enables new kinds of multi-dimensional AI models that would otherwise require a combination of resource-intensive deep learning models to solve. The performance increase also allows AI models to learn hundreds of times faster, meaning end-user deployments can train on and learn from their own real-time data. This is a significant advancement: traditional models are trained on pre-existing data, which is often generic and outdated, rather than on current data reflective of the environment in which the model operates. Models that learn from live data can assess new information, adjust decision-making, and optimize output to better align with the intended goals. Real-time processing at the edge enables true autonomy.
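
For illustration only, here is a minimal sketch of what runtime automatic differentiation looks like in Differentiable Swift, using the open-source _Differentiation module; the heatFlow function and its values are hypothetical and are not part of PassiveLogic’s products.

import _Differentiation

// A simple physics-style function, marked differentiable so the compiler
// generates its derivative alongside the original code.
@differentiable(reverse)
func heatFlow(conductance: Double, deltaT: Double) -> Double {
    conductance * deltaT
}

// At runtime, the gradient with respect to `conductance` can be requested
// at the current operating point, with no separate training framework.
let dFlowDConductance = gradient(at: 0.8) { c in
    heatFlow(conductance: c, deltaT: 4.5)
}
print(dFlowDConductance)  // 4.5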

“The limits of our autonomous future are bound by the speed and efficiency of our foundational technologies,” said Troy Harvey, CEO of PassiveLogic. “PassiveLogic has embraced this challenge and pushed the limits of AI performance to build the world’s fastest and most energy-efficient compiler toolchain. These breakthroughs enable us to pursue truly autonomous systems.”

Foundational edge compute technology is critical for an autonomous future; by definition, no system that relies on cloud compute can be truly autonomous. Embedded applications like robotics and logistics cannot be wholly reliant on data centers. Autonomous systems at the edge must balance energy efficiency, speed, and memory utilization to make real-time decisions. PassiveLogic’s unprecedented speed translates to significant energy savings, which reduces operational costs. PassiveLogic’s Differentiable Swift toolchain, in combination with NVIDIA’s Jetson Orin processor, balances all three needs to deliver superior compute performance for heterogeneous AI models.

AI’s overall compute efficiency is determined largely by several factors, including the speed of the model and the processor. In embedded applications, efficiency is even more important given the limited capability of edge processors. The Orin processor that powers the PassiveLogic Hive has eight ARM A78 cores working in conjunction with 1,024 GPU cores and 32 Tensor Cores to deliver 100 trillion operations per second of raw compute power. PassiveLogic’s latest optimization benchmark used functional operations for comparison: Differentiable Swift clocked 42,974,432 Ops/sec, compared with TensorFlow at 62,845 Ops/sec and PyTorch at 11,902 Ops/sec.
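
For context on what an ops-per-second comparison of this kind might involve, the hypothetical harness below (a sketch, not PassiveLogic’s actual benchmark code) times how many gradient evaluations of a small differentiable function complete in one second.

import _Differentiation
import Foundation

// Hypothetical workload: a tiny differentiable function built from
// functional arithmetic operations.
@differentiable(reverse)
func model(_ x: Double) -> Double {
    x * x * x + 3 * x
}

let start = Date()
var evaluations = 0
var sink = 0.0  // accumulate results so the work is not optimized away
while Date().timeIntervalSince(start) < 1.0 {
    sink += gradient(at: Double(evaluations % 100) + 1, of: model)
    evaluations += 1
}
print("~\(evaluations) gradient evaluations/sec (checksum \(sink))")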

PassiveLogic has built a first-of-its-kind general-purpose AI compiler toolchain that is incredibly fast and efficient, opening the door to novel AI applications including edge processing and robotics. Differentiable Swift unlocks automatic differentiation at runtime, not just during training, a drastic improvement that enables resource-constrained devices to perform compute-intensive actions immediately and respond to variables in real time. Additionally, for the first time, AI can use gradient descent to learn in situ, beyond the training period. This can be leveraged to support new forms of AI including heterogeneous compute, ontological AI (digital twins), graph neural nets, and continuous learning.
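
As a rough sketch of what in-situ learning with gradient descent can look like in Differentiable Swift (the ValveModel type, loss function, and readings below are invented for the example and are not PassiveLogic APIs):

import _Differentiation

// A one-parameter model whose gain is adjusted against live readings.
struct ValveModel: Differentiable {
    var gain: Double = 1.0

    @differentiable(reverse)
    func callAsFunction(_ demand: Double) -> Double {
        gain * demand
    }
}

// Squared-error loss between the model's prediction and a measurement.
@differentiable(reverse)
func loss(_ model: ValveModel, demand: Double, measured: Double) -> Double {
    let error = model(demand) - measured
    return error * error
}

var model = ValveModel()
let learningRate = 0.05
// Hypothetical live (demand, measured flow) pairs arriving at the edge.
let readings: [(Double, Double)] = [(1.0, 2.1), (0.5, 0.9), (2.0, 4.2)]

// One gradient-descent step per reading: learning continues in deployment.
for (demand, measured) in readings {
    let grad = gradient(at: model) { m in
        loss(m, demand: demand, measured: measured)
    }
    model.gain -= learningRate * grad.gain
}
print("learned gain: \(model.gain)")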

PassiveLogic’s advancements in Differentiable Swift are the result of collaboration with the Swift Core Team and ongoing work with the open-source Swift community. As a contributor to the Swift language, the PassiveLogic team has submitted thousands of commits and provided 33 patches and feature merges since August 2023.

PassiveLogic’s breakthrough Differentiable Swift speed ushers in a new era of AI, enabling true autonomy through real-time learning.

About PassiveLogic

PassiveLogic enables autonomy for controlled systems and unlocks collaboration between teams to manage those systems. PassiveLogic has reimagined how we design, build, operate, maintain, and manage infrastructural robots, whose current technology has remained unchanged for decades. By using revolutionary physics-based Quantum digital twins and leveraging the world’s fastest AI compiler to simulate future-forward controls, PassiveLogic empowers users to easily create their own generative digital twins in minutes to launch autonomous control. This control optimizes for energy use, equipment longevity, and occupant comfort levels in real time for the system’s lifetime. Autonomous control lays the foundation for decarbonization at scale and enables truly smart, connected cities. PassiveLogic is backed by leading investors including nVentures, Era Ventures, Keyframe Capital, Addition, RET Ventures, noa (formerly A/O Proptech), and Brookfield Growth.
