Vortex Delivers CT-Like Ultrasound to Doctors' Offices With NVIDIA Jetson

Despite advances in medical imaging, many medical professionals still lack access to diagnostic imaging in their own offices. Vortex Imaging—a medical imaging device developer based in Israel and a member of the NVIDIA Inception program for startups—has built a solution. 

CT and MRI scans are powerful, but they require expensive infrastructure and are typically confined to hospital settings or dedicated diagnostic imaging centers. Ultrasound machines are more portable, but even the most advanced systems depend on the operator’s skill and provide only a narrow field of view. As a result, many clinicians outside specialized facilities have to refer patients elsewhere for imaging, which delays diagnosis and treatment.

Vortex offers a device, the Vortex360, that combines usability with advanced diagnostic capabilities to address this gap. Roughly the size of a gaming console, the compact probe offers easy handling for in-office diagnostics. It docks into a cart that’s about waist height, bringing full imaging to the point of care with minimal footprint.

Figure 1. The Vortex360 is about the size of a gaming console and docks into a cart that’s waist high.

By combining the simplicity of ultrasound with edge compute on NVIDIA Jetson and GPUs for cloud-based image reconstruction and post-processing, our product aims to meet the needs of practitioners working in any setting, from urban offices and hospitals to rural communities.

Powerful imaging in the palm of your hand

The speed and scalability of cloud computing make it possible to reconstruct high-quality 3D volumetric images in just minutes, delivering diagnostic insights to doctors and patients where and when they are needed most.

At the heart of the product is a compact ultrasound probe embedded with NVIDIA Jetson, which enables compute at the edge. The process of capturing image data is intuitive and takes only seconds. After an image is captured, it’s uploaded to the cloud where proprietary algorithms—accelerated by GPUs—reconstruct a standardized 3D version of the image based on the raw acoustic data. 
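
The pattern is a standard edge-to-cloud split: acquisition and light pre-processing happen on the Jetson inside the probe, while the heavy reconstruction runs remotely on GPUs. The sketch below is purely illustrative; the endpoint URL, function names, and payload format are hypothetical and are not part of Vortex's actual API.

```python
import gzip
import json
import requests  # assumes the standard 'requests' HTTP library is installed

ACQUISITION_URL = "https://example-cloud-endpoint/scans"  # hypothetical endpoint


def acquire_raw_frames(num_frames: int) -> bytes:
    """Placeholder for on-device acquisition of raw acoustic channel data.

    On a real device this would read from the transducer front end; here we
    return dummy bytes so the sketch stays self-contained.
    """
    return bytes(num_frames * 1024)


def upload_scan(raw_data: bytes, scan_id: str) -> None:
    """Compress the raw acoustic data on the edge device and push it to the
    cloud, where GPU-accelerated reconstruction takes over."""
    payload = gzip.compress(raw_data)
    metadata = {"scan_id": scan_id, "frames": len(raw_data) // 1024}
    response = requests.post(
        ACQUISITION_URL,
        data=payload,
        headers={
            "Content-Encoding": "gzip",
            "X-Scan-Metadata": json.dumps(metadata),
        },
        timeout=30,
    )
    response.raise_for_status()


if __name__ == "__main__":
    upload_scan(acquire_raw_frames(num_frames=64), scan_id="demo-scan-001")
```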

Figure 2. The Vortex imaging workflow: the Jetson-embedded ultrasound probe captures data, Vortex algorithms reconstruct 3D images on NVIDIA GPUs in the cloud, and the resulting volumetric image is ready for diagnosis by physicians

Advanced image reconstruction in the cloud

The core reconstruction engine behind the Vortex system is powered by Full Waveform Inversion (FWI), a physics-based computational method originally developed for geophysical exploration, such as subsurface imaging in the oil and gas industry. FWI works by modeling the entire acoustic wavefield, allowing it to reconstruct high-resolution images.

Unlike traditional ultrasound, which relies on simplified assumptions and partial data, FWI uses the entire wavefield, including amplitude, phase, and even complex behaviors like scattering and multi-path propagation. This comprehensive approach enables the reconstruction of quantitative maps of tissue properties, such as speed of sound, attenuation, and density, to deliver reliable and clinically meaningful information.
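
In its textbook form (a general formulation, not Vortex's proprietary variant), FWI is posed as a nonlinear least-squares problem: find the medium model whose simulated wavefield best matches the recorded sensor data.

```latex
% m: per-voxel medium properties (e.g., speed of sound)
% F_s(m): simulated sensor data for transmission event s
% d_{obs,s}: measured sensor data for event s
\min_{m} \; J(m) = \frac{1}{2} \sum_{s} \bigl\| F_{s}(m) - d_{\mathrm{obs},s} \bigr\|_{2}^{2},
\qquad
m_{k+1} = m_{k} - \alpha_{k}\, \nabla_{m} J(m_{k})
```

The update rule on the right is an ordinary gradient-descent step, which is exactly the analogy to neural network training drawn later in this section.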

While conventional FWI relies on both transmission and reflection data, we've developed a method that requires only reflection datasets. This allows for advanced wavefield reconstruction using a simple, single-sided probe that doesn't require movement or complex hardware. Because the approach eliminates the need for transmission data and probe motion, medical professionals can generate high-quality images with a more compact, lower-cost device.

Figure 3. The FWI simulation schematic: an ultrasound transducer captures anatomical data, which undergoes iterative simulation on cloud-based NVIDIA GPUs to match synthetic models with real-world data and produce an accurate diagnostic image

FWI requires significant compute to solve complex physical problems. In our early development stages, we explored CPU-based architectures as potential solutions. However, those alternatives were 50x to 100x slower in internal benchmarks, making them impractical for 3D imaging.

We also recognized that the iterative process used in FWI—where gradient-based optimization is used to minimize the discrepancy between simulated and observed wavefields—resembles the training process of a deep learning neural network. This similarity makes FWI a natural fit for NVIDIA GPUs due to their ability to accelerate parallelizable workloads. As a result, we shifted to a GPU-only architecture, which has been foundational to our system ever since. Today, our cloud platform is powered by GPUs and optimized with our proprietary implementation of CUDA kernels.
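
The parallel is easiest to see in code. The sketch below is a deliberately tiny stand-in: a linear operator plays the role of the acoustic wave simulation and the model vector plays the role of the per-voxel medium properties, so the loop structure (forward model, misfit, gradient, update) mirrors both an FWI solver and a deep learning training loop. It is illustrative only and is not Vortex's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 'A' replaces the acoustic wave simulation, 'm_true' replaces
# the unknown per-voxel medium properties, and 'd_obs' is the recorded data.
n_sensors, n_voxels = 200, 50
A = rng.normal(size=(n_sensors, n_voxels))
m_true = rng.normal(size=n_voxels)
d_obs = A @ m_true

m = np.zeros(n_voxels)   # initial medium model (like initial network weights)
step = 1e-3              # learning rate / step length

for it in range(500):
    d_sim = A @ m                        # "forward pass": simulate sensor data
    residual = d_sim - d_obs             # data misfit
    loss = 0.5 * np.sum(residual ** 2)   # least-squares misfit, as in Table 1
    grad = A.T @ residual                # "backpropagation": adjoint of the forward model
    m -= step * grad                     # gradient-descent update of the medium
    if it % 100 == 0:
        print(f"iter {it:4d}  misfit {loss:.4e}")
```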

| | Deep learning model training | FWI solver |
| --- | --- | --- |
| Goal | Make predictions based on learned model parameters | A 3D volumetric image of the physical properties of a medium (e.g., human tissue in medical imaging, or the earth's subsurface in geophysics) |
| Input data | Input features (e.g., image, text, signal) | Measured sensor data (e.g., ultrasound or seismic received signals) |
| Iterative core operation | Forward pass through a neural network, then gradient backpropagation | Numerical acoustic wavefield propagation through a modeled medium, then gradient backpropagation |
| Parameters estimated | Model weights and biases (learned during training) | For each voxel in the 3D volume, the medium's mechanical properties: speed of sound, density, attenuation, and elasticity |
| Computation type | Matrix operations and activation functions | Stencil operations for Laplacian calculation |
| Gradient use | Perturb the model weights and biases to continuously decrease the loss in each iteration | Perturb the voxel mechanical properties to slightly decrease the loss (sensor data misfit) in each iteration |
| Loss function | Classification (e.g., cross-entropy), regression (e.g., MSE), or specialized losses (e.g., adversarial) | Difference between observed and modeled sensor data |
| Output | A trained model that can provide predictions, scores, or output vectors | The physical properties of the medium (the optimized parameters) |
| Parallelism suitability | High, both fine-grained (e.g., matrix operations) and coarse-grained (e.g., batch parallelism) | High, both fine-grained (e.g., stencil for Laplacian) and coarse-grained (e.g., multiple transmission patterns) |
| Why NVIDIA GPUs are ideal | Speed up tensor and matrix operations, provide high memory bandwidth for fast data access, and scale training jobs across multiple GPUs and nodes | Accelerate the repetitive, highly parallelized operations needed to simulate the wavefield and calculate the gradient of the medium with respect to the misfit function |

Table 1. A comparison of deep learning model training versus the FWI solver
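
The "stencil" entry in Table 1 refers to the finite-difference pattern at the core of the wavefield simulation: each time step updates every voxel from a small, fixed neighborhood, which is exactly the kind of regular, data-parallel work GPUs excel at. Below is a minimal NumPy sketch of a 7-point 3D Laplacian driving one explicit wave-equation step; the production implementation uses proprietary CUDA kernels, as noted above, so this is purely for illustration.

```python
import numpy as np


def laplacian_3d(field: np.ndarray, spacing: float = 1.0) -> np.ndarray:
    """7-point finite-difference Laplacian of a 3D scalar field.

    Interior voxels combine their six face neighbors; the one-voxel
    boundary is left at zero for simplicity.
    """
    lap = np.zeros_like(field)
    lap[1:-1, 1:-1, 1:-1] = (
        field[2:, 1:-1, 1:-1] + field[:-2, 1:-1, 1:-1]
        + field[1:-1, 2:, 1:-1] + field[1:-1, :-2, 1:-1]
        + field[1:-1, 1:-1, 2:] + field[1:-1, 1:-1, :-2]
        - 6.0 * field[1:-1, 1:-1, 1:-1]
    ) / spacing**2
    return lap


# One explicit time step of the scalar acoustic wave equation,
# p_tt = c^2 * laplacian(p), using the stencil above.
nx = 64
c = np.full((nx, nx, nx), 1500.0)        # speed-of-sound map (m/s), here uniform
p_prev = np.zeros((nx, nx, nx))
p_curr = np.zeros((nx, nx, nx))
p_curr[nx // 2, nx // 2, nx // 2] = 1.0  # point source
dt, dx = 1e-7, 3e-4                      # time step (s) and grid spacing (m)

p_next = 2 * p_curr - p_prev + (c * dt) ** 2 * laplacian_3d(p_curr, spacing=dx)
```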

Edge compute for data acquisition

Edge compute is an integral part of our architecture, enabling on-device processing during data acquisition without compromising portability or energy efficiency. The adoption of NVIDIA Jetson for edge deployment enabled a 20x performance improvement in our image generation pipeline, compared to previous CPU-only solutions. This leap in performance was critical to supporting imaging directly at the point of care.

Figure 4. Side-by-side imaging of a human kidney from a CT scan (left) and the Vortex360.

Expanding access to imaging

Part of our mission is to make medical imaging faster, more convenient, and more accessible to help practitioners in the field achieve positive outcomes for patients:

  • In-clinic diagnostics: The device allows clinicians to perform imaging during patient visits, eliminating referral delays and speeding up decision making.
  • Rural and remote care: With minimal operator training required, the system is ideal for clinics in underserved or hard-to-reach areas.
  • AI-ready data: The standardized, high-quality datasets produced by the system are operator-independent, making them ideal for training and deploying AI tools that support clinical decision making. Since effective AI models depend on large volumes of consistent, reliable, annotated data, Vortex's ability to operate across diverse care settings enables broader data collection, accelerating model development and improving performance.

Shaping the future of diagnostics

By using edge processing and scalable GPU compute in the cloud, we hope to transform traditional imaging hardware into a new class of devices that is accurate, accessible, and affordable.

Vortex is a member of the NVIDIA Inception program for startups. Inception has helped us identify best-fit products and the right tools and libraries for our edge-to-cloud deployment model. NVIDIA continues to support our development of advanced AI-powered features using NVIDIA MONAI, while also offering exposure to a global network of potential customers and partners.

To learn more about our vision for diagnostic imaging, visit Vortex Imaging.
