Meet the Team

We are an innovation team on a mission to transform how enterprises harness AI. Operating with the agility of a startup and the focus of an incubator, we're building a tight-knit group of AI and infrastructure experts driven by bold ideas and a shared goal: to rethink systems from the ground up and deliver breakthrough solutions that redefine what's possible - faster, leaner, and smarter.

We thrive in a fast-paced, experimentation-rich environment where new technologies aren't just welcome - they're expected. Here, you'll work side by side with seasoned engineers, architects, and thinkers to craft the kind of iconic products that can reshape industries and unlock entirely new models of operation for the enterprise.

If you're energized by the challenge of solving hard problems, love working at the edge of what's possible, and want to help shape the future of AI infrastructure - we'd love to meet you.

Impact

As a principal engineer on our GPU and CUDA Runtime team, you will be instrumental in defining and delivering the next generation of enterprise-grade, high-performance compute infrastructure. Your contributions will directly influence the performance, reliability, and scalability of large-scale GPU-accelerated workloads, powering mission-critical applications across AI/ML, scientific computing, and real-time simulation.

You will be responsible for developing low-level components that bridge user space and kernel space, optimizing memory and data-transfer paths, and enabling cutting-edge interconnect technologies such as NVLink and RDMA. Your work will ensure that systems use GPU hardware to its full potential, minimizing latency, maximizing throughput, and improving developer experience at scale. This role offers the opportunity to impact both open and proprietary systems, working at the intersection of device-driver innovation, runtime system design, and platform integration.

Key Responsibilities
- Design, develop, and maintain device drivers and runtime components for the GPU and network subsystems.
- Work with kernel and platform components to build efficient memory-management paths using pinned memory, peer-to-peer transfers, and unified memory (a minimal sketch follows this list).
- Optimize data movement using high-speed interconnects such as RDMA, InfiniBand, NVLink, and PCIe, with a focus on reducing latency and increasing bandwidth.
- Implement and fine-tune GPU memory copy paths with awareness of NUMA topologies and hardware coherency.
- Develop instrumentation and telemetry collection mechanisms to monitor GPU and memory performance without impacting runtime workloads.
- Contribute to internal tools and libraries for GPU system introspection, profiling, and debugging.
- Provide technical mentorship and peer reviews, and guide junior engineers on best practices for low-level GPU development.
- Stay current with evolving GPU architectures, memory technologies, and industry standards.
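To make the pinned-memory path named in the second bullet concrete, here is a minimal, hypothetical CUDA sketch: a page-locked host buffer, an asynchronous host-to-device copy on a dedicated stream, and event-based timing so the copy can be measured without host-side instrumentation in the hot path. The buffer size, names, and error-handling macro are invented for illustration, not taken from any actual codebase.

```cpp
// Sketch only: pinned-memory H2D transfer timed with CUDA events.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

#define CHECK(call)                                                   \
    do {                                                              \
        cudaError_t err_ = (call);                                    \
        if (err_ != cudaSuccess) {                                    \
            fprintf(stderr, "CUDA error %s at %s:%d\n",               \
                    cudaGetErrorString(err_), __FILE__, __LINE__);    \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

int main() {
    const size_t bytes = 64ull << 20;  // 64 MiB payload (arbitrary)

    // Pinned (page-locked) host memory keeps the DMA engine fed and
    // lets the copy overlap with work on other streams.
    float* h_buf = nullptr;
    CHECK(cudaMallocHost((void**)&h_buf, bytes));
    for (size_t i = 0; i < bytes / sizeof(float); ++i) h_buf[i] = 1.0f;

    float* d_buf = nullptr;
    CHECK(cudaMalloc((void**)&d_buf, bytes));

    cudaStream_t stream;
    CHECK(cudaStreamCreate(&stream));

    cudaEvent_t start, stop;
    CHECK(cudaEventCreate(&start));
    CHECK(cudaEventCreate(&stop));

    // Record events around the async copy to time it on the device.
    CHECK(cudaEventRecord(start, stream));
    CHECK(cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, stream));
    CHECK(cudaEventRecord(stop, stream));
    CHECK(cudaEventSynchronize(stop));

    float ms = 0.0f;
    CHECK(cudaEventElapsedTime(&ms, start, stop));
    printf("H2D: %.2f GiB/s\n", (bytes / ms) * 1e3 / (1ull << 30));

    CHECK(cudaEventDestroy(start));
    CHECK(cudaEventDestroy(stop));
    CHECK(cudaStreamDestroy(stream));
    CHECK(cudaFree(d_buf));
    CHECK(cudaFreeHost(h_buf));
    return 0;
}
```

In production paths, the same pattern typically extends to double-buffered staging so transfers on one stream overlap compute on another; the sketch above shows only the single-copy skeleton.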
Minimum Qualifications
- 10+ years of experience in systems programming, ideally with 5+ years focused on CUDA/GPU driver and runtime internals.
- 5+ years of experience with kernel-space development, ideally on Linux kernel modules, device drivers, or GPU runtime libraries (e.g., CUDA, ROCm, or OpenCL runtimes).
- Experience working with NVIDIA GPU architecture, CUDA toolchains, and performance tools (Nsight, CUPTI, etc.).
- Experience optimizing for NVLink, PCIe, Unified Memory (UM), and NUMA architectures.
- Strong grasp of RDMA, InfiniBand, and GPUDirect technologies and their use in frameworks like UCX.
- 8+ years of experience programming in C/C++ with low-level systems proficiency (memory management, synchronization, cache coherence).
- Bachelor's degree in a STEM-related field.
Preferred Qualifications
- Deep understanding of HPC workloads, performance bottlenecks, and compute/memory tradeoffs.
- Expertise in zero-copy memory access, pinned memory, peer-to-peer memory copy, and device memory lifetimes (see the sketch after this list).
- Strong understanding of multi-threaded and asynchronous programming models.
- Familiarity with Python and AI frameworks such as PyTorch.
- Familiarity with assembly or PTX/SASS for debugging or optimizing CUDA kernels.
- Familiarity with NVMe storage offloads, IOAT/DPDK, or other DMA-based acceleration methods.
- Familiarity with Valgrind, cuda-memcheck, gdb, and profiling with Nsight Compute/Systems.
- Proficiency with perf, ftrace, eBPF, and other Linux tracing tools.
- PhD is a plus, especially with research in GPU systems, compilers, or HPC.
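As a hypothetical illustration of the zero-copy access mentioned above, the sketch below maps a pinned host buffer into the device address space so a kernel can read and write it without an explicit copy. Every access crosses the interconnect, so this pattern suits small, latency-bound data rather than bulk transfers; all names and sizes are invented for the example.

```cpp
// Sketch only: zero-copy (mapped pinned memory) access from a kernel.
#include <cuda_runtime.h>
#include <cstdio>

// The kernel dereferences a device-visible alias of host memory;
// no cudaMemcpy is issued anywhere in this program.
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;

    // Ask the runtime to map page-locked allocations into device address space.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    float* h_data = nullptr;
    cudaHostAlloc((void**)&h_data, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) h_data[i] = 2.0f;

    // Device-visible alias of the same physical host pages.
    float* d_alias = nullptr;
    cudaHostGetDevicePointer((void**)&d_alias, h_data, 0);

    scale<<<(n + 255) / 256, 256>>>(d_alias, n, 0.5f);
    cudaDeviceSynchronize();

    printf("h_data[0] = %.1f (expect 1.0)\n", h_data[0]);
    cudaFreeHost(h_data);
    return 0;
}
```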