Senior Solutions Architect, Cloud Infrastructure and DevOps - NVIS
NVIDIA is looking for a Senior Cloud Infrastructure and DevOps Solutions Architect to join its NVIDIA Infrastructure Specialist Team. Academic and commercial groups around the world are using NVIDIA products to redefine deep learning and data analytics, and to power data centers. Be involved with the crew developing many of the largest and fastest AI/HPC systems in the world! We are looking for someone who thrives on a dynamic, customer-focused team and brings excellent interpersonal skills. This role interacts with customers, partners, and various internal departments to analyze, define, and implement large-scale networking projects. The scope of these efforts spans networking, system building, Kubernetes-based platforms, and automation, while being the face to the customer!
What you'll be doing:
Maintain large-scale computational and AI infrastructure, focusing on monitoring, logging, and workload orchestration (Kubernetes and Linux job schedulers).
Perform end-to-end troubleshooting across the stack, from bare metal and the operating system through the software stack, container platform, networking, and storage.
Optimize scalable, production-ready Kubernetes-based container platforms integrated with enterprise-grade networking and storage.
Serve as a key technical resource; develop, refine, and document standard methodologies and operational guidelines to be shared with internal teams.
Support Research & Development activities and engage in POCs/POVs to validate new features, architectures, and upgrade approaches.
Create and deliver high-quality documentation, including runbooks, onboarding materials, and best-practice guides for customers and internal teams.
Become the technical leader for assigned customer accounts, providing strategic guidance on DevOps and platform architecture and influencing long-term infrastructure and operations decisions.
What we need to see:
BS/MS/PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, or related fields, with 8+ years of professional experience in managing scalable cloud environments and automation engineering roles.
Cloud & HPC Expertise: Proven understanding of networking fundamentals (TCP/IP stack), data center architectures, and hands-on experience managing HPC/AI clusters, including deployment, optimization, and troubleshooting.
Kubernetes & AI/ML Workloads: Extensive experience with Kubernetes for container orchestration, resource scheduling, scaling, and integration with HPC environments.
Hardware & Software Knowledge: Familiarity with HPC and AI technologies (CPUs, GPUs, high-speed interconnects) and supporting software stacks.
Linux & Storage Systems: Deep knowledge of Linux (Red Hat/CentOS, Ubuntu), OS-level security, and networking protocols (TCP, DHCP, DNS). Experience with storage solutions such as Lustre, GPFS, ZFS, XFS, and emerging Kubernetes storage technologies.
Automation & Observability: Proficiency in Python and Bash scripting, configuration management, and Infrastructure-as-Code tools (e.g., Ansible, Terraform). Experience with observability stacks (Grafana, Loki, Prometheus) for monitoring, logging, and building fault-tolerant systems.
Solution Architecture & Customer Engagement: Strong background in crafting scalable solutions and providing consultative support to customers.
Ways to stand out from the crowd:
Knowledge of CI/CD pipelines for software deployment and automation.
Solid hands-on knowledge of Kubernetes and container-based microservices architectures.
Experience with GPU-focused hardware and software (e.g., NVIDIA DGX, CUDA, GPU Operator).
Background with RDMA-based fabrics (InfiniBand or RoCE) in HPC or AI environments.