
Principal Software Engineer - Azure AI Inferencing

Microsoft
United States, Washington, Redmond
Sep 04, 2025
Overview

Microsoft Azure AI Inference platform is the next-generation cloud business positioned to address the growing AI market. We are on the verge of an AI revolution and have a tremendous opportunity to empower our partners and customers to harness the full power of AI responsibly. We offer a fully managed AI Inference platform to accelerate the research, development, and operations of AI-powered intelligent solutions at scale. This team owns the hosting, optimization, and scaling of the inference stack for all Azure AI Foundry models, including the latest and greatest from OpenAI, Grok, DeepSeek, and other OSS models.

Do you want to join a team entrusted with serving all internal and external ML workloads, solving real-world inference problems for state-of-the-art large language models (LLMs) and multi-modal Gen AI models from OpenAI and other model providers? We are already serving billions of inferences per day on the most cutting-edge AI scenarios across the industry. You will be joining the CoreAI Inferencing team, influencing the overall product, driving new features and platform capabilities from preview to General Availability, and tackling many exciting problems at the intersection of AI and Cloud.

We're looking for a Principal Software Engineer - Azure AI Inferencing to drive the design, optimization, and scaling of our inference systems. In this role, you'll lead engineering efforts to ensure our largest models run with exceptional efficiency in high-throughput, low-latency environments. You will get to work on and influence multiple levels of the AI Inference data plane stack.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served. Microsoft's mission is to empower every person and every organization on the planet to achieve more.
As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities

- Lead the design and implementation of core inference infrastructure for serving frontier AI models in production.
- Identify and drive improvements to end-to-end inference performance and efficiency of OpenAI and other state-of-the-art LLMs.
- Lead the design and implementation of efficient load scheduling and balancing strategies by leveraging key insights and features of the model and workload.
- Scale the platform to support the growing inferencing demand and maintain high availability.
- Deliver critical capabilities required to serve the latest and greatest Gen AI models such as GPT5, Realtime audio, and Sora, and enable fast time to market for them.
- Drive generic features to cater to the needs of customers such as GitHub, M365, Microsoft AI, and third-party companies.
- Collaborate with our partners, both internal and external.
- Mentor engineers on distributed inference best practices.
- Embody Microsoft's Culture and Values.