
Data Engineer

Stellantis
parental leave, tuition reimbursement, 401(k)
United States, Michigan, Auburn Hills
Feb 03, 2026

The AI & Data Analytics Team is looking for a Senior Data Engineer. In this role, you will be responsible for designing, building, and optimizing robust data pipelines that process massive datasets in both batch and real time. You will work at the intersection of software engineering and data science, ensuring that our data architecture is scalable, reliable, and follows industry best practices.

Priorities can change in a fast-paced environment like ours, so this role includes, but is not limited to, the following responsibilities:

  • Pipeline Development: Design and implement complex data processing pipelines using Apache Spark.
  • Architectural Leadership: Build scalable, distributed systems that handle high-throughput data streams and large-scale batch processing.
  • Infrastructure as Code: Manage and provision cloud infrastructure using Terraform.
  • CI/CD & Automation: Streamline development workflows by implementing and maintaining GitHub Actions for automated testing and deployment.
  • Code Quality: Uphold rigorous software engineering standards, including comprehensive unit/integration testing, code reviews, and maintainable documentation.
  • Collaboration: Work closely with stakeholders to translate business requirements into technical specifications.

Required Qualifications:

  • BA/BSc in Computer Science, Engineering, Mathematics, or a related technical discipline
  • 5+ years of experience in the data engineering and software development life cycle
  • 4+ years of hands-on experience building and maintaining production data applications, with current experience in both relational and columnar data stores
  • 4+ years of hands-on experience working with AWS cloud services
  • Comprehensive experience with one or more programming languages such as Python, Java, or Rust
  • Comprehensive experience working with Big Data platforms (e.g., Spark, Google BigQuery, Azure, AWS S3)
  • Familiarity with time-series databases, data streaming applications, event-driven architectures, Kafka, Flink, and more
  • Experience with workflow management engines (e.g., Airflow, Luigi, Azure Data Factory)
  • Experience designing and implementing real-time pipelines
  • Experience with data quality and validation
  • Experience with API design
  • Distributed Computing: Deep expertise in Apache Spark (Core, SQL, and Structured Streaming).
  • Programming Mastery: Strong proficiency in Scala or Java. You should be comfortable building production-grade applications in a JVM-based environment.
  • SQL Proficiency: Advanced knowledge of SQL for data transformation, analysis, and performance tuning.
  • DevOps & Tools: Hands-on experience with Terraform for infrastructure management and GitHub Actions for CI/CD pipelines.
  • Software Engineering Foundation: Solid understanding of data structures, algorithms, and design patterns. Experience applying "Clean Code" principles to data engineering.
  • Stream Processing: Experience with Apache Flink for low-latency stream processing.
  • Scripting: Proficiency in Python for automation, data analysis, or scripting.
  • Cloud Platforms: Experience with AWS, Azure, or GCP data services (e.g., EMR, Glue, Databricks).
  • Data Modeling: Familiarity with dimensional modeling, Lakehouse architectures (Delta Lake, Iceberg), or NoSQL databases.

Preferred Experience:

  • Comprehensive knowledge of relational database concepts, including data architecture, operational data stores, interface processes, multidimensional modeling, master data management, and data manipulation
  • Expert knowledge and experience with custom ETL design, implementation, and maintenance
  • Comprehensive experience designing, implementing, and iterating on data pipelines using Big Data technologies
  • Certification in AWS or other cloud providers
  • Experience with Databricks notebook workflows
  • Experience with Terraform

Salaried Employee Benefits (US, Non-Represented)
Health & Wellbeing

Comprehensive coverage encompassing the physical, mental, emotional, and overall wellbeing of our employees, including short- and long-term disability.

Compensation, Savings, and Retirement

Annual Incentive Plan (SAIP), 401(k) with Employer Match & Contribution (max 8%), SoFi Student Loan Refinancing.

Time Away from Work

Paid time includes company holidays, vacation, and Float/Wellbeing Days.

Family Benefits

12 Weeks paid Parental Leave, Domestic Partner Benefits, Family Building Benefit, Marketplace, Life/Disability and other Insurances.

Professional Growth

Annual training, tuition reimbursement and discounts, Business Resource & Intra-professional Groups.

Company Car & More

Comprehensive Company Car Program and Vehicle Discounts. Vehicle discounts extend to family and friends.
