
Data Engineer

O'Reilly Media Inc
United States
May 21, 2023

About Your Team

Our data engineering team has a strong focus on delivering high-quality, reliable data to platforms and people within O'Reilly, as well as building high-performance, scalable, and extensible systems. We are intentional in our search for teammates who are helpful, respectful, communicate openly, and are always willing to do what's best for our users. We keep a close eye on our pipelines and processes to make sure we're delivering useful, timely improvements to aid decision-making and data visualization within O'Reilly. The team is broadly distributed across the US in multiple cities and time zones, and team members constantly encourage each other to deliver work that instills pride and fulfillment.

Salary Range: $110,000 - $138,000

About the Job

We are looking for a thoughtful and experienced data engineer to help grow a suite of systems and tools written primarily in Python. The ideal candidate will have a deep understanding of modern data engineering concepts and will have shipped or supported code and infrastructure serving a user base in the millions and datasets with billions of records. You will routinely implement features, fix bugs, perform maintenance, consult with product managers, and troubleshoot problems. Changes you make will be accompanied by tests to confirm desired behavior. Code reviews, in the form of pull requests reviewed by peers, are a regular and expected part of the job as well.

Job Details

In a normal week, you might:

  • Develop a new feature from a user story using Python and PostgreSQL or BigQuery
  • Collaborate with product managers to define clear requirements, deliverables, and milestones
  • Team up with other groups within O'Reilly (e.g. data science or machine learning) to leverage experience and consult on data engineering best practices
  • Review a pull request from a coworker and pair on a tricky problem
  • Provide a consistent and reliable estimate to assess risk for a project manager
  • Learn about a new technology or paper and present it to the team
  • Identify opportunities to improve our pipelines through research and proof-of-concepts
  • Help QA and troubleshoot a pesky production problem
  • Participate in agile processes and scrum ceremonies

Why you'll love working on our team:

  • You'll be working for a company that embraces and pursues new technology
  • You'll be working with a company that trusts and engages its employees
  • We believe in giving engineers the tools and hardware that they need to do their job
  • Bi-weekly virtual team hangouts and space to learn new skills (we're a learning company after all!)
  • Great company benefits (health/dental/vision insurance, 401(k), etc.)
  • We care deeply about work-life balance and treat everyone like human beings first

About You

What we like to see for anyone joining our data engineering teams:

  • Proficiency in building highly scalable ETL and streaming-based data pipelines using Google Cloud Platform services and products
  • Proficiency in large scale data platforms and data processing systems such as Google BigQuery and Amazon Redshift
  • Excellent Python and PostgreSQL development and debugging skills
  • Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g. RabbitMQ and Pub/Sub)
  • Strong drive to experiment, learn and improve your skills
  • Respect for the craft: you write self-documenting code with modern techniques
  • Great written communication skills: we do a lot of work asynchronously in Slack and Google Docs
  • Empathy for our users: a willingness to spend time understanding their needs and difficulties is central to the team
  • Desire to be part of a compact, fun, and hard-working team

Not required, but for bonus points:

  • Experience with Google Cloud Dataflow/Apache Beam
  • Experience with Django RESTful endpoints
  • Experience working in a distributed team
  • Knowledge and experience with machine learning pipelines
  • Contributions to open source projects
  • Knack for benchmarking and optimization

Minimum Qualifications
  • 2+ years of professional data engineering (or equivalent) experience
  • 1+ years of experience working in an agile environment

About O'Reilly Media

O'Reilly's mission is to change the world by sharing the knowledge of innovators. For over 40 years, we've inspired companies and individuals to do new things, and do things better, by providing them with the skills and understanding necessary for success.

At the heart of our business is a unique network of experts and innovators who share their knowledge through us. O'Reilly Learning offers exclusive live training, interactive learning, a certification experience, books, videos, and more, making it easier for our customers to develop the expertise they need to get ahead. And our books have been heralded for decades as the definitive place to learn about the technologies that are shaping the future. Everything we do is to help professionals from a variety of fields learn best practices and discover emerging trends that will shape the future of the tech industry.

Our customers are hungry to build the innovations that propel the world forward. And we help you do just that.

At O'Reilly, we believe that true innovation depends on hearing from, and listening to, people with a variety of perspectives. We want our whole organization to recognize, include, and encourage people of all races, ethnicities, genders, ages, abilities, religions, sexual orientations, and professional roles.
