Build the future of the AI Data Cloud. Join the Snowflake team. Snowflake empowers thousands of organizations to unlock the value of their data with high scale, concurrency, and performance. The Snowflake platform powers and provides access to the AI Data Cloud, creating a solution for applications, collaboration, cybersecurity, data engineering, data lake, data science, data warehousing, and Unistore. Our vision is a world with unlimited access to governed data, so every organization can tackle the challenges and opportunities of today and reveal the possibilities of tomorrow. We're hiring talented Software Engineers to join the Snowflake Database Engineering group and help build the world's leading AI Data Cloud platform! Our group spans three key areas:
Database Query Processing: This is the core data processing engine powering the world's best data platform. It includes building SQL language features and developing novel query optimization and execution techniques for industry-leading performance. We also build features that automatically optimize workloads for performance and cost-efficiency.
Foundation Database: This is our large-scale distributed transactional KV store, internally called FDB, which powers all of Snowflake's products and services and is rapidly evolving to meet Snowflake's future needs. FDB houses Snowflake's metadata, allowing the service to be elastic and making the AI Data Cloud possible. FDB is also used to store user data for Unistore, providing both transactionally and analytically optimized access paths.
Unistore: Unistore unites analytics with transactional data processing by removing OLAP data silos and providing real-time access to the data produced by our customers' OLTP-based transactional systems, powered by the HTAP (Hybrid Transactional and Analytical Processing) capabilities of Snowflake's Hybrid Tables.
Learn more about the Snowflake Database Engineering group at https://careers.snowflake.com/us/en/database-engineering
AS A SOFTWARE ENGINEER AT SNOWFLAKE, YOU WILL:
- Design, develop, and support a petabyte-scale cloud database that is highly parallel and fault-tolerant.
- Build high-quality and highly reliable software to meet the needs of some of the largest companies on the planet.
- Analyze and understand performance and scalability bottlenecks in the system and solve them.
- Pinpoint problems, instrument relevant components as needed, and ultimately implement solutions.
- Design and implement novel query optimization or distributed data processing algorithms that allow Snowflake to provide industry-leading data warehousing capabilities.
- Design and implement the new service architecture required to enable the Snowflake AI Data Cloud.
- Develop tools for improving our customers' insights into their workloads.
OUR IDEAL SOFTWARE ENGINEER WILL HAVE:
- 2+ years of industry experience working on commercial or open-source software.
- Systems programming skills, including multi-threading and concurrency.
- Fluency in C++, C, or Java is preferred.
- Familiarity with development in a Linux environment.
- Excellent problem-solving skills and strong CS fundamentals, including data structures, algorithms, and distributed systems.
- Experience with implementation, testing, debugging, and documentation.
- Bachelor's degree or foreign equivalent in Computer Science, Software Engineering, or a related field; Master's or PhD preferred.
- Ability to work on-site in our San Mateo, Bellevue, or Berlin office.
BONUS POINTS FOR EXPERIENCE WITH THE FOLLOWING:
- SQL or other database technologies, including internal design and implementation.
- Hands-on experience designing and implementing database security technologies, including encryption algorithms, cryptographic key management systems, and secure authentication mechanisms.
- Query optimization, query execution, and compiler design and implementation.
- Internals of distributed key-value stores such as FoundationDB, and storage engines such as RocksDB, InnoDB, and BerkeleyDB.
- MySQL or PostgreSQL internals.
- Data warehouse design, database systems, and large-scale data processing solutions such as Hadoop and Spark.
- Large-scale distributed systems, transactions, and consistency models.
- Database replication technology.
- Big data storage technologies and their applications, e.g., HDFS, Cassandra, and columnar databases.
Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact? The following represents the expected range of compensation for this role:
- The estimated base salary range for this role is $154,000 - $224,250.
- Additionally, this role is eligible to participate in Snowflake's bonus and equity plan.
The successful candidate's starting salary will be determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location. This role is also eligible for a competitive benefits package that includes: medical, dental, vision, life, and disability insurance; 401(k) retirement plan; flexible spending & health savings account; at least 12 paid holidays; paid time off; parental leave; employee assistance program; and other company benefits.