Staff Data Engineer

San Diego, California

2021-04-06

As a Staff Data Engineer, you’ll be a crucial player on a new team delivering new data and insights, leveraging and deepening your skills in SQL and Python and in ecosystems, platforms, and tools such as Snowflake, DBT, Kafka, GCP, GitHub, Jenkins, and Terraform, as well as data replication and data orchestration (DAG) tools.

Essential Duties and Responsibilities:
• Work with Manufacturing Operations’ business analysts to ascertain analytics requirements for cloud data repositories and data pipelines in support of business analytics and data science.
• Design, build, and enhance a set of robust, automated data pipelines, driven by event streams, replication, and/or batches, to create and refresh diverse datasets.
• Design, instantiate, load, and refresh tables in Snowflake.
• Ingest data using Kafka, Spark, or a leading data replication tool.
• Transform data using DBT and stored procedures.
• Write algorithms for data pipeline testing and continuous monitoring (a minimal sketch follows this list).
• Help enable organizationally distributed business analysts to deliver robust, test-driven, self-service data wrangling, visual analytics, and solution sharing.
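
To make the testing and monitoring duty concrete, here is a minimal sketch in Python of the kind of automated check a pipeline might run after each load. It is illustrative only: the table name, freshness threshold, and environment-variable credentials are assumptions, not details from this posting, and it presumes the snowflake-connector-python package is installed.

    # Minimal sketch of an automated pipeline check. Illustrative assumptions:
    # the ANALYTICS.ORDERS_FACT table, its LOADED_AT TIMESTAMP_TZ column, and
    # the environment-variable credentials are hypothetical, not from this posting.
    import os
    from datetime import datetime, timedelta, timezone

    import snowflake.connector


    def check_freshness(cur, table, max_age_hours=24):
        """Fail if the table is empty or its newest row exceeds the age threshold."""
        cur.execute(f"SELECT COUNT(*), MAX(loaded_at) FROM {table}")
        row_count, latest = cur.fetchone()
        assert row_count > 0, f"{table} is empty"
        # latest is timezone-aware when loaded_at is a TIMESTAMP_TZ column
        age = datetime.now(timezone.utc) - latest
        assert age <= timedelta(hours=max_age_hours), f"{table} is stale by {age}"


    if __name__ == "__main__":
        conn = snowflake.connector.connect(
            account=os.environ["SNOWFLAKE_ACCOUNT"],
            user=os.environ["SNOWFLAKE_USER"],
            password=os.environ["SNOWFLAKE_PASSWORD"],
        )
        try:
            check_freshness(conn.cursor(), "ANALYTICS.ORDERS_FACT")
            print("pipeline checks passed")
        finally:
            conn.close()

In practice, a team might express the same checks declaratively as DBT tests or Great Expectations suites, both of which appear elsewhere in this posting.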

Required Experience/Qualifications:
• Bachelor’s degree in computer science, information technologies, or an equivalent combination of job experience and certifications relevant to the above qualifications.
• 5 to 10 years of combined work experience in data engineering and data warehousing.
• Strong expertise in ANSI SQL, data profiling, transformations, and end-to-end orchestration of workflows built with multiple tools, including efficiently moving terabytes of data.
• Data modeling: experience choosing among data modeling methods (such as third normal form, dimensional, data vault, sub-type/super-type, or wide-table) for a given use case, with an understanding of the associated loading logic.
• Combined hands-on experience with Snowflake, DBT, DAGs, GCP, GitHub, Jira, Confluence, and leading data replication tools.
• Progressive experience with Python and Kafka.
• Affinity for Agile rigor and a DataOps culture of test automation, versioning, infrastructure as code, rapid deployments, and continuous monitoring.
• Optional: Tableau for data visualization.

Preferred Experience
• Consulting experience, with an eagerness to exceed expectations on quality and delivery speed by mixing heads-down coding with championing DataOps rigor among analysts newer to the SDLC.
• Experience with test-driven data-pipeline development, CI/CD, statistical process control, source code branching/merging, testing, and pipeline monitoring tools such as Terraform, Ansible, Jenkins, X-Ray, and Great Expectations.
• Comfort with the quantitative aspects of data pipeline monitoring.
• Experience working in regulated industries, including those subject to FDA and HIPAA requirements.
• Familiarity with Oracle and SQL Server environments.

#LI-HH

Proven Recruiting (https://provenrecruiting.com/)