ETL Developer
StackAdapt
This job is no longer accepting applications
What you’ll be doing:
- Build reliable data ingestion pipelines that extract data from a variety of sources, including databases (e.g., RDBMS, NoSQL, file stores), applications (via API), and flat files, into the Data Lake with appropriate metadata tagging
- Build data transformation pipelines to transform the raw data and materialize the data models designed by the Data Architect into the Enterprise Data Warehouse
- Deploy pipelines to production in adherence to deployment best practices to ensure a seamless rollout
- Orchestrate data pipelines as batch, near-real-time, or real-time operations, depending on requirements, to ensure seamless and predictable execution
- Support the day-to-day operation of the Enterprise Data Office (EDO) pipelines and environment by monitoring alerts and investigating, troubleshooting, and remediating production issues
- Work with members of the EDO to ensure the stability and optimization of the data pipelines and meet the required SLAs
What you’ll bring to the table:
- Minimum 2 years of experience building and deploying data pipelines
- Hands-on experience with at least one cloud-based data warehouse (e.g., Snowflake, BigQuery); experience with big data formats (e.g., Delta Lake, Parquet, Avro) would be an asset
- Good knowledge of relational and dimensional data models; able to interpret physical data models and apply data rules and constraints as required when creating data pipelines; prior data warehousing architecture knowledge would be an asset
- Hands-on experience building ETL/ELT data pipelines, whether via custom code (e.g., Spark, Python, Java, SQL stored procedures) or via integration platforms (e.g., PowerCenter, DataStage, Talend), following standards and best practices such as coding and naming standards, version control, code promotion, testing, and deployment
- Strong verbal and written communication skills, as well as excellent collaboration skills, are required to engage in highly technical discussions regarding data solutions
- Demonstrated ability to self-learn and master new data tools, platforms, and technologies within a short ramp-up period and with limited formal training and coaching
- Experience with data orchestration in Apache Airflow, cron, or other schedulers is a strong asset
- Knowledge of microservices architecture and container technologies such as Kubernetes and Docker would be a definite asset
- Experience managing data platforms via infrastructure-as-code (e.g., Terraform) would be a strong asset
StackAdapters Enjoy:
- Competitive salary
- 401k/RRSP matching
- 3 weeks vacation + 3 personal care days + 1 Culture & Belief day + birthdays off
- Access to a comprehensive mental health care platform
- Health benefits from day one of employment
- Work-from-home reimbursements
- Optional global WeWork membership for those who want a change from their home office
- Robust training and onboarding program
- Coverage and support of personal development initiatives (conferences, courses, etc.)
- Access to StackAdapt programmatic courses and certifications to support continuous learning
- Mentorship opportunities with industry leaders
- An awesome parental leave policy
- A friendly, welcoming, and supportive culture
- Our social and team events!