Senior Data Engineer
Knix Wear
About You
- You thrive in fast-paced environments and enjoy solving complex, cross-functional problems.
- You’re fluent in Python and SQL and love building clean, scalable data pipelines.
- You think about data architecture holistically, with performance, governance, and usability in mind.
- You sweat the details and prioritize reliability, security, and documentation.
- You’re a strong communicator who can translate business needs into scalable data solutions.
- Above all, you’re proactive, positive, and excited by Knix’s mission to redefine intimates.
Responsibilities
- Design, build, and maintain scalable ELT/ETL pipelines using FiveTran, Airbyte, and custom Python workflows.
- Manage data ingestion and transformation across AWS services such as S3, Lambda, Glue, and Athena.
- Monitor pipeline performance and implement logging, alerting, and troubleshooting processes.
- Administer and optimize Knix’s Snowflake environment, including access management, cost efficiency, and query performance.
- Design and maintain scalable data models using both dimensional and normalized approaches.
- Build and maintain reusable transformation logic in DBT to deliver high-quality, analysis-ready datasets.
Data Insights and Enablement
- Partner with engineering, analytics, and business teams to understand data needs and deliver governed, trustworthy datasets.
- Design and optimize Looker explores, views, and dashboards to support self-serve analytics.
- Apply best practices in data quality, documentation, and data governance.
- Champion data engineering best practices including version control, testing, and peer reviews.
- Support performance tuning and optimization across the data stack.
- Stay current on emerging tools, frameworks, and trends in the modern data ecosystem to drive innovation.
Qualifications
- 7+ years of experience in software and/or data engineering, with at least 4 years focused on data engineering or analytics engineering.
- Deep hands-on experience with Snowflake, DBT, FiveTran, and Looker in a production environment.
- Proficiency in Python for data workflows, automation, and orchestration.
- Strong SQL skills and experience designing scalable, performant data models.
- Working knowledge of AWS services including S3, Lambda, Glue, and Athena.
- Familiarity with Airbyte or similar tools for custom ingestion.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- Understanding of data governance, access control, and performance tuning.
