Data Engineer – Python / SQL / Snowflake / Spark
Location: Remote (occasional onsite presence)
Contract Duration: 9 Months
Day Rate: Competitive (outside IR35)
We’re working with a fast-growing organisation that’s scaling its data platform and looking for an experienced Data Engineer to support major data transformation and analytics initiatives.
This role is ideal for someone who thrives in modern, cloud-based data environments and enjoys building robust, scalable pipelines used by data science, analytics, and product teams.
Key Responsibilities
- Design, build and maintain scalable ETL and data pipelines
- Develop high-performance data models for analytics and reporting
- Work with real-time and batch data processing systems
- Collaborate closely with analysts, data scientists and platform engineers
- Improve data reliability, observability and performance
Essential Skills & Experience
- Python – the primary language for ETL, orchestration and APIs
- SQL – advanced querying, modelling and analytics engineering
- Scala or Java – especially valuable in Spark-heavy environments
- Snowflake & MySQL
- dbt & Apache Airflow
- Apache Spark & Apache Kafka
- Containerisation & DevOps: Kubernetes, Docker and/or CI/CD pipelines
Nice to Have
- Cloud experience (AWS, GCP or Azure)
- Streaming data architectures
- Data platform migration or greenfield build experience
If this sounds like something you’d be interested in, apply now and share your CV for a confidential discussion.