take the initiative, and identify creative solutions to deliver outcomes in the face of obstacles. Knowledge of common data science tools around SQL-based data warehousing (e.g. Snowflake, Databricks, dbt), BI tools (e.g. Tableau, Looker), workflow orchestration, and MLOps. Excellent spoken and written English skills. Fluency with scripting in Python. Ability to work effectively across time zones. Teammates will …
that's batch files or real-time streams. You'll have set up and worked with ETL and ELT tools like Dagster, AWS Glue, Azure Data Factory, Airflow or dbt, and you can decide what tools are right for the job. You'll have an understanding of how Node.js and TypeScript fit into a modern development environment and can speak …
or Scala, SQL, and Apache Spark. You're highly proficient with Databricks and Databricks Asset Bundles, and have a strong understanding of data transformation best practices. Experience with GitHub, dbt, and handling large structured and unstructured datasets is essential. You're detail-oriented, self-motivated, and able to communicate complex technical concepts clearly. Experience with media/social platform data …
data applications; Airflow purely for job scheduling and tracking; CircleCI for continuous deployment; Parquet and Delta file formats on S3 for data lake storage; Spark for data processing; dbt for data modelling; SparkSQL for analytics. Why else you'll love it here: Wondering what the salary for this role is? Just ask us! On a call with one of …
a team of Data Engineers; Experience with Data modelling, Data warehousing, and building ETL pipelines; Experience with AWS (S3, EKS, EC2, RDS) or similar cloud services, Snowflake, Fivetran, Airbyte, dbt, Docker, Argo; Experience in SQL, Python, and Terraform; Experience with building Data pipelines and applications to stream and process datasets; Robust understanding of DevOps principles is required; Experience managing cloud …
complex operational businesses; Experience using LLMs or AI tools to structure and extract meaning from unstructured data; Experience automating workflows and deploying model pipelines (e.g. MLflow, GCP Vertex, Airflow, dbt or similar); Exposure to business planning, pricing, or commercial decision-making; Familiarity with geospatial data; Experience in fast-scaling startups or operational teams. We're flexible on experience - if you …
a leading Media & Entertainment provider. They're looking for multiple experienced Data Engineers to join on an initial 6-month contract basis, with scope for further extensions. - Python, AWS, dbt, SQL - Principal Data Engineer - 6-Month Contract - Inside IR35 - £500-600/day - Remote (occasional office visits may be required ad hoc). If you're interested in this role & meet …
problem-solving skills and a collaborative mindset. Experience in industries with large consumer marketplaces (e.g., programmatic media, travel, financial services) is a plus. Our Tech Stack: AWS Redshift, Postgres, dbt, Grafana, Power BI. Note: While our current environment is AWS, we are transitioning to Azure for new development. Why Join Us? At Kantar, you'll be part of a global leader …
best practices for performance optimisation, scalability, and cost management, empowering teams to access and utilise data seamlessly. Streamline Data Pipelines: Lead the development and optimisation of data pipelines using dbt, enabling faster and more reliable data flows. Enhance Data Governance and Quality: Design and implement robust data governance frameworks, ensuring high data quality, compliance, and consistency. Develop Scalable Data Models …
modern AI tools and models across the entire data and engineering stack. Tech Stack - AI Engineering: Python, LangChain, LlamaIndex, LangGraph; AI Models: OpenAI, Anthropic, Gemini, Custom Models; Data Tools: dbt, BigQuery; Databases: PostgreSQL, OpenSearch, Redis; Product Engineering: TypeScript (full-stack); Infrastructure: Serverless, AWS, Google Cloud, GitHub Actions. Requirements: Strong academic background in a relevant field; Proven track record of delivering …
Whetstone, Greater London, UK Hybrid / WFH Options
RVU Co UK
make customer-centric work possible - providing foundational tooling like address lookup and consent capture, and offering more advanced capabilities like our internal event stream platform and a fully-managed dbt environment. Our services help teams track behavioural signals, enrich customer profiles, and transform raw data into decision-ready insight - all while maintaining a strong focus on data protection. We primarily …
TensorFlow, Hugging Face); Proven MLOps, big data, and backend/API development experience; Deep understanding of NLP and LLMs; Proficient with cloud platforms (AWS/GCP/Azure), Airflow, dbt, Docker/Kubernetes; Strong collaboration, problem-solving, and coding best practices. Nice to have: LLM fine-tuning, streaming data, big data warehousing, open-source contributions.
mentor some of the brightest minds and friendliest people in the industry! - You work with some of the best tech in the industry, such as AWS SageMaker for ML and dbt for data processing, along with standard analytical tools like Python, Redshift, Tableau, etc. - All our decisioning systems are built in-house, meaning that we have nearly unlimited flexibility in what …
feature development process. Identifying tracking requirements to enable accurate reporting/measurement. Work with our developers to get these implemented. Helping to develop and maintain our business intelligence tooling (dbt and Lightdash) by adding new metrics in line with emerging areas of business interest and product development. Be a passionate ZAVA data advocate, inspiring others to embrace and utilise data …
/m/d) to join our German and Western European intraday trading team. You should have strong Python skills, know how to manage Redis cache, AWS S3, or dbt, and bring experience in the German or European power market. You'll work in a fast-paced, tech-driven environment and collaborate closely with traders and developers to turn data …
data science ecosystem (e.g., pandas, scikit-learn, TensorFlow/PyTorch). Statistical methods and machine learning (e.g., A/B testing, model validation). Data pipelining tools like SQL, dbt, BigQuery, or Spark. A strong communicator with the ability to translate technical concepts into layman's terms for a non-technical audience. You're not afraid to challenge the status quo …
operations; Advanced proficiency in SQL and experience working with large, complex, and sometimes messy datasets; Experience designing, building, and maintaining ETL pipelines or data models, ideally using tools like dbt; Proficiency in Python for data analysis, including data manipulation, visualisation, and basic modelling; Strong data storytelling and communication skills: you can translate complex data into clear, actionable recommendations for both …
company that takes data seriously, where your work has a direct impact on product quality, decision making, and customer outcomes. Skills and Experience: Strong Python; SQL; Experience working with GCP; dbt. If you are looking for a new challenge, then please submit your CV for initial screening and more details.
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
by the wider team. Core Requirements: Strong SQL skills for data modelling and transformation; Proven experience building and managing ETL pipelines in production; Hands-on experience with Snowflake and dbt; Proven experience as an Analytics Engineer; Experience working in a cross-functional environment with stakeholders; Excellent communication skills. What Will Make You Stand Out: Experience with ThoughtSpot, Looker, or similar …
both written and verbal; Excellent language skills in Hebrew and English. Preferred (but not required) to have: Hands-on experience with Python; Experience working with modern data technology (e.g. dbt, Spark, containers, DevOps tooling, orchestration tools, Git, etc.); Experience with data science and machine learning technology. People want to buy from people who understand them. Our Solution Engineers build connections …