in data architecture, data modeling, and scalable storage design. Solid engineering practices: Git and CI/CD for data systems. Highly Desirable Skills. GCP Stack: Hands-on expertise with BigQuery, Cloud Storage, Pub/Sub, and orchestrating workflows with Composer or Vertex Pipelines. Domain Knowledge: Understanding of sports-specific data types (event, tracking, scouting, video). API Development: Experience building …
and automation. Strong background in data visualisation (Power BI/Looker/Looker Studio). Experience with regulated data environments and a disciplined approach to quality and documentation. Desirable: BigQuery, Snowflake, or Databricks; reverse-ETL tools; marketing attribution methods. Next Steps: If you're a data scientist passionate about using data and making an impact, we would love to …
Plus) Experience with a distributed computing framework like Apache Spark (using PySpark). Familiarity with cloud data services (AWS S3/Redshift, Azure Data Lake/Synapse, or Google BigQuery/Cloud Storage). Exposure to workflow orchestration tools (Apache Airflow, Prefect, or Dagster). Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
processing frameworks like Apache Flink. (Preferred) Prior experience with User Behaviour Analytics. (Preferred) Knowledge of data governance and data security and compliance practices. (Preferred) Familiarity with GCP, especially Google BigQuery and writing production-level SQL. (Preferred) A background in data engineering, including concepts like data modeling, schema evolution, or ETL/ELT processes. At JET, this is on …
Confluence; familiarity with Tableau/Power BI and basic Python. Understanding of ML tools (ML Studio, GitHub Copilot, Elastic Search), model explainability (SHAP, LIME), and cloud platforms (GCP including BigQuery, Vertex AI). Knowledge of data pipelines, ETL/ELT, and software/model development lifecycles. Experience with security, compliance (GDPR, data privacy), model governance, and risk management. Excellent …
data processing frameworks (e.g., Spark, Flink). Strong experience with cloud platforms (AWS, Azure, GCP) and related data services. Hands-on experience with data warehousing tools (e.g., Snowflake, Redshift, BigQuery), Databricks running on multiple cloud platforms (AWS, Azure and GCP), and data lake technologies (e.g., S3, ADLS, HDFS). Expertise in containerization and orchestration tools like Docker and Kubernetes.
Sheffield, South Yorkshire, England, United Kingdom Hybrid / WFH Options
Vivedia Ltd
and Python. Strong grasp of ETL/ELT pipelines, data modeling, and data warehousing. Experience with cloud platforms (AWS, Azure, GCP) and tools like Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (Kafka, Spark Streaming, Flink) is a plus. Tools & Frameworks: Airflow, dbt, Prefect, CI/CD pipelines, Terraform. Mindset: Curious, data-obsessed, and driven to …
City of London, London, United Kingdom Hybrid / WFH Options
Retelligence
practices for governance, lineage, and performance – Enhance data observability and automate key workflows for efficiency You Will Need – Advanced hands-on experience with SQL, Python, and core GCP tools (BigQuery, Dataflow, Pub/Sub, Composer or similar) – Proven background in full-lifecycle data engineering and automation – Strong understanding of CI/CD and infrastructure-as-code for data systems
Manchester Area, United Kingdom Hybrid / WFH Options
Morson Edge
pipelines. Automating and orchestrating workflows using tools such as AWS Glue, Azure Data Factory or Google Cloud Dataflow. Working with leading data platforms like Amazon S3, Azure Synapse, Google BigQuery and Snowflake. Implementing Lakehouse architectures using Databricks or Snowflake. Collaborating with engineers, analysts and client teams to deliver value-focused data solutions. What We’re Looking For: Strong experience …
City of London, London, United Kingdom Hybrid / WFH Options
Paritas Recruitment
strategy definition, vendor assessment, and governance adoption. Experience establishing reference data management and data quality practices at enterprise scale. Knowledge of analytical data architectures: data lakes, warehouses (e.g., Snowflake, BigQuery), BI/reporting, and ETL pipelines. Proficiency in public cloud platforms (AWS, Google Cloud Platform, Microsoft Azure). Experience with both transactional RDBMS and modern distributed/cloud-native …
solutions. Promote a culture of insight, simplicity, and innovation in data use. Skills & Experience: Expert in Microsoft Power Platform; strong SQL skills. Experience with cloud analytics (e.g. Databricks, Snowflake, BigQuery). Skilled in automation, agile delivery, and stakeholder management. Strong understanding of data governance, CI/CD, and version control. Excellent data storytelling and visual communication skills. If you …
everything they build. What You’ll Do: Build and own data pipelines that connect product, analytics, and operations. Design scalable architectures using tools like dbt, Airflow, and Snowflake/BigQuery. Work with engineers and product teams to make data easily accessible and actionable. Help evolve their data warehouse and ensure high data quality and reliability. Experiment with automation, streaming …
if you: Have 7+ years in data or backend engineering, including 2+ years in a lead or technical decision-making role. Are fluent in the modern data stack: DBT, BigQuery, Airflow, Terraform, GCP or AWS. Bring strong software engineering skills: Python, SQL, CI/CD, DevOps mindset. Understand data warehousing, ETL/ELT, orchestration, and streaming pipelines. Thrive in …
data pipelines in production environments. Strong Python and SQL skills (Pandas, PySpark, query optimisation). Cloud experience (AWS preferred) including S3, Redshift, Glue, Lambda. Familiarity with data warehousing (Redshift, Snowflake, BigQuery). Experience with workflow orchestration tools (Airflow, Dagster, Prefect). Understanding of distributed systems, batch and streaming data (Kafka, Kinesis). Knowledge of IaC (Terraform, CloudFormation) and containerisation (Docker, Kubernetes). Nice to …
similar role, with demonstrable expertise in designing enterprise-grade data platforms. Proficiency in modern data engineering and analytics technologies, for example cloud platforms (e.g., Azure Synapse, AWS Redshift, GCP BigQuery), ETL frameworks, data modeling tools, and data governance platforms. Hands-on experience with both relational and dimensional modeling techniques, modern data lakehouse design, and scalable pipeline architecture. Exposure to …
Manchester, England, United Kingdom Hybrid / WFH Options
KDR Talent Solutions
environment. About You: Highly proficient in SQL and Python. Experience with data modelling (3NF, Kimball) and modern data integration tools. Exposure to cloud-based platforms such as Snowflake, BigQuery, or similar is advantageous. Experience with DBT, Airbyte, or APIs is desirable. Strong communicator who can link data initiatives to commercial outcomes. Enjoys mentoring, providing direction, and developing others …
SQL. Hands-on experience with data orchestration tools (Airflow, dbt, Dagster, or Prefect). Solid understanding of cloud data platforms (AWS, GCP, or Azure) and data warehousing (Snowflake, BigQuery, Redshift). Experience with streaming technologies (Kafka, Kinesis, or similar). Strong knowledge of data modelling, governance, and architecture best practices. Excellent leadership, communication, and stakeholder management skills.