London (City of London), South East England, United Kingdom
twentyAI
… processes
• Develop dashboards and visualizations
• Work closely with data scientists and stakeholders
• Follow CI/CD and code best practices (Git, testing, reviews)
Tech Stack & Experience:
• Strong Python (Pandas), PySpark, and SQL skills
• Cloud data tools (Azure Data Factory, Synapse, Databricks, etc.)
• Data integration experience across formats and platforms
• Strong communication and data literacy
Nice to Have: Commodities/…
… data tooling, helping to solve complex data challenges that have wide-reaching impact across multiple business domains.
Key Requirements:
• Strong experience in AWS data engineering tools (e.g., Glue, Athena, PySpark, Lake Formation)
• Solid skills in Python and SQL for data processing and analysis
• Deep understanding of data governance, quality, and security
• A passion for building scalable, secure, and efficient …
London (City of London), South East England, United Kingdom
Anson McCade
… a focus on data quality at scale.
• Hands-on expertise in core GCP data services such as BigQuery, Composer, Dataform, Dataproc, and Pub/Sub
• Strong programming skills in PySpark, Python, and SQL
• Proficiency in ETL processes, data mining, and data storage principles
• Experience with BI and data visualisation tools, such as Looker or Power BI
• Excellent communication skills …
SKILLS
• Experience with Python web development frameworks; Django in particular is beneficial.
• Experience with Git for version control and collaborative development.
• Familiarity with Snowflake or Databricks, especially with the Snowpark or PySpark Python libraries.
• Experience with big data technologies and cloud platforms (e.g., AWS, Azure) is a benefit.
• Familiarity and practical experience with time series modelling techniques is a benefit.
LEADERSHIP & SOFT …
… focus on Databricks, Azure, or AWS cloud ecosystems.
• Proven track record of designing and implementing data architectures that support large-scale data pipelines and analytics
• Strong proficiency in Python, PySpark, and SQL for data processing and manipulation
• Extensive experience in creating ETL pipelines from scratch, handling large datasets, and developing solutions that align with business goals
• Hands-on experience …
City of London, London, United Kingdom (Hybrid / WFH Options)
Recruit with Purpose
… they modernise the use of their data.
Overview of responsibilities in the role:
• Design and maintain scalable, high-performance data pipelines using Azure Data Platform tools such as Databricks (PySpark), Data Factory, and Data Lake Gen2.
• Develop curated data layers (bronze, silver, gold) optimised for analytics, reporting, and AI/ML, ensuring they meet performance, governance, and reuse standards. …
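For context, the bronze/silver/gold "medallion" layering this listing describes usually amounts to promoting raw landed data into progressively cleaner tables. A minimal PySpark sketch of a bronze-to-silver step on Databricks follows; the paths, table, and columns (orders, order_id, amount) are invented for illustration, not details from the listing.

```python
# Minimal bronze -> silver promotion sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied automatically on Databricks

# Bronze: raw events landed as-is from the source system.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

# Silver: deduplicated, typed, quality-checked records ready for analytics.
silver = (
    bronze
    .dropDuplicates(["order_id"])                         # drop replayed events
    .withColumn("order_ts", F.to_timestamp("order_ts"))   # enforce types
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())                # basic quality gate
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")
```

A gold layer would typically follow the same shape, aggregating silver tables into reporting-ready marts.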
London, England, United Kingdom (Hybrid / WFH Options)
amber labs
… requirements and deliver solutions.
Requirements:
• AWS: S3, Lambda, EMR, SNS, SQS, and additional services related to data infrastructure
• Terraform
• Databricks
• Data Lake, Warehouse, and Lakehouse architecture and design
• Python/PySpark
• Data platforms and notebooks: Jupyter, Databricks, Azure
• GitLab: repository and CI/CD
• Java (Spring Boot) experience is a plus
Benefits: Join a rapidly expanding startup where personal growth …
London, England, United Kingdom (Hybrid / WFH Options)
Anson McCade
… solutions.
• Expertise in data warehousing and data lake patterns, including ingestion, governance, and quality
• Strong technical skills in BigQuery, Dataproc, Dataform, Composer, and Pub/Sub
• Fluent in Python, PySpark, and SQL
• Experience with BI tools like Looker or Power BI
• Strong client-facing and communication skills: able to lead conversations with both business and technical stakeholders
• Experience …
… in cloud-native Data & Analytics Platforms
• Hands-on practical experience delivering system design, application development, testing, and operational stability
• Advanced knowledge in one or more programming languages: Python, PySpark, SQL
• Advanced knowledge of software applications and technical processes, with considerable in-depth knowledge in one or more technical disciplines (e.g., cloud, artificial intelligence, machine learning, mobile)
• Ability …
… and transformation into business-usable analytics.
• Input into high-level design and responsibility for low-level design
• Hands-on development of data pipelines using Step Functions, Glue, Python/PySpark, and dbt (Redshift)
• Thorough, high-quality automated unit testing
• Creation of accurate, insightful & informative technical documentation
• Performance analysis & improvement
• Handover and upskilling of operational teams
• Protecting the …
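As a sketch of what the "thorough, high-quality automated unit testing" above tends to look like in practice: a PySpark transformation factored into a pure function can be exercised with pytest against a small local SparkSession. The function, columns, and values here are hypothetical examples, not from the listing.

```python
# Hypothetical example: unit-testing a PySpark transformation with pytest.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_line_total(df):
    """Transformation under test: derive a line total from price and quantity."""
    return df.withColumn("total", F.col("price") * F.col("qty"))


@pytest.fixture(scope="session")
def spark():
    # A small local session keeps the test self-contained; no cluster needed.
    return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()


def test_add_line_total(spark):
    df = spark.createDataFrame([(10.0, 2), (5.0, 3)], ["price", "qty"])
    rows = add_line_total(df).collect()
    assert [r.total for r in rows] == [20.0, 15.0]
```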
… to be part of a team that's transforming how data powers retail, this is your opportunity.
Your Role (Key Responsibilities):
• Design, build, and optimise robust data pipelines using PySpark, Spark SQL, and Databricks to ingest, transform, and enrich data from a variety of sources.
• Translate business requirements into scalable and performant data engineering solutions, working closely with squad members …
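To make the PySpark/Spark SQL combination above concrete, one common pattern is to register DataFrames as temp views and express the enrichment join declaratively in Spark SQL. A hedged sketch follows; the paths, views, and columns (sales, stores, amount) are invented for illustration.

```python
# Hypothetical enrichment step mixing the DataFrame API with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

sales = spark.read.parquet("/data/sales")    # fact data (invented path)
stores = spark.read.parquet("/data/stores")  # dimension data (invented path)

sales.createOrReplaceTempView("sales")
stores.createOrReplaceTempView("stores")

# The join and aggregation read declaratively in Spark SQL.
enriched = spark.sql("""
    SELECT s.store_id, st.region, SUM(s.amount) AS revenue
    FROM sales AS s
    JOIN stores AS st ON s.store_id = st.store_id
    GROUP BY s.store_id, st.region
""")

enriched.write.mode("overwrite").parquet("/data/sales_enriched")
```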
… skills
• Familiarity with NoSQL databases such as MongoDB
• Experience with various messaging technologies such as Kafka
• Cloud certifications, including AWS Developer Associate and AWS Solutions Architect Associate
• Experience with PySpark
• Good understanding of event-based architecture
• AI/ML field knowledge and trends
• Experience with Java and Big Data technologies will be a strong plus
About Us: J.P. Morgan is …
… field such as Computer Science, Mathematics, or Statistics, or a strong quantitative and software background.
SKILLS
• 6+ years of hands-on experience in Python and 3+ years of hands-on experience in PySpark
• 6+ years of hands-on experience with advanced SQL queries (analytical functions), including writing and optimizing highly efficient SQL queries
• Proven ability to reconcile technical and business …
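"Advanced SQL queries (analytical functions)" in listings like this usually refers to window functions, with running totals and rankings as the canonical cases. A small illustrative example run through PySpark; the txns view and its columns are invented for the sketch.

```python
# Hypothetical example of an analytical (window) function run through Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.createDataFrame(
    [("a", "2024-01-01", 100.0), ("a", "2024-01-02", 150.0), ("b", "2024-01-01", 80.0)],
    ["account", "day", "amount"],
).createOrReplaceTempView("txns")

# SUM(...) OVER a per-account window gives a running total without self-joins.
spark.sql("""
    SELECT account, day, amount,
           SUM(amount) OVER (PARTITION BY account ORDER BY day) AS running_total
    FROM txns
""").show()
```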
London, England, United Kingdom (Hybrid / WFH Options)
Ekimetrics
… data products incrementally and integrating and managing data sets from multiple sources.
• 4+ years of hands-on experience in key data management technologies, including but not limited to Python, SQL, PySpark, Sqoop, etc.
• Experience working on use cases for different file formats and database management systems, including NoSQL databases
• Conceptual understanding of data management, including governance processes and platforms, data …
At Rockstar Games, we create world-class entertainment experiences. Become part of a team working on some of the most rewarding, large-scale creative projects to be found in any entertainment medium - all within an inclusive, highly-motivated environment where …