Slough, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
data warehouses and data lakes. Expertise in GCP data services including BigQuery, Composer, Dataform, Dataproc, and Pub/Sub. Strong programming experience with Python, PySpark, and SQL. Hands-on experience with data modelling, ETL processes, and data quality frameworks. Proficiency with BI/reporting tools such as Looker or …
added flexibility for diverse migration and integration projects. Prior experience with tools such as MuleSoft, Boomi, Informatica, Talend, SSIS, or custom scripting languages (Python, PySpark, SQL) for data extraction and transformation. Prior experience with data warehousing and data modelling (Star Schema or Snowflake Schema). Skilled in security frameworks …
Penryn, England, United Kingdom Hybrid / WFH Options
Aspia Space
availability and integrity.
• 3+ years of experience in data engineering, data architecture, or similar roles.
• Expert proficiency in Python, including popular data libraries (Pandas, PySpark, NumPy, etc.).
• Strong experience with AWS services, specifically S3, Redshift, and Glue (Athena a plus).
• Solid understanding of applied statistics.
• Hands-on experience …
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
AAA grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark, and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience with Azure and Databricks …
Slough, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
team solving real-world trading challenges? What We’re Looking For: 8+ years of professional experience in Python application development. Solid knowledge of Pandas, PySpark, and modern testing (PyTest). Strong background in Azure cloud services (Databricks, ADF, Key Vaults, etc.). Familiarity with DevOps, CI/CD pipelines, and Agile …
houses. Advanced understanding and experience with file storage layer management in data lake environments, including Parquet and Delta file formats. Solid experience with Spark (PySpark) and data processing techniques. Solid understanding of and experience with Azure Synapse tools and services. Some knowledge of Python preferred. Strong analytical skills …
architectures with a focus on automation, performance tuning, cost optimisation, and system reliability. Proven proficiency in programming languages such as Python, T-SQL, and PySpark, with practical knowledge of test-driven development. Demonstrated capability in building secure, scalable data solutions on Azure with an in-depth understanding of data …
or a related technical field. Experience with object-oriented programming preferred. General familiarity with some of the technologies we use: Python, Apache Spark/PySpark, Java/Spring, Amazon Web Services, SQL and relational databases. Understanding of data structures and algorithms. Interest in data modeling, visualisation, and ETL pipelines. Knowledge …
Derby, England, United Kingdom Hybrid / WFH Options
Cooper Parry
warehouse, Lakehouse, Data Lake. Hands-on experience with Power BI, semantic modelling, and DAX. Strong SQL and data manipulation skills. Exposure to Python and PySpark is required. Experience working with open data formats like Delta Lake, Parquet, JSON, CSV. Familiarity with CI/CD pipelines, version control (e.g., Git) …
Work closely with data scientists and stakeholders Follow CI/CD and code best practices (Git, testing, reviews) Tech Stack & Experience: Strong Python (Pandas), PySpark, and SQL skills Cloud data tools (Azure Data Factory, Synapse, Databricks, etc.) Data integration experience across formats and platforms Strong communication and data literacy …
at scale. Hands-on expertise in core GCP data services such as BigQuery, Composer, Dataform, Dataproc, and Pub/Sub. Strong programming skills in PySpark, Python, and SQL. Proficiency in ETL processes, data mining, and data storage principles. Experience with BI and data visualisation tools, such as Looker or …
South East London, England, United Kingdom Hybrid / WFH Options
Recruit with Purpose
their data. Overview of responsibilities in the role: Design and maintain scalable, high-performance data pipelines using Azure Data Platform tools such as Databricks (PySpark), Data Factory, and Data Lake Gen2. Develop curated data layers (bronze, silver, gold) optimised for analytics, reporting, and AI/ML, ensuring they meet …
Manchester, England, United Kingdom Hybrid / WFH Options
Matillion Limited
to engage both technical and non-technical stakeholders. Desirable Criteria Experience with Matillion products and competitive ETL solutions. Knowledge of big data technologies (Spark, PySpark), data lakes, and MPP databases (Teradata, Vertica, Netezza). Familiarity with version control tools such as Git, and experience with Python. Degree in Computer …
Experience: Strong proficiency in SQL and Python. Experience in cloud data solutions (AWS, GCP, or Azure). Experience in AI/ML. Experience with PySpark or equivalent. Strong problem-solving and analytical skills. Excellent attention to detail. Ability to manage stakeholder relationships effectively. Strong communication skills and a collaborative …
backend development focused on data platforms. Strong hands-on experience with AWS services, especially Glue, Athena, Lambda, and S3. Proficient in Python (ideally PySpark) and modular SQL for transformations and orchestration. Solid grasp of data modeling (partitioning, file formats like Parquet, etc.). Comfort with CI/CD …
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Citigroup Inc
in Java. Some knowledge in Hadoop, Hive, SQL, Spark. Understanding of Unix shell scripting. CI/CD pipelines. Maven or Gradle experience. Predictive analytics (desirable). PySpark (desirable). Trade Surveillance domain knowledge (desirable). Education: Bachelor’s/University degree or equivalent experience. What we’ll provide you: By joining Citi, you …
Stockport, England, United Kingdom Hybrid / WFH Options
Movera
experience working with Azure. Git/DevOps Repos experience. Demonstration of problem-solving ability. Synapse Analytics or similar experience - desirable. Visual Files experience - desirable. PySpark/Python experience - desirable. PowerShell experience - desirable. What we offer: We aim to reward your hard work generously. You'll be greeted in our …