Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Searchability
position, you'll develop and maintain a mix of real-time and batch ETL processes, ensuring accuracy, integrity, and scalability across vast datasets. You'll work with Python, SQL, Apache Spark, and AWS services such as EMR, Athena, and Lambda to deliver robust, high-performance solutions. You'll also play a key role in optimising data pipeline architecture, supporting … Proven experience as a Data Engineer, with Python & SQL expertise Familiarity with AWS services (or equivalent cloud platforms) Experience with large-scale datasets and ETL pipeline development Knowledge of Apache Spark (Scala or Python) beneficial Understanding of agile development practices, CI/CD, and automated testing Strong problem-solving and analytical skills Positive team player with excellent communication … required skills) your application to our client in conjunction with this vacancy only. KEY SKILLS: Data Engineer/Python/SQL/AWS/ETL/Data Pipelines/Apache Spark/EMR/Athena/Lambda/Big Data/Manchester/Hybrid Working
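The batch side of the ETL work described above can be sketched minimally in plain Python. This is a hypothetical illustration, not the client's actual pipeline: the table and function names are invented, and sqlite3 stands in for a warehouse service such as Athena; in practice the transform step would run in Spark and the load target would be S3/EMR.

```python
import sqlite3

def extract(conn):
    # Pull raw order rows from the (hypothetical) source table.
    return conn.execute("SELECT id, amount FROM raw_orders").fetchall()

def transform(rows):
    # Drop malformed records and normalise amounts to pence.
    return [(rid, int(round(amount * 100)))
            for rid, amount in rows
            if amount is not None and amount >= 0]

def load(conn, rows):
    # Idempotent load: INSERT OR REPLACE keeps the pipeline safe to re-run,
    # which matters for the accuracy/integrity requirements in the ad.
    conn.executemany(
        "INSERT OR REPLACE INTO clean_orders (id, amount_pence) VALUES (?, ?)",
        rows,
    )
    conn.commit()

def run_pipeline(conn):
    load(conn, transform(extract(conn)))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL)")
    conn.execute("CREATE TABLE clean_orders (id INTEGER PRIMARY KEY, amount_pence INTEGER)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                     [(1, 9.99), (2, None), (3, 4.50)])
    run_pipeline(conn)
    print(conn.execute("SELECT id, amount_pence FROM clean_orders ORDER BY id").fetchall())
    # → [(1, 999), (3, 450)]
```

The same extract/transform/load split maps directly onto a PySpark job: `extract` becomes a `spark.read`, `transform` a chain of DataFrame operations, and `load` a partitioned write.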
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra) Hands-on knowledge of AWS S3 and associated big data services Extensive experience with big data technologies including Hadoop and Spark for large-scale dataset processing Deep understanding of data security frameworks, encryption protocols, access management and regulatory compliance Proven track record building automated, scalable ETL frameworks and data pipeline
North West London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
knowledge of Kafka, Confluent, and event-driven architecture Hands-on experience with Databricks, Unity Catalog, and Lakehouse architectures Strong architectural understanding across AWS, Azure, GCP, and Snowflake Familiarity with Apache Spark, SQL/NoSQL databases, and programming (Python, R, Java) Knowledge of data visualisation, DevOps principles, and ML/AI integration into data architectures Strong grasp of data
their growth and development Apply agile methodologies (Scrum, pair programming, etc.) to deliver value iteratively Essential Skills & Experience Extensive hands-on experience with programming languages such as Python, Scala, Spark, and SQL Strong background in building and maintaining data pipelines and infrastructure In-depth knowledge of cloud platforms and native cloud services (e.g., AWS, Azure, or GCP) Familiarity with
Strong capability in documenting high-level designs, system workflows, and technical specifications. Deep understanding of architectural patterns (e.g., microservices, layered, serverless). Expert knowledge of big data tools (Kafka, Spark, Hadoop) and major cloud platforms (AWS, Azure, GCP) to inform technology recommendations. Proficiency in Java, C#, Python, or JavaScript/TypeScript. Experience with cloud platforms (AWS, Azure, Google Cloud
Team Valley Trading Estate, Gateshead, Tyne and Wear, England, United Kingdom
Nigel Wright Group
include: 3+ years' experience in data engineering roles, delivering integrated data-driven applications Hands-on experience with Microsoft Fabric components (Pipelines, Lakehouse, Warehouses) Proficient in T-SQL and either Apache Spark or Python for data engineering Comfortable working across cloud platforms, with emphasis on Microsoft Azure Familiarity with REST APIs and integrating external data sources into applications
Research/Statistics or other quantitative fields. Experience in NLP, image processing and/or recommendation systems. Hands-on experience in data engineering, working with big data frameworks like Spark/Hadoop. Experience in data science for e-commerce and/or OTA. We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance
and listed on the London Stock Exchange. With 3,000 employees and 32 offices in 12 countries, we're a business with lots of opportunity for people with talent, spark and lots of ambition. If you want to build a great career with a company that prioritises strong values - such as integrity and courage - where our people always pull
Oversee pipeline performance, address issues promptly, and maintain comprehensive data documentation. What You'll Bring Technical Expertise: Proficiency in Python and SQL; experience with data processing frameworks such as Airflow, Spark, or TensorFlow. Data Engineering Fundamentals: Strong understanding of data architecture, data modelling, and scalable data solutions. Backend Development: Willingness to develop proficiency in backend technologies (e.g., Python with Django … to support data pipeline integrations. Cloud Platforms: Familiarity with AWS or Azure, including services like Apache Airflow, Terraform, or SageMaker. Data Quality Management: Experience with data versioning and quality assurance practices. Automation and CI/CD: Knowledge of build and deployment automation processes. Experience within MLOps A 1st class data degree from one of the UK's top 15 universities
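The Airflow-style orchestration this role calls for boils down to running tasks in dependency order. As a hedged sketch (the task names are hypothetical, and the standard library's `graphlib` stands in for Airflow's scheduler), the core idea looks like this:

```python
from graphlib import TopologicalSorter

def run_dag(tasks, deps):
    """Run callables in dependency order, Airflow-style.

    tasks: mapping of task name -> zero-argument callable
    deps:  mapping of task name -> set of upstream task names
    """
    # static_order() yields each task only after all of its upstreams.
    order = list(TopologicalSorter(deps).static_order())
    results = {name: tasks[name]() for name in order}
    return order, results

if __name__ == "__main__":
    log = []
    tasks = {
        "extract": lambda: log.append("extract"),
        "transform": lambda: log.append("transform"),
        "load": lambda: log.append("load"),
    }
    # transform depends on extract; load depends on transform.
    deps = {"transform": {"extract"}, "load": {"transform"}}
    order, _ = run_dag(tasks, deps)
    print(order)  # → ['extract', 'transform', 'load']
```

In real Airflow the same dependencies are declared with `extract >> transform >> load` inside a DAG definition; the scheduler then adds retries, backfills, and the monitoring the ad mentions.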
data architects, analysts, and stakeholders, you'll help unlock the value of data across the organisation. Key Responsibilities: Develop and optimise data pipelines using Azure Data Factory, Databricks, and Spark Design and implement scalable data solutions in Azure cloud environments Collaborate with cross-functional teams to understand data requirements Ensure data quality, integrity, and security across platforms Support the … models and advanced analytics Monitor and troubleshoot data workflows and performance issues Requirements: Proven experience with Azure Data Services (Data Factory, Databricks, Synapse) Strong knowledge of Python, SQL, and Spark Experience with data modelling, ETL/ELT processes, and pipeline orchestration Familiarity with CI/CD and DevOps practices in a data engineering context Excellent communication and stakeholder engagement
you prefer Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with
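The "asynchronous processes for high-volume data" mentioned above usually mean bounded-concurrency fan-out over non-blocking I/O. A minimal sketch with `asyncio` (the record shape and lookup are hypothetical; `asyncio.sleep(0)` stands in for a real network call such as a Cassandra read):

```python
import asyncio

async def fetch_record(rid):
    # Stand-in for a non-blocking I/O call (e.g. a keyspace lookup).
    await asyncio.sleep(0)
    return {"id": rid, "value": rid * 2}

async def fetch_batch(ids, concurrency=8):
    # A semaphore bounds in-flight requests so a burst of high-volume
    # traffic cannot overwhelm the downstream store.
    sem = asyncio.Semaphore(concurrency)

    async def guarded(rid):
        async with sem:
            return await fetch_record(rid)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(guarded(i) for i in ids))

if __name__ == "__main__":
    records = asyncio.run(fetch_batch(range(5)))
    print([r["value"] for r in records])  # → [0, 2, 4, 6, 8]
```

The same pattern carries over to the Scala/Java side of this stack (Futures with a bounded execution context, or reactive streams with backpressure); the design choice in all cases is to cap concurrency explicitly rather than spawn one task per record.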