Greater London, England, United Kingdom Hybrid / WFH Options
CommuniTech Recruitment Group
Data Developer. C# + (either ClickHouse, SingleStore, Rockset, TimescaleDB) + open standard data lake (e.g. Iceberg or Delta tables, Apache Spark, column store). £700/Day. 6 month rolling. Hybrid. My client is a top-tier commodities trading firm that is looking for a strong C# Data Engineer. The key things in summary are: Strong experience of .NET … Have you worked with an analytical database such as ClickHouse, SingleStore, Rockset or TimescaleDB? Have you got any experience working with an open standard data lake (e.g. Iceberg or Delta tables, Apache Spark, column store)? Have you got any experience processing (e.g. ingesting into a database) a large amount of data (in batches would be fine)? Required Skills and Experience More ❯
Greetings! Adroit People is currently hiring Title: AWS Data Engineer Location: London, UK Work Mode: Hybrid Duration: 12 Months FTC Keywords: AWS, PYTHON, Glue, EMR Serverless, Lambda, S3, Spark Job Spec: WHAT YOU'LL BE DOING: We are building the next-generation data platform at FTSE Russell and we want you to shape it with us. Your role … will involve: Designing and developing scalable, testable data pipelines using Python and Apache Spark Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing Contributing to the development of a lakehouse architecture using Apache Iceberg Collaborating with business teams … ideally with type hints, linters, and tests like pytest) Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines Has experience with or is eager to learn Apache Spark for large-scale data processing Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR) Enjoys learning the business context and working closely with stakeholders More ❯
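For illustration only, a minimal sketch of the kind of batch pipeline this role describes: a PySpark job that reads raw files from S3 and appends them to an Apache Iceberg table. The bucket, catalog, column, and table names below are hypothetical, not details taken from the advert, and the session is assumed to be launched with the Iceberg runtime and a Glue catalog configured as "glue_catalog".

from pyspark.sql import SparkSession, functions as F

# Assumes an Iceberg-enabled Spark session with a catalog named "glue_catalog"
# (illustrative configuration, not specified in the advert).
spark = SparkSession.builder.appName("daily-trades-batch").getOrCreate()

# Read raw CSV files landed in S3 (placeholder path)
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/trades/")

# Light cleaning and typing before publishing to the lakehouse
cleaned = (
    raw.withColumn("trade_date", F.to_date("trade_date"))
       .withColumn("quantity", F.col("quantity").cast("double"))
       .dropDuplicates(["trade_id"])
)

# Append into an Iceberg table registered in the catalog (table name assumed)
cleaned.writeTo("glue_catalog.trading.trades").append()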
while staying close to the code. Perfect if you want scope for growth without going “post-technical.” What you’ll do Design and build modern data platforms using Databricks, Apache Spark, Snowflake, and cloud-native services (AWS, Azure, or GCP). Develop robust pipelines for real-time and batch data ingestion from diverse and complex sources. Model and … for Solid experience as a Senior/Lead Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong understanding of data modelling, orchestration, and automation. Hands More ❯
in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management Proficiency in Python, SQL, Scala, or Java Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory Strong understanding of data architecture principles, data modelling, and data governance Experience with cloud-based data platforms, including Azure and More ❯
experience in a leadership or technical lead role, with official line management responsibility. Strong experience with modern data stack technologies, including Python, Snowflake, AWS (S3, EC2, Terraform), Airflow, dbt, Apache Spark, Apache Iceberg, and Postgres. Skilled in balancing technical excellence with business priorities in a fast-paced environment. Strong communication and stakeholder management skills, able to translate More ❯
Greetings! Adroit People is currently hiring Title: Senior AWS Data Engineer Location: London, UK Work Mode: Hybrid - 3 DAYS/WEEK Duration: 12 Months FTC Keywords: AWS, PYTHON, APACHE SPARK, ETL Job Spec: We are building the next-generation data platform at FTSE Russell — and we want you to shape it with us. Your role will involve: ∙ Designing … and developing scalable, testable data pipelines using Python and Apache Spark ∙ Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 ∙ Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing ∙ Contributing to the development of a lakehouse architecture using Apache Iceberg ∙ Collaborating with business teams to translate requirements More ❯
two of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Tata Consultancy Services
with AWS Cloud-native data platforms, including: AWS Glue, Lambda, Step Functions, Athena, Redshift, S3, CloudWatch AWS SDKs, Boto3, and serverless architecture patterns Strong programming skills in Python and Apache Spark Proven experience in Snowflake data engineering, including: Snowflake SQL, Snowpipe, Streams & Tasks, and performance optimization Integration with AWS services and orchestration tools Expertise in data integration patterns More ❯
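As an illustration of the Spark-to-Snowflake integration this role touches on, below is a minimal PySpark sketch using the Spark-Snowflake connector. Every path, table name, and connection value is a placeholder rather than a detail from the advert, and the connector package is assumed to be on the classpath.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-to-snowflake").getOrCreate()

# Read a curated dataset from S3 (placeholder path)
orders = spark.read.parquet("s3://example-bucket/curated/orders/")

# Placeholder Snowflake connection options; in practice these would come from
# a secrets manager rather than being hard-coded.
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

# Append the DataFrame into a Snowflake table (table name assumed)
(
    orders.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS")
    .mode("append")
    .save()
)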
. Experience with orchestration tools (Airflow/Prefect) and cloud platforms (AWS preferred). Proven experience handling large-scale biological or multi-omics datasets. Bonus: exposure to distributed computing (Spark, Databricks, Kubernetes) or data cataloguing systems. You Are Curious and scientifically minded, with a strong understanding of biological data workflows. Collaborative and able to communicate effectively across computational and More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Involved Solutions
driven decision-making. Responsibilities for the Senior Data Engineer: Design, build, and maintain scalable data pipelines and architectures, ensuring reliability, performance, and best-in-class engineering standards Leverage Databricks, Spark, and modern cloud platforms (Azure/AWS) to deliver clean, high-quality data for analytics and operational insights Lead by example on engineering excellence, mentoring junior engineers and driving … customer data Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming Strong programming skills in Python with experience in software engineering principles, version control, unit testing More ❯
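For illustration of the Delta Lake concepts mentioned above, here is a minimal PySpark sketch of a batch upsert (merge) into an existing Delta table; the paths and the customer_id key are hypothetical, not taken from the advert, and delta-spark is assumed to be installed.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New records arriving from an upstream source (placeholder path)
updates = spark.read.parquet("/mnt/bronze/customers_daily/")

# Upsert into the curated Delta table so the latest record wins per key
target = DeltaTable.forPath(spark, "/mnt/silver/customers")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)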
London, South East, England, United Kingdom Hybrid / WFH Options
CV TECHNICAL LTD
platform. Candidate Profile: Proven experience as a Data Engineer, with strong expertise in designing and managing large-scale data systems. Hands-on proficiency with modern data technologies such as Spark, Kafka, Airflow, or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
skills, and the ability to think critically and analytically Extensive experience in documentation and data dictionaries Knowledge of big data technologies and distributed computing frameworks such as Hadoop and Spark Excellent communication skills to effectively collaborate with cross-functional teams and present insights to business stakeholders Please can you send me a copy of your CV if you're More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
Skills Strong experience with Azure data services (Data Factory, Synapse, Blob Storage, etc.). Proficiency in SQL for data manipulation, transformation, and performance optimisation. Hands-on experience with Databricks (Spark, Delta Lake, notebooks). Solid understanding of data architecture principles and cloud-native design. Experience working in consultancy or client-facing roles is highly desirable. Familiarity with CI/ More ❯
data modeling (star schema, snowflake schema). Version Control Practical experience with Git (branching, merging, pull requests). Preferred Qualifications (A Plus) Experience with a distributed computing framework like Apache Spark (using PySpark). Familiarity with cloud data services (AWS S3/Redshift, Azure Data Lake/Synapse, or Google BigQuery/Cloud Storage). Exposure to workflow … orchestration tools (Apache Airflow, Prefect, or Dagster). Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. More ❯
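As a small illustration of the workflow orchestration tools named above, here is a minimal Apache Airflow (2.4+) DAG sketch; the DAG id, task names, and placeholder task bodies are hypothetical and not taken from the advert.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task implementations; real tasks would extract, transform, and
# load data between whatever systems a given pipeline actually uses.
def extract():
    print("extract step")

def transform():
    print("transform step")

def load():
    print("load step")

with DAG(
    dag_id="example_daily_etl",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # 'schedule' argument as in Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in sequence
    extract_task >> transform_task >> load_task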
of professional experience in data engineering roles, preferably for a customer facing data product Expertise in designing and implementing large-scale data processing systems with data tooling such as Spark, Kafka, Airflow, dbt, Snowflake, Databricks, or similar Strong programming skills in languages such as SQL, Python, Go or Scala Demonstrable use and an understanding of effective use of AI More ❯
of data modelling and data warehousing concepts Familiarity with version control systems, particularly Git Desirable Skills: Experience with infrastructure as code tools such as Terraform or CloudFormation Exposure to Apache Spark for distributed data processing Familiarity with workflow orchestration tools such as Airflow or AWS Step Functions Understanding of containerisation using Docker Experience with CI/CD pipelines More ❯