City of London, London, United Kingdom (Hybrid/Remote Options)
Tata Consultancy Services
…with AWS Cloud-native data platforms, including: AWS Glue, Lambda, Step Functions, Athena, Redshift, S3, CloudWatch, AWS SDKs (Boto3), and serverless architecture patterns. Strong programming skills in Python and Apache Spark. Proven experience in Snowflake data engineering, including: Snowflake SQL, Snowpipe, Streams & Tasks, and performance optimization. Integration with AWS services and orchestration tools. Expertise in data integration patterns, ETL…
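For illustration, a minimal Boto3 sketch of the serverless pattern named above: triggering an AWS Glue job from Python and polling its state. The job name, region, and arguments are hypothetical placeholders, not details from the listing.

```python
import boto3

# Trigger an AWS Glue job and check its run state with Boto3.
glue = boto3.client("glue", region_name="eu-west-2")

run = glue.start_job_run(
    JobName="nightly-sales-etl",          # hypothetical Glue job name
    Arguments={"--target_date": "2024-01-01"},
)

status = glue.get_job_run(JobName="nightly-sales-etl", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])    # e.g. RUNNING, SUCCEEDED, FAILED
```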
…Azure. AWS experience is considered; however, Azure exposure is essential. Data Warehousing: Proven expertise with Snowflake – schema design, performance tuning, data ingestion, and security. Workflow Orchestration: Production experience with Apache Airflow (Prefect, Dagster or similar), including authoring DAGs, scheduling workloads and monitoring pipeline execution. Data Modeling: Strong skills in dbt, including writing modular SQL transformations, building data models, and…
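As a rough sketch of what "authoring DAGs and scheduling workloads" in Apache Airflow looks like in practice (task ids and commands are illustrative; Airflow versions before 2.4 use schedule_interval rather than schedule):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A two-step daily DAG: ingest raw data, then run the dbt transformation layer.
with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(task_id="ingest", bash_command="python ingest.py")
    transform = BashOperator(task_id="transform", bash_command="dbt run")

    ingest >> transform  # ingestion must finish before transformation starts
```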
…and cloud data platforms (AWS, Azure, or GCP). Proficiency in Python and SQL for data manipulation and transformation. Experience with ETL/ELT development and orchestration tools (e.g., Apache Airflow, dbt, Prefect). Knowledge of data modelling, data warehousing, and lakehouse architectures. Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code. Strong problem-solving skills…
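A minimal Prefect sketch of the ELT orchestration pattern mentioned above, assuming Prefect 2.x; the task bodies are stand-ins for real extract and load logic:

```python
from prefect import flow, task

@task(retries=2)
def extract() -> list[dict]:
    # Stand-in for an API or database read.
    return [{"id": 1, "amount": 42.0}]

@task
def load(rows: list[dict]) -> None:
    # Stand-in for a warehouse write.
    print(f"loading {len(rows)} rows")

@flow(log_prints=True)
def elt_pipeline():
    load(extract())

if __name__ == "__main__":
    elt_pipeline()
```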
…frameworks, and clear documentation within your pipelines. Experience in the following areas is not essential but would be beneficial: Data Orchestration Tools: familiarity with modern workflow management tools like Apache Airflow, Prefect, or Dagster. Modern Data Transformation: experience with dbt (Data Build Tool) for managing the transformation layer of the data warehouse. BI Tool Familiarity: an understanding of how…
City of London, London, United Kingdom (Hybrid/Remote Options)
Solirius Reply
…have framework experience within either Flask, Tornado or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies, such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience, such as Python…
…Science, Computer Science, or a related field. 5+ years of experience in data engineering and data quality. Strong proficiency in Python/Java, SQL, and data processing frameworks including Apache Spark. Knowledge of machine learning and its data requirements. Attention to detail and a strong commitment to data integrity. Excellent problem-solving skills and ability to work in a…
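As an illustration of the data-quality work described above, a short PySpark sketch that gates a dataset on null and duplicate checks; the path and column names are assumed for the example:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://bucket/trades/")  # hypothetical input path

# Count rows with missing keys and rows that duplicate an existing key.
null_ids = df.filter(F.col("trade_id").isNull()).count()
dupes = df.count() - df.dropDuplicates(["trade_id"]).count()

# Fail fast rather than letting bad data flow downstream.
if null_ids or dupes:
    raise ValueError(f"DQ failure: {null_ids} null ids, {dupes} duplicates")
```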
…e.g., KDB, OneTick) and Parquet-based file storage to optimize data access and retrieval. Design scalable cloud-native solutions (AWS preferred) for market data ingestion and distribution. (Bonus) Integrate Apache Iceberg for large-scale data lake management and versioned data workflows. Collaborate with trading and engineering teams to define data requirements and deliver production-grade solutions. Implement robust data…
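For context, a minimal PyArrow sketch of the partitioned Parquet storage pattern the listing refers to; the schema and paths are invented for the example. Partitioning by date lets readers prune to the days they need instead of scanning everything:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Toy market-data table; a real feed would have many more columns.
table = pa.table({
    "date": ["2024-01-02", "2024-01-02"],
    "symbol": ["VOD.L", "BARC.L"],
    "price": [68.4, 153.2],
})

# Write a dataset partitioned by date: ticks/date=2024-01-02/...
pq.write_to_dataset(table, root_path="ticks", partition_cols=["date"])

# Read back only one partition via a filter, skipping the rest on disk.
jan2 = pq.read_table("ticks", filters=[("date", "=", "2024-01-02")])
print(jan2.num_rows)
```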
…or internship experience within financial services or technology. Exposure to Java. Experience managing on-premise or hybrid data infrastructure (e.g. Hadoop). Knowledge of workflow orchestration tools such as Apache Airflow. Postgraduate degree in Computer Science, Data Science, or related field. Benefits: comprehensive health, dental, and vision coverage; flexible approach to time off and sick leave; discretionary bonus…
City of Westminster, London, United Kingdom (Hybrid/Remote Options)
Additional Resources
…of Kubernetes, Docker, and cloud-native data ecosystems. Demonstrable experience with Infrastructure as Code tools (Terraform, Ansible). Hands-on experience with PostgreSQL and familiarity with lakehouse technologies (e.g. Apache Parquet, Delta Tables). Exposure to Spark, Databricks, and data lake/lakehouse environments. Understanding of Agile development methods, CI/CD pipelines, GitHub, and automated testing. Practical experience…
…and MCP Server integrations for data-driven automation. Understanding of Platform as a Product, Backstage developer portals, and the developer experience lifecycle. Experience with data pipeline orchestration using Apache Airflow or Prefect. A passion for continuous learning and staying ahead of modern DevOps, Platform Engineering, and GitOps practices.
City of London, London, United Kingdom (Hybrid/Remote Options)
Singular Recruitment
Advanced-level Python for data applications and high proficiency in SQL (query tuning, complex joins). Hands-on experience designing and deploying ETL/ELT pipelines using Google Cloud Dataflow (Apache Beam) or similar tools. Proficiency in data architecture, data modeling, and scalable storage design. Solid engineering practices: Git and CI/CD for data systems. Highly Desirable Skills: GCP…
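A minimal Apache Beam sketch of the kind of pipeline that runs on Google Cloud Dataflow; executed locally it falls back to the DirectRunner, and the file paths are placeholders:

```python
import apache_beam as beam

# Read lines, normalise them, drop empties, write results back out.
with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("events.jsonl")
        | "Normalise" >> beam.Map(lambda line: line.strip().lower())
        | "Keep non-empty" >> beam.Filter(bool)
        | "Write" >> beam.io.WriteToText("out/events")
    )
```

The same pipeline is submitted to Dataflow by passing runner and project options rather than changing the pipeline code itself.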
City of London, London, United Kingdom (Hybrid/Remote Options)
Peaple Talent
…a focus on having delivered in Microsoft Azure. Strong experience designing and delivering data solutions in Databricks. Proficient with SQL and Python. Experience using Big Data technologies such as Apache Spark or PySpark. Great communication skills, engaging effectively with senior stakeholders. Nice to haves: Azure Data Engineering certifications; Databricks certifications. What's in it for you: 📍 Location: London (Hybrid…
City of London, London, United Kingdom (Hybrid/Remote Options)
Peaple Talent
…delivered solutions in Google Cloud Platform (GCP). Strong experience designing and delivering data solutions using BigQuery. Proficient in SQL and Python. Experience working with Big Data technologies such as Apache Spark or PySpark. Excellent communication skills, with the ability to engage effectively with senior stakeholders. Nice to haves: GCP Data Engineering certifications; BigQuery or other GCP tool certifications. What…
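For illustration, a short sketch of querying BigQuery from Python with the official google-cloud-bigquery client; the project, dataset, and table names are hypothetical:

```python
from google.cloud import bigquery

# Uses application-default credentials for authentication.
client = bigquery.Client()

query = """
    SELECT status, COUNT(*) AS n
    FROM `my_project.sales.orders`
    GROUP BY status
    ORDER BY n DESC
"""

# Run the query and iterate over result rows.
for row in client.query(query).result():
    print(row.status, row.n)
```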
City of London, London, United Kingdom (Hybrid/Remote Options)
Advanced Resource Managers
…warehouse knowledge, Redshift and Snowflake preferred. Working with IaC – Terraform and CloudFormation. Working understanding of scripting languages including Python and Shell. Experience working with streaming technologies inc. Kafka and Apache Flink. Experience working with ETL environments. Experience working with the Confluent Cloud platform.
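As a sketch of the streaming consumption mentioned above, a minimal confluent-kafka consumer loop; the broker, topic, and group id are placeholders, and a real Confluent Cloud connection would additionally need SASL credentials:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "etl-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])  # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue  # no message within the timeout
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(msg.key(), msg.value())  # hand off to the ETL step here
finally:
    consumer.close()
```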
City of London, London, United Kingdom (Hybrid/Remote Options)
Hexegic
…to create, test and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: Proficiency in Python, with experience in Apache Spark and PySpark. Previous experience with data analytics software. Ability to scope new integrations and translate user requirements into technical specifications. What's in it for you? Base salary…
…consulting. 8+ years leading technical teams in data engineering or analytics. Expertise in modern data platforms such as Databricks, Snowflake, GCP, AWS, or Azure. Strong understanding of tools like Apache Spark, Kafka, and Kubernetes. Deep knowledge of data governance, strategy, and privacy regulations (GDPR, etc.). Strategic mindset with the ability to balance technical depth and business insight. Passion for…
…teams to build scalable data pipelines and contribute to digital transformation initiatives across government departments. Key Responsibilities: Design, develop and maintain robust data pipelines using PostgreSQL and Airflow or Apache Spark. Collaborate with frontend/backend developers using Node.js or React. Implement best practices in data modelling, ETL processes and performance optimisation. Contribute to containerised deployments (Docker/Kubernetes…
City of London, London, United Kingdom (Hybrid/Remote Options)
CBSbutler Holdings Limited trading as CBSbutler
…operational dashboards.
- Advanced proficiency with Microsoft BI Stack: SSIS, SSRS
- Strong SQL Server skills and SQL querying experience
- Hands-on experience with Google Cloud Platform tools including: BigQuery; Composer; Apache Airflow; Stream; Informatica; Vertex AI
- Tableau dashboard development and reporting
- Python programming for data analysis
- Data modelling (warehouse, lakehouse, medallion architecture)
- Understanding of financial and insurance data models
- Insurance…
Employment Type: Permanent
Salary: £60,000 - £70,000/annum + Bonus + Full Benefits
…and Responsibilities: Develop and maintain high-performance, low-latency Java-based systems for front office trading or pricing platforms. Build reactive systems using Kafka Streams, Akka, Eclipse Vert.x, or Apache Flink. Utilize multithreading, concurrency models, and Executor Services to optimize system performance and throughput. Write clean, efficient, and maintainable code using functional programming paradigms in Java. Follow and…
City of London, London, United Kingdom (Hybrid/Remote Options)
Areti Group | B Corp™
…and government organisations, delivering real-world innovation powered by data and technology. 🔧 Tech Stack & Skills We're Looking For: Palantir, Azure Databricks, Microsoft Azure, Python, Docker & Kubernetes, Linux, Apache Tools, Data Pipelines, IoT (Internet of Things), Scrum/Agile Methodologies. ✅ Ideal Candidate: Already DV Cleared or at least SC. Strong communication skills – comfortable working directly with clients and…
…assessments and predictive models. Optimize models for performance, scalability, and accuracy. Qualifications: Deep knowledge of neural networks (CNNs, RNNs, LSTMs, Transformers). Strong experience with data tools (Pandas, NumPy, Apache Spark). Solid understanding of NLP algorithms. Experience integrating ML models via RESTful APIs. Familiarity with CI/CD pipelines and deployment automation. Strategic thinking around architecture and trade…
…in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management. Proficiency in Python, SQL, Scala, or Java. Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory. Strong understanding of data architecture principles, data modelling, and data governance. Experience with cloud-based data platforms, including Azure and/or…
City of London, London, United Kingdom (Hybrid/Remote Options)
Deloitte
…to automate admin tasks. Basic awareness of Active Directory concepts and enterprise network concepts. Experience of working with technical support at vendors where required. Desirable skills: Administering a Linux, Apache, MySQL, PHP stack. Working with NuGet packages. Configuring Octopus Deploy. Working in a helpdesk environment. Deliverables: Ensure the day-to-day availability of the service. Provide production support for…