London, England, United Kingdom Hybrid / WFH Options
Noir
Responsibilities: Design, build, and maintain robust data pipelines. Work with Python and SQL for data processing, transformation, and analysis. Leverage a wide range of GCP services, including Cloud Composer (Apache Airflow), BigQuery, Cloud Storage, Dataflow, Pub/Sub, Cloud Functions, and IAM. Design and implement data models and ETL processes. Apply infrastructure-as-code practices using tools like Terraform. Ensure …
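For illustration, a minimal sketch of the kind of Cloud Composer (Airflow) pipeline this role describes, loading files from Cloud Storage into BigQuery. The DAG id, bucket, dataset, and table names are all hypothetical, and the load operator is just one reasonable choice.

```python
# Hypothetical sketch: a daily Cloud Composer (Airflow) DAG that loads CSV
# exports from Cloud Storage into BigQuery. Bucket, dataset, and table names
# are placeholders, not details taken from the advert.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="daily_events_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_events = GCSToBigQueryOperator(
        task_id="load_events_to_bq",
        bucket="example-raw-bucket",                 # placeholder bucket
        source_objects=["events/{{ ds }}/*.csv"],    # partitioned by execution date
        destination_project_dataset_table="analytics.events",  # placeholder table
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_APPEND",
    )
```

In a setup like this, the bucket, dataset, and the Composer environment itself would typically be provisioned with Terraform, which is where the advert's infrastructure-as-code requirement fits in.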
in Computer Science, Data Science, Engineering, or a related field. Strong programming skills in languages such as Python, SQL, or Java. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka) is a plus. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database systems (e.g., MySQL, PostgreSQL, MongoDB) and …
ideally with experience using data processing frameworks such as Kafka, NoSQL, Airflow, TensorFlow, or Spark. Finally, experience with cloud platforms like AWS or Azure, including data services such as Apache Airflow, Athena, or SageMaker, is essential. This is a fantastic opportunity for a Data Engineer to join a rapidly expanding start-up at an important time where you will …
Newcastle upon Tyne, Tyne and Wear, United Kingdom
Randstad Technologies Recruitment
institutions, alongside a proven record of relevant professional experience." Proven experience in a data specialist role with a passion for solving data-related problems. Expertise in SQL, Python, and Apache Spark, with experience working in a production environment. Familiarity with Databricks and Microsoft Azure is a plus. Financial Services experience is a bonus, but not required. Strong verbal and …
data pipelines and systems. Qualifications & Skills: 5+ years' experience with Python programming for data engineering tasks. Strong proficiency in SQL and database management. Hands-on experience with Databricks and Apache Spark. Familiarity with the Azure cloud platform and related services. Knowledge of data security best practices and compliance standards. Excellent problem-solving and communication skills. Multi-Year Project - Flexible Start …
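As a rough illustration of the Databricks and Apache Spark work mentioned above, the sketch below aggregates an assumed raw orders table with PySpark; the table and column names are placeholders, not details from the advert.

```python
# Hypothetical sketch: a PySpark batch transformation of the sort run on
# Databricks. Table and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

orders = spark.read.table("raw.orders")  # assumed source table

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("revenue"),
    )
)

# Persist the aggregate as a managed table for downstream consumers.
daily_revenue.write.mode("overwrite").saveAsTable("analytics.daily_revenue")
```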
Leeds, England, United Kingdom Hybrid / WFH Options
Fruition Group
best practices for data security and compliance. Collaborate with stakeholders and external partners. Skills & Experience: Strong experience with AWS data technologies (e.g., S3, Redshift, Lambda). Proficient in Python, Apache Spark, and SQL. Experience in data warehouse design and data migration projects. Cloud data platform development and deployment. Expertise across data warehouse and ETL/ELT development in AWS …
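A hedged sketch of the AWS side of such a role: a Lambda handler that tidies a CSV landing in S3 and writes it to a curated bucket. The bucket names, event shape (an S3 put notification), and cleaning rules are assumptions for illustration, not details from the advert.

```python
# Hypothetical sketch: an AWS Lambda handler that cleans a CSV arriving in S3
# and writes the result to a curated prefix.
import csv
import io

import boto3

s3 = boto3.client("s3")
CURATED_BUCKET = "example-curated-bucket"  # placeholder

def handler(event, context):
    # Assumed S3 put-notification event shape.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))

    # Illustrative cleaning rules: drop rows without a customer id and
    # normalise country codes.
    cleaned = [
        {**row, "country": row["country"].upper()}
        for row in rows
        if row.get("customer_id")
    ]
    if not cleaned:
        return {"rows_in": len(rows), "rows_out": 0}

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=cleaned[0].keys())
    writer.writeheader()
    writer.writerows(cleaned)

    s3.put_object(Bucket=CURATED_BUCKET, Key=f"clean/{key}", Body=out.getvalue())
    return {"rows_in": len(rows), "rows_out": len(cleaned)}
```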
Practical experience of database design and data transformation, supported by ETL processing. Managing data pipelines & orchestration that enable the transfer and processing of data (Databricks, Microsoft Fabric, Alteryx, Snowflake, Apache). Coding and programming, capable of working through complex problems with others from the team, adapting quickly to changing market trends and business needs (SQL, Python or R).
are constantly looking for components to adopt in order to enhance our platform. What you'll do: Develop across our evolving technology stack - we're using Python, Java, Kubernetes, Apache Spark, Postgres, ArgoCD, Argo Workflow, Seldon, MLFlow and more. We are migrating into AWS cloud and adopting many services that are available in that environment. You will have the …
the UK for the last 10 years, and ability to obtain security clearance. Preferred Skills: Experience with cloud platforms (IBM Cloud, AWS, Azure). Knowledge of big data frameworks (Apache Spark, Hadoop). Experience with data warehousing tools like IBM Cognos or Tableau. Certifications in relevant technologies are a plus. Additional Details: Seniority level: Mid-Senior level; Employment type …
obtain UK security clearance. We do not sponsor visas. Preferred Skills and Experience: Public sector experience; knowledge of cloud platforms (IBM Cloud, AWS, Azure); experience with big data frameworks (Apache Spark, Hadoop); data warehousing and BI tools (IBM Cognos, Tableau). Additional Details: Seniority level: Mid-Senior level; Employment type: Full-time; Job function: Information Technology; Industries: IT Services and …
London, England, United Kingdom Hybrid / WFH Options
FDM Group
large-scale, high-performance services using Kubernetes, Kafka, Spring Boot, .NET, Node.js, React JS, serverless functions, and event-driven architecture. Create real-time streaming and batch data pipelines with Apache Spark, Kafka, Lambda, Step Functions, and Snowflake. Develop infrastructure with Kubernetes, Lambda, Terraform, Cloud Custodian, and AWS Transit Gateway. Implement connectivity solutions using Cisco, F5, and Direct Connect. About …
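To make the streaming requirement concrete, here is a minimal Spark Structured Streaming sketch that reads JSON events from Kafka and appends them to Parquet. The broker, topic, schema, and sink paths are placeholders, and this is only one of several reasonable pipeline shapes.

```python
# Hypothetical sketch: Spark Structured Streaming job consuming JSON payment
# events from Kafka and appending them to a Parquet location.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("payments_stream").getOrCreate()

schema = StructType([
    StructField("payment_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "payments")                   # placeholder topic
    .load()
)

# Kafka values arrive as bytes; parse them against the declared schema.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-lake/payments/")                 # placeholder sink
    .option("checkpointLocation", "s3a://example-lake/_chk/payments/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```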
London, England, United Kingdom Hybrid / WFH Options
Derisk360
in Neo4j such as fraud detection, knowledge graphs, and network analysis. Optimize graph database performance, ensure query scalability, and maintain system efficiency. Manage ingestion of large-scale datasets using Apache Beam, Spark, or Kafka into GCP environments. Implement metadata management, security, and data governance using Data Catalog and IAM. Collaborate with cross-functional teams and clients across diverse EMEA …
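For the large-scale ingestion piece, a minimal Apache Beam sketch (runnable on Dataflow) reading from Pub/Sub and writing to BigQuery. The topic, table, and schema are placeholders, and a downstream Neo4j load would be a separate step not shown here.

```python
# Hypothetical sketch: a streaming Apache Beam pipeline that parses JSON
# messages from Pub/Sub and writes rows to BigQuery.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_message(raw: bytes) -> dict:
    msg = json.loads(raw.decode("utf-8"))
    return {"entity_id": msg["id"], "score": float(msg.get("score", 0.0))}

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/example/topics/events")
        | "Parse" >> beam.Map(parse_message)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "example-project:graph.events",            # placeholder table
            schema="entity_id:STRING,score:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```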
code Desired Skills (Bonus Points): Proven experience in recommender systems, behavioural AI, and/or reinforcement learning. Building data pipelines (real-time or batch) and data quality using a modern toolchain (e.g., Apache Spark, Kafka, Airflow, dbt). PhD in Computer Science, Machine Learning, or a closely related field. What We Offer: Opportunity to build technology that will transform millions of shopping …
modelling: Experience integrating structural data, e.g. CryoEM or HDX-MS, with computational models in automated ways. Experience with data curation and processing from heterogeneous sources; familiarity with tools like Apache Spark or Hadoop. Proficiency with cloud platforms (AWS, GCP, Azure). Familiarity with major machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Open-source contributions or publications demonstrating …
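As a small, self-contained example of the machine learning frameworks listed, the sketch below fits a scikit-learn classifier on synthetic tabular features; it only stands in for the kind of model such a role would train on curated experimental data, and every value here is made up.

```python
# Hypothetical sketch: a minimal scikit-learn workflow on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                      # placeholder tabular features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # synthetic binary label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```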
London, England, United Kingdom Hybrid / WFH Options
Noir
Responsibilities: Design, build, and maintain robust data pipelines. Work with Python and SQL for data processing, transformation, and analysis. Leverage a wide range of GCP services, including Cloud Composer (Apache Airflow), BigQuery, Dataflow, Pub/Sub, Cloud Functions, and IAM. Design and implement data models and ETL processes. Apply infrastructure-as-code practices using tools like Terraform. Ensure data quality …
London, England, United Kingdom Hybrid / WFH Options
ScanmarQED
Foundations: Data Warehousing: Knowledge of tools like Snowflake, Databricks, ClickHouse, and traditional platforms like PostgreSQL or SQL Server. ETL/ELT Development: Expertise in building pipelines using tools like Apache Airflow, dbt, and Dagster. Cloud providers: Proficiency in Microsoft Azure or AWS. Programming and Scripting: Programming Languages: Strong skills in Python and SQL. Data Modeling and Query Optimization: Data Modeling …
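To illustrate the orchestration tools named here, a minimal Dagster software-defined-asset sketch. The asset names and the pandas-based transform are hypothetical; in a real pipeline the extract step would read from the warehouse rather than build a DataFrame inline.

```python
# Hypothetical sketch: two Dagster assets forming a tiny ELT step.
import pandas as pd
from dagster import asset, materialize

@asset
def raw_customers() -> pd.DataFrame:
    # Placeholder extract; a real asset would query Snowflake, Postgres, etc.
    return pd.DataFrame({
        "customer_id": [1, 2],
        "email": ["a@example.com", "b@example.com"],
        "days_since_last_order": [12, 400],
    })

@asset
def active_customers(raw_customers: pd.DataFrame) -> pd.DataFrame:
    # Keep only customers with an order in the last 90 days.
    return raw_customers[raw_customers["days_since_last_order"] <= 90]

if __name__ == "__main__":
    materialize([raw_customers, active_customers])
```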
London, England, United Kingdom Hybrid / WFH Options
FIND | Creating Futures
Engineering (open to professionals from various data engineering backgrounds — data pipelines, ML engineering, data warehousing, analytics engineering, big data, cloud, etc.). Technical Exposure: Experience with tools like SQL, Python, Apache Spark, Kafka, cloud platforms (AWS/GCP/Azure), and modern data stack technologies. Formal or Informal Coaching Experience: Any previous coaching, mentoring, or training experience — formal or informal …
City of London, England, United Kingdom Hybrid / WFH Options
ACLED
English, problem-solving skills, attention to detail, ability to work remotely. Desirable: Cloud architecture certification (e.g., AWS Certified Solutions Architect). Experience with Drupal CMS, geospatial/mapping tools, Apache Airflow, serverless architectures, API gateways. Interest in conflict data, humanitarian tech, open data platforms; desire to grow into a solution architect or technical lead role. Application Process: Submit CV …
on domain-specific data. Experience working with cloud platforms like Azure, AWS, or GCP for machine learning workflows. Understanding of data engineering pipelines and distributed data processing (e.g., Databricks, Apache Spark). Strong analytical skills, with the ability to transform raw data into meaningful insights through AI techniques. Experience with SQL, ETL processes, and data orchestration tools (e.g. Azure …
agile environment to deliver data solutions that support key firm initiatives. Build scalable and efficient batch and streaming data workflows within the Azure ecosystem. Apply distributed processing techniques using Apache Spark to handle large datasets effectively. Help drive improvements in data quality, implementing validation, cleansing, and monitoring frameworks. Contribute to the firm’s efforts around data security, governance, and …
knowledge of ETL processes. Ability to write production-grade, automated testing code. Experience deploying via CI/CD platforms like GitHub Actions or Jenkins. Proficiency with distributed frameworks like Apache Spark. Experience with cloud platforms (AWS, Azure, GCP) and services (S3, Redshift, BigQuery). Knowledge of data modelling, database systems, and SQL optimisation. Other key criteria: Knowledge of UK …
writing, optimization techniques, data modeling, and database performance tuning. Skilled in working with large datasets, building stored procedures, functions, and triggers, and implementing ETL processes. Have used products like Apache Airflow, dbt, GitLab/GitHub, and BigQuery. Demonstrable experience in data modelling, including working with denormalised data structures, testing, asserts, and data cleansing. The other stuff we are looking for …
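A brief sketch of the BigQuery-plus-testing combination this advert describes: a parameterised query run from Python with a simple assert-style data-quality check. The project, dataset, and table names are placeholders.

```python
# Hypothetical sketch: run a parameterised BigQuery query and assert a basic
# data-quality rule on the results.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("min_date", "DATE", "2024-01-01")]
)

rows = client.query(
    """
    SELECT order_date,
           COUNT(*) AS orders,
           COUNTIF(amount IS NULL) AS null_amounts
    FROM `example-project.analytics.orders`
    WHERE order_date >= @min_date
    GROUP BY order_date
    """,
    job_config=job_config,
).result()

for row in rows:
    # Simple assert-style check of the kind the advert mentions.
    assert row.null_amounts == 0, f"null amounts found on {row.order_date}"
```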
London, England, United Kingdom Hybrid / WFH Options
Arreoblue
or more of the following technologies: Databricks, Dedicated SQL Pools, Synapse Analytics, and Data Factory. To set yourself up for success, you should have in-depth knowledge of Apache Spark, SQL, and Python, along with solid development practices. Additionally, you will be required to have in-depth knowledge of supporting Azure platforms such as Data Lake, Key Vault, DevOps …
to ensure code is fit for purpose. Experience that will put you ahead of the curve: Experience using Python on Google Cloud Platform for Big Data projects, including BigQuery, Dataflow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, and Cloud Composer. SQL development skills. Experience using Dataform or dbt. Demonstrated strength in data modelling, ETL development, and data warehousing. Knowledge …
between systems. Experience with Google Cloud Platform (GCP) is highly preferred. (Experience with other cloud platforms like AWS or Azure can be considered.) Familiarity with data pipeline scheduling tools like Apache Airflow. Ability to design, build, and maintain data pipelines for efficient data flow and processing. Understanding of data warehousing best practices and experience in organising and cleaning up messy …