Software Engineers, Cheltenham. Permanent or Contract. Must hold Green Badge clearance. Excellent salaries dependent on experience. Experience with the below essential: Python, Java, OpenShift, Apache NiFi. Bonus (desirable): CI/CD, ML, DevOps, AWS/cloud engineering experience. JBG81_UKTJ Click apply for full job details …
sharing. Required Skills & Qualifications: 3-6 years of experience as a Data Engineer (or similar). Hands-on expertise with: DBT (dbt-core, dbt-databricks adapter, testing & documentation), Apache Airflow (DAG design, operators, scheduling, dependencies), Databricks (Spark, SQL, Delta Lake, job clusters, SQL Warehouses). Strong SQL skills and understanding of data modeling (Kimball, Data Vault, or …
Telford, Shropshire, West Midlands, United Kingdom Hybrid / WFH Options
Experis
of these areas - Scala, Docker, Puppet, IntelliJ IDE, sbt. Knowledge of - Java Development, WebLogic, webMethods, Java Scripting, J2SE, J2EE, Spring, EJB, HTML, HTML5, Unix, Eclipse, SOAP, XML, REST, JBoss, Apache, Tomcat, SQL, Hibernate, JUnit, Selenium (Automation), Git. Ability to work in all areas of the project life cycle. Strong working knowledge of Agile approach & methodologies. Good interpersonal and communication …
GCP/BigQuery or other cloud data warehouses (e.g., Snowflake, Redshift). Familiarity with data orchestration tools (e.g., Airflow). Experience with data visualisation platforms (such as Preset.io/Apache Superset or other). Exposure to CI/CD pipelines, ideally using GitLab CI. Background working with media, marketing, or advertising data. The Opportunity: Work alongside smart, supportive teammates …
Jarrow, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Catalyst
depth knowledge of relational databases (especially MS SQL). Also desirable: use of Git/Azure DevOps, Visual Studio 2019/2022, jQuery, MS Power BI reporting, Linux/Apache web stack, Bootstrap front end, and sector experience in education. Personal qualities: Passion for user experience, design and accessibility - commitment to diversity, inclusion and enabling people to achieve their …
london (city of london), south east england, united kingdom
iO Associates
ideal candidate will lead the delivery of modern data solutions across multiple projects, leveraging Azure and Databricks technologies in an agile environment. Core Requirements - Cloud & Data Engineering: Azure, Databricks, Apache Spark, Azure Data Factory, Delta Lake. Programming & Querying: Python, SQL (complex, high-performance queries). Data Governance & DevOps: Unity Catalog/Purview, Terraform, Azure DevOps. Consulting: Requirements gathering, stakeholder engagement …
or another language such as Python. Good knowledge of developing in a Linux environment. Working knowledge of Git version control and GitLab CI/CD pipelines. Experience working with Apache NiFi. Some exposure to front-end elements like JavaScript, TypeScript or React. Some data interrogation with Elasticsearch and Kibana. Exposure to working with Atlassian products. Looking for a role …
City of London, London, United Kingdom Hybrid / WFH Options
ECS
cloud data engineering, with a strong focus on building scalable data pipelines. Expertise in Azure Databricks, including building and managing ETL pipelines using PySpark or Scala. Solid understanding of Apache Spark, Delta Lake, and distributed data processing concepts. Hands-on experience with Azure Data Lake Storage, Azure Data Factory, and Azure Synapse Analytics. Proficiency in SQL and Python for …
Winchester, Hampshire, England, United Kingdom Hybrid / WFH Options
Ada Meher
days a week – based on business need. To Be Considered: Demonstrable expertise and experience working on large-scale Data Engineering projects. Strong experience in Python/PySpark, Databricks & Apache Spark. Hands-on experience with both batch & streaming pipelines. Strong experience in AWS and associated tooling (e.g., S3, Glue, Redshift, Lambda, Terraform, etc.). Experience designing Data Engineering platforms from scratch. Alongside …
. Solid understanding of DevOps principles and agile delivery. Excellent problem-solving skills and a proactive, team-oriented approach. Confident client-facing communication skills. Desirable Skills & Experience: Experience with Apache NiFi and Node.js. Familiarity with JSON, XML, XSD, and XSLT. Knowledge of Jenkins, Maven, Bitbucket, and Jira. Exposure to AWS and cloud technologies. Experience working within …
Central London, London, United Kingdom Hybrid / WFH Options
Singular Recruitment
applications and high proficiency in SQL for complex querying and performance tuning. ETL/ELT Pipelines: Proven experience designing, building, and maintaining production-grade data pipelines using Google Cloud Dataflow (Apache Beam) or similar technologies. GCP Stack: Hands-on expertise with BigQuery, Cloud Storage, Pub/Sub, and orchestrating workflows with Composer or Vertex Pipelines. Data Architecture & Modelling: Ability to …
JUnit 5, Mockito, database integration. AI: LangChain, Retrieval-Augmented Generation (RAG), MCP servers (as consumer and developer), and prompt engineering for LLM optimization. Exposure to popular libraries and frameworks (Apache Commons, Guava, Swagger, Testcontainers). Architecture & Platforms: Skilled in designing and deploying distributed systems on cloud hyperscalers (AWS, GCP). Familiarity with containerization (Docker), CI/CD pipelines, DevOps …
Deep Learning or LLM Frameworks). Desirable: Minimum 2 years' experience in a Data-related field. Minimum 2 years in a Business or Management Consulting field. Experience with Docker, Hadoop, PySpark, Apache or MS Azure. Minimum 2 years' NHS/Healthcare experience. Disclosure and Barring Service Check: This post is subject to the Rehabilitation of Offenders Act (Exceptions Order) 1975 and …
tools, particularly Terraform. Experience with network design, administration, and troubleshooting. Knowledge of programming languages (e.g., JavaScript, Node.js, PHP). Experience with version control systems, ideally Git. Web server configuration (Apache, Nginx). Database management (MySQL, MongoDB), including high availability and backup solutions. Hands-on experience managing cloud providers, with significant experience in AWS and Google Cloud Platform (GCP). …
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: Proficiency in Python, with experience in Apache Spark and PySpark. Previous experience with data analytics software. Ability to scope new integrations and translate user requirements into technical specifications. What’s in it for you? Base salary …
in Python for data pipelines, transformation, and orchestration. Deep understanding of the Azure ecosystem (e.g., Data Factory, Blob Storage, Synapse, etc.). Proficiency in Databricks (or strong equivalent experience with Apache Spark). Proven ability to work within enterprise-level environments with a focus on clean, scalable, and secure data solutions. If you are the right fit - contact me directly …