London, South East, England, United Kingdom Hybrid / WFH Options
Involved Solutions
… customer data. Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation. Essential Skills for the Senior Data Engineer: proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming; strong programming skills in Python with experience in software engineering principles, version control, unit testing and …
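As a rough illustration of the Databricks and Delta Lake skills named above, here is a minimal PySpark sketch; the paths and partition column are hypothetical, and it assumes an environment where Delta Lake is available (e.g. Databricks, or delta-spark configured locally):

```python
from pyspark.sql import SparkSession

# Assumes Delta Lake is on the classpath (Databricks provides it by default).
spark = SparkSession.builder.appName("customer-data-demo").getOrCreate()

# Load raw customer data and write it as a partitioned Delta table.
# All paths and the "country" column are hypothetical.
df = spark.read.json("/mnt/raw/customers/")
(df.write
   .format("delta")
   .mode("overwrite")
   .partitionBy("country")  # partitioning is one common performance-tuning lever
   .save("/mnt/curated/customers"))

# Read it back; Delta tables give ACID guarantees and support time travel.
curated = spark.read.format("delta").load("/mnt/curated/customers")
curated.show(5)
```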
… data quality, or other areas directly relevant to data engineering responsibilities and tasks. Proven project experience developing and maintaining data warehouses in big data solutions (Snowflake). Expert knowledge in Apache technologies such as Kafka, Airflow, and Spark to build scalable and efficient data pipelines. Ability to design, build, and deploy data solutions that capture, explore, transform, and utilise data …
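For the Kafka skills listed above, a minimal producer sketch using the kafka-python client; the broker address, topic name, and payload are hypothetical:

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

# Broker and topic are placeholders for a real cluster's configuration.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish one event onto the pipeline's ingest topic, then flush.
producer.send("customer-events", {"id": 42, "action": "signup"})
producer.flush()
```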
… of SQL and Python. You have strong hands-on experience of building scalable data pipelines in cloud-based environments using tools such as dbt, AWS Glue, AWS Lake Formation, Apache Spark and Amazon Redshift. You have a good knowledge of data modelling, ELT design patterns, data governance and security best practices. You're collaborative and pragmatic with great communication …
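To make the AWS Glue and Redshift combination above concrete, a hedged sketch of a Glue job script; the catalog database, table, connection name, and S3 staging bucket are all hypothetical, and the code assumes it runs inside the Glue job runtime:

```python
from pyspark.context import SparkContext
from awsglue.context import GlueContext  # available inside the Glue runtime

glue = GlueContext(SparkContext.getOrCreate())

# Read a source table registered in the Glue Data Catalog
# (database and table names are hypothetical).
orders = glue.create_dynamic_frame.from_catalog(
    database="analytics", table_name="raw_orders"
)

# Drop an unwanted field, then load into Redshift through a
# catalogued JDBC connection, staging via S3.
glue.write_dynamic_frame.from_jdbc_conf(
    frame=orders.drop_fields(["_corrupt_record"]),
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "public.orders", "database": "warehouse"},
    redshift_tmp_dir="s3://my-etl-tmp/redshift/",
)
```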
… frameworks, and clear documentation within your pipelines. Experience in the following areas is not essential but would be beneficial: Data Orchestration Tools: familiarity with modern workflow management tools like Apache Airflow, Prefect, or Dagster. Modern Data Transformation: experience with dbt (Data Build Tool) for managing the transformation layer of the data warehouse. BI Tool Familiarity: an understanding of how …
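A minimal Apache Airflow DAG of the kind these orchestration tools manage; the DAG id, schedule, and task bodies are hypothetical, and the `schedule` argument assumes Airflow 2.4 or later:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")  # placeholder task body

def transform():
    print("run transformations")  # placeholder task body

with DAG(
    dag_id="daily_warehouse_load",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform after extract
```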
… of data modelling and data warehousing concepts. Familiarity with version control systems, particularly Git. Desirable Skills: experience with infrastructure-as-code tools such as Terraform or CloudFormation; exposure to Apache Spark for distributed data processing; familiarity with workflow orchestration tools such as Airflow or AWS Step Functions; understanding of containerisation using Docker; experience with CI/CD pipelines and …
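As an illustration of driving AWS Step Functions from Python, a small boto3 sketch; the state machine ARN, region, and input payload are hypothetical, and configured AWS credentials are assumed:

```python
import json

import boto3

# Region and ARN are placeholders.
sfn = boto3.client("stepfunctions", region_name="eu-west-2")

# Kick off one execution of an orchestrated pipeline with a JSON input.
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:eu-west-2:123456789012:stateMachine:etl-pipeline",
    input=json.dumps({"run_date": "2024-01-01"}),
)
print(response["executionArn"])
```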
… understanding of data modelling, warehousing, and performance optimisation. Proven experience with cloud platforms (AWS, Azure, or GCP) and their data services. Hands-on experience with big data frameworks (e.g. Apache Spark, Hadoop). Strong knowledge of data governance, security, and compliance. Ability to lead technical projects and mentor junior engineers. Excellent problem-solving skills and experience in agile environments. …
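To ground the big data frameworks item, a minimal distributed aggregation in PySpark; the S3 paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-agg").getOrCreate()

# Paths and columns are placeholders for a real data lake layout.
events = spark.read.parquet("s3://data-lake/events/")

# A groupBy at this scale runs as a distributed shuffle across the cluster.
daily = (events
         .withColumn("day", F.to_date("timestamp"))
         .groupBy("day", "event_type")
         .count())

daily.write.mode("overwrite").parquet("s3://data-lake/agg/daily_events/")
```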
… and Responsibilities. While in this position your duties may include, but are not limited to: support the design, development, and maintenance of scalable data pipelines using tools such as Apache Airflow, dbt, or Azure Data Factory. Learn how to ingest, transform, and load data from a variety of sources, including APIs, databases, and flat files. Assist in the setup …
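A simple sketch of the ingest-transform-load duty described above, combining an API source and a flat file with requests and pandas; the endpoint, file paths, and column names are hypothetical:

```python
import pandas as pd
import requests

# Endpoint is a placeholder for a real source API.
resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
api_df = pd.DataFrame(resp.json())

# A flat-file source, e.g. a historical backfill.
file_df = pd.read_csv("data/orders_backfill.csv")

# Combine, de-duplicate on a hypothetical key, and land in staging
# (writing parquet requires pyarrow or fastparquet).
combined = (pd.concat([api_df, file_df], ignore_index=True)
              .drop_duplicates(subset="order_id"))
combined.to_parquet("staging/orders.parquet", index=False)
```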
Brockworth, Gloucestershire, UK Hybrid / WFH Options
Lockheed Martin
… Agile Development using SCRUM. Experience in mentoring junior team members. Experience in Oracle/relational databases and/or MongoDB. Experience in GitLab CI/CD pipelines. Knowledge of Apache NiFi. Experience in JavaScript/TypeScript and React. Experience of Elasticsearch and Kibana. Knowledge of Hibernate. Proficiency in the use of the Atlassian suite: Bitbucket, Jira, Confluence. We would love to …
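For the Elasticsearch item in this stack, a minimal sketch using the official Python client; the URL and index name are hypothetical, and the keyword arguments assume the 8.x client API:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

# URL and index are placeholders for a real cluster.
es = Elasticsearch("http://localhost:9200")

# Index one document, then run a simple match query over the index.
es.index(index="app-logs", document={"level": "INFO", "msg": "pipeline started"})

hits = es.search(index="app-logs", query={"match": {"level": "INFO"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"])
```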
London (City of London), South East England, United Kingdom
Vallum Associates
… and contribute to technical roadmap planning. Technical Skills: great SQL skills with experience in complex query optimisation; strong Python programming skills with experience in data processing libraries (pandas, NumPy, Apache Spark); hands-on experience building and maintaining data ingestion pipelines; a proven track record of optimising queries, code, and system performance; experience with open-source data processing frameworks (Apache Spark, Apache Kafka, Apache Airflow); knowledge of distributed computing concepts and big data technologies; experience with version control systems (Git) and CI/CD practices; experience with relational databases (PostgreSQL, MySQL or similar); experience with containerisation technologies (Docker, Kubernetes); experience with data orchestration tools (Apache Airflow or Dagster); understanding of data warehousing concepts and dimensional modelling …
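As a small example of the pandas/NumPy style this listing asks for, vectorised column operations rather than row-by-row loops; the file and column names are hypothetical:

```python
import numpy as np
import pandas as pd

# File and columns are placeholders.
orders = pd.read_csv("orders.csv", parse_dates=["created_at"])

# Vectorised transforms run in compiled NumPy code, not Python loops.
orders["net"] = orders["gross"] - orders["tax"]
orders["bucket"] = np.where(orders["net"] > 100, "large", "small")

# Daily totals per bucket via a grouped aggregation.
summary = orders.groupby(["bucket", orders["created_at"].dt.date])["net"].sum()
print(summary.head())
```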
… Familiarity with ETL processes, data warehousing and distributed systems. Not necessary for you to apply, but would be great if you also have: • Experience with data visualisation tools such as Apache Superset, Jupyter or Power BI • Familiarity with visualisation libraries like D3.js, Chart.js or Plotly for building interactive dashboards. • Exposure to DevOps practices including CI/CD and infrastructure as …
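A minimal Plotly sketch of the kind of interactive chart such libraries produce, using one of Plotly's bundled sample datasets so it runs as-is:

```python
import plotly.express as px  # pip install plotly

# Gapminder ships with Plotly, so no external data is needed.
df = px.data.gapminder().query("country == 'United Kingdom'")

fig = px.line(df, x="year", y="gdpPercap", title="UK GDP per capita")
fig.show()  # opens an interactive chart in the browser or notebook
```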
London, South East England, United Kingdom Hybrid / WFH Options
BondAval
… hands-on when needed. Nice-to-haves: experience leading technical discovery or architecture definition in a scaling SaaS or fintech environment. Familiarity with event-driven or streaming architectures (Kafka, Apache Flink, etc.). Practical exposure to AI/LLM orchestration frameworks or fine-tuning workflows. Experience designing developer tools, data platforms, or intelligent systems. Interest in or experience mentoring …
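For the event-driven architectures mentioned above, a minimal consuming side with kafka-python; the topic and broker are hypothetical, and a framework like Flink would add stateful stream processing on top of this kind of consumption:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Topic and broker are placeholders.
consumer = KafkaConsumer(
    "payment-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

# React to each event as it is produced.
for event in consumer:
    print(event.value)
```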
Gloucester, England, United Kingdom Hybrid / WFH Options
Omega
… GitLab). Contributing across the software development lifecycle, from requirements to deployment. Tech stack includes: Java, Python, Linux, Git, JUnit, GitLab CI/CD, Oracle, MongoDB, JavaScript/TypeScript, React, Apache NiFi, Elasticsearch, Kibana, AWS, Hibernate, Atlassian Suite. What’s on offer: hybrid working and flexible schedules (4xFlex); ongoing training and career development; exciting projects within the UK’s secure …