• Strong experience in Infrastructure as Code (IaC) and deploying infrastructure across environments
• Managing cloud infrastructure with a DevOps approach
• Handling and transforming various data types (JSON, CSV, etc.) using Apache Spark, Databricks, or Hadoop
• Understanding modern data system architectures (Data Warehouse, Data Lakes, Data Meshes) and their use cases
• Creating data pipelines on cloud platforms with error handling …
including data warehouses, data lakes, data lakehouses and data mesh
• Strong understanding of best-practice DataOps and MLOps
• Up-to-date understanding of various data engineering technologies including Apache Spark, Databricks and Hadoop
• Strong understanding of agile ways of working
• Up-to-date understanding of various programming languages including Python, Scala, R and SQL
• Up-to-date …
their growth and development
• Apply agile methodologies (Scrum, pair programming, etc.) to deliver value iteratively
Essential Skills & Experience
• Extensive hands-on experience with programming languages such as Python, Scala, Spark, and SQL
• Strong background in building and maintaining data pipelines and infrastructure
• In-depth knowledge of cloud platforms and native cloud services (e.g., AWS, Azure, or GCP)
• Familiarity with …
• Experience working with relational and non-relational databases to build data solutions, such as SQL Server/Oracle; experience with relational and dimensional data structures
• Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop)
• Proficiency in infrastructure as code (IaC) using Terraform
• Experience with CI/CD pipelines and related tools/frameworks
• Good knowledge of containers (Docker, Kubernetes, etc.)
• Experience with GCP, AWS …
Agile projects
Skills & Experience:
• Proven experience as a Lead Data Solution Architect in consulting environments
• Expertise in cloud platforms (AWS, Azure, GCP, Snowflake)
• Strong knowledge of big data technologies (Spark, Hadoop), ETL/ELT, and data modelling
• Familiarity with Python, R, Java, SQL, NoSQL, and data visualisation tools
• Understanding of machine learning and AI integration in data architecture
• Experience …
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
(e.g., Azure Data Factory, Synapse, Databricks, Fabric)
• Data warehousing and lakehouse design
• ETL/ELT pipelines
• SQL, Python for data manipulation and machine learning
• Big Data frameworks (e.g., Hadoop, Spark)
• Data visualisation (e.g., Power BI)
• Understanding of statistical analysis and predictive modelling
Experience:
• 5+ years working with Microsoft data platforms
• 5+ years in a customer-facing consulting or professional …
environment (Python, Go, Julia, etc.)
• Experience with Amazon Web Services (S3, EKS, ECR, EMR, etc.)
• Experience with containers and orchestration (e.g. Docker, Kubernetes)
• Experience with Big Data processing technologies (Spark, Hadoop, Flink, etc.)
• Experience with interactive notebooks (e.g. JupyterHub, Databricks)
• Experience with GitOps-style automation
• Experience with *nix (e.g. Linux, BSD, etc.) tooling and scripting
• Participated in projects …
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Fruition Group
best practices for data security and compliance. Collaborate with stakeholders and external partners.
Skills & Experience:
• Strong experience with AWS data technologies (e.g., S3, Redshift, Lambda)
• Proficient in Python, Apache Spark, and SQL
• Experience in data warehouse design and data migration projects
• Cloud data platform development and deployment
• Expertise across data warehouse and ETL/ELT development in …
Farnborough, Hampshire, England, United Kingdom Hybrid / WFH Options
Eutopia Solutions ltd
with Microsoft Azure and Azure SQL Database
• Proficiency with Docker and containerisation tools
• Experience working with APIs for data extraction
Desirable Skills
• Familiarity with big data technologies such as Spark and Kafka
• Experience with machine learning frameworks like TensorFlow or PyTorch
• Knowledge of data visualisation tools such as Power BI or Tableau
• Strong understanding of data modelling and database …
and delivering end-to-end AI/ML projects.
Nice to Have:
• Exposure to LLMs (Large Language Models), generative AI, or transformer architectures
• Experience with data engineering tools (Spark, Airflow, Snowflake)
• Prior experience in fintech, healthtech, or similar domains is a plus …
Bethesda, Maryland, United States Hybrid / WFH Options
Gridiron IT Solutions
Scala, and/or UNIX shell scripting
• Expertise in machine learning techniques and statistical analysis
• Proficiency in SQL and NoSQL databases
• Experience with big data platforms such as Hadoop, Spark, and Kafka
• Cloud computing expertise across AWS, Azure, and others
• Experience in designing and implementing real-time data processing solutions
• Strong understanding of AI/ML applications in systems …
practices for data infrastructure, fostering a culture of collaboration and knowledge sharing. (Required)
• Kubernetes and Orchestration: Manage and optimize Kubernetes clusters, specifically for running critical data processing workloads using Spark and Airflow. (Required)
• Cloud Security: Implement and maintain robust security measures, including cloud networking, IAM, encryption, data isolation, and secure service communication (VPC peering, PrivateLink, PSC/PSA). …
if the solution is to adopt modern DevOps processes.
What does it take to be a Data Engineer?
• Previous experience in data engineering, ideally using Databricks, Azure Data Factory, Spark, Python, SQL, Cosmos DB, Microsoft Azure Analysis Services, Power BI
• Experience in delivering end-to-end BI solutions, from requirements and design through to delivery
• Experience of working within an Agile/ …
of the AEC industry and its specific data processing challenges
• Experience scaling ML training and data pipelines for large datasets
• Experience with distributed data processing and ML infrastructure (e.g., Apache Spark, Ray, Docker, Kubernetes)
• Experience with performance optimization, monitoring, and efficiency in large-scale ML systems
• Experience with Autodesk or similar products (Revit, SketchUp, Forma)
The Ideal Candidate …
Washington, Washington DC, United States Hybrid / WFH Options
BLN24
and analytical skills. Strong communication and collaboration abilities.
Preferred Skills:
• Experience with cloud-based data platforms (e.g., AWS, Azure, Google Cloud)
• Familiarity with big data technologies (e.g., Hadoop, Spark)
• Knowledge of data governance and security best practices
• Experience with ETL processes and tools
What BLN24 brings to the Game: BLN24 benefits are game-changing. We like our …
City of London, London, United Kingdom Hybrid / WFH Options
Fortice
between the data warehouse and other systems.
• Create deployable data pipelines that are tested and robust, using a variety of technologies and techniques depending on the available technologies (NiFi, Spark)
• Build analytics tools that utilise the data pipeline to provide actionable insights into client requirements, operational efficiency, and other key business performance metrics
• Complete onsite client visits and provide …
Cambridge, Cambridgeshire, United Kingdom Hybrid / WFH Options
Arm Limited
orchestrators!)
• Data platform expertise: knowledge of lakehouse architectures, Delta Lake, or real-time data processing patterns
• Experience working in hybrid environments with asynchronous teamwork
• Tuning, profiling and optimising Kafka, Spark, or container networking under load
In Return: This role is at the heart of a highly leveraged platform, enabling hundreds of engineers to use critical data systems with confidence. …
large structured and unstructured data sets.
• Hands-on experience with the set-up and maintenance of bronze, silver, and gold layers in a big data platform, ideally Databricks and Apache Spark, and experience building and maintaining dbt models
• A desire and passion for transforming raw data into structured tables which can answer common business questions
• Proven experience working with …