Engineer with an Azure focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Skills/Experience: Design and … build high-performance data pipelines: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Develop and maintain secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Build and deploy AI/ML models: Integrate Machine … and best practices, with a focus on how AI can support you in your delivery work. Solid experience as a Data Engineer or similar role. Proven expertise in Databricks, Apache Spark, and data pipeline development, and a strong understanding of data warehousing concepts and practices. Experience with the Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks and Azure …
an Azure and Databricks focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Duties: Design and build high-performance data platforms: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Design and oversee the delivery of secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Ability to design, build and deploy AI/… to ensure successful data platform implementations. Your Skills and Experience: Solid experience as a Data Architect with experience in designing, developing and implementing Databricks solutions. Proven expertise in Databricks, Apache Spark, and data platforms, with a strong understanding of data warehousing concepts and practices. Experience with the Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks, and Azure …
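Both of the listings above centre on the same core duty: using Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage. As a rough illustration only, a minimal PySpark sketch of such a pipeline is shown below; the storage account, container names, and column names are hypothetical placeholders, not details taken from either listing.

```python
# Minimal, illustrative PySpark ETL sketch for Databricks writing to Azure Data Lake Storage.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-etl-sketch").getOrCreate()

# Extract: read raw CSV files from a hypothetical "raw" container.
raw = spark.read.option("header", True).csv(
    "abfss://raw@examplelake.dfs.core.windows.net/sales/"
)

# Transform: deduplicate, type the date column, and drop non-positive amounts.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("amount").cast("double") > 0)
)

# Load: write the curated result back to the lake as a Delta table.
cleaned.write.mode("overwrite").format("delta").save(
    "abfss://curated@examplelake.dfs.core.windows.net/sales/"
)
```

On Databricks the Delta format and ADLS `abfss://` URIs are available out of the box, which is why they appear in the sketch; on plain Spark the same read, transform, write pattern applies with Parquet and whichever storage connector is configured.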
in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management. Proficiency in Python, SQL, Scala, or Java. Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory. Strong understanding of data architecture principles, data modelling, and data governance. Experience with cloud-based data platforms, including Azure and …
optimizing scalable data solutions using the Databricks platform. Key Responsibilities: Lead the migration of existing AWS-based data pipelines to Databricks. Design and implement scalable data engineering solutions using Apache Spark on Databricks. Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. Optimize performance and cost-efficiency of Databricks workloads. Develop and … best practices for data governance, security, and access control within Databricks. Provide technical mentorship and guidance to junior engineers. Must-Have Skills: Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). Proven track record of building and optimizing data pipelines in cloud environments. Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena …
Belfast, County Antrim, Northern Ireland, United Kingdom
Harvey Nash
Subject Matter Expertise: Act as an SME for Tableau best practices, advising senior stakeholders and team members. Nice to haves: Python and/or exposure to Hive, Impala, and Spark ecosystem technologies (HDFS, Apache Spark, Spark SQL, UDFs, Sqoop). Please apply within for further details or call on 07393149627. Alex Reeder, Harvey Nash Finance & Banking. To …
modelling tools, data warehousing, ETL processes, and data integration techniques. · Experience with at least one cloud data platform (e.g. AWS, Azure, Google Cloud) and big data technologies (e.g. Hadoop, Spark). · Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. · Good knowledge of stream and batch processing solutions like Apache Flink and Apache Kafka. · Good knowledge of log management, monitoring, and analytics solutions like Splunk, Elastic Stack, New Relic, etc. Given that this is just a short snapshot of the role, we encourage you to apply even if you don't meet all the requirements listed above. We are looking for individuals who strive to make an impact …
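The listing above names Apache Airflow among the expected workflow orchestration tools. Purely as an illustration, a minimal Airflow DAG sketch follows; the DAG id, schedule, and task bodies are hypothetical and not drawn from the listing.

```python
# Minimal, illustrative Apache Airflow DAG sketch (Airflow 2.x style).
# The DAG id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from a source system")


def load():
    print("write the extracted data to the warehouse")


with DAG(
    dag_id="example_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run extract before load.
    extract_task >> load_task
```

The same dependency structure could equally be expressed in Azure Data Factory or Apache NiFi, which the listing mentions as alternatives; Airflow is shown only because it lends itself to a compact code example.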
understanding of data modelling, warehousing, and performance optimisation. Proven experience with cloud platforms (AWS, Azure, or GCP) and their data services. Hands-on experience with big data frameworks (e.g. Apache Spark, Hadoop). Strong knowledge of data governance, security, and compliance. Ability to lead technical projects and mentor junior engineers. Excellent problem-solving skills and experience in agile …
pipelines and ETL processes. Proficiency in Python. Experience with cloud platforms (AWS, Azure, or GCP). Knowledge of data modelling, warehousing, and optimisation. Familiarity with big data frameworks (e.g. Apache Spark, Hadoop). Understanding of data governance, security, and compliance best practices. Strong problem-solving skills and experience working in agile environments. Desirable: Experience with Docker/Kubernetes …
Azure, or GCP, with hands-on experience in cloud-based data services. Proficiency in SQL and Python for data manipulation and transformation. Experience with modern data engineering tools, including Apache Spark, Kafka, and Airflow. Strong understanding of data modelling, schema design, and data warehousing concepts. Familiarity with data governance, privacy, and compliance frameworks (e.g. GDPR, ISO 27001). Hands-on …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
pipelines. Understanding of data modelling, data warehousing concepts, and distributed computing. Familiarity with CI/CD, version control, and DevOps practices. Nice-to-Have: Experience with streaming technologies (e.g., Spark Structured Streaming, Event Hub, Kafka). Knowledge of MLflow, Unity Catalog, or advanced Databricks features. Exposure to Terraform or other IaC tools. Experience working in Agile/Scrum environments.
London, South East England, United Kingdom Hybrid/Remote Options
Yapily
systems. API & Microservices Architecture: Comfortable working with REST APIs and microservices architectures. Real-time Stream Processing: Understanding of real-time stream processing frameworks (e.g., Pub/Sub, Kafka, Flink, Spark Streaming). BI Tools & Visualisation Platforms: Experience supporting BI tools or visualisation platforms (e.g. Looker, Grafana, Power BI, etc.). Data Pipelines & APIs: Experience in building and maintaining both batch …
have framework experience within either Flask, Tornado or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Reporting tools (e.g. Tableau, Power BI, Qlik …
platform. Candidate Profile: Proven experience as a Data Engineer, with strong expertise in designing and managing large-scale data systems. Hands-on proficiency with modern data technologies such as Spark, Kafka, Airflow, or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. …
Lancashire, North West England, United Kingdom Hybrid/Remote Options
CHEP
plus work experience; BS & 5+ years of work experience; MS & 4+ years of work experience. Proficient with machine learning and statistics. Proficient with Python, deep learning frameworks, Computer Vision, and Spark. Have produced production-level algorithms. Proficient in researching, developing, and synthesizing new algorithms and techniques. Excellent communication skills. Desirable Qualifications: Master's or PhD level degree; 5+ years of work …
Terraform, CloudFormation) and CI/CD workflows. · If you have previous exposure to geospatial data, that would be advantageous but is not a requirement for the position. · Familiarity with Apache Spark or Databricks. · Excellent communication and collaboration skills. Benefits. About Prevail Partners: Prevail Partners delivers strategic advice, intelligence, specialist capabilities, and managed services to clients ranging from governments …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
skills, and the ability to think critically and analytically. Strong experience in documentation and data dictionaries. Knowledge of big data technologies and distributed computing frameworks such as Hadoop and Spark. Excellent communication skills to collaborate effectively with cross-functional teams and present insights to business stakeholders. Please can you send me a copy of your CV if you're …
City of London, London, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
skills, and the ability to think critically and analytically. Strong experience in documentation and data dictionaries. Knowledge of big data technologies and distributed computing frameworks such as Hadoop and Spark. Excellent communication skills to collaborate effectively with cross-functional teams and present insights to business stakeholders. Please can you send me a copy of your CV if you're …
Sheffield, South Yorkshire, England, United Kingdom Hybrid/Remote Options
DCS Recruitment
principles. Experience working with cloud platforms such as AWS, Azure, or GCP. Exposure to modern data tools such as Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (e.g., Kafka, Spark Streaming, Flink) is an advantage. Experience with orchestration and infrastructure tools such as Airflow, dbt, Prefect, CI/CD pipelines, and Terraform. What you get in return: Up to …
London, South East England, United Kingdom Hybrid/Remote Options
LocalStack
on experience with cloud data platforms such as Snowflake, Redshift, Athena, or BigQuery, including optimization techniques and custom parsers/transpilers. Practical knowledge of distributed and analytical engines (e.g., Apache Spark, Trino, PostgreSQL, DuckDB) with skills in query engines, performance tuning, and integration in local and production environments. Experience building developer tooling such as CLI tools, SDKs, and …
data modelling, data warehousing, and ETL development. Hands-on experience with Azure Data Factory, Azure Data Lake, and Azure SQL Database. Exposure to big data technologies such as Hadoop, Spark, and Databricks. Experience with Azure Synapse Analytics or Cosmos DB. Familiarity with data governance frameworks (e.g., GDPR, HIPAA). Experience implementing CI/CD pipelines using Azure DevOps or …
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom Hybrid/Remote Options
Accenture
work with client teams to deliver intelligent data products, leveraging modern cloud and AI technologies. Key Responsibilities: Design and implement robust data pipelines and ML workflows using Python, SQL, Spark, and Databricks. Develop and deploy machine learning models (including NLP, deep learning, and agentic AI) in production environments. Integrate data from diverse sources, including streaming and batch ingestion, using …
e.g., PostgreSQL, DuckDB). Experience with the modern data stack, building data ingestion pipelines and working with ETL and orchestration tools (e.g., Airflow, Luigi, Argo, dbt), big data technologies (Spark, Kafka, Parquet), and web frameworks for model serving (e.g. Flask or FastAPI). Data Science: Familiarity or experience with classical NLP techniques (BERT, topic modelling, summarisation), statistical analysis, and …
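The listing above asks for experience serving models behind a web framework such as Flask or FastAPI. As a rough sketch, assuming a scikit-learn-style model saved with pickle, a minimal FastAPI prediction endpoint might look like the following; the model file name, route, and feature schema are hypothetical, not taken from the listing.

```python
# Minimal, illustrative FastAPI model-serving sketch.
# Assumes a scikit-learn-style model with a .predict() method, pickled to model.pkl.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical pre-trained model file; loaded once at startup.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


class Features(BaseModel):
    # A single row of numeric features.
    values: list[float]


@app.post("/predict")
def predict(features: Features):
    # .predict() expects a 2D array: one inner list per row.
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```

Run locally with, for example, `uvicorn main:app --reload` and POST a JSON body like `{"values": [1.0, 2.5, 0.3]}` to `/predict`; the equivalent Flask version differs only in routing and request parsing.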
City of London, London, United Kingdom Hybrid/Remote Options
Syntax Consultancy Limited
data modelling techniques + data integration patterns. Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design. Implementing complex data transformations using Spark, PySpark or Scala + working with SQL/MySQL databases. Experience with data quality, data governance processes, Git version control + Agile development environments. Azure Data Engineer certification preferred …
EC4N 6JD, Vintry, United Kingdom Hybrid/Remote Options
Syntax Consultancy Ltd
data modelling techniques + data integration patterns. Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design. Implementing complex data transformations using Spark, PySpark or Scala + working with SQL/MySQL databases. Experience with data quality, data governance processes, Git version control + Agile development environments. Azure Data Engineer certification preferred …