London (City of London), South East England, United Kingdom
Mastek
data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively …
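As an illustration of the row-level data quality checks and validation rules the listing above describes — a minimal sketch in plain Python (no Spark dependency; all names such as `run_quality_gate` and the field set are hypothetical, not from the listing):

```python
# Hypothetical quality gate: split a batch into clean rows and rejects
# before loading, tagging each reject with the rules it violated.

REQUIRED_FIELDS = {"id", "amount", "currency"}

def validate_row(row: dict) -> list:
    """Return a list of rule violations for one record (empty list = valid)."""
    errors = []
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    amount = row.get("amount")
    if amount is not None and not isinstance(amount, (int, float)):
        errors.append("amount must be numeric")
    elif isinstance(amount, (int, float)) and amount < 0:
        errors.append("amount must be non-negative")
    return errors

def run_quality_gate(rows):
    """Apply validate_row to a batch; return (clean, rejects)."""
    clean, rejects = [], []
    for row in rows:
        errors = validate_row(row)
        if errors:
            rejects.append({**row, "_errors": errors})
        else:
            clean.append(row)
    return clean, rejects
```

In a real pipeline the same per-row logic would typically run inside a Spark transformation (e.g. as a filter or a UDF), with rejects routed to a quarantine table rather than dropped.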
flow issues, optimize performance, and implement error-handling strategies. Optional scripting skills for creating custom NiFi processors. Programming & Data Technologies: Proficiency in Java and SQL. Experience with C# and Scala is a plus. Experience with ETL tools and big data platforms. Knowledge of data modeling, replication, and query optimization. Hands-on experience with SQL and NoSQL databases is desirable. Familiarity …
and compliance needs. Document data flows and engineering processes for transparency and knowledge sharing. Skills & Experience Programming & Data Technologies Proficiency in Java and SQL. Experience with C# and Scala is a plus. Familiarity with ETL tools and big data platforms. Knowledge of data modelling, replication, and query optimization. Experience with SQL and NoSQL databases. Exposure to data warehousing tools …
knowledge sharing across the team. What We're Looking For Strong hands-on experience across AWS Glue, Lambda, Step Functions, RDS, Redshift, and Boto3. Proficient in one of Python, Scala or Java, with strong experience in big data technologies such as Spark and Hadoop. Practical knowledge of building real-time event streaming pipelines (e.g. Kafka, Spark Streaming, Kinesis). Proven …
these requirements. In order to secure one of these Senior Data Engineer roles you must be able to demonstrate the following experience: Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB etc. Experience with the design, build and maintenance of data pipelines and infrastructure …
London, England, United Kingdom Hybrid / WFH Options
Client Server
a week with flexibility to work from home once a week. About you: You have experience as a Data Engineer, working on scalable systems You have Python, Java or Scala coding skills You have experience with Kafka for data streaming including Kafka Streams and KTables You have strong SQL skills (e.g. PostgreSQL, MySQL) You have strong AWS knowledge, ideally including …
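To illustrate the KTable concept mentioned above: a KTable is, conceptually, a changelog stream compacted to the latest value per key. This stdlib-only toy mimics that semantics — it is not the Kafka Streams API (which is JVM-based), and `compact_to_table` is a hypothetical name:

```python
# Toy model of KTable semantics: fold a (key, value) changelog into the
# latest-value-per-key table. A None value acts as a tombstone and deletes
# the key, mirroring Kafka's log-compaction behaviour.

def compact_to_table(changelog):
    table = {}
    for key, value in changelog:
        if value is None:
            table.pop(key, None)  # tombstone: remove the key
        else:
            table[key] = value    # later events overwrite earlier ones
    return table
```

In Kafka Streams proper, the equivalent would be consuming a topic as a `KTable` (or materializing a `KStream` aggregation into a state store); this sketch only shows the underlying compaction idea.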
Edinburgh, Midlothian, United Kingdom Hybrid / WFH Options
Onyx-Conseil
web/mobile applications or platform with either Java/J2EE or .NET tech stack and database technologies such as Oracle, MySQL, etc. Exposure to polyglot programming languages like Scala, Python and Golang will be a plus. Ability to read/write code and expertise with various design patterns. Have used NoSQL databases such as MongoDB, Cassandra, etc. Work on …
to ensure data availability and quality Implement data governance, security, and compliance standards Monitor and troubleshoot data workflows and performance issues Essential Skills & Experience: Proficiency in SQL, Python, or Scala Experience with cloud platforms such as AWS, Azure, or GCP Familiarity with tools like Apache Spark, Kafka, and Airflow Strong understanding of data modelling and architecture Knowledge of CI/CD …
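Orchestrators like Airflow, mentioned above, schedule workflow tasks as a directed acyclic graph. A hedged, stdlib-only sketch of that idea using Python's `graphlib` (the task names are made up for illustration; Airflow's actual API is different):

```python
# Resolve a valid run order for a toy pipeline dependency graph.
# Each key maps a task to the set of tasks that must finish first.
from graphlib import TopologicalSorter

deps = {
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

order = list(TopologicalSorter(deps).static_order())
```

Airflow applies the same topological-ordering principle, but adds scheduling, retries, and monitoring on top — the parts of the role description about troubleshooting workflows live in that layer.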
Belfast, County Antrim, Northern Ireland, United Kingdom Hybrid / WFH Options
Aspire Personnel Ltd
multiple stakeholders in a fast-paced environment Experience in the design and deployment of production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, Scala, Spark, SQL. Experience performing tasks such as writing scripts, extracting data using APIs, writing SQL queries etc. Experience in processing large amounts of structured and unstructured data, including integrating data …
Warrington, Cheshire, North West England, United Kingdom
iO Associates
and provide technical guidance. Troubleshoot and resolve complex issues related to Databricks infrastructure. Essential Skills & Experience: Proven experience working with Databricks in a similar role. Strong knowledge of Spark, Scala, Python, and SQL. Experience with cloud platforms such as AWS or Azure. Ability to work collaboratively in a fast-paced environment. Excellent problem-solving and communication skills. Desirable Skills & Experience …
London, South East, England, United Kingdom Hybrid / WFH Options
Avanti
For Active DV clearance (essential) Strong background in data engineering, ETL, and data pipeline development Experience with AWS or Azure cloud technologies Excellent coding skills in Python, SQL, or Scala Understanding of data warehousing, modelling, and streaming technologies Confident working in agile, fast-paced technical teams Locations London - (Hybrid model – typically 3 days per week on site) Salary range …
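As a compact illustration of the ETL work described above — a sketch using stdlib `sqlite3` in place of a cloud warehouse (the table, fields, and `run_etl` name are all hypothetical):

```python
# Minimal extract-transform-load shape: take raw dicts, normalise and
# filter them, load into a table, and aggregate per region.
import sqlite3

def run_etl(raw_rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    # Transform: normalise region names, drop rows missing key fields
    cleaned = [
        (r["region"].strip().upper(), float(r["amount"]))
        for r in raw_rows
        if r.get("region") and r.get("amount") is not None
    ]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    # Aggregate: total amount per region
    return dict(conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"))
```

The same extract → transform → load → aggregate shape scales up to Spark jobs writing into Redshift or a lakehouse table; only the engines change, not the structure.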
to demonstrate the following experience: Commercial experience gained in a Data Engineering role on any major cloud platform (Azure, AWS or GCP) Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB etc. Some experience with the design, build and maintenance of data pipelines and …
/Ansible, Terraform) Basic understanding of Linux system administration with basic troubleshooting and problem-solving skills Basic hands-on experience with languages (Bash/Python/Core Java/Scala) Experience with CI/CD pipelines (Jenkins, Git, Maven etc) Role & Responsibilities: Develop and deliver the automation software required for building and improving the functionality, reliability, availability, and manageability of applications and …