London, South East, England, United Kingdom Hybrid / WFH Options
WüNDER TALENT
solution design.

Requirements:
- Proven experience as a Data Engineer in cloud-first environments.
- Strong commercial knowledge of AWS services (e.g. S3, Glue, Redshift).
- Advanced PySpark and Databricks experience (Delta Lake, Unity Catalog, Databricks Jobs, etc.).
- Proficient in SQL (T-SQL/Spark SQL) and Python for data transformation and scripting.
- Hands-on experience with workflow orchestration tools …
real interest in doing this properly - not endless meetings and PowerPoints.

What you'll be doing:
- Designing, building, and optimising Azure-based data pipelines using Databricks, PySpark, ADF, and Delta Lake
- Implementing a medallion architecture - from raw to curated
- Collaborating with analysts to make data business-ready
- Applying CI/CD and DevOps best practices (Git, Azure DevOps …)
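The medallion architecture mentioned above moves data through bronze (raw), silver (validated), and gold (curated) layers. A minimal, Spark-free sketch of that flow, with purely illustrative record and function names:

```python
# Minimal sketch of a medallion-style flow: raw (bronze) records are
# cleaned into a silver layer, then aggregated into a curated gold layer.
# In Databricks each layer would be a Delta table; here plain Python
# structures stand in, and all field names are illustrative only.

raw_events = [  # bronze: data exactly as ingested, bad rows included
    {"user": "a", "amount": "10.5", "ts": "2024-01-01"},
    {"user": "b", "amount": "oops", "ts": "2024-01-01"},  # malformed
    {"user": "a", "amount": "4.5",  "ts": "2024-01-02"},
]

def to_silver(rows):
    """Validate and type-cast bronze rows; drop records that fail parsing."""
    silver = []
    for r in rows:
        try:
            silver.append({"user": r["user"], "amount": float(r["amount"]), "ts": r["ts"]})
        except (KeyError, ValueError):
            continue  # a real pipeline would route these to a quarantine table
    return silver

def to_gold(rows):
    """Aggregate silver rows into a curated, business-ready summary."""
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(raw_events))
print(gold)  # {'a': 15.0}
```

The same raw-to-curated progression applies in PySpark, where each layer is typically written out as its own Delta table rather than held in memory.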
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
- Degree in Computer Science, Software Engineering, or similar (applied to Data/Data Specialisation).
- Extensive experience in Data Engineering, in both Cloud & On-Prem, Big Data and Data Lake environments.
- Expert knowledge of data technologies, data transformation tools, and data governance techniques.
- Strong analytical and problem-solving abilities.
- Good understanding of Quality and Information Security principles.
- Effective communication, ability … monitoring/security is necessary.
- Significant AWS or Azure hands-on experience.
- ETL tools such as Azure Data Factory (ADF) and Databricks, or similar.
- Data Lakes: Azure Data Lake, Delta Lake, or Databricks Lakehouse.
- Certifications: AWS, Azure, or Cloudera certifications are a plus.

To be considered for this role you MUST have in-depth experience … role.

KEYWORDS: Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Factory, ADF, Hadoop, HDFS, Azure Data Lake, Delta Lake, Data Lake

Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this role.
based data solutions using Databricks, Python, Spark, and Kafka - working on both greenfield initiatives and enhancing high-traffic financial applications.

Key Skills & Experience:
- Strong hands-on experience with Databricks, Delta Lake, Spark Structured Streaming, and Unity Catalog
- Advanced Python/PySpark and big data pipeline development
- Familiar with event streaming tools (Kafka, Azure Event Hubs)
- Solid understanding of …
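The core pattern behind Spark Structured Streaming is incremental, stateful aggregation: events arrive in batches and per-key state is carried forward between them. A toy, dependency-free sketch of that pattern (event keys and class names are illustrative, not from any real pipeline):

```python
# Toy sketch of the stateful aggregation pattern that Spark Structured
# Streaming automates: events arrive incrementally and running totals
# are updated per key between micro-batches. Names are illustrative.
from collections import defaultdict

class RunningTotals:
    """Keeps per-key running sums, the way a streaming job keeps state
    between micro-batches (Spark would checkpoint this state durably)."""

    def __init__(self):
        self.state = defaultdict(float)

    def process_batch(self, events):
        """Fold one micro-batch of (key, value) events into the state
        and return a snapshot of the current totals."""
        for key, value in events:
            self.state[key] += value
        return dict(self.state)

agg = RunningTotals()
agg.process_batch([("payments", 10.0), ("refunds", 2.0)])
snapshot = agg.process_batch([("payments", 5.0)])
print(snapshot)  # {'payments': 15.0, 'refunds': 2.0}
```

In a real Databricks job the source would be Kafka or Event Hubs and the sink a Delta table, with Spark handling batching, checkpointing, and fault tolerance.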
a fast-paced and agile environment

What We're Looking For:

Essential Skills:
- Solid DevOps experience, including GitHub Actions and Jenkins
- Proficient in Python and shell scripting
- Experience with Delta Lake table formats
- Strong data engineering background
- Proven experience working with large datasets

Nice to Have:
- Familiarity with Airflow
- Background in full stack development

Team & Culture: Join a …
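For the GitHub Actions side of the DevOps skills above, a minimal CI workflow looks like the following. This is a generic sketch, not this employer's setup; the file path, trigger branches, and steps are all assumptions:

```yaml
# Illustrative GitHub Actions workflow (.github/workflows/ci.yml).
# Branch names, Python version, and steps are assumptions for the sketch.
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```

The equivalent Jenkins setup would express the same checkout/install/test stages in a `Jenkinsfile`.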
Jenkins). Familiarity with large-scale data management and engineering best practices.

Bonus Points For:
- Workflow orchestration tools like Airflow.
- Working knowledge of Kafka and Kafka Connect.
- Experience with Delta Lake and lakehouse architectures.
- Proficiency in data serialization formats: JSON, XML, Parquet, YAML.
- Cloud-based data services experience.

Ready to build the future of data? If you're …
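Two of the serialization formats listed above can be round-tripped with the Python standard library alone; YAML and Parquet need third-party packages (e.g. PyYAML, pyarrow), so they are omitted from this sketch. The record fields are illustrative:

```python
# Round-trip sketch for JSON and XML using only the standard library.
# Note the key practical difference: JSON preserves basic types, while
# XML text content comes back as strings and needs explicit casting.
import json
import xml.etree.ElementTree as ET

record = {"id": 42, "status": "active"}

# JSON round-trip: types survive (42 stays an int)
payload = json.dumps(record)
assert json.loads(payload) == record

# XML round-trip: every value comes back as text
root = ET.Element("record")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
parsed = ET.fromstring(ET.tostring(root))
restored = {child.tag: child.text for child in parsed}
print(restored)  # {'id': '42', 'status': 'active'}
```

That type-preservation difference is one reason schema-carrying binary formats like Parquet are preferred for analytical data, with JSON/XML/YAML used at the edges for interchange and configuration.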