both greenfield initiatives and enhancing high-traffic financial applications.

Key Skills & Experience:
- Strong hands-on experience with Databricks, Delta Lake, Spark Structured Streaming, and Unity Catalog
- Advanced Python/PySpark and big data pipeline development
- Familiarity with event streaming tools (Kafka, Azure Event Hubs)
- Solid understanding of SQL, data modelling, and lakehouse architecture
- Experience deploying via CI/CD …
across varied solutions.
- Extensive experience of using the Databricks platform for developing and deploying data solutions/data products (including ingestion, transformation and modelling), with high proficiency in Python, PySpark and SQL.
- Leadership experience in other facets necessary for solution development, such as testing, the wider scope of quality assurance, CI/CD etc.
- Experience in related areas of …
Luton, Bedfordshire, South East, United Kingdom Hybrid / WFH Options
Anson Mccade
practice

Essential Experience:
- Proven expertise in building data warehouses and ensuring data quality on GCP
- Strong hands-on experience with BigQuery, Dataproc, Dataform, Composer, Pub/Sub
- Skilled in PySpark, Python and SQL
- Solid understanding of ETL/ELT processes
- Clear communication skills and ability to document processes effectively

Desirable Skills:
- GCP Professional Data Engineer certification
- Exposure to Agentic …
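The ETL/ELT distinction that listings like the one above ask for comes down to where the transformation runs: in ELT, raw rows are loaded as-is and then reshaped inside the engine with SQL. A minimal sketch of that pattern, using Python's built-in sqlite3 as a stand-in for a warehouse such as BigQuery; the table and column names are made up for illustration:

```python
import sqlite3

# ELT step 1 (Load): land the raw data untouched, strings and all.
raw_orders = [
    ("2024-01-01", "widget", "12.50"),
    ("2024-01-01", "gadget", "7.00"),
    ("2024-01-02", "widget", "12.50"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_date TEXT, product TEXT, amount TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_orders)

# ELT step 2 (Transform): cast and aggregate inside the engine with SQL,
# materialising a curated table downstream consumers can query directly.
conn.execute("""
    CREATE TABLE daily_revenue AS
    SELECT order_date, SUM(CAST(amount AS REAL)) AS revenue
    FROM raw_orders
    GROUP BY order_date
""")

for row in conn.execute("SELECT * FROM daily_revenue ORDER BY order_date"):
    print(row)
```

In a classic ETL flow the cast-and-aggregate step would instead happen in application code before loading; ELT defers it to the warehouse, which scales better when the engine is the fastest place to crunch data.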
new technologies and continuously improve engineering processes

What You'll Bring:
- A relevant degree: Computer Science, Software Engineering, Data Science or other fields
- Proficiency in SQL and either Python, PySpark or C#
- Strong analytical and problem-solving skills
- Passion for data, innovation and continuous learning
Eastleigh, Hampshire, South East, United Kingdom Hybrid / WFH Options
Spectrum It Recruitment Limited
operations and performance
- Work with Azure DevOps to manage and track project work

About You - Essential Skills & Experience:
- Proficiency in cloud-based tools (ADF, Synapse, S3, Lambda)
- Experience using PySpark for ELT pipelines
- Strong analytical and problem-solving mindset
- Able to work independently and collaboratively across teams
- Confident communicator with strong documentation skills
- Experience in a data engineering role …
environment.
- Experience in the Utilities sector.
- Experience leading technical projects.

Skills & Technologies required:
- Proficiency in cloud-based data engineering tools (ADF, Synapse Analytics, S3, Lambda).
- Proficiency in using PySpark notebooks for ELT processes.
- Ability to foster and cultivate a culture of best practices.
- Strong analytical and problem-solving skills.
- Ability to work independently and within cross-functional teams. …
UK. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply!

Job Responsibilities

Data Engineering & Data Pipeline Development:
- Design, develop, and optimize scalable data workflows using Python, PySpark, and Airflow.
- Implement real-time and batch data processing using Spark.
- Enforce best practices for data quality, governance, and security throughout the data lifecycle.
- Ensure data availability, reliability and performance through …
- Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments.

Big Data & Analytics:
- Build and optimize large-scale data processing pipelines using Apache Spark and PySpark.
- Implement data partitioning, caching, and performance tuning for Spark-based workloads.
- Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning initiatives.

Workflow Orchestration …
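The "data partitioning" called out in the responsibilities above means routing each record to one of N buckets by a key, so that work on different buckets can run in parallel and all records for a key stay together. A toy plain-Python sketch of the idea; in the real workload this is what PySpark does under the hood when you repartition a DataFrame by a column, and the record fields here are invented for illustration:

```python
from collections import defaultdict

def hash_partition(records, key, num_partitions):
    """Assign each record to a bucket by hashing its key field,
    mirroring how Spark shuffles rows when partitioning by a column."""
    partitions = defaultdict(list)
    for record in records:
        bucket = hash(record[key]) % num_partitions
        partitions[bucket].append(record)
    return partitions

events = [
    {"customer": "a", "amount": 10},
    {"customer": "b", "amount": 20},
    {"customer": "a", "amount": 5},
]

parts = hash_partition(events, "customer", num_partitions=4)
# Every record with the same customer lands in the same bucket, so a
# per-customer aggregation can run on each bucket independently.
```

Because the bucket depends only on the key, a per-key aggregation never needs to look across partitions, which is exactly what makes the work distributable across executors.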
London, South East, England, United Kingdom Hybrid / WFH Options
WüNDER TALENT
Senior Data Engineer | AWS/Databricks/PySpark | London/Glasgow (Hybrid) | August Start

Role: Senior Data Engineer
Location: Hybrid, with 2 days/week onsite in either Central London or Glasgow.
Start Date: Must be able to start mid-August.
Salary: £80k-£90k (Senior) | £90k-£95k (Lead)

About The Role
Our partner is … decisions, peer reviews and solution design.

Requirements:
- Proven experience as a Data Engineer in cloud-first environments.
- Strong commercial knowledge of AWS services (e.g. S3, Glue, Redshift).
- Advanced PySpark and Databricks experience (Delta Lake, Unity Catalog, Databricks Jobs etc.).
- Proficient in SQL (T-SQL/SparkSQL) and Python for data transformation and scripting.
- Hands-on experience with …

This is a hybrid engagement with 2 days/week onsite, either in Central London or Glasgow. You must be able to start in August.
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team.

Essential Skills:
- Programming languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2).
- Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary.
- Significant AWS or Azure … the Right to Work in the UK long-term, as our client is NOT offering sponsorship for this role.

KEYWORDS: Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Fabric, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake

Please note that due to a high level …
Employment Type: Permanent
Salary: £75,000 - £80,000/annum + Pension, Good Holiday, Healthcare
real urgency, and real interest in doing this properly - not endless meetings and PowerPoints.

What you'll be doing:
- Designing, building, and optimising Azure-based data pipelines using Databricks, PySpark, ADF, and Delta Lake
- Implementing a medallion architecture, from raw to curated
- Collaborating with analysts to make data business-ready
- Applying CI/CD and DevOps best practices (Git … time logistics datasets

What they're looking for:
- A strong communicator, someone who can build relationships and help connect silos
- Experience building pipelines in Azure using Databricks, ADF, and PySpark
- Strong SQL and Python skills
- Bonus points if you've worked with Power BI, Azure Purview, or streaming tools
- You're versatile, happy to support analysts and wear multiple …
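The medallion architecture this role mentions layers tables by refinement: bronze holds raw ingested records exactly as they arrived, silver holds validated and typed records, gold holds business-ready aggregates. A toy plain-Python sketch of that raw-to-curated flow; the real layers would be Delta Lake tables written with PySpark, and every field name below is made up:

```python
# Bronze: raw records exactly as ingested, malformed rows and all.
bronze = [
    {"ts": "2024-05-01", "vehicle": "TRK-1", "miles": "120"},
    {"ts": "2024-05-01", "vehicle": "TRK-2", "miles": "bad"},  # malformed
    {"ts": "2024-05-02", "vehicle": "TRK-1", "miles": "90"},
]

def to_silver(rows):
    """Silver: enforce types and drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append({**row, "miles": int(row["miles"])})
        except ValueError:
            continue  # skip (or quarantine) records that fail the cast
    return clean

def to_gold(rows):
    """Gold: business-level aggregate - total miles per vehicle."""
    totals = {}
    for row in rows:
        totals[row["vehicle"]] = totals.get(row["vehicle"], 0) + row["miles"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'TRK-1': 210}
```

Keeping the raw bronze layer intact is the point of the pattern: when validation rules change, silver and gold can be rebuilt from bronze without re-ingesting anything.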
Data Engineer | Data Consultant | Azure | Fabric | Python | SQL | PySpark
Senior Data Engineer - Up to £70,000 - London, 3 days in-office

Method Resourcing are thrilled to be partnering with a Microsoft Solutions Partner to support them in hiring a Data Consultant to focus on and specialise in their current and upcoming Fabric projects. This is a fantastic time to … is offering a salary of up to £70,000, dependent on experience, plus bonus and benefits. Please apply now for immediate consideration.

RSG Plc is acting as an Employment Agency in relation to this vacancy.