Cardiff, South Glamorgan, Wales, United Kingdom Hybrid / WFH Options
Octad Recruitment Ltd
and clients Required Skills & Experience Must-Haves: 3+ years of hands-on Azure engineering experience (IaaS/PaaS), including Infrastructure as Code. Strong SQL skills and proficiency in Python or PySpark. Built or maintained data lakes/warehouses using Synapse, Fabric, Databricks, Snowflake, or Redshift. Experience hardening cloud environments (NSGs, identity, Defender). Demonstrated automation of backups, CI … their Azure data lake using Synapse, Fabric, or an alternative strategy. Ingest data from core platforms: NetSuite, HubSpot, and client RFP datasets. Automate data pipelines using ADF, Fabric Dataflows, PySpark, or SQL. Publish governed datasets with Power BI, enabling row-level security (RLS). By Year-End: Deliver a production-ready lakehouse powering BI and ready for AI …
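For illustration, here is a minimal sketch of the kind of PySpark ingestion step such a pipeline might automate - landing a raw CSV export into a governed lakehouse table. The storage paths and table name are hypothetical, not taken from the listing.

```python
# Minimal sketch of an ingestion step like those described above,
# assuming hypothetical storage paths and a Spark session provided
# by the runtime (e.g. Databricks, Synapse, or Fabric).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-hubspot").getOrCreate()

# Read a raw CSV export (e.g. a HubSpot contact dump) from the lake.
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/hubspot/contacts/")
)

# Stamp each row with its load time so downstream layers can audit freshness.
ingested = raw.withColumn("_ingested_at", F.current_timestamp())

# Persist as a managed Delta table for governed consumption (e.g. via Power BI).
ingested.write.format("delta").mode("append").saveAsTable("lake.raw_hubspot_contacts")
```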
Data Engineer - AWS, Databricks & Pyspark Contract Role - Data Engineer Location: Hybrid (1 day per month onsite in Harrow, London) Rate: £350 per day (Outside IR35) Duration: 6 months A client of mine is looking for a Data Engineer to help maintain and enhance their existing cloud-based data platform. The core migration to a Databricks Delta Lakehouse on AWS … with analysts, data scientists, and business stakeholders to deliver clean, usable datasets - Contribute to good data governance, CI/CD workflows, and engineering standards - Continue developing your skills in PySpark, Databricks, and AWS-based tools Tech Stack Includes: - Databricks (Delta Lake, PySpark) - AWS - CI/CD tooling (Git, DevOps pipelines) - Cloud-based data warehousing and analytics tools If … you're a mid to senior level Data Engineer, feel free to apply or send your CV. Data Engineer - AWS, Databricks & Pyspark
Data Engineer (Databricks) - Leeds (Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Data Engineer) Our client is a global innovator and world leader with one of the most recognisable names within technology. They are looking for … Data Engineers with significant Databricks experience to join an exceptional Agile engineering team. We are seeking a Data Engineer with strong Python, PySpark and SQL experience, who possesses a clear understanding of Databricks as well as a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and NoSQL - Aurora, MS SQL Server, MySQL is … top performers. Location: Leeds Salary: £40k - £50k + Pension + Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Data Engineer)
Azure Data Engineer - 1/2 days onsite Summary: Join a team building a modern Azure-based data platform. This hands-on engineering role involves designing and developing scalable, automated data pipelines using tools like Data Factory, Databricks, Synapse, and …
make your mark in global asset management and make an impact on the world's investors What you will be responsible for Design, develop and maintain applications using Python, PySpark, and SQL, leveraging cloud-native architecture principles and technologies. Lead the design and development of scalable and robust software application architecture, ensuring alignment with business goals and industry best … tools and technologies to improve our technology stack. Develop into a Subject Matter Expert (SME) in technical and functional domain areas. What we value Demonstrated experience in Python, PySpark and SQL (AWS Redshift, Postgres, Oracle). Demonstrated experience building data pipelines with PySpark and AWS. Application development experience in financial services with hands-on designing, developing, and …
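As a hedged illustration of the Python/PySpark-plus-SQL combination this role describes, the sketch below reads a relational table into Spark over JDBC and writes a curated extract. The host, credentials, and table names are placeholders, not details from the listing.

```python
# Minimal sketch: pulling a relational table (e.g. Postgres/Redshift)
# into Spark over JDBC, then shaping it with PySpark. Hostnames,
# credentials, and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-extract").getOrCreate()

trades = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/markets")
    .option("dbtable", "public.trades")
    .option("user", "etl_user")
    .option("password", "***")  # in practice, fetch from a secrets manager
    .load()
)

# Keep only the last week of trades, then land the extract for analytics.
recent = trades.filter("trade_date >= date_sub(current_date(), 7)")
recent.write.mode("overwrite").parquet("s3://example-bucket/curated/trades_7d/")
```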
specialists, contributing to the development and maintenance of advanced data pipelines and supporting various analytical initiatives. Responsibilities: • Assist in the development and maintenance of data pipelines using Spark, Scala, PySpark, and Python. • Support the deployment and management of AWS services including EC2, S3, and IAM. • Work with the team to implement and optimize big data processing frameworks such as … equivalent practical experience. • Basic knowledge of Spark and Hadoop distributed processing frameworks. • Familiarity with AWS services, particularly EC2, S3, and IAM. • Some experience with programming languages such as Scala, PySpark, Python, and SQL. • Understanding of data pipeline development and maintenance. • Strong problem-solving skills and the ability to work collaboratively in a team environment. • Eagerness to learn and grow …
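A minimal sketch of the kind of S3-backed Spark processing this role supports appears below; the bucket, prefix, and column names are hypothetical, and credentials are assumed to come from an IAM instance role rather than code.

```python
# Minimal sketch of an S3-backed Spark job: read raw events, aggregate,
# and write a partitioned result. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-aggregate").getOrCreate()

events = spark.read.json("s3a://example-bucket/landing/events/")

# Daily event counts per type - partitioning the output by date keeps
# later scans cheap, a typical first optimisation target.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .count()
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/processed/daily_event_counts/"
)
```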
Oracle and integrated with Databricks Spark on AWS. Write efficient, production-quality SQL and PL/SQL queries for data extraction and transformation in Oracle. Leverage Databricks Spark and PySpark to process large datasets and build machine learning models in a distributed environment. Collaborate closely with business stakeholders to understand data requirements and translate them into technical solutions. Ensure … data security). Working knowledge of AWS core services, including S3, EC2/EMR, IAM, Athena, Glue or Redshift. Hands-on experience with Databricks Spark on large datasets, using PySpark, Scala, or SQL. Familiarity with Delta Lake, Unity Catalog or similar data lakehouse technologies. Proficient in Linux environments, including experience with shell scripting, basic system operations, and navigating file …
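To make the distributed machine-learning responsibility concrete, here is a minimal Spark MLlib sketch; the source table, feature columns, and label are hypothetical assumptions, not details from the listing.

```python
# Minimal sketch of distributed model training with Spark MLlib.
# The lakehouse table, feature columns, and label are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-example").getOrCreate()

df = spark.table("curated.customer_features")  # assumed curated table

# Assemble raw columns into the single vector column MLlib expects.
assembler = VectorAssembler(
    inputCols=["tenure_months", "monthly_spend", "support_tickets"],
    outputCol="features",
)
train = assembler.transform(df).select("features", "churned")

# Training is distributed across the cluster by Spark itself.
model = LogisticRegression(labelCol="churned").fit(train)
print(model.summary.areaUnderROC)
```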
real urgency, and real interest in doing this properly - not endless meetings and PowerPoints. What you'll be doing: Designing, building, and optimising Azure-based data pipelines using Databricks, PySpark, ADF, and Delta Lake Implementing a medallion architecture - from raw to curated Collaborating with analysts to make data business-ready Applying CI/CD and DevOps best practices (Git … time logistics datasets What they're looking for: A strong communicator - someone who can build relationships and help connect silos Experience building pipelines in Azure using Databricks, ADF, and PySpark Strong SQL and Python skills Bonus points if you've worked with Power BI, Azure Purview, or streaming tools You're versatile - happy to support analysts and wear multiple …
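For illustration, here is one plausible shape of the bronze-to-silver hop in the medallion architecture mentioned above, written in PySpark with Delta Lake; the paths, keys, and columns are assumptions rather than the client's actual design.

```python
# Minimal sketch of one medallion hop (bronze -> silver) with Delta Lake:
# cleanse and deduplicate raw records into a curated layer.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/shipments")

silver = (
    bronze
    .filter(F.col("shipment_id").isNotNull())             # drop unusable rows
    .withColumn("event_ts", F.to_timestamp("event_ts"))   # normalise types
    .dropDuplicates(["shipment_id", "event_ts"])          # idempotent re-runs
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/shipments")
```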
Senior Software Engineer Location: Springfield, VA (100% onsite) Salary: Up to $210k (DOE) Clearance: Active Top Secret with SCI eligibility and CI Polygraph Benefits: 100% employer-paid healthcare Role Overview: Join to work on challenging projects in software engineering. Collaborate …
End Date: August 3, 2025 (30+ days left to apply). Job requisition ID: R094691. Media help partners understand the changing advertising landscape. Specialising in audience measurement, consumer targeting and in-depth intelligence into paid, owned and …
our machine learning and analytics workloads to support the company's growth. Our data stack: We work with a modern data stack built on Databricks and AWS, with Python and PySpark as our primary tools. In this role, you'll get to: Own business critical components and perform meaningful work with an impact on our company and our customers Design … expand your skillset About you We believe that no one is the finished article; however, some experience in the following is important for this role: Proficient with Python and PySpark Experience working with a modern data stack is beneficial but not required Experience with AWS is beneficial but not required You enjoy learning new technologies and are passionate about …
profiling, ingestion, collation and storage of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and working in an open-source stack (PySpark/PySQL). Quality engineering professionals utilise Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a Data Engineering Manager, you will … Project lead and other team members to provide regular progress updates and raise any risks/concerns/issues Qualification Core skills we're working with include: Palantir, Python, PySpark/PySQL, AWS or GCP What's in it for you At Accenture, in addition to a competitive basic salary, you will also have an extensive benefits package which …
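As a sketch of the profiling work described above, the snippet below runs basic completeness and cardinality checks in PySpark before data is collated and stored; the table and columns are hypothetical.

```python
# Minimal sketch of data profiling ahead of ingestion/collation:
# per-column null counts and cardinality. Names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling").getOrCreate()
df = spark.table("staging.client_records")  # assumed staging table

# One row of null counts, one column per source field.
profile = df.select([
    F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}__nulls")
    for c in df.columns
])
profile.show()

# Cardinality per column helps spot key candidates and junk fields.
for c in df.columns:
    print(c, df.select(c).distinct().count())
```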
Job Description Job Role : Data & AI Manager - R&D Location : London Career Level: Manager Accenture is a leading global professional services company, providing a broad range of services in strategy and consulting, interactive, technology and operations, with digital capabilities across …
Data Engineer | Bristol/Hybrid | £65,000 - £80,000 | AWS | Snowflake | Glue | Redshift | Athena | S3 | Lambda | Pyspark | Python | SQL | Kafka | Amazon Web Services | Do you want to work on projects that actually help people? Or maybe you want to work on a modern AWS stack? I am currently supporting a brilliant company in Bristol who build software which genuinely … pipelines using AWS services, implementing data validation, quality checks, and lineage tracking across pipelines, automating data workflows, and integrating data from various sources. Tech you will use and learn – Python, Pyspark, AWS, Lambda, S3, DynamoDB, CI/CD, Kafka and more. This is a hybrid role in Bristol, and you also get a bonus and generous holiday entitlement, to name a couple … you be interested in finding out more? If so, apply to the role or send your CV to … Sponsorship isn't available. AWS | Snowflake | Glue | Redshift | Athena | S3 | Lambda | Pyspark | Python | SQL | Kafka | Amazon Web Services
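For a concrete feel of the validation and quality checks this listing mentions, here is a minimal PySpark sketch that fails a run on missing or duplicate keys; the dataset, columns, and thresholds are assumptions.

```python
# Minimal sketch of pipeline data-quality checks, expressed as plain
# PySpark assertions. Dataset, columns, and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
orders = spark.read.parquet("s3a://example-bucket/curated/orders/")

total = orders.count()
null_ids = orders.filter(F.col("order_id").isNull()).count()
dupes = total - orders.dropDuplicates(["order_id"]).count()

# Fail the pipeline run loudly rather than letting bad data flow on.
assert null_ids == 0, f"{null_ids} orders missing order_id"
assert dupes / max(total, 1) < 0.001, f"{dupes} duplicate order_ids"
```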
months Location: London JOB DETAILS Role Title: Senior Data Engineer Note: (Please do not submit the same profiles as for 111721-1) Required Core Skills: Databricks, AWS, Python, PySpark, data modelling Minimum years of experience: 7 years Job Description: Must have hands-on experience in designing, developing, and maintaining data pipelines and data streams. Must have a strong working … knowledge of moving/transforming data across layers (Bronze, Silver, Gold) using ADF, Python, and PySpark. Must have hands-on experience with PySpark, Python, AWS, and data modelling. Must have experience in ETL processes. Must have hands-on experience in Databricks development. Good to have experience in developing and maintaining data integrity and accuracy, data governance, and data security policies …
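One plausible reading of the Silver-to-Gold hop described above, sketched in PySpark; the table names, grain, and measures are hypothetical, not taken from the engagement.

```python
# Minimal sketch of the Silver -> Gold hop: aggregate a cleansed Silver
# table into a Gold, analytics-ready table. Names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("silver-to-gold").getOrCreate()

sales = spark.read.format("delta").load("/mnt/lake/silver/sales")

# Gold layer holds business-grain aggregates ready for BI consumption.
gold = (
    sales
    .groupBy("region", F.date_trunc("month", "sold_at").alias("month"))
    .agg(
        F.sum("net_amount").alias("revenue"),
        F.countDistinct("customer_id").alias("active_customers"),
    )
)

gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/monthly_sales")
```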
Data Engineer | Data Consultant | Azure | Fabric | Python | SQL | PySpark Senior Data Engineer - Up to £70,000 London - 3 days in-office Method Resourcing are thrilled to be partnering with a Microsoft Solutions Partner to support them in hiring a Data Consultant to focus on and specialise in their current and upcoming Fabric projects. This is a fantastic time to … is offering a salary of up to £70,000 dependent on experience + Bonus & Benefits. Please apply now for immediate consideration. Data Engineer | Data Consultant | Azure | Fabric | Python | SQL | PySpark Senior Data Engineer - Up to £70,000 London - 3 days in-office RSG Plc is acting as an Employment Agency in relation to this vacancy.
will have both business and technical ownership. Day-to-Day Responsibilities: Individual Contributor Design and own the end-to-end solution architecture for complex data estates across Azure, Databricks, PySpark, and broader modern data stacks. Collaborate with vendors and demanding business stakeholders to build scalable, aligned, and performance-driven solutions. Engage directly with clients in deep-dive working sessions … and ensure technical quality and delivery velocity across pods. In terms of technical experience - 15+ years of hands-on Data Engineering development experience Proficient in object-oriented languages (Python, PySpark) and frameworks Hands-on expertise in the Azure ecosystem, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc. Expertise in … and Practice leadership to define growth roadmaps and execute strategies. Good understanding of the CPG (Consumer Packaged Goods) domain is preferred. Mandatory Skills: Proficient in object-oriented languages (Python, PySpark) and frameworks Hands-on expertise in the Azure ecosystem, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc. Expertise in …
Leverage Azure services extensively, particularly Azure Storage, for scalable cloud solutions. Ensure seamless integration with AWS S3 and implement secure data encryption/decryption practices. Python Implementation: Utilize Python and PySpark for processing large datasets and integrating with cloud-based data solutions. Team Leadership: Review code and mentor a team of 3 engineers, fostering best practices in software development and … and optimize workflows, ensuring efficient and reliable operations. Required 5-7 years of experience in software development with a focus on production-grade code. Proficiency in Java, Python, and PySpark; experience with C++ is a plus. Deep expertise in Azure services, including Azure Storage, and familiarity with AWS S3. Strong understanding of data security, including encryption/decryption. Proven …
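As a hedged sketch of the encrypt-then-upload pattern this role implies, the snippet below applies symmetric encryption before writing to S3; the bucket, object key, and key handling are assumptions, and in practice the key would come from KMS or a key vault rather than being generated inline.

```python
# Minimal sketch: symmetric encryption before an S3 upload, and the
# matching decryption on read. All names are placeholders.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # placeholder: fetch from key vault / KMS
cipher = Fernet(key)

plaintext = b"sensitive,record,data\n"
ciphertext = cipher.encrypt(plaintext)

s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="secure/records.bin", Body=ciphertext)

# Decryption on read reverses the process with the same key.
obj = s3.get_object(Bucket="example-bucket", Key="secure/records.bin")
restored = cipher.decrypt(obj["Body"].read())
```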
Your Role As a Data Engineering Specialist, you will have a considerable understanding of data engineering principles, including ETL processes. You will have hands-on experience working with Databricks and PySpark for data transformation, and be familiar with cloud computing such as Microsoft Azure services. In your role you will have an understanding of data warehouse architectures as well as … been identified and accounted for. About You You will be able to leverage Azure services to build scalable data solutions. Your role will require knowledge of Python, SQL and PySpark, and being able to use the tech stack effectively. Your role will provide oversight and guidance for new projects as well as working closely with analysts and stakeholders, and …
effective knowledge transfer Translate business needs into technical solutions through effective stakeholder engagement Document data architecture, processes and reporting logic to ensure repeatability and transparency Work with SQL and PySpark to transform and load data Support Power BI reporting needs where required What We're Looking For Previous experience in data engineering Strong hands-on experience with Azure data … tools (Data Factory, Synapse, Databricks) Advanced SQL and PySpark knowledge Strong stakeholder engagement skills with experience in requirement gathering and documentation Microsoft certification and Power BI experience are desirable Background in mid-to-large scale businesses preferred – complexity and data maturity essential A proactive, solutions-oriented personality who thrives in fast-paced, evolving environments Interested? Click "Apply" or email …
re used daily by home builders, mortgage brokers, local councils, and more to make informed property purchasing decisions. We've migrated key legacy SQL Server/SSIS pipelines to PySpark and Databricks, and we're in the home stretch of our modernisation programme. Now we're looking to unlock the power of our disparate data and make it accessible … working to high standards of compliance (incl. ISO 27001, GDPR), Data Governance, and Information Security Experienced in migrating from SQL-based data architectures to modern Data Engineering technologies, using PySpark, Databricks, Terraform, and Pandas Someone able to explore, analyse and understand our data and its uses Ideally experienced in a multi-cloud environment (Databricks across Azure and AWS) solving …
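To illustrate the kind of SQL Server-to-PySpark migration described, here is a minimal sketch re-expressing a T-SQL style join-and-aggregate as a PySpark job; the tables and columns are hypothetical, not the company's actual schema.

```python
# Minimal sketch of an SSIS/T-SQL step re-expressed in PySpark.
# Table names and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ssis-replacement").getOrCreate()

properties = spark.table("bronze.properties")
valuations = spark.table("bronze.valuations")

# Equivalent of: SELECT p.postcode, AVG(v.value) FROM properties p
# JOIN valuations v ON v.property_id = p.id GROUP BY p.postcode
avg_by_postcode = (
    properties.alias("p")
    .join(valuations.alias("v"), F.col("v.property_id") == F.col("p.id"))
    .groupBy("p.postcode")
    .agg(F.avg("v.value").alias("avg_value"))
)

avg_by_postcode.write.format("delta").mode("overwrite").saveAsTable(
    "silver.postcode_avg_value"
)
```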
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Parking Network BV
For airports, for partners, for people. We are CAVU. At CAVU our purpose is to find new and better ways to make airport travel seamless and enjoyable for everybody. From the smallest ideas to the biggest changes. Every day here …