and maintain efficient ETL/ELT data pipelines using Microsoft Fabric and its components, including Fabric Pipelines and Spark jobs. Notebook-based Transformation: Write, test, and optimize Python notebooks (PySpark) within the Fabric/Synapse environment to perform complex data ingestion, cleansing, and transformation tasks. Data Warehouse and Lakehouse Management: Implement and manage data storage solutions, leveraging Azure Data … Microsoft Azure ecosystem. Strong hands-on experience with Microsoft Fabric, specifically its data engineering workloads (Notebooks, Pipelines, and Lakehouses). Demonstrable expertise in Azure Synapse Analytics, including Synapse Notebooks (PySpark) and SQL pool resources. Proficiency in Python for data manipulation, transformation, and automation. Experience in building and maintaining ETL/ELT pipelines using Azure Data Factory and Fabric Pipelines. …
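To illustrate the notebook-based transformation work this listing describes, here is a minimal PySpark cleansing sketch of the kind that might run in a Fabric or Synapse notebook. The table paths and column names are hypothetical, and in a Fabric notebook the `spark` session is provided for you, so the builder line below is only needed when running locally.

```python
# Minimal PySpark cleansing sketch; table paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse-orders").getOrCreate()

# Read a raw Lakehouse table, apply basic cleansing, write a curated table.
raw = spark.read.format("delta").load("Tables/raw_orders")

cleansed = (
    raw
    .dropDuplicates(["order_id"])                      # remove re-ingested rows
    .filter(F.col("order_date").isNotNull())           # drop rows missing a key field
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("ingested_at", F.current_timestamp())  # audit column
)

cleansed.write.format("delta").mode("overwrite").save("Tables/clean_orders")
```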
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant AWS or Azure … the Right to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS: Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Fabric, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake. Please note that due to a high level …
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer) Our … modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark and SQL experience, possess a clear understanding of Databricks, as well as a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and No … Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data …
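For flavour, here is a minimal Delta Lake upsert of the sort such enterprise ETL pipelines commonly perform on Databricks. The table names and merge key are invented for illustration, and `spark` is assumed to be the cluster-provided session.

```python
# Illustrative Delta Lake upsert (merge); table names and keys are hypothetical.
from delta.tables import DeltaTable

updates = spark.read.format("delta").load("/mnt/staging/customers")
target = DeltaTable.forName(spark, "analytics.customers")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh rows for existing customers
    .whenNotMatchedInsertAll()   # insert rows for new customers
    .execute()
)
```

Merging rather than overwriting keeps the target table incremental, which is the usual choice for the large, frequently refreshed tables these roles describe.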
role for you. Key Responsibilities: Adapt and deploy a cutting-edge platform to meet customer needs Design scalable generative AI workflows (e.g., using Palantir) Execute complex data integrations using PySpark and similar tools Collaborate directly with clients to understand their priorities and deliver impact Why Join? Be part of a mission-driven startup redefining how industrial companies operate Work …
in designing and developing interactive dashboards and reports using BI tools like Oracle Business Intelligence Enterprise Edition (OBIEE) and Oracle Analytics Cloud (OAC). Proficiency in programming languages such as PySpark and PL/SQL for data processing and automation. Have a strong understanding of the Azure cloud platform and its services for deploying and managing cloud-based solutions. Experience with Cloud services such as Azure Data Services, ADLS and AKS. Experience with Python and PySpark for distributed data processing, along with proficiency in NumPy, Pandas and other data manipulation libraries. Experience in optimizing big data architectures for high availability and performance. Strong problem-solving skills, analytical mindset, and ability to work in fast-paced environments. Experience with the Incorta platform, including its Direct Data Mapping … support related documents and maintain & upgrade the existing ones. Discuss with business users for new requirements and prepare Functional & Technical Specification Documents. Build, optimize and manage large-scale data pipelines leveraging PySpark and Spark clusters. Deploy, orchestrate and manage containerized workloads using Azure Kubernetes Service (AKS) for distributed data processing and analytics. Design and manage Azure IaaS resources for high-performance data processing. …
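As a small example of the Python/PySpark-with-Pandas work this listing mentions, the sketch below aggregates in Spark and hands a small result to Pandas for reporting. The table and columns are illustrative, and an existing `spark` session is assumed.

```python
# Spark-to-Pandas handoff for reporting; table and columns are illustrative.
import pandas as pd
from pyspark.sql import functions as F

orders = spark.read.format("delta").load("Tables/clean_orders")

# Aggregate in Spark so only a small result crosses to the driver.
monthly = (
    orders.groupBy(F.date_format("order_date", "yyyy-MM").alias("month"))
          .agg(F.sum("amount").alias("revenue"))
)

pdf: pd.DataFrame = monthly.toPandas()   # safe: the aggregate is small
print(pdf.head())
```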
engineering practices. Experience with ETL development in data lake and data warehouse (preferably Snowflake) Exhibited hands-on experience in building data pipelines and reusable components using AWS Services and PySpark, and Snowflake within the last two years Exposure to Databricks Unity Catalog and UniForm is a plus The ability to deliver work at a steady, predictable pace to achieve …
/long-term Interview Criteria: Telephonic + Zoom Direct Client Requirement Role: AWS Data Engineer We are seeking a skilled AWS Data Engineer who has experience working with Python, PySpark, Lambda, Airflow, and Snowflake. Responsibilities: Design, build, and optimize ETLs using Python, PySpark, Lambda, Airflow and other AWS services. Create SQL queries to segment, manipulate, and format data. Build automations … and maintain ETL/ELT pipelines to ingest data into Amazon Redshift for analytics and reporting. Requirements: Minimum 5 years of experience as a Data Engineer. 3+ years of Python, PySpark, and Lambda. Must have experience with Airflow and Snowflake. Advanced SQL query development proficiency. Understanding of data modelling principles and techniques. Knowledge of data security best practices and compliance …
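As a sketch of the Airflow side of this role, here is a minimal DAG wiring a transform step to a Snowflake load. The callables, IDs, and schedule are placeholders, and the `schedule` argument assumes Airflow 2.4+; a real pipeline would likely use provider operators (e.g. for Snowflake) rather than plain Python tasks.

```python
# Minimal Airflow DAG sketch; task bodies and IDs are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_transform(**context):
    # Placeholder for a PySpark- or Lambda-driven transform step.
    print("transforming daily batch")


def load_to_snowflake(**context):
    # Placeholder; a real DAG might use a Snowflake provider operator.
    print("loading into Snowflake")


with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = PythonOperator(task_id="transform", python_callable=extract_and_transform)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)

    transform >> load
```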
specialists, contributing to the development and maintenance of advanced data pipelines and supporting various analytical initiatives. Responsibilities: • Assist in the development and maintenance of data pipelines using Spark, Scala, PySpark, and Python. • Support the deployment and management of AWS services including EC2, S3, and IAM. • Work with the team to implement and optimize big data processing frameworks such as … equivalent practical experience. • Basic knowledge of Spark and Hadoop distributed processing frameworks. • Familiarity with AWS services, particularly EC2, S3, and IAM. • Some experience with programming languages such as Scala, PySpark, Python, and SQL. • Understanding of data pipeline development and maintenance. • Strong problem-solving skills and the ability to work collaboratively in a team environment. • Eagerness to learn and grow …
scalable data pipelines and services. (Hybrid 2 days office - Dublin) Responsibilities Architecture leadership: Define roadmaps, set standards, choose tools, review designs. Hands-on engineering: Still coding in Python/PySpark, designing pipelines, embedding CI/CD. Mentorship: Guide and review other engineers' work. Security ownership: Especially around AWS IAM roles and least privilege access. Governance: Data models, metadata, documentation … Collaboration: Work closely with global colleagues; communicate designs clearly. Requirements AWS data stack, hands-on experience: Glue, Airflow/MWAA, Athena, Redshift, RDS, S3. Strong coding skills in Python & PySpark/Pandas & SQL. CI/CD automation: AWS CodePipeline, CloudFormation (infrastructure as code). Architect-level experience: At least 5 years in a senior/lead role. Security-first …
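To ground the Glue/S3 part of that stack, here is a skeleton Glue PySpark job; the catalog database, table, and bucket path are placeholders, not details from the listing.

```python
# Skeleton AWS Glue PySpark job; database, table, and path are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog and write Parquet to S3 for Athena.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="events"
)
glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/events/"},
    format="parquet",
)

job.commit()
```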
Technology Architect - Data Engineering (Hybrid - London) Contract: 6 months Work Mode: Hybrid (12 days WFO/month). We are looking for an experienced Technology Architect with deep expertise in PySpark, ADF, and Databricks to lead and design data engineering solutions for our client. What You'll Do: Lead technical design using Medallion architecture and Azure Services. Create conceptual diagrams, source … pipelines. Collaborate effectively with team members and stakeholders. Optional: Work with Log Analytics and KQL queries. Must-Have Skills: 10+ years of experience in Data Engineering. Hands-on experience with PySpark, ADF, Databricks, SQL. Strong understanding of dimensional modeling, normalization, schema design, and data harmonization. Experience with Erwin and data modeling tools. Excellent communication, problem-solving, and client-facing skills. Why …
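To illustrate the Medallion pattern the listing names, here is a minimal bronze-to-silver promotion in PySpark; the layer paths, columns, and quality rule are hypothetical, and a Databricks-provided `spark` session is assumed.

```python
# Minimal bronze -> silver promotion; paths, columns, and rules are hypothetical.
from pyspark.sql import functions as F

bronze = spark.read.format("delta").load("/mnt/lake/bronze/sales")

silver = (
    bronze
    .dropDuplicates(["sale_id"])                      # de-duplicate raw ingests
    .filter(F.col("amount") > 0)                      # basic quality gate
    .withColumn("sale_date", F.to_date("sale_ts"))    # conform types for the silver layer
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/sales")
```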
Our financial services client is seeking a Database Developer for an exciting direct hire opportunity. DUTIES: Design and develop ETL processes to transform a variety of raw data, flat files and Excel spreadsheets into SQL databases. Develop and optimize queries …
role: Adapt and deploy a powerful data platform to solve complex business problems Design scalable generative AI workflows using modern platforms like Palantir AIP Execute advanced data integration using PySpark and distributed technologies Collaborate directly with clients to understand priorities and deliver outcomes What We're Looking For: Strong skills in PySpark, Python, and SQL Ability to translate …
equivalent combination of education, technical training, or work/military experience Over 5 years of hands-on Pentaho experience Over 2 years of software development experience in JavaScript, Python, PySpark, or other object-oriented development languages Required Skills: 1) Min 5 years of solid Pentaho-specific ETL development experience 2) Min 2 years of Python and PySpark (Pandas …
leaders, working at the intersection of cutting-edge technology and real-world impact. As part of this role, you will be responsible for: Executing complex data integration projects using PySpark and distributed technologies Designing and implementing scalable generative AI workflows using modern AI infrastructure Collaborating with cross-functional teams to ensure successful delivery and adoption Driving continuous improvement and … innovation across client engagements To be successful in this role, you will have: Experience working in data engineering or data integration Strong technical skills in Python or PySpark Exposure to generative AI platforms or interest in building AI-powered workflows Ability to work closely with clients and lead delivery in fast-paced environments Exposure to Airflow, Databricks or DBT …
Bournemouth, Dorset, United Kingdom Hybrid / WFH Options
LV=
About The Role We are looking for an experienced Test Manager with a strong background in data platform testing, particularly across Microsoft Fabric, Azure services, PySpark notebooks, and automated testing frameworks. You will play a pivotal role in ensuring quality and governance across data pipelines, lakehouses, and reporting layers. This role requires a hands-on leader who can define … Microsoft Fabric workloads (Lakehouses, Pipelines, Notebooks, Power BI). Lead and manage the QA effort across multiple agile teams. Drive the development and maintenance of automated testing for: (1) PySpark notebooks in Fabric/Databricks, (2) data pipelines and transformations, (3) Delta tables and lakehouse validation. Embed testing into CI/CD pipelines (Azure DevOps or GitHub Actions). …
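As a flavour of the automated testing such a role embeds in CI/CD, here is a minimal pytest case for a PySpark transformation; the function under test and its rule are invented stand-ins for notebook logic.

```python
# Minimal pytest sketch for a PySpark transform; the rule is hypothetical.
import pytest
from pyspark.sql import SparkSession, functions as F


def drop_invalid_rows(df):
    """Example transform under test: keep rows with a positive amount."""
    return df.filter(F.col("amount") > 0)


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps CI runs cheap and deterministic.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_drop_invalid_rows(spark):
    df = spark.createDataFrame([(1, 10.0), (2, -5.0)], ["id", "amount"])
    result = drop_invalid_rows(df)
    assert result.count() == 1
    assert result.first()["id"] == 1
```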
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
Data Engineer | Fabric | Azure | Python | SQL | PySpark Senior Data Engineer - Up to £90,000 Hybrid Working - 1 day per fortnight in office Method Resourcing are thrilled to be partnering with a rapidly scaling business who are preparing to embark on a greenfield Fabric implementation to unify data platforms and analytics across the business. To achieve this, they are looking … roadmap for the business. The ideal candidate will have commercial hands-on experience implementing Fabric into a business and come from an Azure background, with expertise in Python/PySpark and SQL development. The role is paying up to £100,000 and is offered on a hybrid model, 1 day per fortnight in the London office. RSG Plc is …
alerting systems to maintain data health and accuracy Define KPIs and thresholds in collaboration with technical and non-technical stakeholders Develop and productionise machine learning and statistical models (Python, PySpark) Deploy monitoring solutions on AWS infrastructure Create scalable frameworks for future monitoring needs Investigate anomalies and ensure quick resolution of issues in the data pipeline Advocate for data quality … best practices across the business Provide mentorship and contribute to a culture of continuous improvement About You: Proficient in Python and SQL Experience working with large datasets, preferably using PySpark Solid understanding of AWS or similar cloud infrastructure Methodical, detail-oriented, and comfortable working independently Able to translate business needs into technical solutions Previous experience building monitoring or data …
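A minimal sketch of the threshold-based quality check this listing describes is shown below; the metric, threshold, and failure behaviour are assumptions, and an existing `spark` session is presumed.

```python
# Threshold-based data-quality check; metric and threshold are hypothetical.
from pyspark.sql import functions as F

df = spark.read.format("delta").load("/mnt/lake/silver/events")

# Share of rows with a missing user_id, computed as a single aggregate.
null_rate = (
    df.select(F.avg(F.col("user_id").isNull().cast("double")).alias("rate"))
      .first()["rate"]
)

NULL_RATE_THRESHOLD = 0.01  # example KPI: at most 1% missing user IDs

if null_rate > NULL_RATE_THRESHOLD:
    # In production this might alert via SNS/CloudWatch; here we fail fast.
    raise ValueError(f"user_id null rate {null_rate:.2%} breaches threshold")
```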
West London, London, England, United Kingdom Hybrid / WFH Options
Young's Employment Services Ltd
to enable data-based decision makers across the business. The Data Analyst will work primarily with finance, product and planning teams utilising the likes of Power BI, SQL, Python, PySpark and SAP amongst other tools. This is an exciting opportunity best suited to those who thrive in busy business-facing environments. This is a hybrid role based in Central … of the possible with BI, reporting and dashboards. Key Experience, Skills and Knowledge: At least 3 years as a Data Analyst or similar data-related experience. Experience with SQL, PySpark and Python. Expert user in Power BI. Expert user in Excel. Demonstrable experience in building out analysis and reports. Highly organised but flexible, with excellent communication skills. Able to …