enhance cloud capabilities. Key Skills & Experience: Strong proficiency in SQL and Python. Experience in cloud data solutions (AWS, GCP, or Azure). Experience in AI/ML. Experience with PySpark or equivalent. Strong problem-solving and analytical skills. Excellent attention to detail. Ability to manage stakeholder relationships effectively. Strong communication skills and a collaborative approach. Why Join Us? Work …
Luton, Bedfordshire, South East, United Kingdom Hybrid / WFH Options
Anson Mccade
practice. Essential Experience: Proven expertise in building data warehouses and ensuring data quality on GCP. Strong hands-on experience with BigQuery, Dataproc, Dataform, Composer, Pub/Sub. Skilled in PySpark, Python, and SQL. Solid understanding of ETL/ELT processes. Clear communication skills and the ability to document processes effectively. Desirable Skills: GCP Professional Data Engineer certification. Exposure to Agentic …
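By way of illustration, the BigQuery-plus-PySpark work described above often takes the shape of the minimal Dataproc-style job below. This is a sketch, not anything from the advert: it assumes the spark-bigquery connector is available on the cluster, and the project, dataset, table, and column names are all placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Sketch only: assumes a Dataproc cluster with the spark-bigquery connector.
# Project, dataset, table and column names below are placeholders.
spark = SparkSession.builder.appName("bq-daily-counts").getOrCreate()

orders = (
    spark.read.format("bigquery")
    .option("table", "my-project.sales.orders")
    .load()
)

# A simple data-quality gate before anything lands downstream.
missing_keys = orders.filter(F.col("order_id").isNull()).count()
if missing_keys:
    raise ValueError(f"{missing_keys} rows are missing order_id - aborting load")

daily = orders.groupBy(F.to_date("created_at").alias("day")).count()

(
    daily.write.format("bigquery")
    .option("writeMethod", "direct")  # Storage Write API; avoids a staging bucket
    .mode("overwrite")
    .save("my-project.sales.daily_order_counts")
)
```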
on experience with the Azure Data Stack, critically ADF and Synapse (experience with Microsoft Fabric is a plus). Highly developed Python and data pipeline development knowledge, which must include substantial PySpark experience. Demonstrable DevOps and DataOps experience, with an understanding of best practices for engineering, test, and ongoing service delivery. An understanding of Infrastructure as Code concepts (demonstrable Terraform experience …
years in data engineering or backend development focused on data platforms. Strong hands-on experience with AWS services, especially Glue, Athena, Lambda, and S3. Proficient in Python (ideally PySpark) and modular SQL for transformations and orchestration. Solid grasp of data modeling (partitioning, file formats like Parquet, etc.). Comfort with CI/CD, version control, and infrastructure-as-code …
Create solutions and environments to enable Analytics and Business Intelligence capabilities. Your Profile Essential skills/knowledge/experience: Design, develop, and maintain scalable ETL pipelines using AWS Glue (PySpark). Strong hands-on experience with DBT (Cloud or Core). Implement and manage DBT models for data transformation and modeling in a modern data stack. Proficiency in SQL … Python, and PySpark. Experience with AWS services such as S3, Athena, Redshift, Lambda, and CloudWatch. Familiarity with data warehousing concepts and modern data stack architectures. Experience with CI/CD pipelines and version control (e.g., Git). Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Optimize data workflows for performance, scalability, and cost …
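For context, the "scalable ETL pipelines using AWS Glue (PySpark)" requirement typically implies a job like the sketch below. The boilerplate is the standard Glue job skeleton; the bucket paths and column names are invented for illustration.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Illustrative transformation: bucket paths and columns are placeholders.
raw = spark.read.option("header", "true").csv("s3://example-raw/orders/")
cleaned = raw.where("order_id IS NOT NULL").dropDuplicates(["order_id"])

# Partitioned Parquet keeps downstream Athena / Redshift Spectrum scans cheap.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/orders/"
)

job.commit()
```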
City of London, London, United Kingdom Hybrid / WFH Options
un:hurd music
Integration: Develop and integrate efficient data pipelines by collecting high-quality, consistent data from external APIs and ensuring seamless incorporation into existing systems. Big Data Management and Storage: Utilize PySpark for scalable processing of large datasets, implementing best practices for distributed computing. Optimize data storage and querying within a data lake environment to enhance accessibility and performance. ML R&D …
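A minimal sketch of the API-to-data-lake pattern this listing describes. The endpoint, lake path, and inferred schema are assumptions; a production pipeline would add authentication, pagination, and an explicit schema.

```python
import requests
from pyspark.sql import SparkSession, functions as F

# Hypothetical endpoint and lake path - placeholders, not the company's systems.
API_URL = "https://api.example.com/v1/tracks"

spark = SparkSession.builder.appName("api-ingest").getOrCreate()

# Pull one batch of records; real code would handle auth, paging and retries.
records = requests.get(API_URL, timeout=30).json()
df = spark.createDataFrame(records)  # schema inference, simplified for the sketch

# Partitioning by ingest date keeps data-lake queries cheap and incremental.
(
    df.withColumn("ingest_date", F.current_date())
    .write.mode("append")
    .partitionBy("ingest_date")
    .parquet("s3://example-lake/raw/tracks/")
)
```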
international regions. The role leverages a modern tech stack including SQL, Python, Airflow, Kubernetes, and various other cutting-edge technologies. You'll work with tools like dbt on Databricks, PySpark, Streamlit, and Django, ensuring robust data infrastructure that powers business-critical operations. What makes this role particularly exciting is the combination of technical depth and business impact. You'll …
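To give a flavour of the orchestration stack named here, below is a minimal Airflow 2.x DAG that runs dbt build-and-test steps. The project directory, schedule, and task split are assumptions rather than the employer's actual setup.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Sketch only: project path and schedule are assumptions (Airflow 2.4+ API).
with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    dbt_run >> dbt_test  # build models first, then run their tests
```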
As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data architecture and infrastructure. The successful candidate should possess a strong foundation in Python, PySpark, SQL, and ETL processes, with a demonstrated ability to implement solutions in a cloud environment. Position - Senior Data Engineer. Experience - 6+ yrs. Location - London. Job Type - Hybrid, Permanent. Mandatory Skills: Design, build, and maintain data pipelines using Python, PySpark, and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS/Azure/GCP. Collaborate with data scientists and business analysts to understand their data needs and develop solutions that meet their requirements. Develop and maintain data models and data dictionaries for … improve the performance and scalability of our data solutions. Qualifications: Minimum 6+ years of total experience. At least 4 years of hands-on experience with the mandatory skills: Python, PySpark, SQL.
City of London, London, United Kingdom Hybrid / WFH Options
Databuzz Ltd
As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data architecture and infrastructure. The successful candidate should possess a strong foundation in Python, PySpark, SQL, and ETL processes, with a demonstrated ability to implement solutions in a cloud environment. Position - Sr Data Engineer. Experience - 6-9 Years. Location - London. Job Type - Hybrid, Permanent … Mandatory Skills: Design, build, and maintain data pipelines using Python, PySpark, and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS/Azure/GCP. Collaborate with data scientists and business analysts to understand their data needs and develop solutions that meet their requirements. Develop and maintain data models and data dictionaries for … improve the performance and scalability of our data solutions. Qualifications: Minimum 6+ years of total experience. At least 4+ years of hands-on experience with the mandatory skills: Python, PySpark, SQL.
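These near-identical adverts describe one extract-transform-load flow: read from a source, transform with Python/PySpark and SQL, and load into a cloud warehouse. A compact sketch under those assumptions (all paths and connection details are placeholders, and the JDBC sink merely stands in for whichever warehouse is in play):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: source path is a placeholder.
orders = spark.read.parquet("s3://example-lake/raw/orders/")
orders.createOrReplaceTempView("orders")

# Transform: expressing business logic in SQL keeps it reviewable by analysts.
daily_revenue = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'COMPLETE'
    GROUP BY order_date
""")

# Load: a generic JDBC target; the adverts leave AWS/Azure/GCP open,
# so the real sink (Redshift, Synapse, BigQuery) would vary.
(
    daily_revenue.write.format("jdbc")
    .option("url", "jdbc:postgresql://warehouse.example.com:5432/analytics")
    .option("dbtable", "reporting.daily_revenue")
    .option("user", "etl_user")
    .option("password", "change-me")
    .mode("overwrite")
    .save()
)
```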
Bracknell, Berkshire, South East, United Kingdom Hybrid / WFH Options
Halian Technology Limited
business intelligence, reporting, and regulatory needs. Lead the integration and optimisation of large-scale data platforms using Azure Synapse and Databricks. Build and maintain robust data pipelines using Python (PySpark) and SQL. Collaborate with data engineers, analysts, and stakeholders to ensure data quality, governance, and security. Ensure all solutions adhere to financial regulations and internal compliance standards. Key Skills & Experience: Proven experience as a Data Architect within the financial services sector. Hands-on expertise with Azure Synapse Analytics and Databricks. Strong programming and data engineering skills in Python (PySpark) and SQL. Solid understanding of financial data and regulatory compliance requirements. Excellent stakeholder communication and documentation skills …
and motivated Data Engineer to play a key role in the creation of a brand-new data platform within the Azure ecosystem, including Azure Data Factory (ADF), Synapse, PySpark/Databricks, and Snowflake. You will be a data ingestion and ETL pipeline guru, tackling complex problems at source in order to retrieve the data and ensure you can … unless you have the skills and desire to work on data ingestion, ETL/ELT. Key Responsibilities: Build and develop robust ETL/data ingestion pipelines leveraging Azure Data Factory, Synapse, PySpark, and Python. Connect APIs, databases, and data streams to the platform, implementing ETL/ELT processes. Data Integrity – Embed quality measures, monitoring, and alerting mechanisms. CI/CD & Automation …
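The "embed quality measures, monitoring, and alerting mechanisms" responsibility might look something like the ingestion-time gate below. The storage paths, the 1% threshold, and the Delta output format are assumptions made for the sketch.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-with-checks").getOrCreate()

# Placeholder ADLS paths; real names would come from the platform config.
df = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/events/")

total = df.count()
nulls = df.filter(F.col("event_id").isNull()).count()
null_ratio = nulls / total if total else 1.0

# Fail fast so the orchestrator (e.g. an ADF/Synapse pipeline) surfaces an alert.
if null_ratio > 0.01:
    raise RuntimeError(f"event_id null ratio {null_ratio:.2%} exceeds the 1% threshold")

df.write.format("delta").mode("append").save(
    "abfss://curated@examplelake.dfs.core.windows.net/events/"
)
```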
Promote clean, efficient, and maintainable coding practices. Required Technical Skills: Proven experience in data warehouse architecture and implementation. Expertise in designing and configuring Azure-based deployment pipelines. SQL, Python, PySpark, Azure Data Lake + Databricks, and a traditional ETL tool. This is an excellent opportunity for a talented Senior Data Engineer to join a business looking to build a best …
recruiting on behalf of our global energies trading client in London for a Principal Data Engineer who can offer demonstrable experience in: *Technologies* - Databricks (DLT, Performance Tuning, Cost Optimization), PySpark, Python, SQL, ADF. *Capabilities* – leading a data engineering team, being technically hands-on and driving a project to completion, experience running a scrum team. *Skills* - Data Modelling, Data Integration …
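For reference, the Databricks DLT experience asked for here centres on pipelines like the minimal sketch below. Table names, paths, and the expectation are illustrative; `dlt` and `spark` are supplied by the DLT runtime, so this only runs inside a Databricks pipeline.

```python
import dlt
from pyspark.sql import functions as F

# Runs only inside a Databricks DLT pipeline, where `spark` is provided.
# Paths, table names and the expectation below are illustrative.

@dlt.table(comment="Raw trades ingested from the landing zone.")
def trades_raw():
    return spark.read.json("/mnt/landing/trades/")

@dlt.table(comment="Validated trades for downstream marts.")
@dlt.expect_or_drop("valid_trade_id", "trade_id IS NOT NULL")
def trades_clean():
    # Dropping bad rows at the boundary is one lever for the cost/performance
    # tuning the advert mentions: downstream tables stay small and clean.
    return dlt.read("trades_raw").withColumn("ingested_at", F.current_timestamp())
```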
ownership, lineage, sensitivity and definitions. Ensure compliance with GDPR and other data regulations when handling sensitive information. Support the stability and performance of enterprise data platforms. Requirements: Proficient with PySpark, Delta Lake, Unity Catalog and Python (including unit and integration testing). Deep understanding of software development principles (SOLID, testing, CI/CD, version control). Strong knowledge of …
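A minimal example of the "unit and integration testing" this listing calls for, using pytest with a local Spark session. The `dedupe_events` transformation is a hypothetical stand-in for a real pipeline step.

```python
import pytest
from pyspark.sql import DataFrame, SparkSession

def dedupe_events(df: DataFrame) -> DataFrame:
    # Hypothetical transformation under test.
    return df.dropDuplicates(["event_id"])

@pytest.fixture(scope="session")
def spark():
    # local[2] keeps tests fast and self-contained on a laptop or CI runner.
    return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()

def test_dedupe_events_keeps_one_row_per_id(spark):
    df = spark.createDataFrame(
        [("e1", "2024-01-01"), ("e1", "2024-01-02"), ("e2", "2024-01-01")],
        ["event_id", "event_date"],
    )
    assert dedupe_events(df).count() == 2
```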
profiling, ingestion, collation and storage of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and of working in an open-source stack (PySpark/PySQL). Quality engineering professionals utilise Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a Data Engineer, you will: Digest … and maintain data engineering best practices and contribute to data analytics insights and visualization concepts, methods, and techniques. We are looking for experience in the following skills: Palantir, Python, PySpark/PySQL, AWS or GCP. Set yourself apart: What's in it for you: At Accenture, in addition to a competitive basic salary, you will also have an extensive …
City of London, London, United Kingdom Hybrid / WFH Options
Intec Select
complex ideas. Proven ability to manage multiple projects and meet deadlines in dynamic environments. Proficiency with SQL Server in high-transaction settings. Experience with either C# or Python/PySpark for data tasks. Hands-on knowledge of Azure cloud services, such as Databricks, Event Hubs, and Function Apps. Solid understanding of DevOps principles and tools like Git, Azure DevOps …
Learning role; additional data or commercial experience is a plus. Strong mathematical background, with a focus on statistics and linear algebra. Highly proficient in Python (Pandas, Scikit-Learn, PyTorch, PySpark) and SQL. Experience with Snowflake (functions & procedures) and Snowpark is a plus. Experience with unit and integration tests. Strong understanding of machine learning algorithms and best practices. Vision for …
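As a toy illustration of the modelling stack in this listing, here is a minimal scikit-learn train-and-evaluate sketch on synthetic data; in the role itself, features would presumably come from Snowflake/Snowpark rather than `make_classification`.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real features (the advert implies these live in Snowflake).
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A deliberately simple baseline; real work would compare richer models.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.3f}")
```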