Closely collaborate with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions. RESPONSIBILITIES Data Pipeline Development: Design, implement, and maintain scalable data pipelines for … sources using tools such as Databricks, Python and PySpark. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data Technologies … teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions. Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions. Best Practices: Continuously learn and apply best practices in data engineering and cloud computing. QUALIFICATIONS Proven experience as …
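The pipeline duties described above (extract from a source, transform into a usable format, load into a warehouse) can be sketched in miniature. This is an illustrative stand-alone Python sketch, not code from any of the roles listed: sqlite3 stands in for the warehouse the ads mention, and the CSV feed, `orders` table, and column names are hypothetical.

```python
import csv
import io
import sqlite3

# A hypothetical raw feed; note the duplicate row that transform must drop.
RAW_CSV = "order_id,amount\n1,10.50\n2,3.25\n2,3.25\n"

def extract(raw: str) -> list:
    """Extract: parse the raw feed into dict records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list) -> list:
    """Transform: cast types and de-duplicate on order_id."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] not in seen:
            seen.add(r["order_id"])
            out.append((int(r["order_id"]), float(r["amount"])))
    return out

def load(rows: list, conn: sqlite3.Connection) -> int:
    """Load: write cleaned records into the warehouse table, return row count."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER PRIMARY KEY, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
print(loaded)  # 2 rows survive after de-duplication
```

In a Databricks/PySpark setting the same extract/transform/load shape appears, with DataFrames replacing the lists and a lakehouse table replacing sqlite3.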
focused, but other Cloud technologies such as Oracle and Amazon Web Services (AWS) are used in FCDO. The successful candidates will build complex data pipelines (both Extract, Transform and Load (ETL) and Extract, Load and Transform (ELT)) in the Azure cloud platforms. You will work with structured and unstructured data, data lakes, and data warehouses to service operational and analytical business needs. The …
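Several of the listings contrast ETL with ELT. The difference is only where the transform runs: before the load (in the pipeline) or after it (inside the warehouse). A minimal sketch of both orderings, with sqlite3 standing in for the cloud warehouse and hypothetical table names:

```python
import sqlite3

# A hypothetical raw feed with messy text amounts.
raw = [("2024-01-01", " 100 "), ("2024-01-02", " 250 ")]

# ETL: transform in the pipeline, then load the cleaned result.
etl_conn = sqlite3.connect(":memory:")
etl_conn.execute("CREATE TABLE events (day TEXT, amount INTEGER)")
cleaned = [(d, int(a.strip())) for d, a in raw]                    # transform first
etl_conn.executemany("INSERT INTO events VALUES (?, ?)", cleaned)  # then load

# ELT: load the raw feed untouched, then transform inside the warehouse.
elt_conn = sqlite3.connect(":memory:")
elt_conn.execute("CREATE TABLE raw_events (day TEXT, amount TEXT)")
elt_conn.executemany("INSERT INTO raw_events VALUES (?, ?)", raw)  # load first
elt_conn.execute(
    "CREATE TABLE events AS "
    "SELECT day, CAST(TRIM(amount) AS INTEGER) AS amount FROM raw_events"
)  # transform in-warehouse (in practice, SQL or dbt running on the platform)

etl_total = etl_conn.execute("SELECT SUM(amount) FROM events").fetchone()[0]
elt_total = elt_conn.execute("SELECT SUM(amount) FROM events").fetchone()[0]
print(etl_total, elt_total)  # both arrive at 350
```

ELT keeps the raw data queryable in the warehouse, which is why it dominates on platforms like Snowflake, BigQuery and Synapse that the later listings name.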
the Role Designing, building and maintaining data pipelines. Building and maintaining data warehouses. Data cleansing and transformation. Developing and maintaining ETL (extract, transform, load) processes to extract, transform, and load data from various sources into data warehouses. Validating charts and reports created by systems built in-house. Creating validation tools. Developing and maintaining data models and data tools. Monitoring and … Experience in the R programming language. Experience in the Python programming language. Experience in designing, building and maintaining data pipelines. Experience with data warehousing and data lakes. Experience in developing and maintaining ETL processes. Experience in developing data integration tools. Experience in data manipulation, data analysis, and data modelling. Experience with cloud platforms (AWS, Azure, etc.). Experience in designing scalable, secure, and cost …
Technical Business Analysis experience. A proactive awareness of industry standards, regulations, and developments. Ideally, you'll also have: Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, DataStage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross- and multi-platform experience. Team building …
work as part of a collaborative team to solve problems and assist other colleagues. • Ability to learn new technologies, programs and procedures. Technical Essentials: • Expertise across data warehouse and ETL/ELT development, AWS preferred, with experience in the following: • Strong experience in some of the AWS services like Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation …
markets, balancing mechanisms, and regulatory frameworks (e.g., REMIT, EMIR). Expert in Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask). Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as Data Lake, Synapse, Event Hubs, and Functions. …
data modeling, DAX, report design). Experience with Azure Data Factory and/or Microsoft Fabric for pipeline development (or Python pipeline development). Understanding of data warehouse design and ETL/ELT best practices. Strong communication and stakeholder engagement skills. Customer service mindset with integrity, professionalism and confidentiality. Self-motivated, diligent, and results-oriented. Willingness to learn and grow in …
Knutsford, Cheshire, North West, United Kingdom Hybrid / WFH Options
The Veterinary Defence Society
improve data processes. Contribute to project teams, ensuring data requirements are addressed from the outset. Continuously improve BI tools and data engineering practices. Required Skills & Experience Proficiency in SQL, ETL processes, data warehousing, and data modelling (MS SQL preferred). Proven experience in data engineering or analysis. Strong analytical and problem-solving skills. Excellent communication skills, able to explain technical concepts …
Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based …
Middlesbrough, Yorkshire, United Kingdom Hybrid / WFH Options
Causeway Technologies
products. Work with the Group Architect to align team standards and processes to Causeway's, and influence the evolution of Causeway's standards and processes. Essential experience Experience with ETL/ELT processes and frameworks. Experience with CI/CD pipelines and Infrastructure as Code, and an understanding of SDLC principles in data engineering workflows. Previous background in a similar software engineering …
on experience with modern data stack tools including dbt, Airflow, and cloud data warehouses (Snowflake, BigQuery, Redshift). Strong understanding of data modelling, schema design, and building maintainable ELT/ETL pipelines. Experience with cloud platforms (AWS, Azure, GCP) and infrastructure-as-code practices. Familiarity with data visualisation tools (Tableau, Power BI, Looker) and analytics frameworks. Leadership & Communication Proven experience leading technical …
Central London, London, United Kingdom Hybrid / WFH Options
Halian Technology Limited
platform and tooling. Required Skills & Experience 3-5+ years of experience as a Data Engineer, preferably in a FinTech or Payments environment. Strong expertise in data modeling (dimensional and normalized), ETL/ELT design, and data warehousing techniques. Proven experience with at least one major cloud data warehouse: Snowflake, Redshift, BigQuery, or similar. Proficiency in SQL and experience with scripting …
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
organisation to recruit a Junior Data Engineer. Hybrid working, London based. Role The Junior Data Engineer will assist in the design, development, and maintenance of scalable data pipelines and ETL/ELT processes in Azure. Write efficient and reliable SQL queries for data extraction, transformation, and analysis. Support data integration from various sources (internal systems, third-party vendors) into centralised …
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
clients to gather requirements and deliver solutions. Be willing to engage and assist in pre-sales activities, bids, proposals, etc. Use key techniques such as Governance, Architecture, Data Modelling, ETL/ELT, Data Lakes, Data Warehousing, Master Data, and BI. Consistently utilise key processes in the engineering delivery cycle, including Agile and DevOps, Git, APIs, Containers, Microservices and Data Pipelines. …
that impact the data warehouse. Ensure data accuracy, consistency, and integrity across warehouse and source systems. Maintain and evolve the data dictionary and associated metadata for the warehouse and ETL systems. Mentor and support team members to build a high-performing, resilient data function. Keep up to date with industry developments and maintain relevant technical expertise. Complete all mandatory training …
platform for data analytics, including design and deployment of infrastructure. Expertise in creating CI/CD pipelines. Experience in creating FTP (SFTP/FTPS) configurations. Experience in working with ETL/ELT workflows for data analytics. Degree in Computer Science, Mathematics or a related subject. Highly desirable skills & exposure: Working collaboratively as part of an Agile development squad. Experience and knowledge …
plus. Experience in data modeling and warehouse architecture design. Experience with data orchestration and workflow management tools such as Airflow. Solid understanding of data ingestion, transformation, and ELT/ETL practices. Expertise in data transformation frameworks such as dbt. Experience with Infrastructure as Code (IaC) tools (Terraform preferred). Experience with version control systems (GitHub preferred). Experience with continuous integration and delivery …
and develops proactive solutions for data quality and system performance issues. Act as a liaison between departments, facilitating communication and ensuring alignment on data-driven goals. Support and enhance ETL (Extract, Transform, Load) processes for seamless data integration. Provide ongoing training and guidance to individuals and teams on using data, tools, dashboards, and reports. Design, refine, and implement interactive reports …
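The listing above pairs ETL work with proactive data-quality solutions. In practice that often means a validation gate that runs before the load step. A minimal stand-alone sketch; the rules and record shape here are hypothetical illustrations, not requirements from the role:

```python
# A minimal data-quality gate an ETL step might run before loading.
def validate(records):
    """Split records into (valid_rows, errors); reject rows that break basic rules."""
    valid, errors = [], []
    seen_ids = set()
    for i, rec in enumerate(records):
        if rec.get("id") is None:
            errors.append((i, "missing id"))
        elif rec["id"] in seen_ids:
            errors.append((i, "duplicate id"))
        elif not isinstance(rec.get("amount"), (int, float)) or rec["amount"] < 0:
            errors.append((i, "bad amount"))
        else:
            seen_ids.add(rec["id"])
            valid.append(rec)
    return valid, errors

rows = [
    {"id": 1, "amount": 9.99},
    {"id": 1, "amount": 5.00},   # duplicate id -> rejected
    {"id": 2, "amount": -3},     # negative amount -> rejected
    {"id": 3, "amount": 12},
]
valid, errors = validate(rows)
print(len(valid), len(errors))  # 2 valid rows, 2 rejections
```

Routing the rejections to a quarantine table, rather than silently dropping them, is what turns a check like this into a "proactive solution": the bad rows become a monitorable signal.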
Required Skills: Proficiency in BI tools (Power BI, Tableau, QlikView) and data visualization techniques Strong SQL skills and experience with relational databases (e.g., SQL Server, Oracle, MySQL) Familiarity with ETL processes and data warehousing concepts Experience in data modeling and designing effective reporting structures Knowledge of programming languages such as Python or R for data manipulation and analysis Strong analytical …
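"Designing effective reporting structures", as the listing above puts it, usually means a dimensional model: a fact table of measurements joined to dimension tables of descriptive attributes. A toy example using Python's sqlite3; the table and column names are hypothetical:

```python
import sqlite3

# A tiny dimensional model: one dimension, one fact table, and the join
# a BI report would run against them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (product_id INTEGER REFERENCES dim_product, qty INTEGER);
INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
INSERT INTO fact_sales  VALUES (1, 3), (1, 2), (2, 7);
""")

# Reporting query: aggregate the fact table by a dimension attribute.
report = conn.execute("""
    SELECT d.category, SUM(f.qty) AS units
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
print(report)  # [('books', 5), ('games', 7)]
```

BI tools like Power BI or Tableau sit on top of exactly this shape: the dimension columns become the report's slicers and axes, the fact columns its measures.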
discuss how your expertise could strengthen our growing data practice. As a data engineer/scientist, you will: Data Engineering Focus: Design, implement, and maintain scalable data pipelines and ETL processes. Develop and maintain data warehouses and data lakes. Implement data quality monitoring and validation systems. Create and maintain data documentation and cataloguing systems. Optimize data storage and retrieval systems …