closely collaborate with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions. RESPONSIBILITIES Data Pipeline Development: Design, implement, and maintain scalable data pipelines for … sources using tools such as Databricks, Python, and PySpark. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses. Big Data Technologies … teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions. Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions. Best Practices: Continuously learn and apply best practices in data engineering and cloud computing. QUALIFICATIONS Proven experience as …
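The pipeline-development responsibility above maps naturally onto a small PySpark job. The sketch below is a minimal illustration, assuming a Databricks-style environment with Delta Lake available; the paths, column names, and table name are hypothetical placeholders, not anything specified by the posting.

```python
# Minimal PySpark ETL sketch: extract raw CSV files, apply a simple
# transformation, and load the result as a Delta table.
# All paths and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw files from a landing zone
raw = spark.read.option("header", True).csv("/mnt/landing/orders/")

# Transform: type-cast, deduplicate, and stamp the load time
clean = (
    raw.withColumn("order_total", F.col("order_total").cast("double"))
       .dropDuplicates(["order_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Load: write to a lakehouse table (Delta format on Databricks)
clean.write.format("delta").mode("append").saveAsTable("analytics.orders")
```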
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
Key Responsibilities: Data Pipelines and ETL Processes ETL Design: Design and implement ETL processes within MaPS architectural patterns to extract, transform, and load data from various source systems into our reporting solutions. Pipeline Development: Develop and configure metadata-driven data pipelines using data orchestration tools such as Azure Data Factory and engineering tools like Apache Spark to ensure seamless … these tools. You will need to demonstrate the following skills and experience: Proven experience as a Data Engineer, with a focus on Azure services. Strong expertise in designing and implementing ETL processes. Experience in using SQL to query and manipulate data. Proficiency in Azure data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/OneLake, and …
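A minimal sketch of the metadata-driven pattern mentioned above: one generic Spark copy routine driven by a list of source definitions, so new feeds are onboarded by adding metadata rather than code. In practice the control records would live in a database table or be passed in as Azure Data Factory parameters; all names here are illustrative assumptions.

```python
# Sketch of metadata-driven ingestion: one config record per source
# table drives a single generic copy routine.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata_driven_ingest").getOrCreate()

# Hypothetical control metadata; real pipelines would read this from
# a control table or receive it from the orchestrator (e.g. ADF).
SOURCES = [
    {"name": "customers", "path": "/mnt/raw/crm/customers/", "format": "parquet"},
    {"name": "invoices",  "path": "/mnt/raw/erp/invoices/",  "format": "csv"},
]

for src in SOURCES:
    reader = spark.read.format(src["format"])
    if src["format"] == "csv":
        reader = reader.option("header", True)
    df = reader.load(src["path"])
    # Land each source in a staging schema under its configured name
    df.write.mode("overwrite").saveAsTable(f"staging.{src['name']}")
```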
our data solutions are secure, efficient, and optimized. Key Responsibilities: Design and implement data solutions using Azure services, including Azure Databricks, ADF, and Data Lake Storage. Develop and maintain ETL/ELT pipelines to process structured and unstructured data from multiple sources. Automate loads using Databricks Workflows and Jobs. Develop, test, and build CI/CD pipelines using Azure DevOps …
Maintain and leverage CI/CD deployment pipelines to promote application code into higher-tier environments. To be successful in this role, you'll: Have experience developing ELT/ETL ingestion pipelines for structured and unstructured data sources. Have experience with Azure cloud platform tools such as Azure Data Factory, Databricks, Logic Apps, Azure Functions, ADLS, SQL Server, and Unity …
Technical Business Analysis experience. A proactive awareness of industry standards, regulations, and developments. Ideally, you'll also have: Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, DataStage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross- and multi-platform experience. Team building …
work as part of a collaborative team to solve problems and assist other colleagues. • Ability to learn new technologies, programs and procedures. Technical Essentials: • Expertise across data warehouse and ETL/ELT development in AWS preferred, with experience in the following: • Strong experience in some of the AWS services like Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation …
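As one concrete illustration of how the AWS services above can fit together, the sketch below shows an S3-triggered Lambda step that copies newly landed objects into a curated prefix. The bucket names are placeholders; a real pipeline would add validation and error handling, then hand off to Step Functions or a Redshift COPY.

```python
# Hedged sketch of an S3-triggered AWS Lambda step in an ETL flow:
# on object arrival in a raw bucket, copy it to a curated prefix.
import urllib.parse
import boto3

s3 = boto3.client("s3")
CURATED_BUCKET = "my-curated-bucket"  # hypothetical bucket name

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded; decode before use
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            CopySource={"Bucket": bucket, "Key": key},
            Bucket=CURATED_BUCKET,
            Key=f"curated/{key}",
        )
    return {"copied": len(event["Records"])}
```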
Advanced SQL skills and experience architecting solutions on modern data warehouses (e.g., Snowflake, BigQuery, Redshift). Hands-on experience with advanced modelling techniques in dbt. A deep understanding of ETL/ELT processes and tools (e.g., Fivetran, Airbyte, Stitch). Experience with data visualisation tools (e.g., Mode, Looker, Tableau, Power BI) and designing robust BI semantic layers. Exceptional understanding of …
markets, balancing mechanisms, and regulatory frameworks (e.g., REMIT, EMIR). Expert in Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask). Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as Data Lake, Synapse, Event Hubs, and Functions. …
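For the orchestration tools named above, a minimal Airflow sketch (assuming Airflow 2.4+ for the `schedule` argument) wires three stub tasks into the usual extract-transform-load order; the DAG id, schedule, and task bodies are illustrative only.

```python
# Minimal Airflow DAG sketch: three stubbed ETL stages run daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():    # e.g. pull market data from an upstream feed
    ...

def transform():  # e.g. clean and reshape with Pandas/PySpark
    ...

def load():       # e.g. write results to the warehouse
    ...

with DAG(
    dag_id="power_prices_etl",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    # Enforce strict extract -> transform -> load ordering
    t1 >> t2 >> t3
```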
drift, and anomalies using Azure Monitor or other observability tools. * Schedule and automate model retraining pipelines to maintain performance over time. 3. Data Engineering & Preprocessing * Develop and maintain scalable ETL/ELT data pipelines using Azure Data Factory, Data Lake, and SQL. * Prepare and preprocess large datasets to ensure data quality and readiness for ML modelling and analytics workflows. 4. …
Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based …
stakeholders. Experience with data visualization tools (e.g., Power BI, Tableau) is a plus. Experience working with Azure data services (e.g., Azure Data Lake Storage, Azure Databricks). Understanding of ETL/ELT processes. Familiarity with Agile development methodologies. Knowledge of the media or advertising industry. Life at WPP Media & Benefits: Our passion for shaping the next era of media includes …
data models and user behavioral data accessible and reusable for data professionals throughout the company. In this role, your main responsibilities and deliverables will be to build and maintain ETL pipelines, create foundational data models in our data warehouse, and integrate new data sources using DataOps best practices, including documentation and testing. You will also support other data specialists in …
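The DataOps testing practice mentioned above often reduces to a handful of assertions run against each freshly built model. A minimal pandas-based sketch, where the table and column names are hypothetical:

```python
# DataOps-style data quality checks for a foundational model: each
# assertion fails the pipeline run if the freshly built table is bad.
import pandas as pd

def test_orders_model(orders: pd.DataFrame) -> None:
    # Primary key must be unique
    assert orders["order_id"].is_unique, "duplicate order_id values"
    # Foreign key must never be null
    assert orders["customer_id"].notna().all(), "null customer_id values"
    # Basic sanity: monetary totals must be non-negative
    assert (orders["order_total"] >= 0).all(), "negative order totals"
```

In a dbt-based warehouse the same checks would more typically be declared as schema tests rather than hand-rolled assertions.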
Leeds, West Yorkshire, England, United Kingdom Hybrid / WFH Options
VIQU IT Recruitment
of data pipelines and solutions using Databricks, Python, and SQL. Oversee the design and maintenance of data models that support analytics and business intelligence needs. Ensure efficient ELT/ETL processes on AWS or Azure cloud platforms. Collaborate with analysts, architects, and stakeholders to deliver high-quality data products. Promote best practices in testing, CI/CD, version control, and …
Proven experience as a Data Engineer, Data Platform Engineer or similar technical role in enterprise environments, with advanced SQL development skills, Python/PySpark, and demonstrated experience with modern ETL/ELT frameworks. Hands-on expertise with cloud-native data platforms (preferably Microsoft Fabric, Synapse, Data Factory, and related services). Experience integrating data from ERP/CRM systems (e.g., Dynamics …
delivering data engineering solutions on cloud platforms, preferably Oracle OCI, AWS, or Azure. Proficient in Python and workflow orchestration tools such as Airflow or Prefect. Expert in data modeling, ETL, and SQL. Experience with real-time analytics from telemetry and event-based streaming (e.g., Kafka). Experience managing operational data stores with high availability, performance, and scalability. Expertise in data lakes …
plus. Experience in data modeling and warehouse architecture design. Experience with data orchestration and workflow management tools such as Airflow. Solid understanding of data ingestion, transformation, and ELT/ETL practices. Expertise in data transformation frameworks such as dbt. Experience with Infrastructure as Code (IaC) tools, Terraform preferred. Experience with version control systems, GitHub preferred. Experience with continuous integration and delivery …
Engineer, your role involves designing, implementing, and managing data solutions on the Microsoft Azure cloud platform. Your responsibilities will include, but are not limited to: Data Integration and ETL (Extract, Transform, Load): You'll develop and implement data integration processes to extract data from diverse sources and load it into data warehouses or data lakes. Azure Data Factory is a key tool for building efficient ETL pipelines. Your goal is to ensure that data is effectively cleaned, transformed, and loaded. Your sources will include REST APIs from various systems, so being able to onboard data from APIs is essential. Use of Postman for testing outside of ADF is a preferred skill. Data Engineering: As an Azure Data Engineer, you …
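A hedged sketch of the REST API onboarding described above: page through an endpoint with `requests`, then land the results as a raw file for downstream ADF or Spark processing. The URL, auth header, and paging scheme are hypothetical, since real APIs vary (which is exactly where Postman helps when probing them).

```python
# Minimal paged REST API ingestion sketch; endpoint and auth are
# placeholders, not a real service.
import json
import requests

BASE_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}    # placeholder auth

def fetch_all(page_size: int = 100) -> list[dict]:
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers=HEADERS,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:          # empty page signals the end
            break
        records.extend(batch)
        page += 1
    return records

if __name__ == "__main__":
    # Land the raw payload for pickup by ADF / Spark
    with open("orders_raw.json", "w") as f:
        json.dump(fetch_all(), f)
```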
in energy trading environments, particularly Natural Gas and Power markets. Expert in Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask). Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as Data Lake, Synapse, Event Hubs, and Functions. …
Required Skills: Proficiency in BI tools (Power BI, Tableau, QlikView) and data visualization techniques. Strong SQL skills and experience with relational databases (e.g., SQL Server, Oracle, MySQL). Familiarity with ETL processes and data warehousing concepts. Experience in data modeling and designing effective reporting structures. Knowledge of programming languages such as Python or R for data manipulation and analysis. Strong analytical …
must be confident working across: Azure Data Services, including Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Microsoft Fabric (desirable); Python and PySpark for data engineering, transformation, and automation; ETL/ELT pipelines across diverse structured and unstructured data sources; data lakehouse and data warehouse architecture design; Power BI for enterprise-grade reporting and visualisation. Strong knowledge of data modelling …
Scala. Experience with AWS RDS, AWS Glue, AWS Kinesis, AWS S3, Redis and/or Azure SQL Database, Azure Data Lake Storage, Azure Data Factory. Knowledge of data modelling, ETL processes, and data warehousing principles. Preferred Skills: Strong problem-solving and analytical skills. Experience with cloud databases, cloud data services, messaging/data streaming & big data technologies. Familiarity with …
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
initiatives. Ensure high standards of documentation and data security compliance. Technical Skills (desirable): Microsoft Azure Data Services (e.g., Azure Data Factory, Synapse, Databricks, Fabric); data warehousing and lakehouse design; ETL/ELT pipelines; SQL and Python for data manipulation and machine learning; Big Data frameworks (e.g., Hadoop, Spark); data visualisation (e.g., Power BI); understanding of statistical analysis and predictive modelling. Experience …
City of London, London, United Kingdom Hybrid / WFH Options
Datatech Analytics
Skills & Experience: Proven experience in senior data engineering roles, preferably within regulated industries. Expertise in SQL, Snowflake, dbt Cloud, and CI/CD pipelines (Azure DevOps). Hands-on with ETL tools (e.g. Matillion, SNP Glue, or similar). Experience with AWS and/or Azure platforms. Solid understanding of data modelling, orchestration, and warehousing techniques. Strong communication, mentoring, and stakeholder engagement …
error handling strategies. Optional scripting skills for creating custom NiFi processors. Programming & Data Technologies: Proficiency in Java and SQL. Experience with C# and Scala is a plus. Experience with ETL tools and big data platforms. Knowledge of data modeling, replication, and query optimization. Hands-on experience with SQL and NoSQL databases is desirable. Familiarity with data warehousing solutions (e.g., Snowflake …