closely collaborate with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions. RESPONSIBILITIES Data Pipeline Development: Design, implement, and maintain scalable data pipelines for … sources using tools such as Databricks, Python, and PySpark. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses. Big Data Technologies … teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions. Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions. Best Practices: Continuously learn and apply best practices in data engineering and cloud computing. QUALIFICATIONS Proven experience as …
our data solutions are secure, efficient, and optimized. Key Responsibilities: Design and implement data solutions using Azure services, including Azure Databricks, ADF, and Data Lake Storage. Develop and maintain ETL/ELT pipelines to process structured and unstructured data from multiple sources. Automate loads using Databricks Workflows and Jobs. Develop, test, and build CI/CD pipelines using Azure DevOps …
Maintain and leverage CI/CD deployment pipelines to promote application code into higher-tier environments. To be successful in this role, you'll: Have experience developing ELT/ETL ingestion pipelines for structured and unstructured data sources. Have experience with Azure cloud platform tools such as Azure Data Factory, Databricks, Logic Apps, Azure Functions, ADLS, SQL Server, and Unity …
Technical Business Analysis experience. A proactive awareness of industry standards, regulations, and developments. Ideally, you'll also have: Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, DataStage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross and multi-platform experience. Team building …
Serve as a subject matter expert in cloud data engineering, providing technical guidance and mentorship to the team. Drive the design, development, and implementation of complex data pipelines and ETL/ELT processes using cloud-native technologies (e.g. AWS Glue, AWS Lambda, AWS S3, AWS Redshift, AWS EMR). Develop and maintain data quality checks, data validation rules, and data …
Advanced SQL skills and experience architecting solutions on modern data warehouses (e.g., Snowflake, BigQuery, Redshift). Hands-on experience with advanced modelling techniques in dbt. A deep understanding of ETL/ELT processes and tools (e.g., Fivetran, Airbyte, Stitch). Experience with data visualisation tools (e.g., Mode, Looker, Tableau, Power BI) and designing robust BI semantic layers. Exceptional understanding of …
CI/CD pipelines. Data Modelling: Apply deep hands-on expertise in enterprise data modelling using tools like ERwin, ER/Studio, or PowerDesigner, ensuring scalability, performance, and maintainability. ETL/ELT Frameworks: Design and build robust data pipelines with Cloud Composer, Dataproc, Dataflow, Informatica, or IBM DataStage, supporting both batch and streaming data ingestion. Data Governance & Quality: Implement data …
markets, balancing mechanisms, and regulatory frameworks (e.g., REMIT, EMIR). Expert in Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask). Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as Data Lake, Synapse, Event Hubs, and Functions. …
London, England, United Kingdom Hybrid / WFH Options
PACE Global
BI to communicate findings effectively to stakeholders. Automation & Scripting: Utilize Python scripting to automate data processing, cleaning, and analysis tasks. Database Management: Write, optimize, and maintain SQL queries to extract, manipulate, and analyze data from various databases. App Development: Develop and maintain low-code/no-code applications using Power Apps to streamline business processes and improve data accessibility. Data … workflows. Proficiency in Python scripting for data analysis and automation tasks. Strong knowledge of SQL for querying, data manipulation, and database management. Experience with data modeling, data warehousing, and ETL processes. Familiarity with cloud-based data platforms (e.g., Azure, AWS) is a plus. Excellent analytical and problem-solving skills with attention to detail. Strong communication skills and the ability to … developing and optimizing Python scripts for data analysis, automation, and workflow improvements. Strong background in writing and optimizing SQL queries for complex data retrieval, manipulation, and analysis. Experience with ETL (Extract, Transform, Load) processes and data integration. Familiarity with data visualization best practices to ensure clear and effective communication of insights. Understanding of data modeling techniques to support the creation …
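As an illustration of the Python automation this listing describes, the sketch below shows the kind of reusable cleaning function such scripts typically contain. It is a hypothetical example, not code from the employer: the `order_date` and `amount` column names are assumptions chosen for the illustration.

```python
import pandas as pd


def clean_sales(df: pd.DataFrame) -> pd.DataFrame:
    """A typical automated cleaning pass: normalise headers, coerce types,
    drop duplicates, and discard rows whose date could not be parsed.
    Column names here are illustrative assumptions."""
    out = df.copy()
    # Normalise headers: trim whitespace, lowercase, snake_case.
    out.columns = [c.strip().lower().replace(" ", "_") for c in out.columns]
    # Coerce types; unparseable values become NaT / NaN rather than raising.
    out["order_date"] = pd.to_datetime(out["order_date"], errors="coerce")
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce").fillna(0.0)
    # Deduplicate, then drop rows with no usable date.
    return out.drop_duplicates().dropna(subset=["order_date"])
```

Wrapping each cleaning step in a single function like this is what makes the "automation" part work: the same pass can then be scheduled against every new extract without manual intervention.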
drift, and anomalies using Azure Monitor or other observability tools. * Schedule and automate model retraining pipelines to maintain performance over time. 3. Data Engineering & Preprocessing * Develop and maintain scalable ETL/ELT data pipelines using Azure Data Factory, Data Lake, and SQL. * Prepare and preprocess large datasets to ensure data quality and readiness for ML modelling and analytics workflows. 4. …
to write complex SQL queries. Should have strong implementation experience in all the below technology areas (breadth) and deep technical expertise in some of the below technologies: Data integration – ETL tools like IBM DataStage, Talend and Informatica. Ingestion mechanisms like Flume & Kafka. Data modelling – Dimensional & transactional modelling using RDBMS, NoSQL and Big Data technologies. Data visualization – Tools like Tableau …
focused degree (such as business, data analytics, computer science, or information systems). Extensive work experience, ideally within a professional services environment. A deep understanding of data warehousing, ETL processes and data modelling. Expertise in database design and development in Azure SQL Server, Azure Data Warehouse, and MS SQL Server, and proficiency in creating highly flexible database architectures. Proven …
Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based …
stakeholders. Experience with data visualization tools (e.g., Power BI, Tableau) is a plus. Experience working with Azure data services (e.g., Azure Data Lake Storage, Azure Databricks). Understanding of ETL/ELT processes. Familiarity with Agile development methodologies. Knowledge of the media or advertising industry. Life at WPP Media & Benefits Our passion for shaping the next era of media includes …
data models and user behavioral data accessible and reusable for data professionals throughout the company. In this role, your main responsibilities and deliverables will be to build and maintain ETL pipelines, create foundational data models in our data warehouse, and integrate new data sources using DataOps best practices, including documentation and testing. You will also support other data specialists in …
plus. Experience in data modeling and warehouse architecture design. Experience with data orchestration and workflow management tools such as Airflow. Solid understanding of data ingestion, transformation, and ELT/ETL practices. Expertise in data transformation frameworks such as dbt. Experience with Infrastructure as Code (IaC) tools, Terraform preferred. Experience with version control systems, GitHub preferred. Experience with continuous integration and delivery …
Engineer, your role involves designing, implementing, and managing data solutions on the Microsoft Azure cloud platform. Your responsibilities will include, but are not limited to: Data Integration Data Integration and ETL (Extract, Transform, Load): You'll develop and implement data integration processes to extract data from diverse sources and load it into data warehouses or data lakes. Azure Data Factory is … a key tool for building efficient ETL pipelines. Your goal is to ensure that data is effectively cleaned, transformed, and loaded. Your sources will include REST APIs from various systems; being able to onboard data from APIs is essential. Use of Postman for testing outside of ADF is a preferred skill. Data Engineering As an Azure Data Engineer, you …
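The REST-API onboarding requirement above can be sketched as a minimal extract/transform/load loop in plain Python. This is an illustrative sketch, not ADF configuration: the paging scheme (an `items` key, page-numbered fetches), the field names, and the newline-delimited JSON landing format are all assumptions; in practice the fetch callable would wrap `requests.get` against the real endpoint, or the whole step would be an ADF REST linked service.

```python
import json
from typing import Callable, Dict, Iterator, List


def extract_paged(fetch: Callable[[int], Dict], page_size: int = 100) -> Iterator[Dict]:
    """Pull records from a paged REST endpoint (assumed response shape:
    {"items": [...]}) until a short page signals the final page."""
    page = 0
    while True:
        body = fetch(page)
        records = body.get("items", [])
        yield from records
        if len(records) < page_size:
            break
        page += 1


def transform(record: Dict) -> Dict:
    """Minimal cleaning step: drop null fields and normalise key case."""
    return {k.lower(): v for k, v in record.items() if v is not None}


def load(records: List[Dict], path: str) -> None:
    """Land cleaned records as newline-delimited JSON, a lake-friendly format."""
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
```

Injecting the fetch callable is the design choice worth noting: it lets the paging and transform logic be exercised with a stubbed API response (the role the listing assigns to Postman at the HTTP level) without any network access.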
in energy trading environments, particularly Natural Gas and Power markets. Expert in Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask). Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as Data Lake, Synapse, Event Hubs, and Functions. …
Required Skills: Proficiency in BI tools (Power BI, Tableau, QlikView) and data visualization techniques Strong SQL skills and experience with relational databases (e.g., SQL Server, Oracle, MySQL) Familiarity with ETL processes and data warehousing concepts Experience in data modeling and designing effective reporting structures Knowledge of programming languages such as Python or R for data manipulation and analysis Strong analytical …
Scala. Experience with AWS RDS, AWS Glue, AWS Kinesis, AWS S3, Redis and/or Azure SQL Database, Azure Data Lake Storage, Azure Data Factory. Knowledge of data modelling, ETL processes, and data warehousing principles. Preferred Skills Strong problem-solving and analytical skills. Experience with cloud databases, cloud data services, messaging/data streaming & big data technologies. Familiarity with …
City of London, London, United Kingdom Hybrid / WFH Options
Datatech Analytics
Skills & Experience: Proven experience in senior data engineering roles, preferably within regulated industries Expertise in SQL, Snowflake, DBT Cloud, and CI/CD pipelines (Azure DevOps) Hands-on with ETL tools (e.g. Matillion, SNP Glue, or similar) Experience with AWS and/or Azure platforms Solid understanding of data modelling, orchestration, and warehousing techniques Strong communication, mentoring, and stakeholder engagement …