… closely collaborate with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions. RESPONSIBILITIES Data Pipeline Development: Design, implement, and maintain scalable data pipelines for … sources using tools such as Databricks, Python and PySpark. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data Technologies … teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions. Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions. Best Practices: Continuously learn and apply best practices in data engineering and cloud computing. QUALIFICATIONS Proven experience as …
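As a concrete illustration of the pipeline work this listing describes, here is a minimal PySpark sketch of an extract-transform-load job as it might run on Databricks. The paths, table name, and columns are assumptions for illustration, not details from the posting.

```python
# Minimal PySpark ETL sketch (hypothetical paths, schema, and table names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Extract: read raw JSON landed by an upstream source.
raw = spark.read.json("/mnt/landing/orders/")

# Transform: basic cleansing and typing.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: write to a Delta table for downstream analysis.
clean.write.format("delta").mode("append").saveAsTable("silver.orders")
```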
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
Key Responsibilities: Data Pipelines and ETL Processes ETL Design: Design and implement ETL processes within MaPS architectural patterns to extract, transform, and load data from various source systems into our reporting solutions. Pipeline Development: Develop and configure metadata-driven data pipelines using data orchestration tools such as Azure Data Factory and engineering tools like Apache Spark to ensure seamless … these tools. You will need to demonstrate the following skills and experience: Proven experience as a Data Engineer, with a focus on Azure services. Strong expertise in designing and implementing ETL processes. Experience in using SQL to query and manipulate data. Proficiency in Azure data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/OneLake, and …
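The "metadata-driven" pattern mentioned here usually means one generic job reads a control table describing each source, rather than hard-coding a pipeline per source. A minimal PySpark sketch, assuming a hypothetical control table with format, path, load_mode, and target_table columns:

```python
# Metadata-driven ingestion sketch: a control table lists each source and
# one generic job loops over it (table and column names are assumptions).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata_driven_etl").getOrCreate()

# One row per source system, maintained outside the code.
control = spark.table("etl_control.sources").collect()

for src in control:
    # Read using whatever format the metadata specifies, e.g. "csv" or "parquet".
    df = spark.read.format(src["format"]).load(src["path"])
    # Write to the configured target with the configured mode ("append"/"overwrite").
    df.write.format("delta").mode(src["load_mode"]).saveAsTable(src["target_table"])
```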
… our data solutions are secure, efficient, and optimized. Key Responsibilities: Design and implement data solutions using Azure services, including Azure Databricks, ADF, and Data Lake Storage. Develop and maintain ETL/ELT pipelines to process structured and unstructured data from multiple sources. Automate loads using Databricks Workflows and Jobs. Develop, test and build CI/CD pipelines using Azure DevOps …
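Automating loads with Databricks Jobs often comes down to triggering a defined job from a scheduler or a CI/CD step. A sketch using the Databricks Jobs REST API's run-now endpoint; the environment variables and job id are placeholders:

```python
# Trigger a Databricks job run from automation (job_id and host are placeholders).
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-....azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": 123},               # hypothetical job id
    timeout=30,
)
resp.raise_for_status()
print("started run:", resp.json()["run_id"])
```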
… such as Tray.io, Azure Data Factory. • Implement data lakes and data warehouses using Azure Synapse Analytics, Azure Data Fabric or similar tools. Data Integration & Transformation • Develop and maintain ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes to support the Data Platform development enabling best practice reporting and analytics. • Integrate data from multiple systems, including legal practice/matter … and reporting teams by ensuring access to clean and structured data. • Document processes and provide training on data tools and workflows. Skills and experience • Experience in building ELT/ETL pipelines and managing data workflows. • Proficiency in programming languages such as PySpark, Python, SQL, or Scala. • Solid understanding of data modelling and relational database concepts. • Knowledge of GDPR and UK …
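The ETL/ELT distinction the listing draws is simply where the transformation happens. A PySpark sketch contrasting the two, assuming a Databricks-style environment with Delta tables (the paths and table names are hypothetical):

```python
# Illustrative contrast only: ETL transforms before loading, ELT after.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/landing/matters/")            # extract

# ETL: transform in the pipeline, then load the curated result.
curated = df.filter(F.col("status") == "open")          # transform
curated.write.mode("overwrite").saveAsTable("dw.open_matters")  # load

# ELT: load raw data first, then transform inside the warehouse with SQL.
df.write.mode("overwrite").saveAsTable("raw.matters")   # load as-is
spark.sql("""
    CREATE OR REPLACE TABLE dw.open_matters AS
    SELECT * FROM raw.matters WHERE status = 'open'
""")
```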
… the Role Designing, building and maintaining data pipelines. Building and maintaining data warehouses. Data cleansing and transformation. Developing and maintaining ETL (extract, transform, load) processes to extract, transform, and load data from various sources into data warehouses. Validating charts and reports created by systems built in-house. Creating validation tools. Developing and maintaining data models and data tools. Monitoring and … Experience in R programming language. Experience in Python programming language. Experience in designing, building and maintaining data pipelines. Experience with data warehousing and data lakes. Experience in developing and maintaining ETL processes. Experience in developing data integration tools. Experience in data manipulation, data analysis, and data modelling. Experience with cloud platforms (AWS, Azure, etc.) Experience in designing scalable, secure, and cost …
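"Creating validation tools" in this context typically means small checks that run against a dataset before its reports are trusted. A minimal pandas sketch; the column names and rules are hypothetical:

```python
# Tiny data-validation sketch in pandas (columns and rules are assumptions).
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures."""
    problems = []
    if df["id"].duplicated().any():
        problems.append("duplicate ids found")
    if df["amount"].lt(0).any():
        problems.append("negative amounts found")
    if df["created_at"].isna().any():
        problems.append("missing created_at timestamps")
    return problems

df = pd.DataFrame({
    "id": [1, 2, 2],
    "amount": [10.0, -5.0, 3.0],
    "created_at": pd.to_datetime(["2024-01-01", None, "2024-01-03"]),
})
# Prints all three failures for this deliberately bad sample.
print(validate(df))
```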
… Data Factory, Azure Databricks). Experience with data modeling, data warehousing, and big data processing (Hadoop, Spark, Kafka). Strong understanding of SQL and NoSQL databases, data modeling, and ETL/ELT processes. Proficiency in at least one programming language (Python, C#, Java). Experience with CI/CD pipelines and tools (Azure DevOps, Jenkins). Knowledge of data governance …
Maintain and leverage CI/CD deployment pipelines to promote application code into higher-tier environments. To be successful in this role, you'll: Have experience developing ELT/ETL ingestion pipelines for structured and unstructured data sources. Have experience with Azure cloud platform tools such as Azure Data Factory, Databricks, Logic Apps, Azure Functions, ADLS, SQL Server, and Unity …
Technical Business Analysis experience. A proactive awareness of industry standards, regulations, and developments. Ideally, you'll also have: Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, DataStage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross and multi-platform experience. Team building …
Serve as a subject matter expert in cloud data engineering, providing technical guidance and mentorship to the team. Drive the design, development, and implementation of complex data pipelines and ETL/ELT processes using cloud-native technologies (e.g. AWS Glue, AWS Lambda, AWS S3, AWS Redshift, AWS EMR). Develop and maintain data quality checks, data validation rules, and data …
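For a sense of what pipeline work with these services looks like, here is a skeleton AWS Glue ETL script. It only runs inside a Glue job, and the catalog database, table, and bucket names are placeholders:

```python
# Skeleton AWS Glue ETL job (catalog and bucket names are placeholders).
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Extract from the Glue Data Catalog.
events = glue_context.create_dynamic_frame.from_catalog(
    database="analytics", table_name="raw_events"
)

# Transform with plain Spark, then load to S3 as Parquet for downstream
# consumption (e.g. Redshift Spectrum or EMR).
df = events.toDF().dropDuplicates(["event_id"])
df.write.mode("overwrite").parquet("s3://example-bucket/curated/events/")
```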
… work as part of a collaborative team to solve problems and assist other colleagues. • Ability to learn new technologies, programs and procedures. Technical Essentials: • Expertise across data warehouse and ETL/ELT development in AWS preferred with experience in the following: • Strong experience in some of the AWS services like Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation …
… CI/CD pipelines. Data Modelling: Apply deep hands-on expertise in enterprise data modelling using tools like ERwin, ER/Studio, or PowerDesigner, ensuring scalability, performance, and maintainability. ETL/ELT Frameworks: Design and build robust data pipelines with Cloud Composer, Dataproc, Dataflow, Informatica, or IBM DataStage, supporting both batch and streaming data ingestion. Data Governance & Quality: Implement data …
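Cloud Composer is managed Apache Airflow, so the batch side of such pipelines is usually expressed as a DAG. A minimal sketch, assuming a recent Airflow 2.x; the task bodies are stubs and the DAG id is invented:

```python
# Minimal Airflow DAG sketch for a nightly batch pipeline (Airflow 2.4+ assumed).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source systems")

def transform():
    print("apply modelling rules")

def load():
    print("publish to the warehouse")

with DAG(
    dag_id="nightly_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # extract, then transform, then load
```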
… markets, balancing mechanisms, and regulatory frameworks (e.g., REMIT, EMIR). Expert in Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask). Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as Data Lake, Synapse, Event Hubs, and Functions. …
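In this domain, pandas work often means reshaping half-hourly settlement data into reporting granularities. A small illustrative sketch; the file name and columns are invented:

```python
# Illustrative transform for half-hourly market data (file and columns assumed).
import pandas as pd

prices = pd.read_csv("settlement_prices.csv", parse_dates=["settlement_time"])

# Roll half-hourly settlement prices up to daily summary statistics.
daily = (
    prices.set_index("settlement_time")
          .resample("D")["price_gbp_mwh"]
          .agg(["mean", "max", "min"])
)
print(daily.head())
```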
… data modeling, DAX, report design). Experience with Azure Data Factory and/or Microsoft Fabric for pipeline development (or Python pipeline development). Understanding of data warehouse design and ETL/ELT best practices. Strong communication and stakeholder engagement skills. Customer service mindset with integrity, professionalism and confidentiality. Self-motivated, diligent, and results-oriented. Willingness to learn and grow in …
Knutsford, Cheshire, North West, United Kingdom Hybrid / WFH Options
The Veterinary Defence Society
… improve data processes. Contribute to project teams, ensuring data requirements are addressed from the outset. Continuously improve BI tools and data engineering practices. Required Skills & Experience: Proficiency in SQL, ETL processes, data warehousing, and data modelling (MS SQL preferred). Proven experience in data engineering or analysis. Strong analytical and problem-solving skills. Excellent communication skills, able to explain technical concepts …
… drift, and anomalies using Azure Monitor or other observability tools. * Schedule and automate model retraining pipelines to maintain performance over time. 3. Data Engineering & Preprocessing * Develop and maintain scalable ETL/ELT data pipelines using Azure Data Factory, Data Lake, and SQL. * Prepare and preprocess large datasets to ensure data quality and readiness for ML modelling and analytics workflows. 4. …
… to write complex SQL queries. Should have strong implementation experience across all of the technology areas below (breadth) and deep technical expertise in some of them: Data integration – ETL tools like IBM DataStage, Talend and Informatica. Ingestion mechanisms like Flume and Kafka. Data modelling – Dimensional and transactional modelling using RDBMS, NoSQL and Big Data technologies. Data visualization – Tools like Tableau …
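Dimensional modelling and "complex SQL" usually meet in star-schema queries like the one below, sketched here through Spark SQL. The fact and dimension tables are hypothetical:

```python
# Star-schema query sketch: one fact table joined to two dimensions
# (table and column names are assumptions).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

revenue_by_region = spark.sql("""
    SELECT d.calendar_month,
           c.region,
           SUM(f.net_amount) AS revenue
    FROM fact_sales f
    JOIN dim_date     d ON f.date_key     = d.date_key
    JOIN dim_customer c ON f.customer_key = c.customer_key
    WHERE d.calendar_year = 2024
    GROUP BY d.calendar_month, c.region
    ORDER BY d.calendar_month, revenue DESC
""")
revenue_by_region.show()
```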
… focused degree (such as business, data analytics, computer science, or information systems). Extensive work experience, ideally within a professional services environment. A deep understanding of data warehousing, ETL processes and data modelling. Expertise in database design and development in Azure SQL Server, Azure Data Warehouse, and MS SQL Server, with proficiency in creating highly flexible database architecture. Proven …
… Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based …
… and other upstream systems into the DWH. Analyse end-to-end data flows across operational and reporting systems. Document and optimise processes relating to data extraction, transformation, and load (ETL). Translate business requirements into clear, testable functional and non-functional specifications. Produce and maintain data dictionaries, lineage documentation, and reporting logic. Act as the primary liaison between business SMEs …
… on experience with modern data stack tools including dbt, Airflow, and cloud data warehouses (Snowflake, BigQuery, Redshift). Strong understanding of data modelling, schema design, and building maintainable ELT/ETL pipelines. Experience with cloud platforms (AWS, Azure, GCP) and infrastructure-as-code practices. Familiarity with data visualisation tools (Tableau, PowerBI, Looker) and analytics frameworks. Leadership & Communication: Proven experience leading technical …
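In a typical modern-stack setup, Airflow schedules while dbt owns the SQL transformations. A minimal sketch of that handoff; the project path, target name, and DAG id are assumptions:

```python
# Airflow scheduling a dbt project: run the models, then run the tests.
# Project path and --target value are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_transformations",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )
    dbt_run >> dbt_test  # only test what was just built
```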