Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks and Power BI. Experience with creating low-level designs for data platform implementations. ETL pipeline development for integration with data sources and data transformations, including the creation of supplementary documentation. Proficiency in working with APIs and integrating them …
with relational SQL databases either on premises or in the cloud. Experience delivering multiple solutions using key techniques such as Governance, Architecture, Data Modelling, ETL/ELT, Data Lakes, Data Warehousing, Master Data, and BI. A solid understanding of key processes in the engineering delivery cycle including Agile and DevOps …
reporting. Strong programming skills in Python, with experience in data processing libraries like Pandas and NumPy. Experience with data pipeline development, data warehousing, and ETL processes. Strong analytical and problem-solving skills, with attention to detail and ability to work in a fast-paced environment. Excellent communication and collaboration skills. …
services, specifically in Microsoft Azure, Fabric, Dataverse, Synapse, Data Lake, Purview. Deep expertise in data engineering tools and practices, including Python, SQL, and modern ETL/ELT frameworks (e.g., Azure Data Factory, Talend, dbt). Experience designing and implementing scalable data pipelines and integration patterns across structured and unstructured data …
tools and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure) Understanding of data modeling, distributed systems, ETL/ELT pipelines, and streaming architectures Proficiency in SQL and at least one programming language (e.g., Python, Scala, or Java) Demonstrated experience owning complex technical …
and deliver sustainable solutions. Monitor and troubleshoot data pipeline issues to maintain data integrity and accuracy. Assist in the development, maintenance, and optimization of ETL (Extract, Transform, Load) processes for efficiency and reliability. Project & Improvement: Assist in gathering, documenting, and managing data engineering requirements and workflows. Contribute to the development … quality reviews of designs, prototypes, and other work products to ensure requirements are met. Skills & Experience: Basic understanding of data engineering concepts, such as ETL processes, data pipelines, and data quality management. Hands-on experience with SQL (e.g., writing queries, basic database management). Familiarity with data tools and platforms …
and experience managing servers and virtual environments in Microsoft Azure BI & Data Skills: Experience with other BI platforms (e.g. SAP BusinessObjects, Power BI), SQL, ETL processes, data modelling, and diverse data sources (including SAP HANA). Tableau Tools: Knowledge of Tableau Server Resource Monitoring Tool (RMT) and Content Migration Tool. …
Experience with Power BI or other data visualisation tools. Familiarity with Python, C#, Angular, or Microsoft Power Automate. Exposure to data modelling, pipeline optimisation (ETL/ELT), and API provisioning. Understanding of data science workflows and practices. This is a fantastic opportunity to join a forward-thinking organisation who offer …
Advanced SQL skills for querying and managing relational databases. Familiarity with data visualisation tools (e.g., Sisense, Power BI, Streamlit). Technical Skills Experience with ETL processes and APIs for data integration. Understanding of statistical methods and data modelling techniques. Familiarity with cloud platforms like Snowflake is advantageous. Knowledge of data …
even direct user input. The data engineers on the Portfolio Data Engineering team help build and maintain the transformation and cleaning steps of our ETL (Extract, Transform, Load) pipeline before it can be stored and accessed by our customers in a standardised fashion. As a data engineer on this team … you’ll be building components within the ETL pipeline that automate these cleaning and transformation steps. As you gain more experience, you’ll contribute to increasingly challenging engineering projects within our broader data infrastructure. This is a crucial, highly visible role within the company. Your team is a big component … Spark/PySpark, Java/Spring Amazon Web Services SQL, relational databases Understanding of data structures and algorithms Interest in data modeling, visualisation, and ETL pipelines Knowledge of financial concepts (e.g., stocks, bonds, etc.) is encouraged but not necessary Our Values Act Like an Owner - Think and operate with intention …
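The cleaning and transformation steps such an ETL pipeline automates can be sketched in Python with Pandas. This is a minimal illustration only; the column names, source data, and cleaning rules are hypothetical, not the team's actual schema:

```python
import pandas as pd

def clean_prices(raw: pd.DataFrame) -> pd.DataFrame:
    """One transformation step of an ETL pipeline: normalise a raw
    price feed before it is loaded into standardised storage."""
    df = raw.copy()
    # Standardise column names pulled from heterogeneous sources.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    # Coerce types; unparseable values become NaN rather than crashing the run.
    df["price"] = pd.to_numeric(df["price"], errors="coerce")
    df["trade_date"] = pd.to_datetime(df["trade_date"], errors="coerce")
    # Drop rows that failed coercion, then de-duplicate on the business key.
    df = df.dropna(subset=["price", "trade_date"])
    df = df.drop_duplicates(subset=["ticker", "trade_date"])
    return df

# Hypothetical raw feed: a duplicate row and an unparseable price.
raw = pd.DataFrame({
    "Ticker": ["AAPL", "AAPL", "MSFT", "GOOG"],
    "Price": ["189.3", "189.3", "bad", "132.1"],
    "Trade Date": ["2024-01-02", "2024-01-02", "2024-01-02", "2024-01-02"],
})
clean = clean_prices(raw)
```

Each step stays pure (input DataFrame in, cleaned DataFrame out), which keeps individual pipeline components easy to test before data is handed to the load stage.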
meet current best practices and internal standards. Work closely with project managers and technical leads to integrate new enterprise data sources into ongoing projects. ETL Development Develop robust, automated ETL (Extract, Transform, Load) pipelines using industry-standard tools and frameworks, prioritizing scalability, reliability, and fault tolerance. Strong background in data … ESRI, 3GIS, Bentley, Hexagon, Crescent Link, CadTel, etc.). Experience with business requirement analysis and the development of reporting and analytics structures. Familiarity with ETL solutions, including experience with SAFE FME, is highly desirable. Strong knowledge of data privacy regulations and practices. Exposure to analytics and reporting tools is considered …
implemented in C#/.NET or Typescript/NodeJS. DynamoDB, Redshift, Postgres, Elasticsearch, and S3 are our go-to data stores. We run our ETL data pipelines using Python. Equal Opportunities We are an equal opportunities employer. This means we are committed to recruiting the best people regardless of their …
transformative models using our modern data stack, including GCP, Airflow, dbt, and potentially emerging technologies like real-time streaming platforms Develop and manage reverse ETL processes to seamlessly integrate data with our commercial systems, ensuring operational efficiency Maintain and optimise our Customer Data Platform (CDP), ensuring effective data collection, unification …
maintaining robust data pipelines, transforming raw data into clean datasets, and delivering compelling dashboards and insights to drive business decisions. Design, develop, and optimize ETL/ELT pipelines using Python and SQL. Develop and maintain Power BI dashboards and reports to visualize data and track KPIs. Work with stakeholders to … improve pipeline performance, scalability, and reliability. Advanced SQL skills (joins, CTEs, indexing, optimization) Experience with relational databases (e.g., SQL Server, PostgreSQL, MySQL) Understanding of ETL/ELT principles, data architecture, and data warehouse concepts Familiarity with APIs, RESTful services, and JSON/XML data handling Experience with Azure Data Factory …
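The SQL skills this listing names (joins, CTEs, aggregation ahead of a BI layer) can be illustrated with a self-contained sketch using Python's built-in sqlite3 module; the tables, rows, and KPI here are invented for the example:

```python
import sqlite3

# In-memory database with two hypothetical tables feeding a KPI dashboard.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10, 120.0), (2, 10, 80.0), (3, 11, 45.0);
    CREATE TABLE customers (id INTEGER, region TEXT);
    INSERT INTO customers VALUES (10, 'North'), (11, 'South');
""")

# A CTE aggregates order revenue per customer, then a join attaches the
# region: the kind of shaping typically done before loading into a BI model.
query = """
WITH revenue AS (
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
)
SELECT c.region, r.total
FROM revenue AS r
JOIN customers AS c ON c.id = r.customer_id
ORDER BY c.region;
"""
rows = conn.execute(query).fetchall()
```

The same pattern transfers directly to SQL Server or PostgreSQL, where the CTE would typically live in a view or a dbt model feeding Power BI.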
media space. Job Description: As a Lead Data Scientist at Luupli, you will play a pivotal role in leveraging AWS analytics services to analyse and extract valuable insights from our data sources. You will collaborate with cross-functional teams, including data engineers, product managers, and business stakeholders, to develop data … services, such as Amazon Redshift, Amazon Athena, Amazon EMR, and Amazon QuickSight. Design and build robust data pipelines and ETL processes to extract, transform, and load data from diverse sources into AWS for analysis. Apply advanced statistical and machine learning techniques to perform predictive and prescriptive analyses, clustering, segmentation, and … analytics services. 3. Strong proficiency in AWS analytics services, such as Amazon Redshift, Amazon Athena, Amazon EMR, and Amazon QuickSight. 4. Solid understanding of data modelling, ETL processes, and data warehousing concepts. 5. Proficiency in statistical analysis, data mining, and machine learning techniques. 6. Proficiency in programming languages such as Python, R, or Scala …
and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure) Deep understanding of data modeling, distributed systems, ETL/ELT pipelines, and streaming architectures Proficiency in SQL and at least one programming language (e.g., Python, Scala, or Java) Demonstrated experience owning complex technical …
business/data analyst role, ideally in a consultancy or commercial setting. - Strong analytical, problem-solving, and communication skills. - Experience with operational data processes, ETL, data warehouse migration, schema mapping, and MI/BI reporting. - Proficient in tools such as JIRA, Confluence, Asana, Miro, and Excel. - Familiarity with Agile (SCRUM …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions
They're Looking For: Proven Data Engineering experience (5+ years). Consultancy experience is a must. Leadership and multi-project environments experience. Expertise in ETL, data modelling, and Azure Data Services. Experience in designing and implementing data pipelines, data lakes, and data warehouses. Hands-on experience with Apache Spark and …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
In Technology Group
pipelines and transforming raw data into valuable insights, we want you on our team! Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes. Collaborate with data analysts and stakeholders to understand data requirements. Ensure data integrity, security, and compliance with industry standards. Optimize data architectures for performance …
knowledge through guides, training, and collaboration What We’re Looking For: -Strong skills in SQL and comfort working with large, complex datasets -Experience with ETL, data warehousing, and cloud data sources (Azure, APIs, Excel, etc.) -A natural communicator who can turn data into stories and strategy -Confident working independently but …
science, machine learning, and business analytics Practical experience in coding languages e.g. Python, R, Scala, etc.; (Python preferred) Proficiency in database technologies e.g. SQL, ETL, No-SQL, DW, and Big Data technologies e.g. pySpark, Hive, etc. Experienced working with structured and unstructured data e.g. Text, PDFs, jpgs, call recordings …
source control management, issue tracking tools and branching strategies. Track record of bootstrapping projects in new business domains. Understanding and experience of data engineering, ETL and data modelling. Working experience of relational databases, either PostgreSQL, MySQL/MariaDB. Experience and understanding of deploying software in a cloud environment, either GCP …