responsible for designing, building, and maintaining robust data pipelines, transforming raw data into clean datasets, and delivering compelling dashboards and insights to drive business decisions. Design, develop, and optimize ETL/ELT pipelines using Python and SQL. Develop and maintain Power BI dashboards and reports to visualize data and track KPIs. Work with stakeholders to gather business requirements and translate … support advanced analytics. Monitor and improve pipeline performance, scalability, and reliability. Advanced SQL skills (joins, CTEs, indexing, optimization). Experience with relational databases (e.g., SQL Server, PostgreSQL, MySQL). Understanding of ETL/ELT principles, data architecture, and data warehouse concepts. Familiarity with APIs, RESTful services, and JSON/XML data handling. Experience with Azure Data Factory, Databricks, or AWS Glue. Familiarity …
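The "advanced SQL" skills the listing names (joins, CTEs) can be sketched with Python's built-in sqlite3 module; the table names and data below are hypothetical, purely for illustration:

```python
# Minimal sketch: a CTE plus a join, run against an in-memory SQLite
# database. Schema and rows are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (id INTEGER, name TEXT);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")

# The CTE aggregates per-customer spend; the outer query joins it back
# to the customers table.
query = """
WITH spend AS (
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
)
SELECT c.name, s.total
FROM spend s
JOIN customers c ON c.id = s.customer_id
ORDER BY s.total DESC;
"""
rows = conn.execute(query).fetchall()
# rows -> [('Acme', 200.0), ('Globex', 50.0)]
```

The same pattern (CTE for the aggregate, join for enrichment) carries over to SQL Server, PostgreSQL, or MySQL with minor dialect changes.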
…/or Data Lake. Technologies: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks, and Power BI. Experience with creating low-level designs for data platform implementations. ETL pipeline development for integration with data sources and data transformations, including the creation of supplementary documentation. Proficiency in working with APIs and integrating them into data pipelines. Strong programming …
and efficient data pipelines to collect, transform, and integrate data from various sources. Data Architecture: Develop and optimise data architectures, including data warehouses, data lakes, and other storage solutions. ETL/ELT Processes: Create and maintain reliable ETL/ELT workflows that ensure data quality and accessibility. Collaboration: Work closely with data scientists, analysts, and client stakeholders to understand requirements …
or Python). Experience working with relational SQL databases, either on premises or in the cloud. Experience delivering multiple solutions using key techniques such as Governance, Architecture, Data Modelling, ETL/ELT, Data Lakes, Data Warehousing, Master Data, and BI. A solid understanding of key processes in the engineering delivery cycle, including Agile and DevOps, Git, APIs, Containers, Microservices, and …
Translating business requirements into technical solutions and the production of specifications, Designing and implementing business intelligence & modern data analytics platform technical solutions, Data architecture design and implementation, Data modelling, ETL, data integration and data migration design and implementation, Master data management system and process design and implementation, Data quality system and process design and implementation, Major focus on data science … data modelling, data staging, and data extraction processes, including data warehouse and cloud infrastructure. Experience with multi-dimensional design, star schemas, facts and dimensions. Experience and demonstrated competencies in ETL development techniques. Experience in data warehouse performance optimization. Experience on projects across a variety of industry sectors is an advantage. Comprehensive understanding of data management best practices, including demonstrated experience with …
BI for data visualization and reporting. Strong programming skills in Python, with experience in data processing libraries like Pandas and NumPy. Experience with data pipeline development, data warehousing, and ETL processes. Strong analytical and problem-solving skills, with attention to detail and the ability to work in a fast-paced environment. Excellent communication and collaboration skills. Nice to Have: Experience with …
experience using cloud-native data services, specifically in Microsoft Azure, Fabric, Dataverse, Synapse, Data Lake, Purview. Deep expertise in data engineering tools and practices, including Python, SQL, and modern ETL/ELT frameworks (e.g., Azure Data Factory, Talend, dbt). Experience designing and implementing scalable data pipelines and integration patterns across structured and unstructured data sources (e.g., Azure SQL, MySQL …
platforms. Expertise in modern data tools and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure). Understanding of data modeling, distributed systems, ETL/ELT pipelines, and streaming architectures. Proficiency in SQL and at least one programming language (e.g., Python, Scala, or Java). Demonstrated experience owning complex technical systems end-to-end, from …
stakeholders to identify data requirements and deliver sustainable solutions. Monitor and troubleshoot data pipeline issues to maintain data integrity and accuracy. Assist in the development, maintenance, and optimization of ETL (Extract, Transform, Load) processes for efficiency and reliability. Project & Improvement: Assist in gathering, documenting, and managing data engineering requirements and workflows. Contribute to the development of guidelines and documentation for … to data infrastructure. Participate in quality reviews of designs, prototypes, and other work products to ensure requirements are met. Skills & Experience: Basic understanding of data engineering concepts, such as ETL processes, data pipelines, and data quality management. Hands-on experience with SQL (e.g., writing queries, basic database management). Familiarity with data tools and platforms (e.g., Python, Power BI, Tableau …
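A minimal sketch of the Extract, Transform, Load steps described above, using only the standard library. The file layout, field names, and the sqlite3 target are assumptions for illustration, not a production design:

```python
# Toy ETL: extract CSV text, transform (clean/validate) rows, load into
# an in-memory SQLite table. Malformed rows are skipped in the transform
# step; a real pipeline would quarantine them for inspection instead.
import csv
import io
import sqlite3

RAW = "region,revenue\nnorth, 1200 \nsouth,950\nnorth,bad-row\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    clean = []
    for r in rows:
        try:
            clean.append((r["region"].strip(), float(r["revenue"])))
        except ValueError:
            continue  # skip rows whose revenue is not numeric
    return clean

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, revenue REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(revenue) FROM sales").fetchone()[0]
# total -> 2150.0 (the bad row is dropped)
```

Keeping extract, transform, and load as separate functions is what makes each step independently testable and monitorable, which is the reliability concern the listing raises.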
ID, MFA, Conditional Access policies, and experience managing servers and virtual environments in Microsoft Azure. BI & Data Skills: Experience with other BI platforms (e.g. SAP BusinessObjects, Power BI), SQL, ETL processes, data modelling, and diverse data sources (including SAP HANA). Tableau Tools: Knowledge of Tableau Server Resource Monitoring Tool (RMT) and Content Migration Tool. DevOps: Familiarity with Azure DevOps …
and best practices. Desirable skills: Experience with Power BI or other data visualisation tools. Familiarity with Python, C#, Angular, or Microsoft Power Automate. Exposure to data modelling, pipeline optimisation (ETL/ELT), and API provisioning. Understanding of data science workflows and practices. This is a fantastic opportunity to join a forward-thinking organisation who offer a supportive, collaborative and rewarding …
including handling large, complex datasets. Advanced SQL skills for querying and managing relational databases. Familiarity with data visualisation tools (e.g., Sisense, Power BI, Streamlit). Technical Skills: Experience with ETL processes and APIs for data integration. Understanding of statistical methods and data modelling techniques. Familiarity with cloud platforms like Snowflake is advantageous. Knowledge of data governance frameworks and data security …
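The "APIs for data integration" skill mentioned above often comes down to flattening nested payloads into rows for tabular loading. A stdlib-only sketch, with a hypothetical payload shape:

```python
# Flatten a nested JSON payload (the kind an API integration returns)
# into dot-delimited keys suitable for a relational table. The payload
# is invented for this example, not any vendor's schema.
import json

payload = json.loads("""
{"account": {"id": "A1", "owner": {"name": "Jo", "country": "UK"}},
 "balance": 42.5}
""")

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts into dot-delimited keys."""
    out = {}
    for k, v in obj.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            out.update(flatten(v, key + "."))
        else:
            out[key] = v
    return out

row = flatten(payload)
# row -> {'account.id': 'A1', 'account.owner.name': 'Jo',
#         'account.owner.country': 'UK', 'balance': 42.5}
```

A real integration would add pagination, retries, and list handling, but the flattening step is the bridge between API responses and SQL tables.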
the opportunities in the social media space. Job Description: As a Lead Data Scientist at Luupli, you will play a pivotal role in leveraging AWS analytics services to analyse and extract valuable insights from our data sources. You will collaborate with cross-functional teams, including data engineers, product managers, and business stakeholders, to develop data-driven solutions and deliver actionable … analysis strategies using AWS analytics services, such as Amazon Redshift, Amazon Athena, Amazon EMR, and Amazon QuickSight. Design and build robust data pipelines and ETL processes to extract, transform, and load data from diverse sources into AWS for analysis. Apply advanced statistical and machine learning techniques to perform predictive and prescriptive analyses, clustering, segmentation, and pattern recognition. Identify key metrics … cloud-based environment using AWS analytics services. 3. Strong proficiency in AWS analytics services, such as Amazon Redshift, Amazon Athena, Amazon EMR, and Amazon QuickSight. 4. Solid understanding of data modelling, ETL processes, and data warehousing concepts. 5. Proficiency in statistical analysis, data mining, and machine learning techniques. 6. Proficiency in programming languages such as Python, R, or Scala for data analysis and modelling. …
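One small example of the statistical techniques the listing groups under point 5: flagging outliers by z-score with the stdlib statistics module. The sample values and the threshold of 2 are arbitrary choices for illustration:

```python
# Flag values more than 2 sample standard deviations from the mean.
# Toy data; real analyses would use a proper library and a threshold
# chosen for the distribution at hand.
from statistics import mean, stdev

values = [10, 11, 9, 10, 12, 45]
mu, sigma = mean(values), stdev(values)
outliers = [v for v in values if abs(v - mu) / sigma > 2]
# outliers -> [45]
```

The same screening step is commonly run inside a pipeline before clustering or segmentation, so extreme values do not dominate the fit.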
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
JR United Kingdom
that are customer-centric and business-aligned. What You’ll Do: Design and build scalable, reliable, and high-performance data systems. Define and drive best practices for data modeling, ETL/ELT pipelines, and real-time streaming architectures. Set technical direction and architectural standards across the data platform. Work closely with cross-functional partners to meet evolving business and analytical … tools and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure). Deep understanding of data modeling, distributed systems, streaming architectures, and ETL/ELT pipelines. Proficiency in SQL and at least one programming language such as Python, Scala, or Java. Demonstrated experience owning and delivering complex systems from architecture through …
Greater Edinburgh Area, United Kingdom Hybrid / WFH Options
Lorien
into this function already, from Software and Hardware Engineers to PMs, Support, Operations staff, Managers and more. What You’ll Be Doing: Design and develop Data Solutions, Pipelines, and ETL Processes using tools such as Azure Data Factory/Azure Data Lake/Azure SQL/SSIS and other relevant offerings. Build and tailor Data Models and Data Warehouses. Work … tooling as possible, including Azure Data Factory, Azure Data Lake, Azure SQL, Azure Synapse Analytics, SQL Server/SSIS/T-SQL. Strong skills across Data Modelling, Warehousing, and ETL processes. Skills in Business Intelligence tooling such as Power BI or similar. Scripting with Python/PowerShell or similar. Why This Role? This business is known for evolving with the …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Net Talent
that are customer-centric and business-aligned. What You’ll Do: Design and build scalable, reliable, and high-performance data systems. Define and drive best practices for data modeling, ETL/ELT pipelines, and real-time streaming architectures. Set technical direction and architectural standards across the data platform. Work closely with cross-functional partners to meet evolving business and analytical … tools and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure). Deep understanding of data modeling, distributed systems, streaming architectures, and ETL/ELT pipelines. Proficiency in SQL and at least one programming language such as Python, Scala, or Java. Demonstrated experience owning and delivering complex systems from architecture through …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Motability Operations Ltd
of its data. The Successful Candidate Would Be: Working in an agile scrum team to design and build data feeds and related applications; Writing, testing and peer review of ETL code in Oracle ODI; Working with business users to design and configure self-serve data environments within our Snowflake data lake; Analysing, developing, delivering, and managing BI reports; Assisting in … Enjoys technical challenges and learning new skills. Willingness to take part in an overnight support rota. You’ll need all of these: Experience of building a data warehouse using an ETL/ELT tool, preferably Oracle ODI. Significant database experience in Snowflake or Oracle. Good knowledge of standard data formats (XML, JSON, CSV, etc.). Proven experience of delivering BI solutions for … of the Customer team, where we work across the business to digitise and improve interactions with our customers and business partners. This could be data transfer from third parties, ETL into the data warehouse or data lake, providing insights and metrics, or improving performance or processes. This is a dynamic team, with data engineers and analysts working closely alongside members …
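The "standard data formats" named above (XML, JSON, CSV) can all be handled with the Python standard library; a sketch round-tripping one hypothetical record between them:

```python
# Read a record from CSV, serialise it to JSON, and check it against an
# equivalent XML document. The record shape is invented for this example.
import csv
import io
import json
import xml.etree.ElementTree as ET

csv_text = "id,vehicle\n7,EV\n"
record = next(csv.DictReader(io.StringIO(csv_text)))

as_json = json.dumps(record)  # '{"id": "7", "vehicle": "EV"}'

xml_doc = ET.fromstring("<row id='7'><vehicle>EV</vehicle></row>")
same = (xml_doc.get("id") == record["id"]
        and xml_doc.findtext("vehicle") == record["vehicle"])
# same -> True
```

Note that CSV and XML carry everything as strings; deciding where to apply types (here, or in the warehouse load step) is a design choice every feed has to make explicitly.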
custodial banks, data APIs, and even direct user input. The data engineers on the Portfolio Data Engineering team help build and maintain the transformation and cleaning steps of our ETL (Extract, Transform, Load) pipeline before it can be stored and accessed by our customers in a standardised fashion. As a data engineer on this team, you’ll be building components … within the ETL pipeline that automate these cleaning and transformation steps. As you gain more experience, you’ll contribute to increasingly challenging engineering projects within our broader data infrastructure. This is a crucial, highly visible role within the company. Your team is a big component of growing and serving Addepar’s client base with minimal manual data cleaning effort required … technologies we use: Python, Apache Spark/PySpark, Java/Spring. Amazon Web Services. SQL, relational databases. Understanding of data structures and algorithms. Interest in data modeling, visualisation, and ETL pipelines. Knowledge of financial concepts (e.g., stocks, bonds, etc.) is encouraged but not necessary. Our Values: Act Like an Owner - Think and operate with intention, purpose and care. Own outcomes. …
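A sketch of the kind of cleaning/standardisation step the transform stage described above performs; the field names and conventions (ticker casing, comma-grouped quantities) are hypothetical, not Addepar's actual schema:

```python
# Normalise a raw position record into a canonical form so that feeds
# from different custodians land in the same shape. Invented schema.
def standardise(record):
    """Return a cleaned copy of one raw position record."""
    return {
        "ticker": record["ticker"].strip().upper(),
        "quantity": float(str(record["quantity"]).replace(",", "")),
    }

raw = {"ticker": " aapl ", "quantity": "1,250"}
clean = standardise(raw)
# clean -> {'ticker': 'AAPL', 'quantity': 1250.0}
```

Automating many small, well-tested rules like this is what removes the manual cleaning effort the listing mentions.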
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
BlackRock, Inc
the team is multi-disciplinary with the following skills and capabilities: machine learning, optimization, statistical modeling, signal detection, natural language processing, data visualization, generative AI, network/graph modeling, ETL, data pipelines, data architecture, ML engineering, communication, product management and strategy. We work with data from a wide variety of sources including text, news feeds, financial reports, time series transactions … solving and communication skills. Knowledge and experience we are looking for includes: DevOps automation, idempotent deployment testing, and continuous delivery pipelines; Networking and security protocols, load balancers, API gateways; ETL tooling and workflow engines (e.g., Spark, Airflow, Dagster, Flyte); Accelerated compute libraries and hardware (e.g., PyTorch, NVIDIA GPUs); Data modeling, and strategies for cleaning and validating data at scale; Performance …
Deep technical knowledge of database development, design, and migration. Experience of deployment in the cloud using Terraform or CloudFormation. Automation or scripting experience using languages such as Python, Bash, etc. ETL and workflow management knowledge. Experience of Agile methodologies. Experience in the Financial Services sector. Data Engineering or Data Science experience. Job responsibilities: Interface with client project sponsors to gather, assess … and principles from the AWS Well-Architected Framework. Assess, document, and translate goals, objectives, problem statements, etc. to offshore/onshore management teams. Advise on database performance, altering the ETL process, providing SQL transformations, discussing API integration, and deriving business and technical KPIs. Guide the transition of solutions into the hands of the client, providing documentation to operate and maintain …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Widen the Net Limited
to join their FinTech team! You will develop scalable data pipelines, ensure data quality, and support business decision-making with high-quality datasets. - Work across the technology stack: SQL, Python, ETL, BigQuery, Spark, Hadoop, Git, Apache Airflow, Data Architecture, Data Warehousing. - Design and develop scalable ETL pipelines to automate data processes and optimize delivery. - Implement and manage data warehousing solutions … data engineering and data analytics. Requirements: - 5+ years of experience in SQL. - 5+ years of development in Python. - MUST have strong experience in Apache Airflow. - Experience with ETL tools, data architecture, and data warehousing solutions. - Strong communication skills. This contract is £450 per day inside IR35; a 6-month contract with likely extensions into …
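At its core, an orchestrator like the Apache Airflow the listing requires executes tasks in a topological order of their declared dependencies. A stdlib sketch of that idea using graphlib; the task names are hypothetical, and real Airflow code would declare operators inside a DAG object rather than a plain dict:

```python
# Model a tiny pipeline DAG and compute a valid execution order.
# Python 3.9+ ships graphlib in the standard library.
from graphlib import TopologicalSorter

# task -> set of upstream tasks it depends on
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
# order -> ['extract', 'transform', 'load', 'report']
```

TopologicalSorter also raises CycleError on circular dependencies, the same class of mistake Airflow rejects when a DAG fails to parse.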
years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience with data modeling, warehousing, and building ETL pipelines. Experience in statistical analysis packages such as R, SAS, and MATLAB. Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process … data for modeling. Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application …
Python. Our backend services are implemented in C#/.NET or TypeScript/NodeJS. DynamoDB, Redshift, Postgres, Elasticsearch, and S3 are our go-to data stores. We run our ETL data pipelines using Python. Equal Opportunities: We are an equal opportunities employer. This means we are committed to recruiting the best people regardless of their race, colour, religion, age, sex …