… services. Experience using PySpark in notebooks for data analysis and manipulation. Strong proficiency with SQL and data modelling. Experience with modern ELT/ETL tools within the Microsoft ecosystem. Solid understanding of data lake and lakehouse architectures. Hands-on experience with Power BI for data integration and visualisation. Familiarity …
… a full cloud migration. 🔧 Key Responsibilities: Manage and enhance cloud-based data infrastructure (Azure) across multiple international markets; own the development and performance of ETL pipelines, ensuring integrity and consistency; lead the migration of legacy environments into Azure Cloud; maintain and synchronise Dev, Test, and Production environments using DevOps principles …
IF YOU ARE: Experienced with Python/PySpark; proficient with the Databricks Lakehouse architecture and its principles; experienced (2+ years) in designing data models, building ETL pipelines, and wrangling data to solve business problems; experienced with Azure cloud technologies and the modern data estate, such as Azure Data Factory, Azure DevOps, Azure Synapse …
… collaborate with many technology partners, including AWS, GCP, Microsoft, Databricks, Snowflake, and Confluent. IF YOU ARE: A technology professional focused on data warehouse, ETL, and BI solution development; experienced in eliciting business requirements to address the customer's data visualization needs; ready to dive into a customer's subject …
… and deliver sustainable solutions. Monitor and troubleshoot data pipeline issues to maintain data integrity and accuracy. Assist in the development, maintenance, and optimization of ETL (Extract, Transform, Load) processes for efficiency and reliability. Project & Improvement: Assist in gathering, documenting, and managing data engineering requirements and workflows. Contribute to the development … reviews of designs, prototypes, and other work products to ensure requirements are met. Skills & Experience: Essential: Basic understanding of data engineering concepts, such as ETL processes, data pipelines, and data quality management. Hands-on experience with SQL (e.g., writing queries, basic database management). Familiarity with data tools and platforms …
On a daily basis your varied role will include, but will not be limited to: Design, build, and optimize high-performance data pipelines and ETL workflows using tools like Azure Synapse, Azure Databricks, or Microsoft Fabric. Implement scalable solutions to ingest, store, and transform vast datasets, ensuring data availability and …
… reporting development (advanced/expert levels only). Key Tools & Technologies: visualization tools (Tibco Spotfire, Tableau, Power BI, or QlikView); data management & querying (SQL, ETL pipelines, data warehousing, e.g. ODW); scripting/programming (IronPython, R, R Shiny, SAS); collaboration & platforms (SharePoint, clinical trial data platforms). Qualifications: Entry level: Bachelor …
… to offices and client sites. Employment: Permanent, full-time. What You'll Actually Be Doing: Designing, developing, and deploying robust Azure data solutions, including ETL pipelines, data warehouses, and real-time analytics (no fluff, just solid engineering); turning complex client requirements into clear, scalable Azure solutions alongside experienced architects and …
… meet current best practices and internal standards. Work closely with project managers and technical leads to integrate new enterprise data sources into ongoing projects. ETL Development: Develop robust, automated ETL (Extract, Transform, Load) pipelines using industry-standard tools and frameworks, prioritizing scalability, reliability, and fault tolerance. Essential Skills & Experience: Strong … ESRI, 3GIS, Bentley, Hexagon, Crescent Link, CadTel, etc.). Experience with business requirement analysis and the development of reporting and analytics structures. Familiarity with ETL solutions, including experience with SAFE FME, is highly desirable. Strong knowledge of data privacy regulations and practices. Exposure to analytics and reporting tools is considered …
… in: Azure Data & AI services (e.g., Azure Machine Learning, Azure OpenAI, Cognitive Services, Synapse); programming with Python for data and AI workloads; data pipelines, ETL/ELT processes, and analytics foundations; app development techniques to integrate AI capabilities; working in secure, enterprise-ready cloud environments; consulting fundamentals and effective customer …
Cardiff, South Glamorgan, United Kingdom Hybrid / WFH Options
Creditsafe
We integrate data from Dynamics 365 (CE & F&O) and other sources, building efficient data structures to support analytical insights. By developing and optimizing ETL/ELT pipelines, we ensure data accuracy, consistency, and performance across the warehouse. Leveraging Azure Data Services such as Synapse, Data Factory, and SQL Server … design, build, and maintain an internal data warehouse using the Kimball methodology. The ideal candidate will have expertise in creating fact and dimension tables, building ETL processes, and ensuring data integrity. Experience with Dynamics 365 CE & F&O is highly desirable. KEY DUTIES AND RESPONSIBILITIES: Design and implement a Kimball-style … data warehouse architecture, including fact and dimension tables. Develop and optimize ETL/ELT pipelines to integrate data from Dynamics 365 (CE & F&O) and other sources. Collaborate with business stakeholders to define key business metrics and reporting needs. Ensure data quality, consistency, and performance across the warehouse. Work with …
… business/data analyst role, ideally in a consultancy or commercial setting. - Strong analytical, problem-solving, and communication skills. - Experience with operational data processes, ETL, data warehouse migration, schema mapping, and MI/BI reporting. - Proficient in tools such as JIRA, Confluence, Asana, Miro, and Excel. - Familiarity with Agile (SCRUM …
… OAC). Should be familiar with the process of creating a semantic data model in FDI. Proven experience in data modeling and in data extraction, transformation, and loading (ETL) using Oracle Cloud tools. Hands-on experience in creating dashboards, KPIs, custom reports, and visual analytics for Finance, Supply Chain, and HCM modules. Strong understanding …
Solid understanding of investment data flows and financial instruments across asset classes. Hands-on proficiency with tools and languages such as SQL, Python, and ETL/data integration platforms. Strong communication, presentation, and stakeholder engagement capabilities. Advanced skills in MS Office Suite and the creation of professional deliverables.
… shape something from the ground up; this is for you. What you'll do: Design and build a cloud-native data warehouse; develop scalable ETL/ELT pipelines and dimensional models (Kimball, Data Vault, etc.); integrate multiple data sources (cloud & on-prem); ensure high data quality, performance, and reliability; collaborate …
… data sets, produce dashboards, and drive actionable insights. SQL Development: Write and optimise complex Microsoft SQL Server queries for data extraction, transformation, and loading (ETL). Data Governance: Implement master data management and governance policies to maintain data quality, compliance, and lineage. Stakeholder Management: Communicate effectively with project managers and …
… of opportunities for ownership and innovation. Key Responsibilities: Design and deploy cloud-based data platforms using Snowflake, DBT, and related tools; develop performant, scalable ETL/ELT pipelines across varied data sources; build and maintain dimensional models using Kimball, Data Vault, or Data Mesh methodologies; collaborate with cross-functional teams …
Cardiff (Grangetown), Cardiff, United Kingdom Hybrid / WFH Options
Accelero
… pipelines using Python and PySpark, enabling powerful analytics and smarter business decisions across the organisation. What You'll Be Doing: Design and build scalable ETL/ELT data pipelines using Python and PySpark; lead and support data migration initiatives across legacy and cloud-based platforms; collaborate with analysts, data scientists …
… needs and business strategies. Cloud Architecture: Design and implement scalable, cost-effective architectures across AWS, Azure, or GCP environments. Data Engineering: Develop and manage ETL/ELT pipelines and data integration workflows, and implement CDC and delta-load design for effective data management. SAP Tools: Lead SAP BW to SAP Datasphere …
… Knowledge (Snowflake, PostgreSQL, MSSQL, etc.). Scripting languages (Python, PowerShell). Data quality and related issue-resolution processes. Cloud data warehousing and ELT/ETL solutions (e.g., Snowflake, DBT). Experience in working within an Agile environment. Experience with automation, including unit and integration testing. Knowledge of cloud concepts. For …
… and dynamic team. Responsibilities: Collaborate with various squads within the data team on project-based work. Develop and optimize data models, warehouse solutions, and ETL processes. Work with Scala, Spark, and Java to handle large-scale data processing. Contribute to manual Databricks-like data processing solutions. Requirements: Minimum of …
Exposure to multi-platform integration (MS tools preferred). On-premises SQL environments, legacy SSIS, SSRS, and other SQL-related technologies employed in complex ETL or ELT patterns; Synapse Link for Dataverse; Dataverse, Data Flows, Cloud Flows, DAX, and Power Platform implementations; synchronisation methods for Synapse and Fabric from D365 …
… experience writing technical documentation; comfortable working with Agile, TOGAF, or similar frameworks. Desirable: experience with Python and data libraries (Pandas, Scikit-learn); knowledge of ETL tools (Airflow, Talend, NiFi); familiarity with analytics platforms (SAS, Posit); prior work in high-performance or large-scale data environments. Why Join? This is more …
… pipelines, ensuring data quality, and supporting a reliable, cloud-native data platform. What You'll Do: Design, build, and maintain scalable data pipelines and ETL/ELT workflows; work with Snowflake, SQL, and Python to transform and manage large datasets; leverage AWS services (e.g., S3, Lambda, Glue) to build modern …