South East London, London, United Kingdom Hybrid / WFH Options
Datatech Analytics
team, with demonstrable people leadership and project coordination skills
- Deep hands-on knowledge of Microsoft Fabric, Azure Data Factory, Power BI, and related Azure tools
- Strong proficiency in SQL, Spark SQL, and Python for data processing and automation
- Solid understanding of ETL/ELT workflows, data modelling, and structuring datasets for analytics
- Experience working …
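By way of illustration, a minimal PySpark sketch of the kind of Spark SQL work this listing asks for: structuring a raw feed into an analytics-ready dataset for a BI tool. All paths, table and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("structure-for-analytics").getOrCreate()

# Hypothetical raw feed; any JSON/CSV source with these columns would do.
spark.read.json("raw/orders.json").createOrReplaceTempView("raw_orders")

# Spark SQL shapes the raw events into an aggregate suitable for
# a downstream Power BI dataset.
daily_sales = spark.sql("""
    SELECT order_date,
           region,
           COUNT(DISTINCT order_id) AS orders,
           SUM(amount)              AS revenue
    FROM raw_orders
    GROUP BY order_date, region
""")

# Partitioning by date keeps report queries selective.
daily_sales.write.mode("overwrite").partitionBy("order_date").parquet("curated/daily_sales")
```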
Key Technology (awareness of):
- Azure Databricks, Data Factory, Storage, Key Vault
- Experience with source control systems, such as Git
- dbt (Data Build Tool) for transforming and modelling data
- SQL (Spark SQL) & Python (PySpark)
Certifications (Ideal):
- SAFe POPM or Scrum PSPO
- Microsoft Certified: Azure Fundamentals (AZ-900)
- Microsoft Certified: Azure Data Fundamentals (DP-900)
What …
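The "SQL (Spark SQL) & Python (PySpark)" pairing above refers to two interchangeable ways of expressing the same transformation. A small sketch, using an in-memory dataset with illustrative names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sparksql-vs-pyspark").getOrCreate()

# Tiny in-memory dataset; column names are illustrative.
df = spark.createDataFrame(
    [("alice", 120.0), ("bob", 80.0), ("alice", 40.0)],
    ["customer", "amount"],
)
df.createOrReplaceTempView("payments")

# The same aggregation expressed in Spark SQL...
by_sql = spark.sql(
    "SELECT customer, SUM(amount) AS total FROM payments GROUP BY customer"
)

# ...and in the PySpark DataFrame API.
by_api = df.groupBy("customer").agg(F.sum("amount").alias("total"))

# Both plans produce the same rows.
assert sorted(by_sql.collect()) == sorted(by_api.collect())
```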
as a Data Engineer:
- Design and architect modern data solutions that align with business objectives and technical requirements
- Design and implement advanced ETL/ELT pipelines using Python, SQL, and Apache Airflow
- Build highly scalable and performant data solutions leveraging cloud platforms and technologies
- Develop complex data models to handle enterprise-level analytical needs
- Make critical technical decisions … solutions to leadership and non-technical stakeholders
- Contribute to the development of the Methods Analytics Engineering Practice by participating in our internal community of practice
Requirements:
- Experience in SQL Server Integration Services (SSIS)
- Good experience with … ETL - SSIS, SSRS, T-SQL (on-prem/cloud)
- Strong proficiency in SQL and Python for handling complex data problems
- Hands-on experience with Apache Spark (PySpark or Spark SQL)
- Experience with the Azure data stack
- Knowledge of workflow orchestration tools like Apache Airflow
- Experience with containerisation technologies like Docker …
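For context on the Airflow orchestration this role mentions, a minimal sketch of an ETL DAG. The pipeline name and stubbed callables are hypothetical; the keyword arguments assume Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull the day's data from the source system (stubbed here).
    print("extracting")


def transform():
    print("transforming")


def load():
    print("loading")


with DAG(
    dag_id="nightly_etl",             # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    extract_t >> transform_t >> load_t
```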
Wyton, Cambridgeshire, United Kingdom Hybrid / WFH Options
Atreides LLC
engineering to embed analytic logic into data pipelines and services. Conduct bespoke, high-complexity analysis in support of customer-facing or operational needs. Guide team best practices in Spark SQL usage, data documentation, and exploratory reproducibility.
Desired Qualifications:
- 5+ years of experience in data science, applied analytics, or data R&D
- Advanced expertise in Python, Spark SQL, and distributed data environments
- Strong background in statistical inference, anomaly detection, and interaction modeling
- Proven track record in developing quality control or validation processes for complex analytics
- Experience working with multi-source data and deriving insights from structured or semi-structured inputs
- Excellent mentorship, communication, and analytical design leadership skills
Compensation and Benefits: Competitive salary …
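As a flavour of the anomaly-detection and quality-control work described here, a simple z-score check in PySpark. The source path, column names, and the 3-sigma threshold are all hypothetical choices.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("qc-anomaly-check").getOrCreate()

readings = spark.read.parquet("data/sensor_readings")  # hypothetical source

# Compute global mean and standard deviation of the metric.
stats = readings.agg(
    F.mean("value").alias("mu"),
    F.stddev("value").alias("sigma"),
).first()

# Flag rows more than 3 standard deviations from the mean
# (assumes sigma is non-zero; a production check would guard for that).
flagged = readings.withColumn(
    "is_anomaly",
    F.abs((F.col("value") - F.lit(stats["mu"])) / F.lit(stats["sigma"])) > 3,
)

flagged.filter("is_anomaly").show()
```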
Belfast, County Antrim, Northern Ireland, United Kingdom
Hays
You'll work across multiple workstreams, delivering high-impact data solutions that drive efficiency and compliance for Markets and its clients.
Key Responsibilities:
- Build and optimize PySpark and SQL queries to analyze, reconcile, and interrogate large datasets
- Recommend improvements to reporting processes, data quality, and query performance
- Contribute to the architecture and design of Hadoop environments
- Translate architecture … in SQL, Python, and Spark
- Experience within an investment banking or financial services environment
- Exposure to Hive, Impala, and Spark ecosystem technologies (e.g. HDFS, Apache Spark, Spark SQL, UDFs, Sqoop)
- Experience building and optimizing Big Data pipelines, architectures, and data sets
- Familiarity with Hadoop and Big Data ecosystems
- Strong …
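To illustrate the reconciliation work this role centres on, a minimal Spark SQL sketch that surfaces breaks between two copies of the same dataset. Table and column names (trade_id, notional) are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("recon").getOrCreate()

source = spark.read.parquet("landing/trades")    # hypothetical upstream feed
target = spark.read.parquet("warehouse/trades")  # hypothetical reporting copy

source.createOrReplaceTempView("src")
target.createOrReplaceTempView("tgt")

# A full outer join on the business key surfaces breaks in either direction:
# rows missing from one side, or rows whose values disagree.
breaks = spark.sql("""
    SELECT COALESCE(s.trade_id, t.trade_id) AS trade_id,
           s.notional AS src_notional,
           t.notional AS tgt_notional
    FROM src s
    FULL OUTER JOIN tgt t ON s.trade_id = t.trade_id
    WHERE s.trade_id IS NULL
       OR t.trade_id IS NULL
       OR s.notional <> t.notional
""")

breaks.show()
```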
delivering robust data solutions and managing changes that impact diverse stakeholder groups in response to regulatory rulemaking, supervisory requirements, and discretionary transformation programs.
Key Responsibilities:
- Develop PySpark and SQL queries to analyze, reconcile, and interrogate data
- Provide actionable recommendations to improve reporting processes, e.g. enhancing data quality, streamlining workflows, and optimizing query performance
- Contribute to architecture and design … plans for technical deliverables
- Manage day-to-day project activities, including setting milestones, tracking tasks, coordinating deliverables, and ensuring timely, high-quality execution
Required Skills & Experience:
- Proficiency in SQL, Python, and Spark
- Minimum 5 years of hands-on technical data analysis experience
- Familiarity with Hadoop/Big Data environments
- Understanding of Data Warehouse/ETL design and development … and able to work independently
Preferred Qualifications:
- Background in investment banking or financial services
- Hands-on experience with Hive, Impala, and the Spark ecosystem (e.g., HDFS, Apache Spark, Spark SQL, UDFs, Sqoop)
- Proven experience building and optimizing big data pipelines, architectures, and data sets …
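On the "optimizing query performance" point, one common PySpark technique is broadcasting a small reference table so the large fact table is never shuffled. A sketch with hypothetical table and column names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("perf").getOrCreate()

trades = spark.read.parquet("warehouse/trades")  # large fact table (hypothetical)
desks = spark.read.parquet("reference/desks")    # small reference table (hypothetical)

# Broadcasting the small side turns a shuffle join into a map-side join,
# which avoids moving the large table across the cluster.
enriched = trades.join(F.broadcast(desks), on="desk_id", how="left")

# Partitioning output by report date keeps downstream reads selective.
enriched.write.mode("overwrite").partitionBy("report_date").parquet("curated/trades_enriched")
```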
London, South East, England, United Kingdom Hybrid / WFH Options
WüNDER TALENT
You’ll be involved in designing and building production-grade ETL pipelines, driving DevOps practices across data systems and contributing to high-availability architectures using tools like Databricks, Spark and Airflow, all within a modern AWS ecosystem.
Responsibilities:
- Architect and build scalable, secure data pipelines using AWS, Databricks and PySpark
- Design and implement robust ETL/ELT solutions … services (e.g. S3, Glue, Redshift)
- Advanced PySpark and Databricks experience (Delta Lake, Unity Catalog, Databricks Jobs, etc.)
- Proficient in SQL (T-SQL/Spark SQL) and Python for data transformation and scripting
- Hands-on experience with workflow orchestration tools such as Airflow
- Strong version control and DevOps exposure (Git, GitHub Actions, Terraform)
- Familiar with …
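For the Databricks-on-AWS stack described here, a minimal sketch of a Delta Lake ingestion step. The S3 path, table name, and columns are hypothetical, and Delta is assumed to be available (it is bundled with the Databricks runtime).

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks the runtime already provides `spark`; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read.option("header", "true")
         .csv("s3://example-bucket/landing/events/")  # hypothetical S3 landing path
)

# Light cleanup: typed timestamp, deduplicated on the business key.
cleaned = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropDuplicates(["event_id"])
)

# Delta Lake adds ACID writes and time travel over plain Parquet.
cleaned.write.format("delta").mode("append").saveAsTable("analytics.events_clean")
```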
trusted, and timely data solutions that power advanced analytics and business intelligence.
What You'll Do:
- Architect and build scalable data pipelines using Microsoft Fabric, PySpark, and T-SQL
- Lead the development of Star Schema Lakehouse tables to support BI and self-service analytics
- Collaborate with stakeholders to translate business needs into data models and solutions
- Mentor … engineers and act as a technical leader within the team
- Ensure data integrity, compliance, and performance across the platform
What You'll Bring:
- Expertise in Microsoft Fabric, Azure, PySpark, Spark SQL, and modern data engineering practices
- Strong experience with Lakehouse architectures, data orchestration, and real-time analytics
- A pragmatic, MVP-driven mindset with a passion for scalable, maintainable solutions
- Excellent communication …
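A rough sketch of the star-schema Lakehouse modelling this role leads: splitting a raw source into a dimension and a fact table with PySpark. All table and column names are hypothetical, and the session setup assumes a Fabric/Spark runtime.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Fabric runtime in practice

orders = spark.read.table("lakehouse.raw_orders")  # hypothetical source table

# Dimension: one row per customer (a slowly changing dimension
# would add validity dates on top of this).
dim_customer = (
    orders.select("customer_id", "customer_name", "segment")
          .dropDuplicates(["customer_id"])
)

# Fact: one row per order line, keyed to the dimension by customer_id.
fact_sales = orders.select(
    "order_id", "customer_id", "order_date",
    F.col("quantity"), F.col("unit_price"),
    (F.col("quantity") * F.col("unit_price")).alias("line_amount"),
)

dim_customer.write.mode("overwrite").format("delta").saveAsTable("lakehouse.dim_customer")
fact_sales.write.mode("overwrite").format("delta").saveAsTable("lakehouse.fact_sales")
```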