City of London, London, United Kingdom Hybrid / WFH Options
Datatech Analytics
…engineering, ideally within a media, entertainment, or consumer-facing industry. Experience managing or mentoring a small team, with demonstrable people leadership and project coordination skills. Strong proficiency in SQL, Spark SQL, and Python for data processing and automation. Knowledge of Microsoft Fabric and Azure Data Factory would be useful but not essential. Power BI …
South East London, London, United Kingdom Hybrid / WFH Options
Datatech Analytics
…team, with demonstrable people leadership and project coordination skills. Deep hands-on knowledge of Microsoft Fabric, Azure Data Factory, Power BI, and related Azure tools. Strong proficiency in SQL, Spark SQL, and Python for data processing and automation. Solid understanding of ETL/ELT workflows, data modelling, and structuring datasets for analytics. Experience working …
Key Technology (awareness of): Azure Databricks, Data Factory, Storage, Key Vault. Experience with source control systems, such as Git. dbt (Data Build Tool) for transforming and modelling data. SQL (Spark SQL) & Python (PySpark). Certifications (ideal): SAFe POPM or Scrum PSPO; Microsoft Certified: Azure Fundamentals (AZ-900); Microsoft Certified: Azure Data Fundamentals (DP-900). What …
…as a Data Engineer: Design and architect modern data solutions that align with business objectives and technical requirements. Design and implement advanced ETL/ELT pipelines using Python, SQL, and Apache Airflow. Build highly scalable and performant data solutions leveraging cloud platforms and technologies. Develop complex data models to handle enterprise-level analytical needs. Make critical technical decisions … solutions to leadership and non-technical stakeholders. Contribute to the development of the Methods Analytics Engineering Practice by participating in our internal community of practice. Requirements: Experience in SQL Server Integration Services (SSIS). Good experience with … ETL - SSIS, SSRS, T-SQL (on-prem/cloud). Strong proficiency in SQL and Python for handling complex data problems. Hands-on experience with Apache Spark (PySpark or Spark SQL). Experience with the Azure data stack. Knowledge of workflow orchestration tools like Apache Airflow. Experience with containerisation technologies like Docker …
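For illustration only (not part of the listing): a minimal sketch of the kind of Python/SQL/Airflow ETL pipeline this role describes. The DAG id, connection id, and query are hypothetical placeholders, and the example assumes Airflow 2.4+ with the common-sql provider installed.

```python
# Minimal Airflow DAG sketch: extract with SQL, transform in Python.
# All names (dag_id, conn_id, tables) are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator

def transform(**context):
    # Placeholder transform step; real logic would read the extract,
    # clean/reshape it, and stage it for loading.
    ...

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = SQLExecuteQueryOperator(
        task_id="extract",
        conn_id="warehouse",
        sql="SELECT * FROM raw.events WHERE event_date = '{{ ds }}'",
    )
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract >> transform_task
```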
Wyton, Cambridgeshire, United Kingdom Hybrid / WFH Options
Atreides LLC
…engineering to embed analytic logic into data pipelines and services. Conduct bespoke, high-complexity analysis in support of customer-facing or operational needs. Guide team best practices in Spark SQL usage, data documentation, and exploratory reproducibility. Desired Qualifications: 5+ years of experience in data science, applied analytics, or data R&D. Advanced expertise in Python, Spark SQL, and distributed data environments. Strong background in statistical inference, anomaly detection, and interaction modeling. Proven track record in developing quality control or validation processes for complex analytics. Experience working with multi-source data and deriving insights from structured or semi-structured inputs. Excellent mentorship, communication, and analytical design leadership skills. Compensation and Benefits: Competitive salary …
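As a flavour of the Spark SQL anomaly-detection work this listing mentions, here is a minimal z-score-style check. It is a sketch only; the table and column names are hypothetical.

```python
# Sketch: flag per-source daily volumes that deviate sharply from that
# source's mean, using Spark SQL window functions. Names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("anomaly-sketch").getOrCreate()

anomalies = spark.sql("""
    WITH stats AS (
        SELECT
            source,
            event_date,
            row_count,
            AVG(row_count)    OVER (PARTITION BY source) AS mean_count,
            STDDEV(row_count) OVER (PARTITION BY source) AS std_count
        FROM metrics.daily_row_counts
    )
    SELECT source, event_date, row_count
    FROM stats
    WHERE std_count > 0
      AND ABS(row_count - mean_count) / std_count > 3  -- |z| > 3
""")
anomalies.show()
```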
Belfast, County Antrim, Northern Ireland, United Kingdom
Hays
You'll work across multiple workstreams, delivering high-impact data solutions that drive efficiency and compliance for Markets and its clients. Key Responsibilities: Build and optimize PySpark and SQL queries to analyze, reconcile, and interrogate large datasets. Recommend improvements to reporting processes, data quality, and query performance. Contribute to the architecture and design of Hadoop environments. Translate architecture … in SQL, Python, and Spark. Experience within an investment banking or financial services environment. Exposure to Hive, Impala, and Spark ecosystem technologies (e.g. HDFS, Apache Spark, Spark SQL, UDFs, Sqoop). Experience building and optimizing Big Data pipelines, architectures, and data sets. Familiarity with Hadoop and Big Data ecosystems. Strong …
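For illustration, a minimal PySpark sketch of the reconciliation work described above, comparing two datasets on a shared key. Paths and column names are hypothetical, not taken from the listing.

```python
# Sketch: reconcile a source extract against a reporting dataset by key,
# surfacing rows missing on either side and value mismatches.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("recon-sketch").getOrCreate()

source = spark.read.parquet("/data/source/trades")       # hypothetical path
reported = spark.read.parquet("/data/reported/trades")   # hypothetical path

joined = source.alias("s").join(reported.alias("r"), on="trade_id", how="full_outer")

breaks = joined.where(
    F.col("s.notional").isNull()                     # missing from source
    | F.col("r.notional").isNull()                   # missing from reporting
    | (F.col("s.notional") != F.col("r.notional"))   # value mismatch
)
print(f"Breaks found: {breaks.count()}")
```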
…delivering robust data solutions and managing changes that impact diverse stakeholder groups in response to regulatory rulemaking, supervisory requirements, and discretionary transformation programs. Key Responsibilities: Develop PySpark and SQL queries to analyze, reconcile, and interrogate data. Provide actionable recommendations to improve reporting processes (e.g., enhancing data quality, streamlining workflows, and optimizing query performance). Contribute to architecture and design … plans for technical deliverables. Manage day-to-day project activities, including setting milestones, tracking tasks, coordinating deliverables, and ensuring timely, high-quality execution. Required Skills & Experience: Proficiency in SQL, Python, and Spark. Minimum 5 years of hands-on technical data analysis experience. Familiarity with Hadoop/Big Data environments. Understanding of Data Warehouse/ETL design and development … and able to work independently. Preferred Qualifications: Background in investment banking or financial services. Hands-on experience with Hive, Impala, and the Spark ecosystem (e.g., HDFS, Apache Spark, Spark SQL, UDFs, Sqoop). Proven experience building and optimizing big data pipelines, architectures, and data sets. …
…the heart of the business. The role: This role sits within the Group Enterprise Systems (GES) Technology team. The ideal candidate is an experienced Microsoft data warehouse developer (SQL Server, SSIS, SSAS) capable of working independently and within a team to deliver enterprise-class data warehouse solutions and analytics platforms. The role involves working on Actuarial Reserving systems … models. Build and maintain automated pipelines supporting BI and analytics use cases, ingesting, transforming, and loading data from multiple sources, structured and unstructured. Utilise enterprise-grade technology, primarily SQL Server 2019 and potentially Azure technologies, and explore other solutions where appropriate. Develop patterns, best practices, and standardized data pipelines to ensure consistency across the organisation. Essential Core Technical … Experience: 5 to 10+ years of experience in SQL Server data warehouse or data provisioning architectures. Advanced SQL query writing and stored procedure experience. Experience developing ETL solutions in SQL Server, including SSIS & T-SQL. Experience with Microsoft BI technologies (SQL Server Management Studio, SSIS, SSAS, SSRS). Knowledge of data/ …
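Purely as an illustration of driving this kind of SQL Server ETL from Python (the listing itself centres on SSIS and T-SQL): a minimal pyodbc sketch that invokes a warehouse load procedure. The server, database, and procedure names are hypothetical placeholders.

```python
# Sketch: kick off a T-SQL warehouse load procedure from Python via pyodbc.
# Server, database, and procedure names are hypothetical placeholders.
from datetime import date

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=dw-server;DATABASE=ActuarialDW;Trusted_Connection=yes;",
    autocommit=True,
)
with conn.cursor() as cursor:
    # Run the nightly load for a given business date.
    cursor.execute("EXEC dbo.usp_load_reserving_facts @run_date = ?", date.today())
print("Load procedure submitted.")
```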
London, South East, England, United Kingdom Hybrid / WFH Options
WüNDER TALENT
You'll be involved in designing and building production-grade ETL pipelines, driving DevOps practices across data systems and contributing to high-availability architectures using tools like Databricks, Spark and Airflow, all within a modern AWS ecosystem. Responsibilities: Architect and build scalable, secure data pipelines using AWS, Databricks and PySpark. Design and implement robust ETL/ELT solutions … services (e.g. S3, Glue, Redshift). Advanced PySpark and Databricks experience (Delta Lake, Unity Catalog, Databricks Jobs etc.). Proficient in SQL (T-SQL/Spark SQL) and Python for data transformation and scripting. Hands-on experience with workflow orchestration tools such as Airflow. Strong version control and DevOps exposure (Git, GitHub Actions, Terraform). Familiar with …
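As an illustration of the Databricks/Delta Lake work mentioned here, a minimal upsert (MERGE) sketch using the delta-spark API. The bucket, table, and column names are hypothetical, and a Spark session with Delta support is assumed.

```python
# Sketch: idempotent upsert of an incremental batch into a Delta table.
# Assumes a Databricks/Spark session with Delta enabled; names hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("s3://bucket/incoming/orders/")  # hypothetical path

target = DeltaTable.forName(spark, "analytics.orders")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```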
…experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, Spark SQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and … permissions - Experience writing and optimizing SQL queries with large-scale, complex datasets - Experience with any ETL tool, like Informatica, ODI, SSIS, BODI, Datastage, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for …
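For a taste of the AWS tooling this listing names, a minimal boto3 sketch that starts a Glue ETL job and polls it to completion. The job name and region are hypothetical, and IAM permissions for glue:StartJobRun and glue:GetJobRun are assumed.

```python
# Sketch: start an AWS Glue job run and wait for a terminal state.
# Job name and region are hypothetical placeholders.
import time

import boto3

glue = boto3.client("glue", region_name="eu-west-1")

run = glue.start_job_run(JobName="nightly-orders-etl")
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="nightly-orders-etl", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)  # poll every 30 seconds

print(f"Glue run {run_id} finished with state {state}")
```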
…trusted, and timely data solutions that power advanced analytics and business intelligence. What You'll Do: Architect and build scalable data pipelines using Microsoft Fabric, PySpark, and T-SQL. Lead the development of Star Schema Lakehouse tables to support BI and self-service analytics. Collaborate with stakeholders to translate business needs into data models and solutions. Mentor … engineers and act as a technical leader within the team. Ensure data integrity, compliance, and performance across the platform. What You'll Bring: Expertise in Microsoft Fabric, Azure, PySpark, Spark SQL, and modern data engineering practices. Strong experience with Lakehouse architectures, data orchestration, and real-time analytics. A pragmatic, MVP-driven mindset with a passion for scalable, maintainable solutions. Excellent communication …
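To illustrate the star-schema Lakehouse modelling this role centres on, a minimal PySpark sketch deriving a dimension and a fact table from a raw feed. All table and column names are hypothetical; in Fabric, saved tables default to Delta format.

```python
# Sketch: split a raw sales feed into a customer dimension and a sales
# fact keyed on it, written as Lakehouse (Delta) tables. Names hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
raw = spark.read.table("raw.sales_feed")

dim_customer = (
    raw.select("customer_id", "customer_name", "segment")
    .dropDuplicates(["customer_id"])
)

fact_sales = raw.select(
    "order_id",
    "customer_id",  # foreign key into dim_customer
    F.to_date("order_ts").alias("order_date"),
    "quantity",
    "net_amount",
)

dim_customer.write.format("delta").mode("overwrite").saveAsTable("gold.dim_customer")
fact_sales.write.format("delta").mode("overwrite").saveAsTable("gold.fact_sales")
```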
…scheduling and tracking; Circle CI for continuous deployment; Parquet and Delta file formats on S3 for data lake storage; Spark for data processing; dbt for data modelling; Spark SQL for analytics. Why else you'll love it here: Wondering what the salary for this role is? Just ask us! On a call with one of our recruiters it's …
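A minimal sketch of how the pieces of this stack fit together: Spark reading the Delta data on S3, registering a view, and Spark SQL answering an analytics question. The bucket and column names are hypothetical, and a Spark session configured for Delta and S3 access is assumed.

```python
# Sketch: query Delta-format lake data on S3 with Spark SQL.
# Bucket, table, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-analytics-sketch").getOrCreate()

events = spark.read.format("delta").load("s3://lake-bucket/events/")  # hypothetical
events.createOrReplaceTempView("events")

daily = spark.sql("""
    SELECT event_date, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date
    ORDER BY event_date
""")
daily.show()
```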
…big data technology, with experience ranging from platform architecture, data management, data architecture and application architecture. High proficiency working with the Hadoop platform, including Spark/Scala, Kafka, Spark SQL, HBase, Impala, Hive and HDFS in multi-tenant environments. Solid base in data technologies like warehousing, ETL, MDM, DQ, BI and analytical tools; extensive experience in metadata management and data … distributed, fault-tolerant applications with attention to security, scalability, performance, availability and optimization. Requirements: 4+ years of hands-on experience in designing, building and supporting Hadoop applications using Spark, Scala, Sqoop and Hive. Strong knowledge of working with large data sets and high-capacity big data processing platforms. Strong experience in Unix and shell scripting. Experience using Source …