Software Engineer, Analytics & Data Engineering. London, England, United Kingdom. Software and Services.
Description: The ASE Analytics & Data Engineering team is responsible for building the analytics platforms, datasets and processes required by Apple for analysing and powering customer experiences. This means we build computation platforms and datasets to empower our product, marketing, feature, analytic and data … complexity of our datasets, this is not a trivial task. We are looking for an outstanding Software Engineer who can collaborate effectively with our partner teams to deliver data engineering solutions that improve and power the next generation of Apple features. You will work on cross-functional projects with other engineering teams, product leads and analytics leaders to … build insights, metrics and data pipelines. The projects you work on will be truly impactful. You will have the freedom to innovate as you work closely with our partners to drive meaningful change and build elegant systems that deliver results. The ideal candidate will have a strong quality focus and be motivated by taking early production …
via Docker/Kubernetes and integrate with orchestration systems (e.g., Airflow, custom schedulers).
- Work with platform engineers to embed Spark jobs into InfoSum's platform APIs and data pipelines.
- Troubleshoot job failures, memory and resource issues, and execution anomalies across various runtime environments.
- Optimize Spark job performance and advise on best practices to reduce cloud compute and … in at least two major cloud environments (AWS, GCP, Azure).
- In-depth knowledge of AWS Glue, including job authoring, triggers, and cost-aware configuration.
- Familiarity with distributed data formats (Parquet, Avro), data lakes (Iceberg, Delta Lake), and cloud storage systems (S3, GCS, Azure Blob).
- Hands-on experience with Docker, Kubernetes, and CI/CD … to support and coach internal teams.
Key Indicators of Success:
- Spark jobs are performant, fault-tolerant, and integrated into InfoSum's platform with minimal overhead.
- Cost of running data processing workloads is optimized across cloud environments.
- Engineering teams are equipped with best practices for writing, deploying, and monitoring Spark workloads.
- Operational issues are rapidly identified and resolved, with …
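The cost-reduction advice in the posting above often starts with how jobs are submitted. As a minimal sketch (the job path, executor cap, and memory setting are illustrative assumptions, not values from the listing), a cost-aware spark-submit command builder might look like:

```python
# Sketch: building a cost-aware spark-submit command line. Resource figures,
# config values, and the job path are illustrative assumptions.

def build_spark_submit(job_path, max_executors=4, executor_mem="4g", extra_conf=None):
    """Assemble spark-submit arguments with cost-conscious defaults:
    dynamic allocation caps the executor count, and adaptive query
    execution lets Spark right-size shuffle partitions at runtime."""
    conf = {
        "spark.dynamicAllocation.enabled": "true",
        "spark.dynamicAllocation.maxExecutors": str(max_executors),
        "spark.sql.adaptive.enabled": "true",
        "spark.executor.memory": executor_mem,
    }
    conf.update(extra_conf or {})
    cmd = ["spark-submit", "--deploy-mode", "cluster"]
    for key, value in sorted(conf.items()):
        cmd += ["--conf", f"{key}={value}"]
    cmd.append(job_path)
    return cmd

cmd = build_spark_submit("jobs/enrich.py", max_executors=8)
print(" ".join(cmd))
```

Centralising the configuration this way makes cost parameters reviewable in one place rather than scattered across job definitions.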
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
AWS Data Engineer - Contract
Location: Reading (Hybrid - 1-2 days/month onsite)
Rate: £500-550/day (Inside IR35)
Start Date: ASAP
Duration: 5 months (with potential for extension)
A leading financial services organisation is seeking an experienced AWS Data Engineer to join their Compliance Reporting team. This backend-focused role involves designing and deploying … scalable data solutions that support the delivery of regulatory compliance reports across the business. You'll work with a modern AWS stack and infrastructure-as-code tools to build robust data pipelines and applications that process complex datasets from multiple operational systems.
Key Responsibilities:
- Build and maintain AWS-based ETL/ELT pipelines using S3, Glue (PySpark/Python), Lambda, Athena, Redshift, and Step Functions
- Develop backend applications to automate and support compliance reporting
- Process and validate complex data formats including nested JSON, XML, and CSV
- Collaborate with stakeholders to deliver technical solutions aligned with regulatory requirements
- Manage CI/CD workflows using Bitbucket, Terraform, and Atlantis
- Support database management and improve data …
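The nested-JSON processing this role describes typically starts by flattening records into tabular rows before loading into Athena or Redshift. A minimal stdlib-only sketch (the record fields are invented, not from the listing):

```python
import json

# Sketch: flattening nested JSON records into single-level rows, the kind of
# pre-processing a Glue/PySpark compliance pipeline might apply before loading
# into a warehouse. The field names below are hypothetical.

def flatten(record, parent_key="", sep="."):
    """Recursively flatten nested dicts into a flat dict with dotted
    keys, so each record maps cleanly onto table columns."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

raw = json.loads('{"trade": {"id": "T1", "venue": {"mic": "XLON"}}, "qty": 100}')
row = flatten(raw)
print(row)
```

The same dotted-key convention is what tools like Glue's relationalize produce, which keeps downstream SQL predictable.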
For more information, contact Mieke Van Tonder at . This is an exciting opportunity for someone passionate about technology, data, and machine learning, eager to contribute to a collaborative environment. The Data Scientist will build and deploy machine learning models to support personalization, recommendations, anomaly detection, and data insights. The role involves working with … as monitoring, continuous integration, and automated retraining. Utilize AI-assisted development tools like Cursor and Copilot to improve productivity. Collaborate with engineers, DevOps, and leadership to ensure robust data pipelines and translate business needs into technical solutions.
Requirements
- At least 3 years of experience in applied machine learning and deploying production models.
- Proficiency in Python, SQL, and frameworks … learn.
- Experience with AWS services and Databricks; an understanding of MLOps is highly beneficial.
- Ability to adapt quickly to new tools and deliver scalable solutions independently.
- Familiarity with data pipelines involving Kafka, Debezium, S3, Lambda, and Delta Lake is a plus.
This job posting is …
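As a toy illustration of the anomaly-detection work this role mentions (the readings and the 2.0 threshold are invented to suit the tiny example; a production model on Databricks would be far richer), a z-score baseline looks like:

```python
import statistics

# Toy baseline anomaly detector: flag values more than `threshold` sample
# standard deviations from the mean. Threshold and readings are illustrative.

def zscore_anomalies(values, threshold=2.0):
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [x for x in values if abs(x - mean) / stdev > threshold]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0]
print(zscore_anomalies(readings))  # → [42.0]
```

A baseline like this is useful mainly as a sanity check against which learned models are compared.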
Amazon's eCommerce ontology - the authoritative source of product knowledge driving exceptional customer experiences. Applied Scientists in this role solve problems related to product classification, attribute extraction, ontology modeling, data integration and enrichment, and scalable knowledge services. It's challenging due to the vast scale, heterogeneous data sources, and evolving domains, but exciting for pushing boundaries in … from you!
Key job responsibilities
- Lead the research and development of novel AI solutions to enrich and curate Amazon's product ontology (Product Knowledge) at scale
- Develop scalable data processing pipelines and architectures to ingest, transform, and enrich product data from various sources (seller listings, customer reviews, etc.)
- Collaborate with engineers to design and implement robust … them to Product Knowledge
A day in the life
The Amazon product ontology is a structured knowledge base representing product types, attributes, classes, and relationships. It standardizes product data, enabling enhanced customer experiences through improved search and recommendations, streamlined selling processes, and internal data enrichment across Amazon's eCommerce ecosystem. You will work with the following stakeholders …
BI Developer/Data Analyst (Power BI)
Location: West Midlands
Salary: Up to £65,000
Type: Full-time, Office-based
About the Role
An established food manufacturer and supplier is implementing their first-ever ERP system, focusing on four key modules: Manufacturing, Compliance, Reliability, and Learning. Each module will take approximately 4-6 months to complete.
- To support … and maintain Power BI dashboards and reports delivering actionable insights.
- Collaborate with internal stakeholders and external implementation partners throughout the ERP rollout and ongoing system maintenance.
- Develop robust data pipelines and write advanced SQL queries for ETL processes and data analysis.
- Migrate and integrate data between systems to ensure seamless communication and reporting.
- Continuously … monitor and improve BI tools post-ERP implementation.
- Analyse large datasets to uncover trends supporting strategic decision-making.
- Ensure data integrity, security, and automation of workflows to maintain reliability and efficiency.
Benefits
- Competitive salary up to £65,000
- 25 days holiday plus bank holidays
- Onsite parking available
- Employee rewards and retail discounts
- Pension scheme (details to be confirmed) …
on highly impactful problems
- Promote a positive culture of collaboration through open and effective communication, particularly when addressing issues or raising concerns.
- Are able to form well-reasoned, data-driven or otherwise evidence-based arguments to influence key stakeholders across the business.
Required Skills and Experience
- Has 5+ years' commercial experience
- Expert level in JavaScript, TypeScript or Python
- Familiar with Postgres and K8s
- Feels at home in the AWS console
- Has built infrastructure with Terraform
Bonus points
- Has worked in an intelligence collection setting
- Experience with "big data" technologies, the management of data, and data pipelines
- Familiarity with functional programming concepts
- Has run production workloads of 1000s QPS
- Has been part of an "on …
Deployed Engineers sit at the intersection of product, engineering, and customer success. You'll own full-stack features end-to-end, with a focus on building for enterprise data requirements. You will collaborate closely with customer teams to architect and implement sophisticated data pipelines and APIs, directly fueling our cutting-edge agentic AI with terabytes of … and growing your AI skills in a truly AI-first company at the forefront of agentic systems.
What You'll Do
- Design and build scalable backend services and data pipelines, written in Python and deployed with Docker and Kubernetes.
- Integrate with enterprise ecosystems: enterprise software systems such as SAP and Oracle ERP, GraphQL/REST APIs, SFTP feeds, and … event buses (Kafka, Pulsar).
- Wrangle large, heterogeneous datasets: model, transform, and index multi-modal, multi-terabyte enterprise datasets for advanced workloads.
- Develop enterprise-level, next-generation AI systems with the support of Magentic's AI specialists.
- Ship complete customer features, from architecture and code to CI/CD, infra-as-code (Terraform), rollout, and user training. …
to humanity's enduring challenges. We are looking for a Senior DevSecOps Engineer to join the Pathogen Programme at EIT. In this role, you'll help ensure our data platform is built to the highest standards, with a strong emphasis on automation across the development lifecycle. You'll work closely with engineers to deploy data pipelines … their workflows. You'll be responsible for maintaining infrastructure, designing secure automation pipelines, managing cloud environments, and ensuring security and compliance. You'll collaborate with cross-functional teams (data engineers, backend, and full-stack developers) to build robust, automated deployment pipelines across our environments.
Key Responsibilities
- Design, implement, and maintain secure cloud infrastructure using Oracle Cloud Infrastructure (OCI) … tools like Terraform to enable secure, repeatable deployments.
- Implement and manage CI/CD pipelines, focusing on automated security testing, deployment, and monitoring.
- Ensure all aspects of the data platform (OCI infrastructure, data ingest pipelines, tool deployments, access controls, and monitoring) are developed, tested, and deployed using automation best practices.
- Support bioinformaticians in building pipelines that …
Requirements
The engineering challenge is building systems reliable enough to power these high-stakes decisions. You'll work across data pipelines processing billions of tokens, real-time simulation platforms, and B2B SaaS products that make complex AI feel intuitive.
- (Desirable) Experience with Python, TypeScript/JavaScript, PostgreSQL, or Google Cloud Platform
- Strong software engineering fundamentals with experience in modern programming languages
- (Desirable) Background in building scalable systems, APIs, and data-intensive applications
- Proven track record of shipping production software and taking ownership of systems end-to-end
- (Desirable) Familiarity with AI/ML systems, large language models, or high-throughput data processing
- Experience working collaboratively in cross-functional teams with diverse technical backgrounds
- (Desirable) … monitoring, testing, and GitHub Actions CI/CD pipelines to maintain high availability for systems that influence million-pound business decisions
- Collaborate Across Disciplines: Partner closely with our data science team to translate research innovations into robust, scalable production systems
- Drive Technical Excellence: Contribute to engineering standards and systematic approaches as we scale our platform and grow the …
London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
working £300 to £350 a day
A top-tier global consultancy firm is looking for an experienced Hadoop Engineer to join their team and contribute to large big data projects. The position requires a professional with a strong background in developing and managing scalable data pipelines, specifically using the Hadoop ecosystem and related tools. The role will focus on designing, building and maintaining scalable data pipelines using the big data Hadoop ecosystem and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the underlying systems.
The successful candidate should have the following key skills:
- Experience with Open Data Platform
- Hands-on experience with Python for scripting
- Apache Spark
- Prior experience of building ETL pipelines
- Data Modelling
6 Months Contract - Remote Working - £300 to £350 a day - Inside IR35
If you are an experienced Hadoop engineer looking for a new role then this is …
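Analysing infrastructure logs to derive insights, as this role describes, reduces at small scale to parse-and-aggregate. A stdlib-only sketch with an invented log format (a real engagement would run the equivalent over Spark on the Hadoop ecosystem):

```python
from collections import Counter

# Sketch: counting ERROR entries per service from infrastructure logs.
# The log line format below is an invented example, not one from the listing.

def error_counts_by_service(lines):
    """Count ERROR entries per service from lines shaped like
    '2024-01-05T10:00:00 ERROR payments timeout'."""
    counts = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "ERROR":
            counts[parts[2]] += 1
    return counts

logs = [
    "2024-01-05T10:00:00 ERROR payments timeout",
    "2024-01-05T10:00:01 INFO payments ok",
    "2024-01-05T10:00:02 ERROR ingest disk-full",
    "2024-01-05T10:00:03 ERROR payments timeout",
]
print(error_counts_by_service(logs).most_common())  # → [('payments', 2), ('ingest', 1)]
```

The same parse-filter-group shape maps directly onto a Spark job (map, filter, reduceByKey) once volumes outgrow a single machine.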
About Route Reports
Route Reports builds cutting-edge devices and software for inspecting roads, railways, and airports. We empower infrastructure teams with real-time data and analytics to keep infrastructure safe and costs down. Our AI hardware and software solution is market-leading in the UK, and we work with clients like Network Rail, Gatwick Airport, Essex Highways … develop the platform.
Key Responsibilities
- Architect and implement new features across our React front end and Python/JS backend.
- Build and integrate with RESTful APIs, microservices, and data pipelines
- Optimise application performance, load times, and reliability
- Collaborate on UX design, sprint planning, and code reviews
- Ensure security best practices and compliance with data-protection standards …
Are you a highly skilled Senior Backend Engineer with a passion for Data and Machine Learning, ready to make a global impact? Join Spotify's Commerce Platform to build the next generation of our robust, high-performing, and resilient payments ecosystem. This is a unique opportunity to solve complex challenges, work with ML-powered systems, and contribute to … prevention. The reliability of these systems is business-critical for Spotify.
What You'll Do
- Architect, design, and implement highly scalable backend services (Java/Python) and robust data pipelines that power Spotify's internal Commerce platform.
- Develop and enhance our ML-powered systems, taking solutions from concept to production.
- Take the lead on API design, platform development … and ensuring the scalability, reliability, and performance of our services.
- Collaborate with a talented, cross-functional team of engineers, product managers, and data scientists to deliver impactful solutions.
- Mentor other engineers and foster a culture of continuous learning and technical excellence within the team.
Who You Are
- You have significant experience building and scaling backend services (Java and …
models, we build the driving intelligence that feeds them. Our mission is to drive safer roads and fairer insurance through data. The Telematics team transforms raw phone sensor data into meaningful insights about how people drive. Using advanced signal processing and machine learning, we process high-frequency data to extract behavioural signals that power our understanding … of driving quality, context, and risk.
About the Role
As a Lead Data Scientist, you'll drive the technical direction of our behavioural modelling work. You'll lead the development of telematics features from idea to production, including exploratory analysis, signal design, risk evaluation, and scalable deployment. This is a hands-on leadership role where you'll design … things into production, values strong engineering, and has opinions on building reliable, scalable systems.
What you will be doing
- Drive the development of telematics features from mobile sensor data, including project design, modelling, validation, and deployment.
- Conduct deep dives into driving signals, develop hypotheses, and evaluate their predictive power for risk.
- Design technical solutions with clear trade-offs …
business units.
- Design AI-based architectures and recommend appropriate tools, technologies, and methodologies
- Collaborate with business leaders to understand objectives and translate them into scalable AI models
- Architect data pipelines to support machine learning workflows and model deployments
- Evaluate and implement deep learning, computer vision, NLP, and predictive modeling solutions
- Conduct risk assessments on AI implementations and mitigate … compliance or ethical concerns
- Partner with DevOps to integrate AI models into production systems with CI/CD
- Define enterprise AI standards and ensure reusability of components
- Mentor data scientists and engineers in applying best practices
- Present technical documentation and demos to internal and external stakeholders
- Build prototypes to validate AI use cases and accelerate adoption
- Ensure AI …
different types of databases: Relational, Graph, etc.
- Design and optimise APIs using Python and FastAPI to serve AI solutions.
- Familiar with the GCP ecosystem and Cloud Run
- Build and optimise data pipelines for vector search and knowledge retrieval using vector databases and embedding models.
What We're Looking For:
- Professional AI engineering experience.
- Background in Software Engineering with Python.
- Solid … Excellent communication skills and the ability to work well in a collaborative team environment
Nice to have:
- Strong experience with GCP
- Experience with Graph databases
- Experience in bringing data-intensive projects into production
- Experience with CI/CD pipelines
Benefits:
- Private Health Insurance: Comprehensive coverage for both physical and mental health.
- Flexible and Remote-First Work Environment: Choose …
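The vector-search retrieval this posting mentions rests on one core operation: ranking stored vectors by similarity to a query. A toy sketch (the vectors and document ids are invented; a real system would use a vector database and learned embedding models):

```python
import math

# Sketch: top-k retrieval by cosine similarity, the core operation behind
# vector-search pipelines. Vectors and ids below are toy values.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query, corpus, k=2):
    """Rank corpus entries (doc_id, vector) by similarity to the query."""
    scored = [(doc_id, cosine(query, vec)) for doc_id, vec in corpus]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

corpus = [("doc-a", [1.0, 0.0]), ("doc-b", [0.7, 0.7]), ("doc-c", [0.0, 1.0])]
print(top_k([1.0, 0.1], corpus))
```

Dedicated vector stores replace the linear scan with approximate nearest-neighbour indexes, but the ranking semantics stay the same.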
OpenTofu, Terragrunt or Pulumi, we want to hear from you. This isn't just about building new things. You'll be enhancing and maturing one of the largest health data platforms in the world, driving improvements, fixing bugs, and supporting a system that truly matters. You'll be hands-on with infrastructure-as-code, GitOps workflows, and engineering best … will include, but not be limited to, the participant-facing websites, import and processing of high-volume health, NHS and genetic datasets to the de-identification/sharing of data into accredited Trusted Research Environments (TREs). At Our Future Health, our mission is to transform the prevention, detection and treatment of conditions such as dementia, cancer, diabetes, heart disease … Hands-on experience working directly with software engineering best practices: unit testing, code reviews, design documentation, excellent debugging and troubleshooting skills. Experience in building/deploying tools related to data pipelines and ETL processes. Confident with cloud-native technologies like Kubernetes and Docker. Experience deploying open-source technologies such as Python, Node.js, Ruby, Postgres and related CI/CD pipelines. Good …
an engineer early in their career, or a more experienced engineer looking to take on new challenges. As part of Spotify's Commerce Platform, we manage backend services and datasets that power end-to-end merchandising, order management, payment orchestration and purchase flows. You have the chance to create a significant impact by evolving our technology stack aimed … team that implements change to complex systems at high scale. Collaborate with talented peers and teams across Spotify to deliver value to our users. Gain exposure to backend, data and ML-based systems.
Who You Are
- Experience building and scaling Java or Python services on a large-scale cloud platform, such as Google Cloud Platform, or equivalent.
- Exposure … to scalable database technologies, such as Postgres.
- Exposure to data and data pipelines; being adept is a plus.
- Excellent problem-solving skills with a strong bias for action.
- Strong writing and communication skills.
- Comfortable driving engineering deep dives and workshops, as well as stakeholder demos.
- Self-driven and enjoys being part of a team that works …
prioritise innovation and growth - they're considered one of the most well-funded and exciting InsurTech scale-ups in the UK.
ROLE AND RESPONSIBILITIES
- Build and design robust data models, working end-to-end across modelling in DBT and ETL processes
- Develop and maintain scalable data pipelines using SQL and Snowflake
- Work closely with analysts, data scientists and product teams to ensure data is reliable and well-structured
- Own the design and implementation of new models, not just building but shaping how they're developed
- Contribute to improving analytics engineering standards and best practices
- Communicate findings and recommendations clearly across technical and non-technical stakeholders
SKILLS AND EXPERIENCE
Required: 3+ years in …
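The DBT-style modelling this role describes is, at its core, SQL that turns raw tables into analyst-ready marts. A sketch using SQLite as a stand-in for Snowflake (table and column names are invented; in the role itself the model would live in DBT):

```python
import sqlite3

# Sketch: a DBT-style SQL transformation run against SQLite for illustration.
# Table and column names are hypothetical, not from the listing.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_policies (policy_id TEXT, premium REAL, region TEXT);
    INSERT INTO raw_policies VALUES
        ('P1', 120.0, 'uk'), ('P2', 80.0, 'uk'), ('P3', 200.0, 'ie');
""")
# The "model": aggregate raw records into an analyst-friendly mart table.
conn.execute("""
    CREATE TABLE mart_premium_by_region AS
    SELECT region, COUNT(*) AS policies, SUM(premium) AS total_premium
    FROM raw_policies GROUP BY region
""")
rows = conn.execute(
    "SELECT region, policies, total_premium"
    " FROM mart_premium_by_region ORDER BY region"
).fetchall()
print(rows)  # → [('ie', 1, 200.0), ('uk', 2, 200.0)]
```

DBT's contribution on top of the raw SQL is dependency ordering, testing, and documentation of models like `mart_premium_by_region`.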
Senior Software Engineer/Architect - £80,000-£100,000 (Hybrid - Cambridge HQ)
An emerging startup at the forefront of drone data systems for commercial and government use is looking for a Senior Software Engineer/Architect to join their growing team. This is a unique opportunity to shape cloud-native platforms that convert multi-sensor drone data into real-time insights. You'll work closely with ML engineers and systems architects to design secure, scalable backend systems and data infrastructure from the ground up.
Key Responsibilities:
- Design and build robust, cloud-native backend systems using C++, Python, or Go
- Develop scalable, secure infrastructure on AWS
- Collaborate with cross-functional teams including ML and … data engineers
- Contribute to the architecture of data pipelines and system integrations
- Lead backend development best practices in a fast-paced startup environment
- Support the evolution of systems from prototype to production-ready platforms
Required Skills & Experience:
- Strong backend development skills in C++, Python, or Go
- Proven experience with AWS and cloud infrastructure
- Expertise in building …
control and branching methodologies using Git
- Application integration using SOAP web services and REST APIs
- OWASP Top 10 security framework
- Agile and SCRUM
- Significant experience developing and implementing data solutions in a high-volume data loading environment
- Excellent understanding of the SSIS framework, ADF data pipelines, administration, maintenance, and configuration
- Significant experience and clear demonstration …