Please rate your proficiency (out of 5) in the following skills: AWS, Core Java, Microservices/REST, Spring, Coding skills, Terraform/Ansible, UI/Angular, Database/SQL, Apache Spark. About the Role We are seeking a highly skilled and experienced Senior Java Developer to design, develop, and maintain robust, scalable, and high-performance applications. The ideal … to deliver secure, efficient, and maintainable software solutions. • Implement and manage cloud infrastructure using AWS services. • Automate deployment and infrastructure provisioning using Terraform or Ansible. • Optimize application performance using Apache Spark for data processing where required. • Write clean, efficient, and maintainable code following best coding practices. • Troubleshoot, debug, and resolve complex technical issues in production and development environments. … RDS, etc.). • Proficiency in Terraform or Ansible for infrastructure automation. • Working knowledge of Angular or similar UI frameworks. • Solid understanding of SQL and relational database design. • Experience with Apache Spark for distributed data processing (preferred). • Strong problem-solving, analytical, and debugging skills. • Excellent communication and teamwork abilities. Nice to Have • Experience in CI/CD pipelines …
while staying close to the code. Perfect if you want scope for growth without going "post-technical." What you'll do Design and build modern data platforms using Databricks, Apache Spark, Snowflake, and cloud-native services (AWS, Azure, or GCP). Develop robust pipelines for real-time and batch data ingestion from diverse and complex sources. Model and … for Solid experience as a Senior/Lead Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong understanding of data modelling, orchestration, and automation. Hands …
generation data platform at FTSE Russell - and we want you to shape it with us. Your role will involve: Designing and developing scalable, testable data pipelines using Python and Apache Spark Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 Applying modern software engineering practices: version control, CI/CD, modular design, and automated … testing Contributing to the development of a lakehouse architecture using Apache Iceberg Collaborating with business teams to translate requirements into data-driven solutions Building observability into data flows and implementing basic quality checks Participating in code reviews, pair programming, and architecture discussions Continuously learning about the financial indices domain and sharing insights with the team WHAT YOU'LL BRING … ideally with type hints, linters, and tests like pytest) Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines Has experience with or is eager to learn Apache Spark for large-scale data processing Is familiar with the AWS data stack (eg S3, Glue, Lambda, EMR) Enjoys learning the business context and working closely with stakeholders …
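The listing above emphasises testable pipelines in Python (type hints, pytest) and handling schema evolution in batch ETL. As a minimal, hypothetical sketch of what that looks like in practice — the record shape, field names, and rename are invented for illustration, not taken from the role — a batch transform that tolerates an old and a new schema version might be:

```python
from datetime import date

# Hypothetical example: incoming index-price records may use an old or a new
# schema ("px" was renamed to "close_price" between versions), and the
# transform must accept both (schema evolution).
def normalise_record(raw: dict) -> dict:
    """Normalise one raw record into the target schema."""
    price = raw.get("close_price", raw.get("px"))  # tolerate the old field name
    if price is None:
        raise ValueError("record has no price field")
    return {
        "index_id": str(raw["index_id"]),
        "trade_date": date.fromisoformat(raw["trade_date"]),
        "close_price": float(price),
    }

def run_batch(records: list[dict]) -> list[dict]:
    """Batch-process raw records (the 'transform' step of an ETL job)."""
    return [normalise_record(r) for r in records]

# pytest-style checks, runnable directly
old_style = {"index_id": 1, "trade_date": "2024-01-02", "px": "101.5"}
new_style = {"index_id": 1, "trade_date": "2024-01-03", "close_price": "102.0"}
out = run_batch([old_style, new_style])
assert out[0]["close_price"] == 101.5
assert out[1]["trade_date"] == date(2024, 1, 3)
```

In a real pipeline the same pure function would typically be applied inside a Spark job (for example through a DataFrame transformation), with the assertions living in a pytest suite run by CI — which is what keeps the pipeline "testable" in the sense the role describes.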
You May Be a Good Fit If You Have Strong software engineering skills, with proficiency in Python and experience building data pipelines. Familiarity with data processing frameworks such as Apache Spark, Apache Beam, Pandas, or similar tools. Experience working with large-scale web datasets like CommonCrawl. A passion for bridging research and engineering to solve complex data …
two of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD …
of experienced leaders from Big Tech and Scale-ups. Opportunity to build an AI-native company from the ground up, architecting the data foundation using cutting-edge technologies like Apache Iceberg. What you will do: Design, implement, and maintain scalable data pipelines that ingest gigabytes to terabytes of security data daily, processing millions of records rapidly. Architect and evolve … S3-based data lake infrastructure using Apache Iceberg, creating distributed systems for efficient storage and transformations. Take end-to-end ownership of the complete data lifecycle, from Kafka ingestion to Spark/EMR transformations, enabling AI-powered analysis. The ideal candidate: 7+ years of software engineering experience with at least 4+ years focused specifically on data engineering, demonstrating … Proven track record building and scaling data ingestion systems handling gigabytes to terabytes daily, with experience at companies moving massive data volumes. Deep, hands-on production experience with Python, Apache Kafka, and Apache Spark, using these technologies intimately. How to Apply: To apply for this job speak to Jack, our AI recruiter. Step 1. Visit our website …
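The lifecycle described above moves records from Kafka into an S3-based lake for Spark/EMR transformation. One small, self-contained piece of such a system is grouping a record stream into size-bounded batches before each object write, since object stores favour fewer, larger files. The sketch below is a hypothetical illustration in plain Python (no Kafka consumer or S3 client), not the company's actual code:

```python
from typing import Iterable, Iterator

# Hypothetical sketch: group a stream of serialized records (e.g. consumed
# from Kafka) into batches no larger than max_batch_bytes, so each batch can
# be written as one object (e.g. an S3 file a Spark/EMR job later reads).
def batch_by_bytes(records: Iterable[bytes],
                   max_batch_bytes: int) -> Iterator[list[bytes]]:
    batch: list[bytes] = []
    size = 0
    for rec in records:
        # Start a new batch if adding this record would exceed the limit.
        if batch and size + len(rec) > max_batch_bytes:
            yield batch
            batch, size = [], 0
        batch.append(rec)
        size += len(rec)
    if batch:  # flush the final partial batch
        yield batch

events = [b"x" * 40 for _ in range(10)]  # ten 40-byte records
batches = list(batch_by_bytes(events, max_batch_bytes=100))
assert all(sum(len(r) for r in b) <= 100 for b in batches)  # size bound holds
assert sum(len(b) for b in batches) == 10                   # no records lost
```

Keeping batching logic pure like this makes the size-bounding behaviour straightforward to unit-test independently of the Kafka consumer and the object-store writer.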
platform. Candidate Profile: Proven experience as a Data Engineer, with strong expertise in designing and managing large-scale data systems. Hands-on proficiency with modern data technologies such as Spark, Kafka, Airflow, or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. …
Nottingham, Nottinghamshire, East Midlands, United Kingdom Hybrid/Remote Options
Client Server
Data Software Engineer (Python Spark SaaS) Nottingham/WFH to £100k Are you a data-centric Software Engineer with strong Python coding skills? You could be progressing your career in a senior, hands-on Data Software Engineer role at a scaling, global technical services company as they look to expand their product offerings with a new SaaS data analytics … technical challenges, you'll be collaboratively problem solving as part of an Agile development team, using a range of technology to create data pipelines with a focus on Python and Spark; you'll be working with Azure, ETL pipelines and CI/CD, ingesting and analysing terabytes of data with varying structures from a range of sources. You'll be … Nottingham office. About you: You have strong Python backend software engineering skills You have experience working with large data sets You have experience of using PySpark and ideally also Apache Spark You believe in automating wherever possible You're a collaborative problem solver with great communication skills Other technology in the stack includes: FastAPI, Django, Airflow, Kafka, ETL …
tick-level data pipelines for financial or crypto markets. Prior experience in low-latency or high-availability systems preferred. Skills Azure: ADLS Gen2, Event Hubs, Synapse Analytics, Azure Databricks (Spark), Azure Functions, Azure Data Factory/Databricks Workflows, Key Vault, Azure Monitoring/Log Analytics; IaC with Terraform/Bicep; CI/CD with Azure DevOps or GitHub Actions. …
control, code review), and with project management tools (JIRA or similar) in an agile environment. Familiarity with other ML Ops tools (Kubeflow, MLflow, etc.) or big data processing frameworks (Spark) can be an added advantage. Rewards and Benefits We believe in supporting our employees in both their professional and personal lives. As part of our commitment to your well…
High Wycombe, Buckinghamshire, UK Hybrid/Remote Options
Williams Lea
control, code review), and with project management tools (JIRA or similar) in an agile environment. Familiarity with other ML Ops tools (Kubeflow, MLflow, etc.) or big data processing frameworks (Spark) can be an added advantage. Rewards and Benefits We believe in supporting our employees in both their professional and personal lives. As part of our commitment to your well…