Guildford, Surrey, United Kingdom Hybrid / WFH Options
Deloitte LLP
for Deloitte landscape and use cases. Build data pipelines, models, and AI applications using cloud platforms and frameworks such as Azure AI/ML Studio, AWS Bedrock, GCP Vertex, Spark, TensorFlow, PyTorch, etc. Build and deploy production-grade fine-tuned LLMs and complex RAG architectures. Create and manage complex, robust prompts across the GenAI solutions. Communicate effectively …
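The RAG and prompt-management work described above reduces to a retrieve-then-generate loop: embed the query, rank stored chunks by similarity, and inject the winners into a managed prompt. A minimal self-contained sketch of that loop (the embed() stand-in and prompt template are illustrative assumptions, not Deloitte's stack, and the actual LLM call is omitted):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (Azure AI, Bedrock, or Vertex would
    # host one); deterministic random vectors keep the sketch self-contained.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    # Rank chunks by cosine similarity to the query (unit vectors, so a dot product).
    q = embed(query)
    return sorted(chunks, key=lambda c: float(q @ embed(c)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # The managed prompt: instruct the model to stay grounded in retrieved context.
    return ("Answer using only the context below.\n\n"
            + "\n---\n".join(context)
            + f"\n\nQuestion: {query}")
```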
business value and how to deliver this end to end, then this is the role you've been searching for. We're looking for someone with an entrepreneur's spark who wants to collaborate with business partners in an agile manner to create ah-ha moments, which fuel the thirst for more insights and data-driven solutions. You'll …
REST APIs and integration techniques Familiarity with data visualization tools and libraries (e.g. Power BI) Background in database administration or performance tuning Familiarity with data orchestration tools, such as Apache Airflow Previous exposure to big data technologies (e.g. Hadoop, Spark) for large data processing Strong analytical skills, including a thorough understanding of how to interpret customer business requirements …
Ashburn, Virginia, United States Hybrid / WFH Options
Adaptive Solutions, LLC
Minimum of 3 years' experience building and deploying scalable, production-grade AI/ML pipelines in AWS and Databricks • Practical knowledge of tools such as MLflow, Delta Lake, and Apache Spark for pipeline development and model tracking • Experience architecting end-to-end ML solutions, including feature engineering, model training, deployment, and ongoing monitoring • Familiarity with data pipeline orchestration …
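On the model-tracking point: MLflow records each training run's parameters and metrics so that deployed models remain reproducible and auditable. A minimal sketch, with a hypothetical run name, params, and metrics rather than anything from this role:

```python
import mlflow

# All values below are illustrative; a real pipeline would log its own config.
with mlflow.start_run(run_name="demand-forecast"):
    mlflow.log_param("max_depth", 8)          # training configuration
    mlflow.log_param("training_rows", 1_200_000)
    mlflow.log_metric("rmse", 4.2)            # evaluation result for this run
    mlflow.log_metric("rmse", 3.9, step=2)    # metrics are step-indexed time series
    mlflow.set_tag("stage", "staging")        # supports promotion and monitoring
```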
South West London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
on experience with cloud platforms like AWS, Azure, GCP, or Snowflake. Strong knowledge of data governance, compliance, and security standards (GDPR, CCPA). Proficiency in big data technologies like Apache Spark and understanding of data product strategies. Strong leadership and stakeholder management skills in Agile delivery environments. Package: £90,000 - £115,000 base salary, bonus, pension and company …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
WorksHub
that help us achieve our objectives. So each team leverages the technology that fits their needs best. You'll see us working with data processing/streaming technologies like Kinesis, Spark and Flink; application technologies like PostgreSQL, Redis & DynamoDB; and breaking things using in-house chaos principles and tools such as Gatling to drive load, all deployed and hosted on …
Experience in ML and AI, or other deep learning-related independent project work with evidence of completion. • Experience with data ingestion or processing tools and frameworks, such as Docling, Spark, Pandas, Feast, Airflow, NumPy. • Experience with retrieval augmented generation approaches, including vector databases and dense retrieval methods. • Knowledgeable in various synthetic data generation approaches, such as GANs, VAEs, or …
researching new technologies and software versions Working with cloud technologies and different operating systems Working closely alongside Data Engineers and DevOps engineers Working with big data technologies such as Spark Demonstrating stakeholder engagement by communicating with the wider team to understand the functional and non-functional requirements of the data and the product in development and its relationship to
… networks into production Experience with Docker Experience with NLP and/or computer vision Exposure to cloud technologies (e.g. AWS and Azure) Exposure to big data technologies Exposure to Apache products, e.g. Hive, Spark, Hadoop, NiFi Programming experience in other languages This is not an exhaustive list, and we are keen to hear from you even if you …
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
options Hybrid working - 1 day a week in a central London office High-growth scale-up with a strong mission and serious funding Modern tech stack: Python, SQL, Snowflake, Apache Iceberg, AWS, Airflow, dbt, Spark Work cross-functionally with engineering, product, analytics, and data science leaders What You'll Be Doing Lead, mentor, and grow a high-impact …
these roles include: Multiple Databricks projects delivered Excellent consulting and client-facing experience 7-10+ years' experience of consulting in Data Engineering, Data Platform and Analytics Deep experience with Apache Spark, PySpark CI/CD for Production deployments Working knowledge of MLOps Strong experience with optimisation for performance and scalability These roles will be paid at circa …
experience as a Data Engineer (3-5 years); Deep expertise in designing and implementing solutions on Google Cloud; Strong interpersonal and stakeholder management skills; In-depth knowledge of Hadoop, Spark, and similar frameworks; In-depth knowledge of programming languages including Java; Expert in cloud-native technologies, IaC, and Docker tools; Excellent project management skills; Excellent communication skills; Proactivity; Business …
and applying best practices in security and compliance, this role offers both technical depth and impact. Key Responsibilities Design & Optimise Pipelines - Build and refine ETL/ELT workflows using Apache Airflow for orchestration. Data Ingestion - Create reliable ingestion processes from APIs and internal systems, leveraging tools such as Kafka, Spark, or AWS-native services. Cloud Data Platforms - Develop
… DAGs and configurations. Security & Compliance - Apply encryption, access control (IAM), and GDPR-aligned data practices. Technical Skills & Experience Proficient in Python and SQL for data processing. Solid experience with Apache Airflow - writing and configuring DAGs. Strong AWS skills (S3, Redshift, etc.). Big data experience with Apache Spark. Knowledge of data modelling, schema design, and partitioning. Understanding of …
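To make the orchestration side concrete, here is a minimal Airflow DAG sketch of an extract-then-load workflow; the dag_id, schedule, and task bodies are illustrative assumptions, not this employer's pipeline:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # e.g. pull from an API or internal system (Kafka/Spark/AWS-native)

def load():
    ...  # e.g. land in S3, then copy into Redshift

with DAG(
    dag_id="daily_elt",                # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task          # extract must finish before load runs
```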
know your way around complex joins and large datasets. Git - Version control is second nature. You know how to branch, commit, and collaborate cleanly. Bonus Skills (nice to have): Apache Hadoop, Spark/Docker, Kubernetes/Grafana, Prometheus, Graylog/Jenkins/Java, Scala/Shell scripting Team · Our Tech Stack We build with the tools we love …
you'll bring: Experienced user of at least one analytical program, such as R, Python, SAS/WPS, or equivalent. Experience using Java, Scala, R or Python within Apache Spark. Worked with distributed file systems (such as HDFS) Proficient in the use of MS Excel (can perform complex functions). Experience interpreting analysis outputs, providing insight and making …
Bromsgrove, Worcestershire, United Kingdom Hybrid / WFH Options
Talk Recruitment
Stress-testing, performance-tuning, and optimization skills. Debugging in multi-threaded environments. Eligible to work in the UK. Desirable Skills: Technologies such as Zookeeper, Terraform, Ansible, Cassandra, RabbitMQ, Kafka, Spark, Redis, MongoDB, Cosmos DB, Xsolla Backend (AcceleratXR), Pragma, Playfab, Epic Online Services, Unity Game Services, Firebase, Edgegap, Photon Game engine experience with Unreal or Unity Web application development experience (NodeJS …
Falls Church, Virginia, United States Hybrid / WFH Options
Rackner
in Kubernetes (AWS EKS, Rancher) with CI/CD Apply DevSecOps + security-first practices from design to delivery Tech You'll Touch: AWS, Python, FastAPI, Node.js, React, Terraform, Apache Airflow, Trino, Spark, Hadoop, Kubernetes You Have: Active Secret Clearance; 3+ years in Agile, cloud-based data engineering; Experience with API design, ORM + SQL, AWS data services …
on experience across AWS Glue, Lambda, Step Functions, RDS, Redshift, and Boto3. Proficient in one of Python, Scala or Java, with strong experience in Big Data technologies such as: Spark, Hadoop etc. Practical knowledge of building real-time event streaming pipelines (e.g., Kafka, Spark Streaming, Kinesis). Proven experience developing modern data architectures including Data Lakehouse and Data
… and data governance including GDPR. Bonus Points For Expertise in Data Modelling, schema design, and handling both structured and semi-structured data. Familiarity with distributed systems such as Hadoop, Spark, HDFS, Hive, Databricks. Exposure to AWS Lake Formation and automation of ingestion and transformation layers. Background in delivering solutions for highly regulated industries. Passion for mentoring and enabling data …
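For a concrete picture of the real-time streaming requirement, a minimal PySpark Structured Streaming sketch that reads from Kafka and lands Parquet in a lakehouse-style bucket; the broker, topic, and paths are hypothetical, and the spark-sql-kafka connector must be on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/events/")    # lakehouse landing zone
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()
```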
contributing to knowledge sharing across the team. What We're Looking For Proficient in one of Python, Scala or Java, with strong experience in Big Data technologies such as: Spark, Hadoop etc. Practical knowledge of building real-time event streaming pipelines (e.g., Kafka, Spark Streaming, Kinesis). Proficiency in AWS cloud environments. Proven experience developing modern data architectures
… and data governance including GDPR. Bonus Points For Expertise in Data Modelling, schema design, and handling both structured and semi-structured data. Familiarity with distributed systems such as Hadoop, Spark, HDFS, Hive, Databricks. Exposure to AWS Lake Formation and automation of ingestion and transformation layers. Background in delivering solutions for highly regulated industries. Passion for mentoring and enabling data …
Role: Data Engineer
Role type: Permanent
Location: UK or Greece
Preferred start date: ASAP
LIFE AT SATALIA
As an organization, we push the boundaries of data science, optimization, and artificial intelligence to solve the hardest problems in industry. Satalia is …
in delivering data architecture projects - ideally across both traditional and modern cloud-native stacks Knowledge of at least 3 mainstream data technologies (e.g. AWS data services, SQL/NoSQL, Spark, Kafka, etc.) Confident communicator - able to influence clients and internal stakeholders, and represent the company in external forums A self-starter mindset, with the ability to lead in uncertain …
to solve any given problem. Technologies We Use A variety of languages, including Java, Python, Rust and Go for backend and TypeScript for frontend Open-source technologies like Cassandra, Spark, Iceberg, ElasticSearch, Kubernetes, React, and Redux Industry-standard build tooling, including Gradle for Java, Cargo for Rust, Hatch for Python, Webpack & PNPM for TypeScript What We Value Strong engineering …
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
Drive best practices around CI/CD, infrastructure-as-code, and modern data tooling Introduce and advocate for scalable, efficient data processes and platform enhancements Tech Environment: Python, SQL, Spark, Airflow, dbt, Snowflake, Postgres, AWS (S3), Docker, Terraform Exposure to Apache Iceberg, streaming tools (Kafka, Kinesis), and ML pipelines is a bonus What We're Looking For: 5+ …
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
AI to move faster and smarter. You will be experienced in AI and enjoy writing code. Responsibilities Build and maintain scalable distributed systems using Scala and Java Design complex Spark jobs, asynchronous APIs, and parallel processes Use Gen AI tools to enhance development speed and quality Collaborate in Agile teams to improve their data collection pipelines Apply best practices
… structures, algorithms, and design patterns effectively Foster empathy and collaboration within the team and with customers Preferred Experience Degree in Computer Science or equivalent practical experience Commercial experience with Spark, Scala, and Java (Python is a plus) Strong background in distributed systems (Hadoop, Spark, AWS) Skilled in SQL/NoSQL (PostgreSQL, Cassandra) and messaging tech (Kafka, RabbitMQ) Experience …
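The role centres on Scala and Java Spark, but since Python is noted as a plus, the shape of a simple distributed batch job can be sketched in PySpark for consistency with the examples above; the input path and columns are illustrative assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("collection-pipeline").getOrCreate()

# Hypothetical source: raw JSON events with timestamp, source, and user_id fields.
raw = spark.read.json("s3a://example-bucket/raw/")

daily = (
    raw.withColumn("day", F.to_date("timestamp"))
       .groupBy("day", "source")
       .agg(F.count("*").alias("events"),
            F.countDistinct("user_id").alias("users"))
)

# Partitioning by day keeps downstream scans cheap.
daily.write.mode("overwrite").partitionBy("day").parquet("s3a://example-bucket/agg/")
```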