… skills in Python and SQL. Demonstrable hands-on experience in AWS cloud data ingestion, both batch and streaming, and data transformation (Airflow, Glue, Lambda, Snowflake Data Loader, Fivetran, Spark, Hive, etc.). Apply agile thinking to your work, delivering in iterations that incrementally build on what went before. Excellent problem-solving and analytical skills. Good written and verbal … translate concepts into easily understood diagrams and visuals for both technical and non-technical people alike. AWS cloud products (Lambda functions, Redshift, S3, Amazon MQ, Kinesis, EMR, RDS (Postgres)). Apache Airflow for orchestration. dbt for data transformations. Machine learning for product insights and recommendations. Experience with microservices using technologies like Docker for local development. Apply engineering best practices to …
… and reliability across our platform. Working format: full-time, remote. Schedule: Monday to Friday (the working day is 8+1 hours). Responsibilities: Design, develop, and maintain data pipelines using Apache Airflow. Create and support data storage systems (data lakes/data warehouses) based on AWS (S3, Redshift, Glue, Athena, etc.). Integrate data from various sources, including mobile … attribution, retention, LTV, and other mobile metrics. Ability to collect and aggregate user data from mobile sources for analytics. Experience building real-time data pipelines (e.g., Kinesis, Kafka, Spark Streaming). Hands-on CI/CD experience with GitHub. Startup or small team experience - the ability to quickly switch between tasks, suggest lean architectural solutions, make independent …
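As an illustration of the orchestration work this posting describes, here is a minimal sketch of an Airflow DAG. It assumes Airflow 2.x, and the DAG ID, task names, and schedule are hypothetical, not taken from the posting.

```python
# Minimal illustrative Airflow DAG (assumes Airflow 2.x; on 2.4+ you can pass
# schedule= instead of schedule_interval=). All IDs and names are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_mobile_events(**context):
    # Placeholder: fetch raw attribution/retention events for the run date.
    print(f"extracting events for {context['ds']}")


def load_to_warehouse(**context):
    # Placeholder: load transformed events into Redshift (or another warehouse).
    print(f"loading events for {context['ds']}")


with DAG(
    dag_id="mobile_events_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_mobile_events)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load  # extract must succeed before the warehouse load runs
```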
… Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that …
… as Computer Science, Statistics, Applied Mathematics, or Engineering - Strong experience with Python and R - A strong understanding of a number of tools across the Hadoop ecosystem, such as Spark, Hive, Impala & Pig - Expertise in at least one specific data science area, such as text mining, recommender systems, pattern recognition or regression models - Previous experience in leading a …
… learning, data processing technologies and a broad set of AWS technologies. To drive the expansion of Amazon selection, we use cluster-computing technologies such as MapReduce and Spark to process billions of products and find the products/brands not already sold on Amazon. We work with structured and unstructured content such as text and images and …
… are a skilled programmer with a strong command of major machine learning languages such as Python or Scala, and have expertise in utilising statistical and machine learning libraries like Spark MLlib, scikit-learn, or PyTorch to write clear, efficient, and well-documented code. Experience with optimisation techniques, control theory, causal modelling or elasticity modelling is desirable. Prior experience in …
… in their career path, as well as the rollout of a new tool. You prepare your section of a weekly business review document to review data-driven insights and spark discussion with leadership on team wins and areas for improvement. BASIC QUALIFICATIONS: 6+ years of professional experience; 3+ years of experience managing direct reports; 2+ years of experience in programmatic advertising; Experience …
… technology to solve a given problem. Right now, we use: • A variety of languages, including Java and Go for backend and TypeScript for frontend • Open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux • Industry-standard build tooling, including Gradle, CircleCI, and GitHub. What We Value: Passion for helping other developers build better applications. Empathy for the impact your …
… Experience in AWS cloud services, particularly Lambda, SNS, S3, EKS, and API Gateway. Knowledge of data warehouse design, ETL/ELT processes, and big data technologies (e.g., Snowflake, Spark). Familiarity with data governance and compliance frameworks (e.g., GDPR, HIPAA). Strong communication and stakeholder management skills. Analytical mindset with attention to detail. Ability to lead and mentor … developing and implementing enterprise data models. Experience with interface/API data modelling. Experience with CI/CD using GitHub Actions (or similar). Knowledge of Snowflake/SQL. Knowledge of Apache Airflow. Knowledge of dbt. Familiarity with Atlan for data catalog and metadata management. Understanding of Iceberg tables. Who we are: We're a business with a global reach that empowers …
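For the "Iceberg tables" requirement above, a hedged sketch of creating and querying an Apache Iceberg table from PySpark. The catalog name, warehouse path, and the need for a matching iceberg-spark-runtime jar on the classpath are assumptions for illustration, not details from the posting.

```python
# Hedged sketch: Apache Iceberg from PySpark. Assumes the matching
# iceberg-spark-runtime jar for your Spark version is on the classpath;
# catalog name and warehouse path are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg_sketch")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Iceberg tables support schema evolution, hidden partitioning, and time travel.
spark.sql("CREATE TABLE IF NOT EXISTS demo.db.events (id BIGINT, source STRING) USING iceberg")
spark.sql("INSERT INTO demo.db.events VALUES (1, 'mobile'), (2, 'web')")
spark.sql("SELECT * FROM demo.db.events").show()
```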
… you prefer. Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python. Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with …
Tenth Revolution Group. This role is advertised for Manchester, London, City of London, and Birmingham, United Kingdom (Hybrid / WFH options).
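To ground the pipeline work this posting describes: it names Spark with Scala and Java, but for consistency with the other examples on this page here is the same shape of batch job sketched in PySpark. The paths, column names, and aggregation are hypothetical.

```python
# Hedged PySpark sketch of a scalable batch pipeline: read raw events,
# aggregate daily counts, write a partitioned curated table.
# Bucket paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event_aggregation").getOrCreate()

# Hypothetical raw-event location; on EMR s3:// works, elsewhere use s3a://
# with hadoop-aws configured.
events = spark.read.parquet("s3://example-bucket/raw/events/")

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"  # hypothetical output path
)
```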
… Engineers work alongside machine learning engineers, BI developers and data scientists in cross-functional teams with key impacts and visions, using your skills with SQL, Python, data modelling and Spark to ingest and transform high-volume, complex raw event data into user-friendly, high-impact tables. As a department we strive to give our Data Engineers high levels … Be deploying applications to the Cloud (AWS). We'd love to hear from you if you: Have strong experience with Python & SQL. Have experience developing data pipelines using dbt, Spark and Airflow. Have experience with data modelling (building optimised and efficient data marts and warehouses in the cloud). Work with Infrastructure as Code (Terraform) and containerised applications (Docker). Work with …
It has come to our notice that Fractal Analytics' name and logo are being misused by certain unscrupulous persons masquerading as Fractal's authorized representatives, approaching job seekers and asking them to part with sensitive personal information and/or money in …
West Midlands, United Kingdom Hybrid / WFH Options
Experis
… data pipelines within enterprise-grade on-prem systems. Key Responsibilities: Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure. Build and optimise workflows using Apache Airflow and Spark Streaming for real-time data processing. Develop robust data engineering solutions using Python for automation and transformation. Collaborate with infrastructure and analytics teams to support … platform. Ensure compliance with enterprise security and data governance standards. Required Skills & Experience: Minimum 5 years of experience in Hadoop and data engineering. Strong hands-on experience with Python, Apache Airflow, and Spark Streaming. Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments. Exposure to data analytics, preferably involving infrastructure or operational data. Experience …
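For the real-time processing side of this posting, a minimal Structured Streaming sketch in PySpark (the posting says "Spark Streaming"; Structured Streaming is the current API for this). It assumes a Kafka source and the spark-sql-kafka package on the classpath; the broker, topic, and HDFS paths are hypothetical.

```python
# Hedged sketch: Spark Structured Streaming from Kafka to HDFS.
# Assumes the spark-sql-kafka-0-10 package matching your Spark version is
# available; broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ops_event_stream").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "ops-events")                  # hypothetical topic
    .load()
)

# Kafka delivers bytes; cast the message value to string for downstream parsing.
parsed = raw.select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))

query = (
    parsed.writeStream.format("parquet")
    .option("path", "hdfs:///data/ops_events/")              # hypothetical sink
    .option("checkpointLocation", "hdfs:///checkpoints/ops_events/")
    .trigger(processingTime="1 minute")  # micro-batch every minute
    .start()
)
query.awaitTermination()
```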
Leeds, England, United Kingdom Hybrid / WFH Options
Anson McCade
Lead Data Engineer Location: Leeds (hybrid) Salary: Up to £70,000 (depending on experience) + bonus Clearance Requirement: Candidates must be eligible for UK National Security Vetting. We're looking for an experienced Lead Data Engineer to join a fast …
… recommendation engines, NLP, and computer vision. Responsibilities: Design, develop, and productionize machine learning models across various applications. Work with Python (ideally production-level code) and other tools like SQL, Spark, and Databricks. Apply clustering, classification, regression, time series modelling, NLP, and deep learning. Develop recommendation engines and leverage third-party data enhancements. Implement MLOps/DevOps practices in cloud … to translate business challenges into data-driven solutions. Requirements: MSc or PhD in Computer Science, Artificial Intelligence, Mathematics, Statistics or related fields. Strong Python skills (bonus: C++, SQL, Spark). Experience with ML algorithms (XGBoost, clustering, regression). Expertise in time series, NLP, computer vision, and MLOps. Knowledge of AWS/Azure/GCP, CI/CD, and Agile development. Ability …
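To ground the model-building responsibilities above, a small hedged sketch of training and evaluating an XGBoost classifier, one of the algorithms the posting names. The synthetic dataset and hyperparameters are illustrative only.

```python
# Hedged sketch: train/evaluate an XGBoost classifier on synthetic data.
# Dataset, features, and hyperparameters are illustrative, not from the posting.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic binary-classification data standing in for real product features.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = XGBClassifier(
    n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss"
)
model.fit(X_train, y_train)

# AUC on held-out data; in production this would feed model monitoring/MLOps.
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```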
… part of an Agile engineering or development team. Strong hands-on experience and understanding of working in a cloud environment such as AWS. Experience with EMR (Elastic MapReduce) and Spark. Strong experience with CI/CD pipelines using Jenkins. Experience with the following technologies: Spring Boot, Gradle, Terraform, Ansible, GitHub/GitFlow, PCF/OCP/Kubernetes technologies, Artifactory, IaC …