the latest tech, serious brain power, and deep knowledge of just about every industry. We believe a mix of data, analytics, automation, and responsible AI can do almost anything: spark digital metamorphoses, widen the range of what humans can do, and breathe life into smart products and services. Want to join our crew of sharp analytical minds? You'll …
to solve any given problem. Technologies We Use A variety of languages, including Java, Python, Rust and Go for backend and TypeScript for frontend Open-source technologies like Cassandra, Spark, Iceberg, Elasticsearch, Kubernetes, React, and Redux Industry-standard build tooling, including Gradle for Java, Cargo for Rust, Hatch for Python, Webpack & PNPM for TypeScript What We Value Strong engineering …
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
Drive best practices around CI/CD, infrastructure-as-code, and modern data tooling Introduce and advocate for scalable, efficient data processes and platform enhancements Tech Environment: Python, SQL, Spark, Airflow, dbt, Snowflake, Postgres, AWS (S3), Docker, Terraform Exposure to Apache Iceberg, streaming tools (Kafka, Kinesis), and ML pipelines is a bonus What We're Looking For: 5+ …
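As an illustration of the orchestration this stack implies, here is a minimal sketch of an Airflow DAG that runs a dbt transformation after a raw load into Snowflake. The DAG ID, script name, and dbt project path are hypothetical, not taken from the posting.

```python
# Hypothetical sketch: a daily Airflow DAG that loads raw data, then runs dbt.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingest_and_transform",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Load step: in practice this might be an S3-to-Snowflake copy,
    # a Glue job trigger, or a FiveTran sync.
    load_raw = BashOperator(
        task_id="load_raw",
        bash_command="python load_raw_to_snowflake.py",  # hypothetical script
    )

    # Transform step: run dbt models against the warehouse.
    run_dbt = BashOperator(
        task_id="run_dbt",
        bash_command="dbt run --project-dir /opt/dbt/project",  # hypothetical path
    )

    load_raw >> run_dbt
```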
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
AI to move faster and smarter. You will be experienced in AI and enjoy writing code. Responsibilities Build and maintain scalable distributed systems using Scala and Java Design complex Spark jobs, asynchronous APIs, and parallel processes Use Gen AI tools to enhance development speed and quality Collaborate in Agile teams to improve their data collection pipelines Apply best practices … structures, algorithms, and design patterns effectively Foster empathy and collaboration within the team and with customers Preferred Experience Degree in Computer Science or equivalent practical experience Commercial experience with Spark, Scala, and Java (Python is a plus) Strong background in distributed systems (Hadoop, Spark, AWS) Skilled in SQL/NoSQL (PostgreSQL, Cassandra) and messaging tech (Kafka, RabbitMQ) Experience …
join on a contract basis to support major digital transformation projects with Tier 1 banks. You'll help design and build scalable, cloud-based data solutions using Databricks, Python, Spark, and Kafka, working on both greenfield initiatives and enhancing high-traffic financial applications. Key Skills & Experience: Strong hands-on experience with Databricks, Delta Lake, Spark Structured Streaming, and …
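For a concrete flavour of the streaming work this role describes, here is a minimal sketch of a Spark Structured Streaming job reading from Kafka into a Delta table. The broker address, topic name, and paths are placeholders, not details from the posting.

```python
# Hypothetical sketch: stream events from Kafka into a Delta Lake table.
# Assumes the spark-sql-kafka and delta packages are on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka_to_delta").getOrCreate()

# Read a stream of raw events from Kafka (broker and topic are placeholders).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string for parsing.
parsed = events.select(col("value").cast("string").alias("payload"))

# Write to Delta with checkpointing so the stream can recover after failure.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/transactions")  # placeholder
    .start("/tmp/delta/transactions")  # placeholder table path
)
query.awaitTermination()
```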
skills in Python and SQL Demonstrable hands-on experience in AWS cloud Data ingestion, both batch and streaming, and data transformation (Airflow, Glue, Lambda, Snowflake Data Loader, FiveTran, Spark, Hive, etc.). Apply agile thinking to your work, delivering in iterations that incrementally build on what went before. Excellent problem-solving and analytical skills. Good written and verbal … translate concepts into easily understood diagrams and visuals for both technical and non-technical people alike. AWS cloud products (Lambda functions, Redshift, S3, AmazonMQ, Kinesis, EMR, RDS (Postgres)). Apache Airflow for orchestration. dbt for data transformations. Machine Learning for product insights and recommendations. Experience with microservices using technologies like Docker for local development. Apply engineering best practices to …
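One common batch-ingestion pattern on this stack is a Lambda that loads newly landed S3 files into Redshift. Below is a hedged sketch using the Redshift Data API; the cluster, database, table, and IAM role names are all illustrative.

```python
# Hypothetical sketch: Lambda triggered by an S3 upload, loading the file
# into Redshift via the Redshift Data API.
import boto3

redshift = boto3.client("redshift-data")

def handler(event, context):
    # Pull the bucket/key of the newly uploaded object from the S3 event.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # COPY the file into a staging table (names and IAM role are placeholders).
    redshift.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="analytics",
        DbUser="loader",
        Sql=f"COPY staging.events FROM 's3://{bucket}/{key}' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader' "
            "FORMAT AS JSON 'auto';",
    )
```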
and reliability across our platform. Working format: full-time, remote. Schedule: Monday to Friday (the working day is 8+1 hours). Responsibilities: Design, develop, and maintain data pipelines using Apache Airflow. Create and support data storage systems (Data Lakes/Data Warehouses) based on AWS (S3, Redshift, Glue, Athena, etc.). Integrate data from various sources, including mobile … attribution, retention, LTV, and other mobile metrics. Ability to collect and aggregate user data from mobile sources for analytics. Experience building real-time data pipelines (e.g., Kinesis, Kafka, Spark Streaming). Hands-on CI/CD experience with GitHub. Startup or small team experience - the ability to quickly switch between tasks, suggest lean architectural solutions, and make independent …
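To make the real-time side concrete, here is a minimal, hypothetical sketch of publishing a mobile analytics event onto a Kinesis stream with boto3. The stream name and event shape are assumptions for illustration only.

```python
# Hypothetical sketch: publish a mobile analytics event to Kinesis.
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_event(user_id: str, event_name: str, properties: dict) -> None:
    # Partition by user so one user's events keep their relative order.
    kinesis.put_record(
        StreamName="mobile-events",  # placeholder stream name
        Data=json.dumps({
            "user_id": user_id,
            "event": event_name,
            "properties": properties,
        }),
        PartitionKey=user_id,
    )

publish_event("user-42", "level_completed", {"level": 3, "duration_s": 87})
```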
Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that …
as Computer Science, Statistics, Applied Mathematics, or Engineering - Strong experience with Python and R - A strong understanding of a number of the tools across the Hadoop ecosystem, such as Spark, Hive, Impala & Pig - Expertise in at least one specific data science area such as text mining, recommender systems, pattern recognition or regression models - Previous experience in leading a …
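As a sketch of the Hadoop-ecosystem fluency this asks for, here is a minimal example of querying a Hive table from Spark. The database and table names are made up.

```python
# Hypothetical sketch: query a Hive table from Spark with Hive support enabled.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive_example")
    .enableHiveSupport()  # lets Spark read tables from the Hive metastore
    .getOrCreate()
)

# Aggregate a (made-up) Hive table with Spark SQL.
daily_counts = spark.sql("""
    SELECT event_date, COUNT(*) AS n_events
    FROM analytics.page_views
    GROUP BY event_date
    ORDER BY event_date
""")
daily_counts.show()
```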
learning, data processing technologies and a broad set of AWS technologies. In order to drive the expansion of Amazon selection, we use cluster-computing technologies such as MapReduce and Spark to process billions of products and find the products/brands not already sold on Amazon. We work with structured and unstructured content such as text and images and …
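The selection-gap problem described here is essentially a large-scale set difference. A hedged PySpark sketch of that idea, with made-up schemas and paths, might look like this:

```python
# Hypothetical sketch: find candidate products not already in the catalog
# using a left anti join (a distributed set difference).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("selection_gap").getOrCreate()

# Placeholder inputs: external candidate products and the existing catalog.
candidates = spark.read.parquet("s3://bucket/candidates/")  # placeholder path
catalog = spark.read.parquet("s3://bucket/catalog/")        # placeholder path

# Keep only candidates whose product_id has no match in the catalog.
missing = candidates.join(catalog, on="product_id", how="left_anti")
missing.write.parquet("s3://bucket/selection-gaps/")        # placeholder path
```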
are a skilled programmer with a strong command of major machine learning languages such as Python or Scala, and have expertise in utilising statistical and machine learning libraries like Spark MLlib, scikit-learn, or PyTorch to write clear, efficient, and well-documented code. Experience with optimisation techniques, control theory, causal modelling or elasticity modelling is desirable. Prior experience in …
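As a sketch of the library fluency described, here is a minimal Spark MLlib classification pipeline on made-up column names; a scikit-learn or PyTorch version would follow the same shape.

```python
# Hypothetical sketch: a minimal Spark MLlib classification pipeline.
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mllib_example").getOrCreate()

# Placeholder training data with two features and a binary label.
train = spark.createDataFrame(
    [(1.0, 0.3, 0.0), (0.2, 0.9, 1.0), (0.8, 0.1, 0.0), (0.1, 0.7, 1.0)],
    ["f1", "f2", "label"],
)

# Assemble raw columns into the single vector column MLlib expects.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

model = Pipeline(stages=[assembler, lr]).fit(train)
model.transform(train).select("label", "prediction").show()
```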
in their career path, as well as the rollout of a new tool. You prepare your section of a weekly business review document to review data-driven insights and spark discussion with leadership on team wins and areas for improvement. BASIC QUALIFICATIONS 6+ years professional experience 3+ years experience managing direct reports 2+ years experience in programmatic advertising Experience …
technology to solve a given problem. Right now, we use: • A variety of languages, including Java and Go for backend and TypeScript for frontend • Open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux • Industry-standard build tooling, including Gradle, CircleCI, and GitHub What We Value Passion for helping other developers build better applications. Empathy for the impact your …
with MLOps practices and model deployment pipelines Proficient in cloud AI services (AWS SageMaker/Bedrock) Deep understanding of distributed systems and microservices architecture Expert in data pipeline platforms (Apache Kafka, Airflow, Spark) Proficient in both SQL (PostgreSQL, MySQL) and NoSQL (Elasticsearch, MongoDB) databases Strong containerization and orchestration skills (Docker, Kubernetes) Experience with infrastructure as code (Terraform, CloudFormation …
. Experience in AWS cloud services, particularly Lambda, SNS, S3, EKS, and API Gateway Knowledge of data warehouse design, ETL/ELT processes, and big data technologies (e.g., Snowflake, Spark). Familiarity with data governance and compliance frameworks (e.g., GDPR, HIPAA). Strong communication and stakeholder management skills. Analytical mindset with attention to detail. Ability to lead and mentor … developing and implementing enterprise data models. Experience with Interface/API data modelling. Experience with CI/CD GitHub Actions (or similar) Knowledge of Snowflake/SQL Knowledge of Apache Airflow Knowledge of dbt Familiarity with Atlan for data catalog and metadata management Understanding of Iceberg tables Who we are: We're a business with a global reach that empowers …
experience Advanced AI/ML modelling (Python, PySpark, MS Copilot, kdb+/q, C++, Java) Must be well versed in SQL, with at least 2-4 years of hands-on experience writing productionized SQL (preferably Spark SQL) rather than ad-hoc queries Familiarity with Cross-Product and Cross-Venue Surveillance Techniques, particularly with vendors such as TradingHub … Steeleye, Nasdaq or NICE Statistical analysis and anomaly detection Large-scale data engineering and ETL pipeline development (Spark, Hadoop, or similar) Market microstructure and trading strategy expertise Experience with enterprise-grade surveillance systems in banking. Integration of cross-product and cross-venue data sources Regulatory compliance (MAR, MAD, MiFID II, Dodd-Frank) Code quality, version control, and best practices. Soft …
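For a concrete flavour of productionized Spark SQL in a surveillance context, here is a hedged sketch that flags unusually large trades by z-score. The table name, schema, and threshold are illustrative, not any vendor's actual detection logic.

```python
# Hypothetical sketch: flag trades whose notional is an outlier for its
# instrument, using a z-score computed with window functions in Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("trade_anomalies").getOrCreate()

# Placeholder source table of trades: (trade_id, instrument, notional).
spark.sql("""
    SELECT trade_id,
           instrument,
           notional,
           (notional - AVG(notional) OVER (PARTITION BY instrument))
             / STDDEV(notional) OVER (PARTITION BY instrument) AS z_score
    FROM trades
""").createOrReplaceTempView("scored_trades")

# Illustrative threshold: surface trades more than 3 standard deviations out.
alerts = spark.sql("SELECT * FROM scored_trades WHERE ABS(z_score) > 3")
alerts.show()
```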
you prefer Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with …