have: Experience with Identity vendors; Experience in online survey methodologies; Experience in Identity graph methodologies; Ability to write and optimize SQL queries; Experience working with big data technologies (e.g. Spark). Additional Information - Our Values: Collaboration is our superpower. We uncover rich perspectives across the world. Success happens together. We deliver across borders. Innovation is in our blood. We're …
and applying best practices in security and compliance, this role offers both technical depth and impact. Key Responsibilities: Design & Optimise Pipelines - Build and refine ETL/ELT workflows using Apache Airflow for orchestration. Data Ingestion - Create reliable ingestion processes from APIs and internal systems, leveraging tools such as Kafka, Spark, or AWS-native services. Cloud Data Platforms - Develop … DAGs and configurations. Security & Compliance - Apply encryption, access control (IAM), and GDPR-aligned data practices. Technical Skills & Experience: Proficient in Python and SQL for data processing. Solid experience with Apache Airflow - writing and configuring DAGs. Strong AWS skills (S3, Redshift, etc.). Big data experience with Apache Spark. Knowledge of data modelling, schema design, and partitioning. Understanding of …
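As a sketch of the Airflow work this listing describes, the following is a minimal DAG in the TaskFlow style; the schedule, task names, and extract/transform/load bodies are hypothetical placeholders, not taken from the posting.

```python
# Minimal Airflow DAG sketch: a daily ETL pipeline (hypothetical names).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list[dict]:
        # A real task would pull from an API or internal system.
        return [{"id": 1, "value": 42}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Trivial placeholder transformation; real logic would live here.
        return [{**r, "value": r["value"] * 2} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # A real task would write to S3/Redshift via a provider hook.
        print(f"Loading {len(rows)} rows")

    load(transform(extract()))


example_etl()
```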
Oversee pipeline performance, address issues promptly, and maintain comprehensive data documentation. What You'll Bring: Technical Expertise: Proficiency in Python and SQL; experience with data processing frameworks such as Airflow, Spark, or TensorFlow. Data Engineering Fundamentals: Strong understanding of data architecture, data modelling, and scalable data solutions. Backend Development: Willingness to develop proficiency in backend technologies (e.g., Python with Django …) to support data pipeline integrations. Cloud Platforms: Familiarity with AWS or Azure, including tools such as Apache Airflow, Terraform, or SageMaker. Data Quality Management: Experience with data versioning and quality assurance practices. Automation and CI/CD: Knowledge of build and deployment automation processes. Experience within MLOps. A 1st class Data degree from one of the UK's top 15 Universities …
Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that …
discipline. At least AAB at A-Level or equivalent UCAS points (please ensure A-Level grades are included on your CV). Outstanding customer-facing skills with a sales spark. A motivated self-starter with a problem-solving attitude. Strong aptitude for picking up technologies. Ability to work with autonomy and as part of a team. Great communication skills …
scientific, engineering and business functions. You are highly proficient in programming languages common to data science (e.g. Python, R, Scala) and have experience with large scale data processing (e.g. Spark). A Master's degree or PhD in Computer Science, Machine Learning, Statistics, or related quantitative field is required. At Quantcast, we craft offers that reflect your unique skills …
engineers to senior leadership - Develop custom metrics and models that measure the effectiveness of discovery mechanisms. ABOUT AUDIBLE: Audible is the leading producer and provider of audio storytelling. We spark listeners' imaginations, offering immersive, cinematic experiences full of inspiration and insight to enrich our customers' daily lives. We are a global company with an entrepreneurial spirit. We are dreamers …
and a strong background in using data to influence decisions and behaviours. Experience with (in rough priority order): SQL (experience in writing performant queries); Python & DS libraries (sklearn, pandas, Spark, etc.); Data transformation; Data visualisation & storytelling. Any of the following would be a bonus: dbt; Experience working with ambiguity in a scale-up (or scale-up-like) environment; Passion …
on experience across AWS Glue, Lambda, Step Functions, RDS, Redshift, and Boto3. Proficient in one of Python, Scala or Java, with strong experience in Big Data technologies such as Spark, Hadoop, etc. Practical knowledge of building real-time event streaming pipelines (e.g., Kafka, Spark Streaming, Kinesis). Proven experience developing modern data architectures including Data Lakehouse and Data … and data governance including GDPR. Bonus Points For: Expertise in Data Modelling, schema design, and handling both structured and semi-structured data. Familiarity with distributed systems such as Hadoop, Spark, HDFS, Hive, Databricks. Exposure to AWS Lake Formation and automation of ingestion and transformation layers. Background in delivering solutions for highly regulated industries. Passion for mentoring and enabling data …
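To illustrate the real-time event streaming pipelines mentioned above, here is a minimal PySpark Structured Streaming sketch that reads from Kafka and maintains a running count per key; the broker address and topic name are invented for illustration.

```python
# Minimal PySpark Structured Streaming sketch: consume events from Kafka
# and write a running per-key count to the console.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-stream-sketch").getOrCreate()

# Requires the spark-sql-kafka connector package on the classpath.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka keys/values arrive as bytes; cast the key to string and count.
counts = (
    events.selectExpr("CAST(key AS STRING) AS key")
    .groupBy("key")
    .agg(F.count("*").alias("n"))
)

query = (
    counts.writeStream
    .outputMode("complete")  # full aggregate table on each trigger
    .format("console")
    .start()
)
query.awaitTermination()
```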
contributing to knowledge sharing across the team. What We're Looking For: Proficient in one of Python, Scala or Java, with strong experience in Big Data technologies such as Spark, Hadoop, etc. Practical knowledge of building real-time event streaming pipelines (e.g., Kafka, Spark Streaming, Kinesis). Proficiency in AWS cloud environments. Proven experience developing modern data architectures … and data governance including GDPR. Bonus Points For: Expertise in Data Modelling, schema design, and handling both structured and semi-structured data. Familiarity with distributed systems such as Hadoop, Spark, HDFS, Hive, Databricks. Exposure to AWS Lake Formation and automation of ingestion and transformation layers. Background in delivering solutions for highly regulated industries. Passion for mentoring and enabling data …
Role: Data Engineer Role type: Permanent Location: UK or Greece Preferred start date: ASAP LIFE AT SATALIA As an organization, we push the boundaries of data science, optimization, and artificial intelligence to solve the hardest problems in industry. Satalia is …
here to help you develop into a better-rounded professional. BASIC QUALIFICATIONS - 7+ years of technical specialist, design and architecture experience - 5+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 7+ years of consulting, design and implementation of serverless distributed solutions experience - 5+ years of software development with object oriented language experience - 3+ years of cloud …
the latest tech, serious brain power, and deep knowledge of just about every industry. We believe a mix of data, analytics, automation, and responsible AI can do almost anything: spark digital metamorphoses, widen the range of what humans can do, and breathe life into smart products and services. Want to join our crew of sharp analytical minds? You'll …
to solve any given problem. Technologies We Use: A variety of languages, including Java, Python, Rust and Go for backend and TypeScript for frontend. Open-source technologies like Cassandra, Spark, Iceberg, ElasticSearch, Kubernetes, React, and Redux. Industry-standard build tooling, including Gradle for Java, Cargo for Rust, Hatch for Python, Webpack & PNPM for TypeScript. What We Value: Strong engineering …
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
Drive best practices around CI/CD, infrastructure-as-code, and modern data tooling. Introduce and advocate for scalable, efficient data processes and platform enhancements. Tech Environment: Python, SQL, Spark, Airflow, dbt, Snowflake, Postgres, AWS (S3), Docker, Terraform. Exposure to Apache Iceberg, streaming tools (Kafka, Kinesis), and ML pipelines is a bonus. What We're Looking For: 5+ …
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
AI to move faster and smarter. You will be experienced in AI and enjoy writing code. Responsibilities: Build and maintain scalable distributed systems using Scala and Java. Design complex Spark jobs, asynchronous APIs, and parallel processes. Use Gen AI tools to enhance development speed and quality. Collaborate in Agile teams to improve their data collection pipelines. Apply best practices … structures, algorithms, and design patterns effectively. Foster empathy and collaboration within the team and with customers. Preferred Experience: Degree in Computer Science or equivalent practical experience. Commercial experience with Spark, Scala, and Java (Python is a plus). Strong background in distributed systems (Hadoop, Spark, AWS). Skilled in SQL/NoSQL (PostgreSQL, Cassandra) and messaging tech (Kafka, RabbitMQ). Experience …
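The role above centres on Scala, but as a compact sketch of the kind of Spark batch job it describes (PySpark is used here to keep all examples in one language), consider a simple aggregation; the S3 paths and column names are hypothetical.

```python
# Minimal Spark batch job sketch (PySpark for brevity; the role itself is
# Scala/Java). Input path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("collection-pipeline-sketch").getOrCreate()

# Read raw collected events (hypothetical S3 path and JSON layout).
events = spark.read.json("s3://example-bucket/raw/events/")

# Aggregate events per source per day; a real job would add validation,
# deduplication, and schema enforcement on top of this.
daily = (
    events
    .withColumn("day", F.to_date("ingested_at"))
    .groupBy("source", "day")
    .agg(F.count("*").alias("events"))
)

# Write partitioned output for downstream consumers.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/curated/daily_counts/"
)
```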
skills in Python and SQL. Demonstrable hands-on experience in AWS cloud. Data ingestions, both batch and streaming, and data transformations (Airflow, Glue, Lambda, Snowflake Data Loader, FiveTran, Spark, Hive, etc.). Apply agile thinking to your work, delivering in iterations that incrementally build on what went before. Excellent problem-solving and analytical skills. Good written and verbal … translate concepts into easily understood diagrams and visuals for both technical and non-technical people alike. AWS cloud products (Lambda functions, Redshift, S3, AmazonMQ, Kinesis, EMR, RDS (Postgres)). Apache Airflow for orchestration. DBT for data transformations. Machine Learning for product insights and recommendations. Experience with microservices using technologies like Docker for local development. Apply engineering best practices to …
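As one illustration of the batch-ingestion side this listing mentions (a Lambda function landing raw payloads in S3 for later transformation), here is a minimal sketch; the bucket name and key layout are assumptions, not details from the posting.

```python
# Minimal AWS Lambda sketch: land an incoming JSON payload in S3 for later
# pipeline processing. Bucket name and key layout are hypothetical.
import json
import uuid
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "example-ingest-bucket"  # hypothetical


def handler(event, context):
    # Partition raw landings by date so downstream jobs can read incrementally.
    now = datetime.now(timezone.utc)
    key = f"raw/events/dt={now:%Y-%m-%d}/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(event).encode("utf-8"))
    return {"statusCode": 200, "body": json.dumps({"key": key})}
```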
and reliability across our platform. Working format: full-time, remote. Schedule: Monday to Friday (the working day is 8+1 hours). Responsibilities: Design, develop, and maintain data pipelines using Apache Airflow. Create and support data storage systems (Data Lakes/Data Warehouses) based on AWS (S3, Redshift, Glue, Athena, etc.). Integrate data from various sources, including mobile … attribution, retention, LTV, and other mobile metrics. Ability to collect and aggregate user data from mobile sources for analytics. Experience building real-time data pipelines (e.g., Kinesis, Kafka, Spark Streaming). Hands-on CI/CD experience with GitHub. Startup or small team experience - the ability to quickly switch between tasks, suggest lean architectural solutions, make independent …
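To sketch the S3/Athena portion of such a stack, here is a minimal boto3 example that runs a query over a data-lake table and polls for completion; the database name, table, and output location are hypothetical.

```python
# Minimal Athena query sketch via boto3: run a query over a data-lake table
# and poll until it finishes. Database/table/output location are hypothetical.
import time

import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state; production code would add
# exponential backoff and a timeout.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows)} rows (the first row is the header)")
```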
scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet the … Collaborative, team-based development; Cloud analytics platforms, e.g. relevant AWS and Azure platform services; Data tools: hands-on experience with Palantir (ESSENTIAL); Data science approaches and tooling, e.g. Hadoop, Spark; Software development methods and techniques, e.g. Agile methods such as SCRUM; Software change management, notably familiarity with git; Public sector best practice guidance, e.g. ITIL, OGC toolkit. Additional Requirements …
as Computer Science, Statistics, Applied Mathematics, or Engineering - Strong experience with Python and R - A strong understanding of a number of the tools across the Hadoop ecosystem such as Spark, Hive, Impala & Pig - An expertise in at least one specific data science area such as text mining, recommender systems, pattern recognition or regression models - Previous experience in leading a …