London, South East England, United Kingdom Hybrid / WFH Options
Merlin Entertainments
… Experience in using cloud-native services for data engineering and analytics. Experience with distributed systems, serverless data pipelines, and big data technologies (e.g., Spark, Kafka). Ability to define and enforce data governance standards. Experience in providing architectural guidance, mentorship, and leading cross-functional discussions to align on …
and their techniques. Experience with data science, big data analytics technology stack, analytic development for endpoint and network security, and streaming technologies (e.g., Kafka, Spark Streaming, and Kinesis). Strong sense of ownership combined with a collaborative approach to overcoming challenges and influencing organisational change. Amazon is committed to a …
strategically about business, product, and technical challenges in an enterprise environment - Extensive hands-on experience with data platform technologies, including at least three of: Spark, Hadoop ecosystem, orchestration frameworks, MPP databases, NoSQL, streaming technologies, data catalogs, BI and visualization tools - Proficiency in at least one programming language (e.g., Python …
background with 5+ years of experience in blockchain, cryptocurrency, and backend software development. Technologies we use (experience not required): AWS serverless architectures, Kubernetes, PostgreSQL, Spark, TypeScript, Terraform, Kafka, GitHub (including GitHub Actions), Java. About Chainalysis: Blockchain technology is powering a growing wave of innovation. Businesses and governments around the …
tools (e.g., Matplotlib, Seaborn, Tableau). Ability to work independently and lead projects from inception to deployment. Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure) is desirable. MSc or PhD in Computer Science, Data Science, or related field is preferred. Don't …
Development Work with engineering teams to develop an AI-driven observability and automation platform, leveraging: Telemetry ingestion (Kafka, OpenTelemetry, Fluentd). Streaming analytics (Flink, Spark, CEP engines). AI-driven anomaly detection & automation (AutoGPT, LangChain, MLflow, TensorFlow). Define technical requirements and architecture priorities for engineering teams. Partner with …
of Java and its ecosystems, including experience with popular Java frameworks (e.g. Spring, Hibernate). Familiarity with big data technologies and tools (e.g. Hadoop, Spark, NoSQL databases). Strong experience with Java development, including design, implementation, and testing of large-scale systems. Experience working on public sector projects and …
DynamoDB, MSK). Our Technology Stack: Python and Scala; Starburst and Athena; Kafka and Kinesis; DataHub; MLflow and Airflow; Docker and Terraform; Kafka, Spark, Kafka Streams and KSQL; dbt; AWS, S3, Iceberg, Parquet, Glue and EMR for our Data Lake; Elasticsearch and DynamoDB. More information: Enjoy fantastic perks …
London, South East England, United Kingdom Hybrid / WFH Options
Kantar Media
technologies. Experienced in writing and running SQL and Bash scripts to automate tasks and manage data. Skilled in installing, configuring, and managing Hive on Spark with HDFS. Strong analytical skills with the ability to troubleshoot complex issues and analyze large volumes of text or binary data in Linux or …
and their techniques. Experience with data science, big data analytics technology stack, analytic development for endpoint and network security, and streaming technologies (e.g., Kafka, Spark Streaming, and Kinesis). Strong sense of ownership combined with a collaborative approach to overcoming challenges and influencing organisational change. Amazon is an equal …
experience working within a data-driven organization. Hands-on experience with architecting, implementing, and performance tuning of: Data Lake technologies (e.g. Delta Lake, Parquet, Spark, Databricks); APIs & microservices; message queues, streaming technologies, and event-driven architecture; NoSQL databases and query languages; data domain and event data models; Data Modelling …
on a contract basis. You will help design, develop, and maintain secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. These roles are supporting our client's team in Worcester (fully onsite) and require active UK DV clearance. Key Responsibilities: Design, develop, and maintain …
… secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Implement data ingestion, transformation, and integration processes, ensuring data quality and security. Collaborate with data architects and security teams to ensure compliance with security policies and data governance standards. Manage and monitor large-scale …
… Engineer in secure or regulated environments. Expertise in the Elastic Stack (Elasticsearch, Logstash, Kibana) for data ingestion, transformation, indexing, and visualization. Strong experience with Apache NiFi for building and managing complex data flows and integration processes. Knowledge of security practices for handling sensitive data, including encryption, anonymization, and access …
processing large-scale data. Experience with ETL processes for data ingestion and processing. Proficiency in Python and SQL. Experience with big data technologies like Apache Hadoop and Apache Spark. Familiarity with real-time data processing frameworks such as Apache Kafka or Flink. MLOps & Deployment: Experience deploying and …
flows through the pipeline. Collaborate with research to define data quality benchmarks. Optimize end-to-end performance across distributed data processing frameworks (e.g., Apache Spark, Ray, Airflow). Work with infrastructure teams to scale pipelines across thousands of GPUs. Work directly with the leadership on the …
… and optimizing classifiers. Experience managing large-scale datasets and pipelines in production. Experience in managing and leading small teams of engineers. Expertise in Python, Spark, Airflow, or similar data frameworks. Understanding of modern infrastructure: Kubernetes, Terraform, object stores (e.g. S3, GCS), and distributed computing environments. Strong communication and leadership …
Northampton, Northamptonshire, East Midlands, United Kingdom Hybrid / WFH Options
Data Inc. (UK) Ltd
a similar Data Engineering role before sharing their details with us. Keywords for Search: When reviewing CVs, please look for relevant technologies such as: Spark, Hadoop, Big Data, Scala, Spark-Scala, Data Engineer, ETL, AWS (S3, EMR, Glue ETL). Interview Process: The client will conduct an interview …
extracting value from large datasets. Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS. Experience with Apache Spark/Elastic MapReduce. Experience with continuous delivery, infrastructure as code …
data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits: At Databricks, we strive to provide comprehensive …
Python. Develop real-time streaming features using big data tools such as Spark. SKILLS AND EXPERIENCE: Extensive experience using big data tools such as Apache Spark. Experience working with and maintaining databases on AWS. Strong Python coding background. Good knowledge of working with SQL. THE BENEFITS: Generous holiday plan. …
is at the heart of this business, and you can expect to work with a cutting-edge range of technologies, including big data tools (Spark, Hadoop) and cloud platforms (Microsoft Azure, AWS). If you are eager to grow in these areas, comprehensive, top-tier training will be provided. …
London, South East England, United Kingdom Hybrid / WFH Options
Oliver Bernard
contribute to architectural decisions. What We’re Looking For: Strong Python programming skills (5+ years preferred). Deep experience with distributed systems (e.g., Kafka, Spark, Ray, Kubernetes). Hands-on work with big data technologies and architectures. Solid understanding of concurrency, fault tolerance, and data consistency. Comfortable in a …
datasets, data wrangling, and data preprocessing. Ability to work independently and lead projects from inception to deployment. Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure). Preferred Skills: MSc or PhD in Computer Science, Artificial Intelligence, or related field. ADDITIONAL NOTES: Ability …
learning algorithms and general statistical methodologies and theory. Basic knowledge of A/B testing and design of experiments. Advanced Python and SQL skills; experience using Spark for processing large datasets. Understanding of software product development processes and governance, including CI/CD processes and release and change management. Familiarity with …