pipelines to serve the easyJet analyst and data science community. · Highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, the Spark API, Python, SQL Server and Scala. · Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning … experience with Terraform or CloudFormation. · Understanding of the ML development workflow and knowledge of when and how to use dedicated hardware. · Significant experience with Apache Spark or any other distributed data programming framework (e.g. Flink, Hadoop, Beam). · Familiarity with Databricks as a data and AI platform or the Lakehouse … privacy, handling of sensitive data (e.g. GDPR). · Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. · Understanding of the challenges faced in the design and development of a streaming data pipeline.
it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or … Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines and version control systems like Git. … Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues. SQL (for database management and querying). Apache Spark (for distributed data processing). Apache Spark Streaming, Kafka or similar (for real-time data streaming). Experience using …
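As an editorial aside rather than part of any listing, a minimal sketch of the kind of batch extract-transform-load job these roles describe, written in PySpark under the assumption of a Databricks-style environment with Delta Lake as the lakehouse format; the bucket path, column names and target table are hypothetical.

```python
# Illustrative sketch only: a minimal PySpark batch ETL job. Assumes Delta Lake is
# available (e.g. a Databricks-style environment); paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bookings_etl").getOrCreate()

# Extract: read raw CSV files (source path is an assumption).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/bookings/")

# Transform: cast types, normalise dates, drop rows missing key fields.
clean = (
    raw.withColumn("booking_date", F.to_date("booking_date", "yyyy-MM-dd"))
       .withColumn("fare", F.col("fare").cast("double"))
       .dropna(subset=["booking_id", "booking_date"])
)

# Load: write to a lakehouse table (Delta format assumed).
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.bookings_clean")
```

A production pipeline would typically add schema enforcement, incremental loads and data-quality checks rather than a blanket overwrite.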
unsupervised machine learning, deep learning, graph data analytics, statistical analysis, time series, geospatial analysis, NLP, sentiment analysis, pattern detection, etc.). Python, R, or Spark for data insights. Databricks/DataIQ. SQL for data access and processing (PostgreSQL preferred, but general SQL knowledge is important). Latest Data Science … TensorFlow, MXNet, scikit-learn). Software engineering practices (coding standards, unit testing, version control, code review). Hadoop distributions (Cloudera, Hortonworks), NoSQL databases (Neo4j, Elastic), streaming technologies (Spark Streaming). Data manipulation and wrangling techniques. Development and deployment technologies (virtualisation, CI tools like Jenkins, configuration management with …
record of building and managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real …
CI/CD/YAML/ARM/BICEP/Terraform. MSBI Traditional Stack (SQL, SSAS, SSIS, SSRS). Azure Automation/PowerShell. Azure Stream Analytics/Spark Streaming. Azure Functions/C# .NET. PowerApps. Data Science. Azure AI Services – Azure AI Foundry, Azure OpenAI …
London, England, United Kingdom Hybrid / WFH Options
Checkout.com
Airflow for scheduling, DBT for data transformation, Monte Carlo for monitoring. The platform here encompasses the end-to-end, first for real-time/streaming use cases but also for our analytical/warehouse needs. We're building for scale. As such, much of what we design and implement … processes, we're aiming for the platform to be as self-sustaining as possible. Stay up-to-date with the latest data and streaming engineering technologies and trends. Use that knowledge and subject matter expertise to mentor the more junior members of the team, and work with other … stream technologies, ideally Kafka, but Kinesis, Pulsar or similar would also be applicable. Experience designing and implementing stream processing applications (Kafka Streams, ksqlDB, Flink, Spark Streaming). Experience with Data Warehousing tools like Snowflake/BigQuery/Databricks, and building pipelines on these. Experience working with modern …
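As an illustrative aside (not drawn from the listing), a stream processing application of the kind referenced above could be sketched with Spark Structured Streaming reading events from Kafka and writing windowed aggregates to a warehouse table; the broker address, topic, event schema, checkpoint path and output table are all hypothetical, and Delta Lake is assumed as the sink format.

```python
# Illustrative sketch only: Spark Structured Streaming from Kafka into a Delta table.
# Assumes the Kafka connector and Delta Lake are on the classpath; names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("payments_stream").getOrCreate()

event_schema = StructType([
    StructField("payment_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "payments")
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Aggregate per minute; the watermark bounds state kept for late-arriving events.
per_minute = (
    events.withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "1 minute"))
          .agg(F.sum("amount").alias("total_amount"))
)

# Write incrementally to a Delta table with checkpointing for exactly-once recovery.
query = (
    per_minute.writeStream.format("delta")
              .outputMode("append")
              .option("checkpointLocation", "/tmp/checkpoints/payments")
              .toTable("analytics.payments_per_minute")
)
query.awaitTermination()
```

The watermark is one of the streaming-specific design decisions the listings allude to: it trades completeness for bounded state when events arrive late.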
software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). · Experience with Apache Spark or any other distributed data programming frameworks. · Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. · Experience with … an airline, e-commerce or retail industry. · Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. · Experience implementing end-to-end monitoring, quality checks, lineage tracking and automated alerts to ensure reliable and …
Lambda, EC2, VPC, S3) for security response and/or automation - Experience with data science, machine learning, big data analytics, and/or streaming technologies (e.g., Kafka, Spark Streaming, Kinesis). Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce …
security threats, actors and their techniques. Experience with data science, big data analytics technology stacks, analytic development for endpoint and network security, and streaming technologies (e.g., Kafka, Spark Streaming, and Kinesis). Strong sense of ownership combined with a collaborative approach to overcoming challenges.
ideally Python), focusing on testable, maintainable code. Expertise in cloud services (ideally AWS and Databricks), emphasizing secure, scalable architectures. Experience with large-scale streaming data systems (e.g., Kafka, Spark Streaming), especially on Databricks. Proficiency with low-latency time-series databases (e.g., Apache Druid).
Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13 billion. Job Description: Required Skills: Spark - must have; Scala - must have; Hive & SQL - must have; Hadoop - must have; Communication - must have; Banking/Capital Markets domain - good to have. Note … will not help here. Responsibilities: Good Big Data resource with the below skillset: Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage. Consistently demonstrates clear and concise written and verbal communication. A history of delivering against agreed objectives.
London, England, United Kingdom Hybrid / WFH Options
EMBS Technology
forefront of enterprise-scale AI adoption, working with AWS cloud-native event-driven architectures that will shape the future of AI-powered data streaming and automation. The Role: As an AWS Platform Engineer, you will play a key role in designing, deploying, and optimising event-driven architectures in … an AWS cloud environment. You will work alongside AI specialists, DevOps engineers, and data teams to build highly scalable, real-time data streaming solutions that drive AI-driven automation and decision-making. This is a client-facing role, requiring: Exceptional communication skills to engage with both technical and … remaining three days. Essential Skills and Experience: Strong expertise in AWS cloud services, with practical experience or theoretical knowledge of AWS MSK (Managed Streaming for Apache Kafka). Deep understanding of Kafka (Apache Kafka, Confluent Kafka) and event-driven microservices architecture. Proficiency in Infrastructure as Code tools (Terraform, AWS …
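As a hedged illustration rather than anything specified by the role, publishing a domain event to a Kafka topic from Python might look like the sketch below; the broker address, topic name and payload are hypothetical, and a real MSK deployment would additionally configure TLS or IAM authentication, retries and schema management.

```python
# Illustrative sketch only: publish one domain event to a Kafka topic as part of an
# event-driven architecture. Broker, topic and payload are hypothetical placeholders.
import json
from kafka import KafkaProducer  # kafka-python client

producer = KafkaProducer(
    bootstrap_servers=["b-1.example-msk.amazonaws.com:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"event_type": "document_ingested", "document_id": "doc-123", "status": "ready"}

# Asynchronous send; flush so the event leaves the client buffer before exit.
producer.send("ingestion-events", value=event)
producer.flush()
```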
architectures that business engineering teams buy into and build their applications around. Required Qualifications, Capabilities, and Skills: Experience across the data lifecycle with Spark-based frameworks for end-to-end ETL, ELT & reporting solutions using key components like Spark SQL & Spark Streaming. Strong knowledge … end-to-end engineering experience supported by excellent tooling and automation. Preferred Qualifications, Capabilities, and Skills: Good understanding of the Big Data stack (Spark/Iceberg). Ability to learn new technologies and patterns on the job and apply them effectively. Good understanding of established patterns, such as …
London (City of London), South East England, United Kingdom
Velocity Tech
Experience: Proficiency in SQL, data modeling, ETL processes, and modern data warehousing. Experience with functional programming and real-time stream processing (e.g., Flink, Spark Streaming, or similar). Demonstrated ability to handle production environments processing tens of thousands of events per second from diverse sources. This …