London, England, United Kingdom Hybrid / WFH Options
Rein-Ton
and standards. Test and validate solutions for quality assurance. Qualifications: Proven experience as a Data Engineer, especially with data pipelines. Proficiency in Python, Java, or Scala; experience with Hadoop, Spark, Kafka. Experience with Databricks, Azure AI Services, and cloud platforms (AWS, Google Cloud, Azure). Strong SQL and NoSQL database skills. Problem-solving skills and ability to work in …
of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, Datastage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross- and multi-platform experience. Team building and leading. You must be: Willing to work on client sites, potentially for extended periods. Willing to travel …
knowledge of data architecture, data modeling, and ETL/ELT processes. Proficiency in programming languages such as Python, Java, or Scala. Experience with big data technologies such as Hadoop, Spark, and Kafka. Familiarity with cloud platforms like AWS, Azure, or Google Cloud. Excellent problem-solving skills and the ability to think strategically. Strong communication and interpersonal skills, with the …
microservice architecture, API development. Machine Learning (ML): • Deep understanding of machine learning principles, algorithms, and techniques. • Experience with popular ML frameworks and libraries like TensorFlow, PyTorch, scikit-learn, or Apache Spark. • Proficiency in data preprocessing, feature engineering, and model evaluation. • Knowledge of ML model deployment and serving strategies, including containerization and microservices. • Familiarity with ML lifecycle management, including versioning …
programming languages such as Python and R, and ML libraries (TensorFlow, PyTorch, scikit-learn). Hands-on experience with cloud platforms (Azure ML) and big data ecosystems (e.g., Hadoop, Spark). Strong understanding of CI/CD pipelines, DevOps practices, and infrastructure automation. Familiarity with database systems (SQL Server, Snowflake) and API integrations. Strong skills in ETL processes, data …
London, England, United Kingdom Hybrid / WFH Options
Freemarket
cron jobs, job orchestration, and error monitoring tools. Good to have: Experience with Azure Bicep or other Infrastructure-as-Code tools. Exposure to real-time/streaming data (Kafka, Spark Streaming, etc.). Understanding of data mesh, data contracts, or domain-driven data architecture. Hands-on experience with MLflow and Llama …
hold or gain a UK government security clearance. Preferred Technical and Professional Experience: Experience with machine learning frameworks (TensorFlow, PyTorch, scikit-learn). Familiarity with big data technologies (Hadoop, Spark). Background in data science, IT consulting, or a related field. AWS Certified Big Data or equivalent. Seniority level: Mid-Senior level. Employment type: Full …
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
with modern cloud data warehouses such as Databricks/Snowflake, ideally on AWS. Strong Python experience, including deep knowledge of the Python data ecosystem, with hands-on expertise in Spark and Airflow. Hands-on experience in all phases of data modelling, from conceptualization to database optimization, supported by advanced SQL skills. Hands-on experience with implementing CI/CD, using Git …
London, England, United Kingdom Hybrid / WFH Options
Rein-Ton
see from you. Qualifications: Proven experience as a Data Engineer with a strong background in data pipelines. Proficiency in Python, Java, or Scala, and big data technologies (e.g., Hadoop, Spark, Kafka). Experience with Databricks, Azure AI Services, and cloud platforms (AWS, Google Cloud, Azure). Solid understanding of SQL and NoSQL databases. Strong problem-solving skills and ability …
AI/ML platforms or other advanced analytics infrastructure. Familiarity with infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with modern data engineering technologies (e.g., Kafka, Spark, Flink). Why join YouLend? Award-Winning Workplace: YouLend has been recognised as one of the "Best Places to Work 2024" by the Sunday Times for being a …
similar). Experience with ETL/ELT tools, APIs, and integration platforms. Deep knowledge of data modelling, warehousing, and real-time analytics. Familiarity with big data technology principles (e.g., Spark, Hadoop) and BI tools (e.g., Power BI, Tableau). Strong programming skills (e.g. SQL, Python, Java, or similar languages). Ability to exercise a substantial degree of independent professional …
SageMaker, GCP AI Platform, Azure ML, or equivalent). Solid understanding of data-engineering concepts: SQL/NoSQL, data pipelines (Airflow, Prefect, or similar), and batch/streaming frameworks (Spark, Kafka). Leadership & Communication: Proven ability to lead cross-functional teams in ambiguous startup settings. Exceptional written and verbal communication skills: able to explain complex concepts to both technical …
our data systems, all while mentoring your team and shaping best practices. Role: Lead Technical Execution & Delivery - Design, build, and optimise data pipelines and data infrastructure using Snowflake, Hadoop, Apache NiFi, Spark, Python, and other technologies. - Break down business requirements into technical solutions and delivery plans. - Lead technical decisions, ensuring alignment with data architecture and performance best practices. … data pipeline efficiency. All About You: Technical & Engineering Skills - Extensive demonstrable experience in data engineering, with expertise in building scalable data pipelines and infrastructure. - Deep understanding of Snowflake, Hadoop, Apache NiFi, Spark, Python, and other data technologies. - Strong experience with ETL/ELT processes and data transformation. - Proficiency in SQL, NoSQL, and data modeling. - Familiarity with cloud data …
tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer in Amazon, you will be working in a large, extremely complex and dynamic data environment. You … with ambiguity, and working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, or Python. Major Responsibilities: - Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support business …
field. Technical Skills Required: · Hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). · Experience with Apache Spark or any other distributed data programming frameworks. · Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. · Experience with cloud infrastructure like AWS … CloudFormation. · Hands-on development experience in an airline, e-commerce or retail industry. · Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. · Experience implementing end-to-end monitoring, quality checks, lineage tracking and automated alerts to ensure reliable and trustworthy data across the platform. · Experience of …
in Computer Science, Data Science, Engineering, or a related field. Strong programming skills in languages such as Python, SQL, or Java. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka) is a plus. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database systems (e.g., MySQL, PostgreSQL, MongoDB …
a proven track record of building and managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Programming skills in Python, Java, Scala, or … a similar language. Proficiency in database technologies (SQL, NoSQL, time-series databases) and data modelling. Strong understanding of data pipeline orchestration tools (e.g., Apache Airflow, Kubernetes). You thrive when working as part of a team. Comfortable in a fast-paced environment. Have excellent written and verbal English skills. Last but not least, you'll have no ego! What …
working with relational and non-relational databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation: Good knowledge of containers … AWS or Azure. Good understanding of cloud storage, networking and resource provisioning. It would be great if you had: Certification in GCP "Professional Data Engineer". Certification in Apache Kafka (CCDAK). Proficiency across the data lifecycle. WORKING FOR US Our focus is to ensure we are inclusive every day, building an organisation that reflects modern society and …
London, England, United Kingdom Hybrid / WFH Options
Applicable Limited
as AWS, Azure, GCP, and Snowflake. Understanding of cloud platform infrastructure and its impact on data architecture. Data Technology Skills: A solid understanding of big data technologies such as Apache Spark, and knowledge of Hadoop ecosystems. Knowledge of programming languages such as Python, R, or Java is beneficial. Exposure to ETL/ELT processes, SQL, NoSQL databases is …
proficiency, with experience in MS SQL Server or PostgreSQL. Familiarity with platforms like Databricks and Snowflake for data engineering and analytics. Experience working with Big Data technologies (e.g., Hadoop, Apache Spark). Familiarity with NoSQL databases (e.g., columnar or graph databases like Cassandra, Neo4j). Research experience with peer-reviewed publications. Certifications in cloud-based machine learning services (AWS, Azure …