orchestration tools like Apache Airflow or Cloud Composer. Hands-on experience with one or more of the following GCP data processing services: Dataflow (Apache Beam), Dataproc (Apache Spark/Hadoop), or Composer (Apache Airflow). Proficiency in at least one scripting/programming language (e.g., Python, Java, Scala) for data manipulation and pipeline development. Scala is mandated in some …
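For candidates gauging the Dataflow (Apache Beam) requirement above, here is a minimal word-count pipeline sketch in the Beam Python SDK. The bucket paths are hypothetical placeholders; the same code runs locally on the DirectRunner or on Dataflow by switching runner options.

```python
# Minimal Apache Beam pipeline sketch (Python SDK). Runs locally on the
# DirectRunner; pass --runner=DataflowRunner --project=... to target Dataflow.
# The gs:// paths below are hypothetical placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/input.txt")
            | "Split" >> beam.FlatMap(lambda line: line.split())
            | "PairWithOne" >> beam.Map(lambda word: (word, 1))
            | "Count" >> beam.CombinePerKey(sum)
            | "Format" >> beam.Map(lambda kv: f"{kv[0]}: {kv[1]}")
            | "Write" >> beam.io.WriteToText("gs://example-bucket/output")
        )

if __name__ == "__main__":
    run()
```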
Cambourne, England, United Kingdom Hybrid / WFH Options
Remotestar
of Scala. Familiarity with distributed computing frameworks such as Spark, KStreams, Kafka. Experience with Kafka and streaming frameworks. Understanding of monolithic vs. microservice architectures. Familiarity with the Apache ecosystem, including Hadoop modules (HDFS, YARN, HBase, Hive, Spark) and Apache NiFi. Experience with containerization and orchestration tools like Docker and Kubernetes. Knowledge of time-series or analytics databases such as Elasticsearch.
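To illustrate the Kafka/streaming requirement in this listing, here is a minimal consumer sketch using the kafka-python client. The topic name, broker address, and group id are hypothetical.

```python
# Minimal Kafka consumer sketch (kafka-python client).
# Topic, broker address, and group id are hypothetical placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                            # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="example-group",
    auto_offset_reset="earliest",        # start from the oldest retained message
    value_deserializer=lambda v: v.decode("utf-8"),
)

for message in consumer:
    # Each record exposes topic, partition, offset, key, and value.
    print(message.offset, message.value)
```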
evaluating exciting new technologies to design and build scalable real-time data applications. Spanning the full data lifecycle, and using a mix of modern and traditional data platforms (e.g. Hadoop, Kafka, GCP, Azure, Teradata, SQL Server), you'll get to work building capabilities with horizon-expanding exposure to a host of wider technologies and careers in data. Helping in … and non-relational databases to build data solutions, such as SQL Server/Oracle; experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation: good knowledge of containers (Docker, Kubernetes, etc.) …
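As a rough illustration of the distributed-frameworks point above, here is a minimal PySpark aggregation sketch. The input path and column names are hypothetical placeholders.

```python
# Minimal PySpark sketch: read a CSV, aggregate, write Parquet.
# File paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-aggregation").getOrCreate()

# Read raw orders with header row and inferred column types.
orders = spark.read.csv("/data/orders.csv", header=True, inferSchema=True)

# Aggregate order amounts per day.
daily_totals = (
    orders
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").parquet("/data/daily_totals")
spark.stop()
```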
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
cloud-based data architecture (AWS, Azure, GCP, Snowflake). Understanding of Data Mesh, Data Fabric, and product-led data strategies. Technical Knowledge: Familiarity with big data technologies (Apache Spark, Hadoop). Knowledge of programming languages such as Python, R, or Java. Experience with ETL/ELT processes, SQL, NoSQL databases, and DevOps principles. Understanding of AI and machine learning …
Data Engineer for Cloud Data Lake activities. The candidate should have industry experience (preferably in Financial Services) in navigating enterprise Cloud applications using distributed computing frameworks such as Apache Spark, Hadoop, and Hive, plus working knowledge of optimizing database performance and scalability and ensuring data security and compliance. Education & Preferred Qualifications: Bachelor's/Master's degree in Computer Science, Engineering, or Math, or equivalent …
practices for data engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog). Familiarity with big data technologies like Apache Spark, Hadoop, or similar. ETL/ELT tools and creating common data sets across on-prem (IBM DataStage ETL) and cloud data stores. Leadership & Strategy: Lead Data Engineering team(s) in designing …
Data Engineering or a related field. Strong proficiency in Python (PySpark, Pandas) and SQL. Experience with cloud platforms (AWS, GCP, or Azure). Familiarity with big data technologies (Apache Spark, Hadoop, Kafka) is a plus. How to Apply: Fill out the application form here: https://docs.google.com/forms/d/e/1FAIpQLSdnd7LZnaxgJf438qrP7O_8pWAptGF8nYUqpA8L-vI0NiEsKg/viewform
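For the Python (PySpark, Pandas) requirement in this listing, here is a small pandas sketch of the day-to-day data manipulation involved. The column names and values are made up.

```python
# Minimal pandas sketch: group, aggregate, and sort a small frame.
# Column names and values are hypothetical stand-ins.
import pandas as pd

df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "sales": [100, 80, 120, 90],
})

# Total sales per region, highest first.
summary = (
    df.groupby("region", as_index=False)["sales"]
      .sum()
      .sort_values("sales", ascending=False)
)
print(summary)
```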
Guildford, Surrey, United Kingdom Hybrid / WFH Options
Actica Consulting Limited
build scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet … R; Collaborative, team-based development; Cloud analytics platforms, e.g. relevant AWS and Azure platform services; Data tools: hands-on experience with Palantir (ESSENTIAL); Data science approaches and tooling, e.g. Hadoop, Spark; Data engineering approaches; Database management, e.g. MySQL, Postgres; Software development methods and techniques, e.g. Agile methods such as SCRUM; Software change management, notably familiarity with git; Public sector …
Engineering, Mathematics, Finance, etc. Proficiency in Python, SQL, and one or more of: R, Java, Scala. Experience with relational/NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with big data tools (Hadoop, Spark, Kafka), cloud platforms (Azure, AWS, GCP), and workflow tools (Airflow, Luigi). Bonus: experience with BI tools, API integrations, and graph databases. Why Join Us? Work with large-scale …
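For the workflow-tools point above (Airflow, Luigi), here is a minimal Apache Airflow DAG sketch using the Airflow 2.x TaskFlow API. The DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch (TaskFlow API, Airflow 2.4+; older versions
# use schedule_interval instead of schedule). Task bodies are stand-ins.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_pipeline():
    @task
    def extract():
        # Stand-in for a real extraction step (API pull, DB query, ...).
        return [1, 2, 3]

    @task
    def load(rows):
        print(f"loaded {len(rows)} rows")

    # Wiring the tasks declares the dependency: extract -> load.
    load(extract())

example_pipeline()
```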
SW19 3NW, Merton Park, Greater London, United Kingdom
Trinity Resource Solutions
with SQL and database technologies (incl. various Vector Stores and more traditional technologies, e.g. MySQL, PostgreSQL, NoSQL databases). Hands-on experience with data tools and frameworks such as Hadoop, Spark, or Kafka (advantageous). Familiarity with data warehousing solutions and cloud data platforms. Background in building applications wrapped around AI/LLM/mathematical models. Ability to scale up …
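To make the "Vector Stores" point in this listing concrete: the core operation of any vector store is nearest-neighbour search over embeddings. Here is a plain-NumPy sketch with random stand-in embeddings; real systems (e.g. pgvector, FAISS) add indexing and persistence on top of this.

```python
# Minimal vector-search sketch in plain NumPy: cosine similarity between a
# query embedding and a stored corpus. Embeddings here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 64))   # 1000 stored embeddings, dim 64
query = rng.normal(size=64)

# Cosine similarity = dot product of L2-normalised vectors.
corpus_norm = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
query_norm = query / np.linalg.norm(query)

scores = corpus_norm @ query_norm
top5 = np.argsort(scores)[::-1][:5]    # indices of the 5 closest rows
print("closest rows:", top5, "scores:", scores[top5])
```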
implementing data governance, security standards, and compliance practices. Strong understanding of metadata management, data lineage, and data quality frameworks. Preferred Skills & Knowledge: Familiarity with big data technologies such as Hadoop, Spark, or Kafka. Excellent communication skills with the ability to explain complex data strategies to non-technical stakeholders. Outstanding problem-solving abilities and organizational skills. Certifications (Preferred/Desirable) …
is important) Latest Data Science platforms (e.g., Databricks, Dataiku, AzureML, SageMaker) and frameworks (e.g., TensorFlow, MXNet, scikit-learn). Software engineering practices (coding standards, unit testing, version control, code review). Hadoop distributions (Cloudera, Hortonworks), NoSQL databases (Neo4j, Elastic), streaming technologies (Spark Streaming). Data manipulation and wrangling techniques. Development and deployment technologies (virtualisation, CI tools like Jenkins, configuration management with Ansible …
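Tying this listing's scikit-learn and software-engineering points together, here is a minimal, versionable model pipeline on synthetic data; the dataset and parameters are arbitrary stand-ins.

```python
# Minimal scikit-learn sketch: a unit-testable pipeline of preprocessing plus
# model, trained and scored on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Bundling scaler and classifier keeps the whole fit reproducible and testable.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```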
Experience with scripting languages like Python or KornShell. Knowledge of writing and optimizing SQL queries for large-scale, complex datasets. PREFERRED QUALIFICATIONS Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, or DataStage. We promote an inclusive culture that empowers Amazon employees to deliver the best results for …
Telford, Shropshire, West Midlands, United Kingdom Hybrid / WFH Options
JLA Resourcing Ltd
solving and communication skills, including the ability to convey complex concepts to non-technical stakeholders. Desirable (but not essential): Experience with SSIS, AWS or Azure Data Factory. Familiarity with Hadoop, Jenkins, or DevOps practices including CI/CD. Cloud certifications (Azure or AWS). Knowledge of additional programming languages or ETL tools. This is a fantastic opportunity to take …
solutions. Experience with data engineering, including SQL and NoSQL databases, and data services ecosystems (ADF, SQL DWH, Databricks). Experience in Distributed Data Computing and Big Data ecosystems (Hadoop, Cassandra, Teradata). Experience with designing and building APIs. Other/Beneficial skills: Previous experience in Finance/payments/insurance industries. Experience with both on-premises and cloud …
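For the "designing and building APIs" requirement in this listing, here is a minimal FastAPI sketch. The route, payload model, and payments framing are hypothetical illustrations, not a specification from the posting.

```python
# Minimal FastAPI sketch: one POST endpoint with a validated payload.
# The route and model are hypothetical placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Payment(BaseModel):
    amount: float
    currency: str

@app.post("/payments")
def create_payment(payment: Payment) -> dict:
    # A real service would validate business rules and persist the record here.
    return {"status": "accepted", "amount": payment.amount}

# Run with: uvicorn main:app --reload   (assuming this file is main.py)
```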