london (city of london), south east england, united kingdom
Capgemini
and Data Practice. You will have the following experience: 8+ years of experience in data engineering or cloud development. Strong hands-on experience with AWS services. Proficiency in Databricks, Apache Spark, SQL, and Python. Experience with data modeling, data warehousing, and DevOps practices. Familiarity with Delta Lake, Unity Catalog, and Databricks REST APIs. Excellent problem-solving and communication skills. More ❯
through data science projects. Awareness of data security best practices. Experience in agile environments. You would benefit from having: Understanding of data storage and processing design choices. Familiarity with Apache Spark or Airflow. Experience with parallel computing. Candidates should be able to reliably commute or plan to relocate to Coventry before starting work. The role requires a Data Scientist More ❯
NoSQL databases such as MongoDB, ElasticSearch, MapReduce, and HBase. Demonstrated experience with Apache NiFi. Demonstrated experience with Extract, Transform, and Load (ETL) processes. Demonstrated experience managing and mitigating IT security vulnerabilities using Plans of Action and Milestones (POA&Ms). Demonstrated experience More ❯
Central London, London, United Kingdom Hybrid / WFH Options
Singular Recruitment
applications and high proficiency in SQL for complex querying and performance tuning. ETL/ELT Pipelines: Proven experience designing, building, and maintaining production-grade data pipelines using Google Cloud Dataflow (Apache Beam) or similar technologies. GCP Stack: Hands-on expertise with BigQuery, Cloud Storage, Pub/Sub, and orchestrating workflows with Composer or Vertex Pipelines. Data Architecture & Modelling: Ability to More ❯
West London, London, United Kingdom Hybrid / WFH Options
Young's Employment Services Ltd
a Senior Data Engineer, Tech Lead, Data Engineering Manager etc. Proven success with modern data infrastructure: distributed systems, batch and streaming pipelines. Hands-on knowledge of tools such as Apache Spark, Kafka, Databricks, dbt or similar. Experience building, defining, and owning data models, data lakes, and data warehouses. Programming proficiency in Python, PySpark, Scala, or Java. Experience operating in More ❯
SQLite, and/or ArcSDE o Web: e.g. APIs, GeoJSON, REST Demonstrated experience with two or more cloud-based technologies. o e.g. Docker containers, JupyterHub, Zeppelin, Apache Spark, CentOS, Jenkins, Git Domain knowledge/IC: o Background in intelligence, defense, international relations, or public administration multi-disciplines. Demonstrated familiarity with the US Intelligence Community, specifically GEOINT More ❯
data solutions. • Work with relational, NoSQL, and cloud-based data platforms (PostgreSQL, MySQL, MongoDB, Elasticsearch, AWS/GCP data services). • Support data integration and transformation using modern tools (Apache NiFi, Kafka, ETL pipelines). • Contribute to DevOps and test automation processes by supporting CI/CD pipelines, version control (Git), and containerized/cloud environments. • Perform data analysis More ❯
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
source systems into our reporting solutions. Pipeline Development: Develop and configure metadata-driven data pipelines using data orchestration tools such as Azure Data Factory and engineering tools like Apache Spark to ensure seamless data flow. Monitoring and Failure Recovery: Implement monitoring procedures to detect failures or unusual data profiles and establish recovery processes to maintain data integrity. Azure More ❯
environments, and better development practices. Excellent written and verbal communication skills. Experience with DevOps frameworks. Entity Framework or similar ORM. Continuous Integration, Configuration Management. Enterprise Service Bus (ESB) Management (Apache ActiveMQ or NiFi). Technical Writing. Past Intelligence Systems experience. Experience with Test-Driven Development. Some system administration experience. Experience with Jira, Confluence. U.S. Citizen. Must be able to More ❯
Gloucester, Gloucestershire, UK Hybrid / WFH Options
CGI
change management in production environments. Strong communication skills and a positive, solution-focused mindset, with the ability to adapt to changing client needs. Desirable Skills: Hands-on experience with Apache NiFi for data flow management. Exposure to Java, JavaScript/TypeScript, and Vue for full-stack understanding. Experience with BDD frameworks like Cucumber. Background in supporting or working on More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Medialab Group
data warehouses (e.g. Snowflake, Redshift). Familiarity with data pipelining and orchestration systems (e.g. Airflow). Understanding of modern analytics architectures and data visualisation tools (we use Preset.io/Apache Superset). Exposure to CI/CD pipelines (GitLab CI preferred). Experience with advertising or media data is a plus. More ❯
are constantly looking for components to adopt in order to enhance our platform. What you'll do: Develop across our evolving technology stack - we're using Python, Java, Kubernetes, Apache Spark, Postgres, ArgoCD, Argo Workflows, Seldon, MLflow and more. We are migrating into AWS cloud and adopting many services that are available in that environment. You will have the More ❯
set of ML and NLP models - Build and maintain batch and real-time feature computation pipelines capable of processing complex structured and unstructured data using technologies such as Spark, Apache Airflow, AWS SageMaker, etc. - Contribute to the implementation of foundational ML infrastructure such as feature storage and engineering, asynchronous (batch) inference and evaluation - Apply your keen product mindset and More ❯
such as Snowflake. Strong expertise in SQL, including development, optimization, and performance tuning for large-scale data environments. Working knowledge of Python would be a plus. Working knowledge of Apache Iceberg is an asset. Experience with Palantir Foundry is advantageous, and knowledge of Ontology concepts is a plus. At AIG, we value in-person collaboration as a vital part of More ❯
substituted for a degree) • 15+ years of relevant experience in software development, ranging from work in a DevOps environment to full stack engineering • Proficiency in the following technologies: • Java • Apache NiFi workflow configuration and deployment • Databases such as PostgreSQL and MongoDB • Python and machine learning • Docker • Kubernetes • Cloud-like infrastructure • Experience with Jenkins for pipeline integration and deployment • Familiarity More ❯
languages such as Python, Java, or C++. Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Familiarity with data processing tools and platforms (e.g., SQL, Apache Spark, Hadoop). Knowledge of cloud computing services (e.g., AWS, Google Cloud, Azure) and containerization technologies (e.g., Docker, Kubernetes) is a plus. Hugging Face Ecosystem: Demonstrated experience using Hugging More ❯
Meta, Amazon, OpenAI) Proficiency with essential data science libraries including Pandas, NumPy, scikit-learn, Plotly/Matplotlib, and Jupyter Notebooks Knowledge of ML-adjacent technologies, including AWS SageMaker and Apache Airflow. Strong skills in data preprocessing, wrangling, and augmentation techniques Experience deploying scalable AI solutions on cloud platforms (AWS, Google Cloud, or Azure) with enthusiasm for MLOps tools and More ❯
also have Systems integration background or experience Experience of developing the Finance Data Strategy for large financial institutions, developing future state architecture Delivery experience in Big Data technologies and Apache ecosystem technologies such as Spark, Kafka, and Hive, and experience building end-to-end data pipelines using on-premise or cloud-based data platforms. Hands-on experience with More ❯