Databricks. Must have hands-on experience with at least 2 hyperscalers (GCP/AWS/Azure) and specifically with Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies such as BigQuery/Redshift/Synapse/Pub Sub/Kinesis … years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory skills [at least 2 hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF …
Software development experience with Python and with modern software development and release-engineering practices (e.g. TDD, CI/CD). Experience with Apache Spark or any other distributed data-programming framework. Comfortable writing efficient SQL and debugging on cloud warehouses such as Databricks SQL or Snowflake. Experience with …
for purpose. Experience that will put you ahead of the curve: experience using Python on Google Cloud Platform for Big Data projects (BigQuery, Dataflow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer); SQL development skills; experience using Dataform or dbt; demonstrated strength in data modelling …
South East London, England, United Kingdom Hybrid / WFH Options
Singular Recruitment
for complex querying and performance tuning. ETL/ELT pipelines: proven experience designing, building, and maintaining production-grade data pipelines using Google Cloud Dataflow (Apache Beam) or similar technologies. GCP stack: hands-on expertise with BigQuery, Cloud Storage, Pub/Sub, and orchestrating workflows with Composer or Vertex …
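Several of the listings above describe Dataflow (Apache Beam) style pipelines, where a source is pushed through a chain of transforms. A minimal pure-Python sketch of that transform-chaining model (hypothetical helper names; the real Beam SDK uses `Pipeline`, `PCollection` and the `|` operator):

```python
from functools import reduce

def pipeline(source, *transforms):
    """Apply a chain of transforms to an iterable source, Beam-style."""
    return reduce(lambda data, fn: fn(data), transforms, source)

def flat_map(fn):
    """Expand each record into zero or more output items."""
    return lambda data: (item for record in data for item in fn(record))

def count_per_element(data):
    """Count occurrences of each element, like beam.combiners.Count.PerElement."""
    counts = {}
    for item in data:
        counts[item] = counts.get(item, 0) + 1
    return counts

# Word count, the canonical Beam/Dataflow example.
result = pipeline(
    ["to be or", "not to be"],
    flat_map(str.split),
    count_per_element,
)
```

On Dataflow the same shape would run distributed across workers; the sketch only illustrates the programming model the ads are asking for.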
Bath, England, United Kingdom Hybrid / WFH Options
Future
purposefulness. Experience that will put you ahead of the curve: experience using Python on Google Cloud Platform for Big Data projects, including BigQuery, Dataflow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer; SQL development skills; experience using Dataform or dbt; strength in data modeling, ETL …
Leeds, England, United Kingdom Hybrid / WFH Options
Axiom Software Solutions Limited
aspects. Knowledge of Kafka resiliency and new features like KRaft. Experience with real-time technologies such as Spark. Required skills & experience: extensive experience with Apache Kafka and real-time architecture, including event-driven frameworks. Strong knowledge of Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink, and Beam. Experience …
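The event-driven architecture this role centres on boils down to producers publishing to topics and decoupled consumers reacting to events. A minimal in-memory stand-in (hypothetical `MiniBroker` name; real systems would use Kafka via a client such as `confluent-kafka` or Kafka Streams, with persistence, partitions, and consumer groups):

```python
from collections import defaultdict

class MiniBroker:
    """In-memory stand-in for a topic-based event broker (illustrative only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a consumer callback for a topic.
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of the topic.
        for handler in self.subscribers[topic]:
            handler(event)

broker = MiniBroker()
seen = []
broker.subscribe("payments", lambda e: seen.append(e["amount"]))
broker.publish("payments", {"amount": 42})
```

The key property the listings emphasise is the decoupling: the publisher knows nothing about its consumers, which is what makes adding stream processors (Kafka Streams, Flink, Spark Streaming) to an existing topic cheap.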
development experience with Terraform or CloudFormation. Understanding of the ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data-programming framework (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform, or the …
AWS, or Azure. Experience with CI/CD pipelines for machine learning (e.g., Vertex AI). Experience with data processing frameworks and tools, particularly Apache Beam/Dataflow, is highly desirable. Knowledge of monitoring and maintaining models in production. Proficiency in employing containerization tools, including Docker, to streamline …
to build data solutions, such as SQL Server/Oracle; experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation: good knowledge of … Good understanding of cloud storage, networking and resource provisioning. It would be great if you had... Certification in GCP "Professional Data Engineer". Certification in Apache Kafka (CCDAK). Proficiency across the data lifecycle. WORKING FOR US: Our focus is to ensure we are inclusive every day, building an organisation that …
Bristol, England, United Kingdom Hybrid / WFH Options
Lloyds Bank plc
to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation: Good … understanding of cloud storage, networking, and resource provisioning. It would be great if you had... Certification in GCP “Professional Data Engineer”. Certification in Apache Kafka (CCDAK). Proficiency across the data lifecycle. Working for us: Our focus is to ensure we are inclusive every day, building an organisation …
Bristol, England, United Kingdom Hybrid / WFH Options
Lloyds Banking Group
to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation: good knowledge of … Good understanding of cloud storage, networking and resource provisioning. It would be great if you had... Certification in GCP “Professional Data Engineer”. Certification in Apache Kafka (CCDAK). Proficiency across the data lifecycle. WORKING FOR US: Our focus is to ensure we are inclusive every day, building an organisation …
with feature stores (e.g., Feast, Tecton). Knowledge of distributed training (e.g., Horovod, distributed PyTorch). Familiarity with big data tools (e.g., Spark, Hadoop, Beam). Understanding of NLP, computer vision, or time series analysis techniques. Knowledge of experiment tracking tools (e.g., MLflow, Weights & Biases). Experience with model … Familiarity with reinforcement learning or generative AI models. Tools & Technologies: Languages: Python, SQL (optionally: Scala, Java for large-scale systems) Data Processing: Pandas, NumPy, Apache Spark, Beam Model Serving: TensorFlow Serving, TorchServe, FastAPI, Flask Experiment Tracking & Monitoring: MLflow, Neptune.ai, Weights & Biases
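Much of the time-series and feature-engineering work named above reduces to window computations over a sequence. A stdlib sketch of a trailing rolling mean (hypothetical `rolling_mean` name; production code would use `pandas.Series.rolling` or a Spark window function instead):

```python
from collections import deque

def rolling_mean(values, window):
    """Trailing rolling mean over a sequence.

    A stdlib stand-in for pandas.Series.rolling(window).mean(): early
    positions average over however many values have been seen so far.
    """
    buf = deque(maxlen=window)  # keeps only the last `window` values
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out
```

For example, `rolling_mean([1, 2, 3, 4], 2)` averages each value with its predecessor. The same idea, keyed by entity and timestamp, is what a feature store materialises at scale.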