A passion for building and participating in highly effective teams and development processes. Strong debugging, testing/validation, and analytics/SQL skills. Apache … Experience working with Agile methodologies (Scrum) and cross-functional teams. Desirable: Experience in the CDN, datacenter, hyperscaler, media, news, and/or entertainment industry. Knowledge …
Python. Develop real-time streaming features using big data tools such as Spark. SKILLS AND EXPERIENCE Extensive experience using big data tools such as Apache Spark. Experience working with and maintaining AWS databases. Strong Python coding background. Good working knowledge of SQL. THE BENEFITS Generous holiday plan. …
of the SVOT platform to ensure high availability and accessibility. Experience & Skills: At least some commercial hands-on experience with Azure data services (e.g., Apache Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such as …
Big Data, and Cloud Technologies. Hands-on expertise in at least 2 Cloud platforms (Azure, AWS, GCP, Snowflake, Databricks) and Big Data processing (e.g., Apache Spark, Beam). Proficiency in key technologies like BigQuery, Redshift, Synapse, Pub/Sub, Kinesis, Event Hubs, Kafka, Dataflow, Airflow, and ADF. Strong ETL …
London, South East England, United Kingdom Hybrid / WFH Options
DATAHEAD
ensure high availability and accessibility. Experience & Skills: Strong experience in data engineering. At least some commercial hands-on experience with Azure data services (e.g., Apache Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such as …
London, South East England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions
ll Bring: Strong experience in AWS cloud platform architecture and solving complex business issues. Proficiency in Java programming and Linux, with desirable knowledge of Apache NiFi, Node.js, JSON/XML, Jenkins, Maven, BitBucket, or JIRA. Hands-on experience with scripting (Shell, Bash, Python) and a solid understanding of Linux …
engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog). Familiarity with big data technologies like Apache Spark, Hadoop, or similar. ETL/ELT tools and creating common data sets across on-prem (IBM DataStage ETL) and cloud data stores. Leadership & Strategy …
Central London, London, United Kingdom Hybrid / WFH Options
Cathcart Technology
the user experience. Key skills: Senior Data Scientist experience; commercial experience in Generative AI and recommender systems; strong Python and SQL experience; Spark/Apache Airflow; LLM experience; MLOps experience; AWS. Additional information: This role offers a strong salary of up to £95,000 (depending on experience/skill …
engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog). Familiarity with big data technologies like Apache Spark, Hadoop, or similar. ETL/ELT tools and creating common data sets across on-prem (IBM DataStage ETL) and cloud data stores. Leadership …
City of London, London, Tottenham Court Road, United Kingdom Hybrid / WFH Options
Cathcart Technology
the user experience. Key skills: Senior Data Scientist experience; commercial experience in Generative AI and recommender systems; strong Python and SQL experience; Spark/Apache Airflow; LLM experience; MLOps experience; AWS. Additional information: This role offers a strong salary of up to £95,000 (depending on experience/skill …
Product or Domain Expertise: Blend of technical expertise with 5+ years of experience, analytical problem-solving, and collaboration with cross-functional teams. Azure DevOps; Apache Spark, Python; strong SQL proficiency; data modeling understanding; ETL processes, Azure Data Factory; Azure Databricks knowledge; familiarity with data warehousing; big data technologies; Data …
to adopt in order to enhance our platform. What you'll do: Develop across our evolving technology stack - we're using Python, Java, Kubernetes, Apache Spark, Postgres, ArgoCD, Argo Workflow, Seldon, MLFlow and more. We are migrating to the AWS cloud and adopting many services that are available in that …
Technical requirements: Highly proficient in Python. Experience working with data lakes; experience with Spark, Databricks. Understanding of common data transformation and storage formats, e.g. Apache Parquet. Good understanding of cloud environments (ideally Azure) and workflow management systems (e.g. Dagster, Airflow, Prefect). Follow best practices like code review, clean …
in production. Strong understanding of ML models, how they work, and when to apply them effectively. Proficiency in Python and SQL, with experience in Apache Spark & Airflow (ideal but not required). Hands-on experience with ML frameworks (TensorFlow, PyTorch, Scikit-Learn) and cloud platforms (AWS, GCP, or Azure …
Dagster. Good understanding of cloud environments (ideally Azure), distributed computing, and scaling workflows and pipelines. Understanding of common data transformation and storage formats, e.g. Apache Parquet. Awareness of data standards such as GA4GH and FAIR. Exposure to genotyping and imputation is highly advantageous. Benefits: Competitive base salary; generous …
Learning (ML): Deep understanding of machine learning principles, algorithms, and techniques. Experience with popular ML frameworks and libraries like TensorFlow, PyTorch, scikit-learn, or Apache Spark. Proficiency in data preprocessing, feature engineering, and model evaluation. Knowledge of ML model deployment and serving strategies, including containerization and microservices. Familiarity with …
AWS, Azure, or GCP) for ML model deployment. Knowledge of MLOps practices and tools for experiment tracking. Experience with big data processing frameworks like Apache Spark. Knowledge of NoSQL databases (MongoDB, Elasticsearch) for handling unstructured data. Experience with data versioning and feature stores for machine learning. Proficiency in model …