its native tech stack in designing and building data & AI solutions
- Experience with data modeling, ETL processes, and data warehousing
- Knowledge of big data tools and frameworks such as Spark, Hadoop, or Kafka
Analysts
- Extensive experience using Python, SQL and AWS
- Define and enforce data architecture across mesh and domain-driven data products
- Implement and govern real-time/batch processing (Kafka, Spark, Glue)
- Ensure strong metadata, cataloguing, and lineage practices across the enterprise
- Lead teams of engineers across global hubs, mentoring and supporting high standards
- Knowledge of data governance & security
ETL processes is mandatory
- Experience building components for enterprise data platforms (data warehouses, Operational Data Stores, API access layers, file extracts, user queries)
- Hands-on experience with SQL, Python, Spark, Kafka
- Excellent communication skills, with proficiency in verbal and written English
About this Job: This role involves developing and maintaining real-time data processing pipelines for enterprise customer data.
infrastructure from the ground up. Familiarity with AWS services like S3, EMR, and technologies like Terraform and Docker. Know the ins and outs of current big data frameworks like Spark or Flink, but this is not an absolute requirement - you're a quick learner! This role is open to individuals based in or willing to relocate to London.
working practices
- CI/CD tooling
- Scripting experience (Python, Perl, Bash, etc.)
- ELK (Elastic stack)
- JavaScript
- Cypress
- Linux experience
- Search engine technology (e.g., Elasticsearch)
- Big Data technology experience (Hadoop, Spark, Kafka, etc.)
- Microservice and cloud-native architecture
Desirable skills:
- Able to demonstrate experience of troubleshooting and diagnosis of technical issues
- Able to demonstrate excellent team-working skills
- Strong interpersonal
align tech strategy with business objectives and cost efficiency.
- Security & Compliance: Strong understanding of GDPR, API authentication, and observability
- Big Data: Experience with data lakes, warehouses, and tools like Spark, Kafka, and Airflow
- ETL Expertise: Ability to evaluate and optimize data ingestion and transformation pipelines
- DevOps & CI/CD: Hands-on experience with Jenkins, GitHub Actions, Terraform, and CloudFormation
London, South East, England, United Kingdom Hybrid / WFH Options
Reed.co.uk
to align technology with business objectives and cost efficiency.
- Security & Compliance Knowledge: GDPR, API authentication, and observability best practices
- Big data processing: Understanding data lakes, warehouses, and tools like Spark, Kafka, and Airflow
- ETL Pipelines: Ability to evaluate data ingestion, transformation, and cleaning processes
- DevOps & CI/CD: Hands-on knowledge of Jenkins, GitHub Actions, Terraform, and CloudFormation
What
handle multiple assignments or projects. Familiarity with version control tools (e.g., Git) and Agile work practices. Exposure to cloud platforms (e.g., AWS, GCP, Azure) or big data tools (e.g., Spark) is a plus. Retail/apparel industry knowledge is advantageous.
looking for a Senior Data Scientist to help drive innovation and the use of machine learning technology to support key business use cases, leveraging the Azure platform, Python, Spark, GenAI/LLMs, Databricks, SQL, DevOps and geospatial data knowledge to create end-to-end products. Ideally, you'll have experience from within the energy industry.
Plotly, Tableau)
- Machine Learning Fundamentals (e.g., Supervised, Unsupervised Learning)
- Machine Learning Algorithms (e.g., Regression, Classification, Clustering, Decision Trees, SVMs, Neural Networks)
- Model Evaluation and Validation
- Big Data Technologies (e.g., Spark, Hadoop - conceptual understanding)
- Database Querying (e.g., SQL)
- Cloud-based Data Platforms (e.g., AWS SageMaker, Google AI Platform, Azure ML)
- Ethics in Data Science and AI
Person Specification: Experience supporting
solutions in Python and related ML libraries
- Strong background in applied machine learning, model development and data engineering
- Experience with cloud environments (Azure, AWS, GCP) and tools such as Spark, Hive, Redshift
- Demonstrated ability to lead cross-functional teams and mentor junior practitioners
- Ability to communicate complex technical concepts clearly to non-technical audiences
Bonus Points For: Participation in
Engineering using a high-level language like Go, Java, JavaScript, Python
- Distributed software architecture exposure in high-volume production scenarios
- Working with Data Mesh and Big Data technologies such as EMR, Spark, Databricks
- Designing, tracking and testing to SLOs, and Chaos Engineering to Error Budgets
- Implementing Business Continuity (BCP) and Disaster Recovery (DRP) plans, including tracking RTO and RPO
- Cryptocurrency domain
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Northrop Grumman Corp. (JP)
developing & deploying scalable backend systems. Familiarity with CI/CD, containerisation, deployment technologies & cloud platforms (Jenkins, Kubernetes, Docker, AWS), or familiarity with Big Data and Machine Learning technologies (NumPy, PyTorch, TensorFlow, Spark). Excellent communication, collaboration & problem-solving skills, ideally with some experience in agile ways of working. Security clearance: You must be able to gain and maintain the highest level
management and monitoring. Hands-on experience with AWS. A good grasp of IaC (Infrastructure-as-Code) tools like Terraform and CloudFormation. Previous exposure to additional technologies like Python, Spark, Docker, Kubernetes is desirable. Ability to develop across a diverse technology stack, and willingness and ability to take on new technologies. Demonstrated experience participating on cross-functional teams in
with modern inference servers and API gateways for AI applications
Nice to have:
- Infrastructure as Code experience with Terraform, Ansible, or CloudFormation
- Distributed computing experience with Databricks, Ray, or Spark for large-scale AI workloads
- AI safety & governance experience with model evaluation, bias detection, and responsible AI practices
- Multi-modal AI experience with vision-language models, speech processing, or
Research/Statistics or other quantitative fields. Experience in NLP, image processing and/or recommendation systems. Hands-on experience in data engineering, working with big data frameworks like Spark/Hadoop. Experience in data science for e-commerce and/or OTA. We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance
survey exchange platforms. Knowledge of dynamic pricing models. Experience with Databricks and using it for scalable data processing and machine learning workflows. Experience working with big data technologies (e.g., Spark, PySpark). Experience with online market research methods/products.
Additional Information
Our Values: Collaboration is our superpower. We uncover rich perspectives across the world. Success happens together. We
Agile delivery
- Advanced knowledge of AWS data services (e.g. S3, Glue, EMR, Lambda, Redshift)
- Expertise in big data technologies and distributed systems
- Strong coding and optimisation skills (e.g. Python, Spark, SQL)
- Data quality management and observability
- Strategic thinking and solution architecture
- Stakeholder and vendor management
- Continuous improvement and innovation mindset
- Excellent communication and mentoring abilities
Experience you'd be
or Google Cloud Platform (GCP). Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at least one other programming language