London, England, United Kingdom Hybrid / WFH Options
Harnham
DevOps experience in CI/CD. Experience using TensorFlow, Kubernetes, MLflow, Kafka, and Airflow. Experience using Python is a must (tools like AWS and Spark are beneficial). Excellent communication skills and strong team and colleague engagement. A keen interest in problem-solving and using scalable machine learning to solve the …
ideally in a start-up or scale-up. - Machine learning libraries and frameworks (TensorFlow, PyTorch, scikit-learn). - Python. - Big data processing tools (e.g., Spark). The role offers a salary range of £70-100K depending on experience. The successful candidate must be able to work from …
test cases. Assist in backlog grooming. Key Skills/Experience: Extensive experience in developing Big Data pipelines in the cloud using technologies such as Apache Spark. Expertise in performing complex data transformations using Spark SQL queries. Experience orchestrating data pipelines using Apache Airflow. Proficiency in Git-based …
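As an illustration of the kind of Spark SQL transformation work this listing describes, here is a minimal sketch; the table names, columns, and S3 paths are hypothetical and not taken from the role:

```python
# Illustrative sketch only: a Spark SQL transformation of the kind the listing
# describes. Source paths, tables, and columns (orders, customers, daily_revenue)
# are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-transform-example").getOrCreate()

# Register two source datasets as temp views so they can be joined in SQL.
spark.read.parquet("s3://example-bucket/orders/").createOrReplaceTempView("orders")
spark.read.parquet("s3://example-bucket/customers/").createOrReplaceTempView("customers")

daily_revenue = spark.sql("""
    SELECT c.region,
           DATE(o.order_ts)     AS order_date,
           SUM(o.amount)        AS revenue,
           COUNT(DISTINCT o.id) AS order_count
    FROM orders o
    JOIN customers c ON o.customer_id = c.id
    WHERE o.status = 'COMPLETED'
    GROUP BY c.region, DATE(o.order_ts)
""")

# Write the aggregated result partitioned by date for downstream consumers.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/marts/daily_revenue/"
)
```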
SQL Server, Sybase, Snowflake) Document databases (e.g. Mongo, ArangoDB, Couchbase, Solr) Big Data (e.g. Hadoop ecosystem, Bigtable) Data streaming (e.g. Kafka, Flink, Pulsar, Beam, Spark) Cloud databases (e.g. Snowflake, CockroachDB) Other database genres (e.g. Graph, Columnar, time series) In return, we’ll give you… A competitive basic salary … scheme A high-spec laptop (of course!) Need more reasons? Here are a few more... Work with some of the most exciting new technologies Spark off co-workers who’ll challenge your thinking and help you to achieve your potential Deal openly and honestly with customers Benefit from a …
and libraries for geospatial data analysis & modelling. Experience with cloud computing platforms such as Azure, AWS, Google Cloud, and distributed computing frameworks such as Apache Spark, for processing large geospatial datasets. Familiarity with geospatial databases and data visualisation tools such as Tableau, QGIS & ArcGIS. Knowledge of satellite imagery analysis …
need broad expertise across various areas of the technology/software domain. Proficiency in AWS or Big Data, Hadoop or other SQL databases, Lucene, Spark, web app development (JavaScript, Node.js), Docker, Jenkins, Git, Python, or Ruby would be highly beneficial. Key Responsibilities: Meet with clients throughout the sales and …
major advantage. Experience in building and enhancing compute, storage, and data platforms with exposure to open-source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate and the like. Hands-on experience with infrastructure-as-code tools and automation, such as Terraform, Ansible, or Helm. The role: Tech Lead responsible …
or Rust. Experience in building and enhancing compute, storage, and data platforms with exposure to open-source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate, etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source …
London, Liverpool, Merseyside, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions
rate of £250-£400, falling inside IR35 regulations. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes using AWS, Databricks, Python, Spark, and SQL. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions. Optimize and troubleshoot data … Glue). Hands-on experience with Databricks for data processing and analytics. Proficient in Python programming for data manipulation and automation. Solid understanding of Apache Spark for big data processing. Strong SQL skills for data querying, transformation, and analysis. Excellent problem-solving abilities and attention to detail. Ability …
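For illustration only, a minimal Databricks-style ETL step in PySpark matching the responsibilities above; the bucket, columns, and table names are hypothetical:

```python
# Illustrative sketch only, not from the listing: read raw files, clean and
# transform them, and land the result in a managed table. Paths, columns, and
# the target table name are hypothetical; Delta output assumes a Databricks or
# Delta Lake-enabled cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("s3://example-raw-bucket/events/2024/*.csv")
)

cleaned = (
    raw.dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_ts").isNotNull())
)

# Append the cleaned batch to a Delta table for analytics consumers.
cleaned.write.format("delta").mode("append").saveAsTable("analytics.events_clean")
```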
As a Data Architect, you'll lead the development of Java and Python projects, design API integrations using Spark, and collaborate with clients and internal teams to translate business requirements into high- and low-level designs. You'll also define architecture and technical designs, create data flows and integrations … users and client teams. Stay updated with the latest trends and best practices. Qualifications: Expertise in Java and Python development (Essential). Experience with Spark or Hadoop (Essential). Knowledge of Trino or Airflow (Desirable). Proven ability to design and implement scalable and secure solutions. Excellent communication and …
London (city), London, England Hybrid / WFH Options
T Rowe Price
or similar, with 6+ years of professional experience. A good understanding of modern lakehouse architectures and corresponding technologies, such as Dremio, Snowflake, Iceberg, (Py)Spark/Glue/EMR, dbt, and Airflow/Dagster. Experience with cloud providers. Familiarity with AWS S3, ECS and EC2/Fargate would be …
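As a hedged sketch of the lakehouse pattern this listing mentions, the snippet below writes an Apache Iceberg table from PySpark; the catalog name, warehouse path, and table are hypothetical, and it assumes a Spark 3.x cluster with the Iceberg runtime available:

```python
# Illustrative sketch only: writing a table in an Iceberg-based lakehouse from
# PySpark. Assumes the iceberg-spark-runtime jar is on the cluster classpath;
# catalog, warehouse path, and table names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-warehouse/")
    .getOrCreate()
)

trades = spark.read.parquet("s3://example-raw-bucket/trades/")

# DataFrameWriterV2: create or replace an Iceberg table in the 'lake' catalog.
trades.writeTo("lake.markets.trades").using("iceberg").createOrReplace()
```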
analytical abilities. Preferred Skills: Experience with cloud databases (e.g., AWS, Azure, Google Cloud Platform). Knowledge of big data tools and frameworks (e.g., Hadoop, Spark). Certification in database management (e.g., Microsoft Certified: Azure Data Engineer Associate). What We Offer: Competitive salary and comprehensive benefits package. Opportunity to …
tooling. Scripting experience (Python, Perl, Bash, etc.). ELK (Elastic stack). JavaScript. Cypress. Linux experience. Search engine technology (e.g., Elasticsearch). Big Data Technology experience (Hadoop, Spark, Kafka, etc.). Microservice and cloud-native architecture. Desirable Skills: Able to demonstrate experience of troubleshooting and diagnosis of technical issues. Able to demonstrate excellent …
South East London, London, United Kingdom Hybrid / WFH Options
Stepstone UK
Bachelor's degree in Computer Science or a related field (Master's degree preferred). Nice to have: experience with LLMs, Vector Databases, AWS EMR, Spark, and Python. Our commitment: Equal opportunities are important to us. We believe that diversity and inclusion at The Stepstone Group are critical to our …
analytics. The Client would also like to see experience of managing and leading a team of Data Scientists. Should have experience of Scala/Spark and Hadoop. Initially this is a 3-month contract assignment in Canary Wharf, with the likelihood that it will extend beyond that point. Location …
and AI models. Data Engineer Required Experience: Data engineering experience (2+ years). Cloud platform proficiency (e.g., AWS, Azure, GCP). Data pipeline development (e.g., Airflow, Apache Spark). SQL proficiency, database design. Visualization tools knowledge (e.g., Tableau, Power BI, Looker). Data Engineer Application Process: This is a 1-year contract requirement …
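As a rough sketch of the pipeline development skills listed above, here is a minimal Airflow DAG that submits a daily Spark job; the DAG id, script path, and connection id are hypothetical, and it assumes Airflow 2.x with the apache-airflow-providers-apache-spark package installed:

```python
# Illustrative sketch only: a daily Airflow DAG that submits a PySpark job.
# DAG id, application path, and connection id are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ syntax; older versions use schedule_interval
    catchup=False,
) as dag:
    transform_events = SparkSubmitOperator(
        task_id="transform_events",
        application="/opt/jobs/transform_events.py",  # hypothetical PySpark script
        conn_id="spark_default",
    )
```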