management, code repositories, and automation. Requirements: 5+ years' experience in the data and analytics domain. Previous management experience is preferred. Strong expertise in Databricks (Spark, Delta Lake, Notebooks). Advanced knowledge of SQL development. Familiarity with Azure Synapse for orchestration and analytics. Working experience with Power BI for reporting More ❯
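As an illustration of the Databricks stack this listing centres on, here is a minimal PySpark sketch: read a Delta table, aggregate it, and write the result back as Delta. The table and column names are invented, and the session is assumed to come preconfigured as it would in a Databricks notebook.

```python
# Minimal Databricks-style sketch: read a Delta table, aggregate,
# and persist the result as a managed Delta table.
# Table/column names are illustrative, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided as `spark` in a notebook

orders = spark.read.table("sales.orders")  # assumed Delta table

daily_revenue = (
    orders
    .where(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Persist as Delta so downstream tools (e.g. Power BI via a SQL endpoint) can query it.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("sales.daily_revenue")
```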
elevate technology and consistently apply best practices. Qualifications for Software Engineer: Hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like Ansible, Chef, Puppet, etc. Experience with More ❯
etc. Cloud Computing: AWS, Azure, Google Cloud for scalable data solutions. API Strategy: Robust APIs for seamless data integration. Data Architecture: Finbourne LUSID, Hadoop, Spark, Snowflake for managing large volumes of investment data. Cybersecurity: Strong data security measures, including encryption and IAM. AI and Machine Learning: Predictive analytics, risk More ❯
data architecture , including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with modern data tools and technologies such as Spark, Databricks, Kafka, or Snowflake (bonus). Knowledge of cloud security, networking, and cost optimization as it relates to data platforms. Experience in total cost More ❯
of Java and its ecosystems, including experience with popular Java frameworks (e.g. Spring, Hibernate). Familiarity with big data technologies and tools (e.g. Hadoop, Spark, NoSQL databases). Strong experience with Java development, including design, implementation, and testing of large-scale systems. Experience working on public sector projects and More ❯
Design and Maintenance, Apps, Hive Metastore Management, Network Management, Delta Sharing, Dashboards, and Alerts. Proven experience working with big data technologies, i.e., Databricks and Apache Spark. Proven experience working with Azure data platform services, including Storage, ADLS Gen2, Azure Functions, Kubernetes. Background in cloud platforms and data architectures, such … experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, Data Lineage, Quality More ❯
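To ground the Lakehouse/ELT skills listed above, here is a minimal sketch of a Delta Lake upsert of the kind such pipelines run as a batch step. It assumes the open-source delta-spark package is available; the table, column, and ADLS Gen2 storage names are all invented.

```python
# Sketch of a Lakehouse-style upsert (ELT batch step) using the Delta Lake MERGE API.
# Assumes the delta-spark package is on the classpath; all names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New/changed records landed in ADLS Gen2 (hypothetical path).
updates = spark.read.parquet("abfss://landing@account.dfs.core.windows.net/customers/")

target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # update existing customers in place
    .whenNotMatchedInsertAll()   # insert customers seen for the first time
    .execute()
)
```

MERGE is the usual choice here because it makes the batch idempotent: re-running the same load does not duplicate rows.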
+ 10% bonus + benefits. Purpose: Build and maintain large, scalable data lakes, processes and pipelines. Tech: Python, Iceberg/Kafka, Spark/Glue, CI/CD. Industry: Financial services/securities trading. Immersum continue to support a leading SaaS securities trading platform, who are hiring their first Data … Infra tooling using Terraform, Ansible and Jenkins whilst automating everything with Python. Tech (experience in any listed is advantageous): Python; Cloud: AWS; Lakehouse: Apache Spark or AWS Glue; Cloud-native storage: Iceberg, RDS, Redshift, Kafka; IaC: Terraform, Ansible; CI/CD: Jenkins, GitLab; Other platforms such as More ❯
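The Kafka-to-Iceberg pipeline this role describes typically looks something like the following Spark Structured Streaming sketch. It assumes the Kafka and Iceberg Spark packages are on the classpath and that a catalog named `lake` is configured; the topic, schema, and paths are invented.

```python
# Sketch: stream trade events from Kafka into an Apache Iceberg table.
# Broker, topic, schema, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

schema = StructType().add("symbol", StringType()).add("price", DoubleType())

trades = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "trades")
    .load()
    # Kafka values arrive as bytes; parse the JSON payload into columns.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("v"))
    .select("v.*")
)

query = (
    trades.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "s3://bucket/checkpoints/trades")
    .toTable("lake.market.trades")
)
query.awaitTermination()
```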
with toolkits such as BioPython. AI protein dynamics. Integrative structural modelling. Experience with data curation and processing from heterogeneous sources; familiarity with tools like Apache Spark or Hadoop. Proficiency with cloud platforms (AWS, GCP, Azure). Familiarity with major machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch). More ❯
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Investigo
advanced visualisations, ML model interpretation, and KPI tracking. Deep knowledge of feature engineering, model deployment, and MLOps best practices. Experience with big data processing (Spark, Hadoop) and cloud-based data science environments. Other: Ability to integrate ML workflows into large-scale data pipelines. Strong experience in data preprocessing, feature More ❯
Technology, or related field. Proficiency in software engineering with experience in Java & Spring or other major programming languages. Preferred Qualifications: Experience with Spring Boot, Spark (Big Data), and message bus architecture. Familiarity with containerisation (e.g., Kubernetes), AWS Cloud, and CI/CD pipelines (Jenkins). If you meet the above criteria More ❯
Deep expertise in machine learning, NLP, and predictive modelling. Proficient in Python or R, cloud platforms (AWS, GCP, Azure), and big data tools (e.g. Spark). Strong business acumen, communication skills, and stakeholder engagement. If this role is of interest, please apply here. Please note: this role cannot offer More ❯
with attention to detail and accuracy. Adept at queries, report writing, and presenting findings. Experience working with large datasets and distributed computing tools (Hadoop, Spark, etc.) Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests, etc.). Experience with data profiling tools and processes. Knowledge More ❯
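For a flavour of the statistical concepts this listing names (regression, distribution properties, statistical tests), here is a small self-contained example on synthetic data; none of it comes from the posting itself.

```python
# Illustration of common analyst statistics: OLS regression and a two-sample t-test.
# The data is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(scale=0.5, size=500)

# Ordinary least-squares regression: slope, fit quality, and significance.
fit = stats.linregress(x, y)
print(f"slope={fit.slope:.2f}, r^2={fit.rvalue**2:.3f}, p={fit.pvalue:.2g}")

# Two-sample t-test: is the difference in group means significant?
a = rng.normal(loc=0.0, size=300)
b = rng.normal(loc=0.2, size=300)
t, p = stats.ttest_ind(a, b)
print(f"t={t:.2f}, p={p:.3f}")
```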
skills. Experience working with real-world data sets and building scalable models from big data. Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow. Experience with large-scale distributed systems. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you More ❯
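Of the modeling tools named above, Spark MLlib is the one built for "scalable models from big data"; a minimal pipeline sketch follows. The input path and column names are placeholders.

```python
# Sketch of a Spark MLlib pipeline: assemble feature columns and fit a
# logistic regression at DataFrame scale. Paths and columns are invented.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("s3://bucket/training_data/")  # columns: f1, f2, f3, label

assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("label", "prediction").show(5)
```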
London, South East England, United Kingdom Hybrid / WFH Options
Merlin Entertainments
. Experience in using cloud-native services for data engineering and analytics. Experience with distributed systems, serverless data pipelines, and big data technologies (e.g., Spark, Kafka). Ability to define and enforce data governance standards. Experience in providing architectural guidance, mentorship and leading cross-functional discussions to align on More ❯
Stevenage, England, United Kingdom Hybrid / WFH Options
Tata Consultancy Services
tools (e.g., Matplotlib, Seaborn, Tableau). Ability to work independently and lead projects from inception to deployment. Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure) is desirable. Rewards & Benefits TCS is consistently voted a Top Employer in the UK and globally. Our More ❯
London, South East England, United Kingdom Hybrid / WFH Options
Kantar Media
technologies. Experienced in writing and running SQL and Bash scripts to automate tasks and manage data. Skilled in installing, configuring, and managing Hive on Spark with HDFS. Strong analytical skills with the ability to troubleshoot complex issues and analyze large volumes of text or binary data in Linux or More ❯
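An adjacent illustration of querying Hive tables on HDFS from Spark: the sketch below enables Hive support in a Spark session and runs plain SQL against the metastore. (Configuring Hive itself to use Spark as its execution engine happens in hive-site.xml and is not shown; the database and column names are invented.)

```python
# Query Hive-managed tables on HDFS from a Hive-enabled Spark session.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-query")
    .enableHiveSupport()  # picks up hive-site.xml / the Hive metastore
    .getOrCreate()
)

# Hive tables become queryable with plain SQL; results come back as DataFrames.
events = spark.sql("""
    SELECT event_type, COUNT(*) AS n
    FROM logs.raw_events
    WHERE dt = '2024-01-01'
    GROUP BY event_type
""")
events.show()
```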
on a contract basis. You will help design, develop, and maintain secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. These roles support our client's team in Worcester (fully onsite) and require active UK DV clearance. Key Responsibilities: Design, develop, and maintain … secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Implement data ingestion, transformation, and integration processes, ensuring data quality and security. Collaborate with data architects and security teams to ensure compliance with security policies and data governance standards. Manage and monitor large-scale … Engineer in secure or regulated environments. Expertise in the Elastic Stack (Elasticsearch, Logstash, Kibana) for data ingestion, transformation, indexing, and visualization. Strong experience with Apache NiFi for building and managing complex data flows and integration processes. Knowledge of security practices for handling sensitive data, including encryption, anonymization, and access More ❯
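The ingestion step in such pipelines is usually handled by Logstash or NiFi processors, but the same operation from the official Elasticsearch Python client looks roughly like this; the endpoint, credentials, index, and fields are all hypothetical.

```python
# Sketch: bulk-index documents into Elasticsearch with the official Python client.
# Endpoint, API key, index name, and document fields are placeholders.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("https://localhost:9200", api_key="...")

docs = [
    {"timestamp": "2024-01-01T00:00:00Z", "source": "sensor-a", "level": "INFO"},
    {"timestamp": "2024-01-01T00:00:05Z", "source": "sensor-b", "level": "WARN"},
]

# helpers.bulk batches documents into a single bulk request.
actions = ({"_index": "events", "_source": d} for d in docs)
ok, errors = helpers.bulk(es, actions, raise_on_error=False)
print(f"indexed={ok}, errors={len(errors)}")
```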
extracting value from large datasets. Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js. Experience with Apache Spark/Elastic MapReduce (EMR). Experience with continuous delivery, infrastructure as code More ❯
London, South East England, United Kingdom Hybrid / WFH Options
Oliver Bernard
contribute to architectural decisions. What We’re Looking For: Strong Python programming skills (5+ years preferred). Deep experience with distributed systems (e.g., Kafka, Spark, Ray, Kubernetes). Hands-on work with big data technologies and architectures. Solid understanding of concurrency, fault tolerance, and data consistency. Comfortable in a More ❯
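On the fault-tolerance and consistency point above, a standard Kafka pattern is to disable auto-commit and commit offsets only after processing succeeds, giving at-least-once semantics. A minimal sketch with the confluent-kafka package follows; broker, topic, and the handler are placeholders.

```python
# At-least-once Kafka consumption: commit offsets only after successful processing.
# Assumes the confluent-kafka package; broker/topic names are hypothetical.
from confluent_kafka import Consumer

def process(payload: bytes) -> None:
    # Placeholder for real work (parse, transform, write downstream).
    print(payload)

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "pipeline-workers",
    "enable.auto.commit": False,    # commit manually, after processing
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            continue  # real code would log or route to a dead-letter topic
        process(msg.value())
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()
```

If the worker crashes mid-batch, uncommitted messages are redelivered on restart, so processing must be idempotent — the usual trade-off behind this design.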
datasets, data wrangling, and data preprocessing. Ability to work independently and lead projects from inception to deployment. Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure). Preferred Skills: MSc or PhD in Computer Science, Artificial Intelligence, or related field. ADDITIONAL NOTES: Ability More ❯
/structured data handling. Ability to work independently and collaboratively in cross-functional teams. Nice to have: Experience with big data tools such as Spark, Hadoop, or MapReduce. Familiarity with data visualisation tools like QuickSight, Tableau, or Looker. Exposure to microservice APIs and public cloud ecosystems beyond AWS. AWS More ❯