S3, Lambda, BigQuery, or Databricks. Solid understanding of ETL processes, data modeling, and data warehousing. Familiarity with SQL and relational databases. Knowledge of big data technologies such as Spark, Hadoop, or Kafka is a plus. Strong problem-solving skills and the ability to work in a collaborative team environment. Excellent verbal and written communication skills. Bachelor's degree in …
engineering, architecture, or platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design and implementation. Hands-on knowledge …
work independently and as part of a team. Preferred Qualifications: Master's degree in Computer Science, Data Science, or a related field. Experience with big data technologies such as Hadoop, Spark, or Kafka. Experience with data visualization tools such as Power BI, Tableau, or Qlik. Certifications in Azure data and AI technologies. Benefits We offer a competitive, market-aligned …
Milton Keynes, England, United Kingdom Hybrid / WFH Options
Santander
to interact with team members, stakeholders and end users, conveying technical concepts in a comprehensible manner. Skills across the following data competencies: SQL (AWS Athena/Hive/Snowflake); Hadoop/EMR/Spark/Scala; data structures (tables, views, stored procedures); data modelling - star/snowflake schemas, efficient storage, normalisation; data transformation; DevOps - data pipelines; controls - selection and …
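The star-schema data modelling named in the competencies above can be sketched with a minimal fact/dimension pair; all table and column names here are hypothetical examples, not from the listing:

```python
import sqlite3

# Illustrative star schema: one fact table referencing a dimension table.
# Table and column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (
        product_key INTEGER PRIMARY KEY,
        product_name TEXT,
        category TEXT
    );
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity INTEGER,
        revenue REAL
    );
""")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (100, 1, 3, 29.97)")

# Typical star-schema query: join the fact table to a dimension and aggregate.
row = conn.execute("""
    SELECT d.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product d ON d.product_key = f.product_key
    GROUP BY d.category
""").fetchone()
print(row)
```

A snowflake schema differs only in that the dimension tables are themselves normalised into further sub-dimension tables.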
with SQL and database technologies (incl. various vector stores and more traditional technologies, e.g. MySQL, PostgreSQL, NoSQL databases). Hands-on experience with data tools and frameworks such as Hadoop, Spark, or Kafka (an advantage). Familiarity with data warehousing solutions and cloud data platforms. Background in building applications wrapped around AI/LLM/mathematical models. Ability to scale up …
functional teams. Preferred Skills High-Performance Computing (HPC) and AI workloads for large-scale enterprise solutions. NVIDIA CUDA, cuDNN, TensorRT experience for deep learning acceleration. Big Data platforms (Hadoop, Spark) for AI-driven analytics in professional services.
Oxford, England, United Kingdom Hybrid / WFH Options
Cubiq Recruitment
pipelines and distributed systems using Airflow, Dagster, or Kedro . Expertise in cloud platforms such as AWS, GCP, or Azure , along with modern data technologies like Spark, Kafka, or Hadoop . Proficiency in programming languages such as Python, Scala, or Java , with the ability to write efficient, scalable code . Experience working with AI/ML-driven platforms and …
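Airflow, Dagster, and Kedro, named above, all model a pipeline as a directed acyclic graph of tasks. The core scheduling idea can be sketched in plain standard-library Python with a topological sort; the task names and bodies are invented for illustration, not taken from any of those tools' APIs:

```python
from graphlib import TopologicalSorter

# Minimal DAG-scheduler sketch: run each task only after its upstream
# dependencies complete, as Airflow/Dagster/Kedro do at much larger scale.
# Task names and bodies are hypothetical.
def extract():   return "raw"
def transform(): return "clean"
def load():      return "stored"

tasks = {"extract": extract, "transform": transform, "load": load}
# Map each task to the set of tasks it depends on.
deps = {"transform": {"extract"}, "load": {"transform"}}

# static_order() yields tasks so every dependency precedes its dependents.
order = list(TopologicalSorter(deps).static_order())
results = {name: tasks[name]() for name in order}
print(order)  # ['extract', 'transform', 'load']
```

The orchestrators add what this sketch omits: retries, scheduling, parallel execution of independent branches, and persistence of task state.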
Maidenhead, England, United Kingdom Hybrid / WFH Options
BookFlowGo
and deploying real-time pricing or recommendation systems Deep technical knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch), cloud platforms (AWS, GCP, Azure), and big data tools (e.g., Spark, Hadoop) Experience designing and/or implementing business-critical operational algorithms Clear communication skills; experienced in communicating with senior stakeholders and translating complex technical solutions into business impact Excellent problem …
Maidenhead, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
experience designing or delivering large-scale pricing, AI or recommendation systems. Deep technical knowledge of ML frameworks (TensorFlow, PyTorch), cloud platforms (AWS, GCP, Azure), and big data tools (Spark, Hadoop). Demonstrated success in building business-critical, real-time algorithmic solutions. Strong communication and stakeholder engagement skills – translating complexity into business value. A commercial, data-led mindset and a …
expertise and technical acumen to ensure successful delivery of complex data projects on time and within budget. Key Responsibilities: Project Management: Lead and manage legacy data platform migration (Teradata, Hadoop), data lake build, and data analytics projects from initiation to completion. Develop comprehensive project plans, including scope, timelines, resource allocation, and budgets. Monitor project progress, identify risks, and implement …
Azure Functions. Strong knowledge of scripting languages (e.g., Python, Bash, PowerShell) for automation and data transformation. Proficient in working with databases, data warehouses, and data lakes (e.g., SQL, NoSQL, Hadoop, Redshift). Familiarity with APIs and web services for integrating external systems and applications into orchestration workflows. Hands-on experience with data transformation and ETL (Extract, Transform, Load) processes.
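The ETL (Extract, Transform, Load) pattern referenced above can be illustrated in a few lines; the CSV layout, column names, and target table are hypothetical:

```python
import csv, io, sqlite3

# Minimal ETL sketch: extract CSV rows, transform values, load into SQL.
# The source data and schema are invented for illustration.
raw = "name,amount\nalice,10\nbob,20\n"

# Extract: parse the source format into records.
records = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and normalise casing (amounts converted to pence).
rows = [(r["name"].title(), int(r["amount"]) * 100) for r in records]

# Load: write the cleaned rows into the target store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (name TEXT, amount_pence INTEGER)")
db.executemany("INSERT INTO payments VALUES (?, ?)", rows)
total = db.execute("SELECT SUM(amount_pence) FROM payments").fetchone()[0]
print(total)  # 3000
```

Production pipelines follow the same three stages, swapping the in-memory pieces for sources like S3 or Kafka and sinks like Redshift or a data lake.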
familiarity with DevOps tools and concepts – e.g. Kubernetes, Git-based CI/CD, cloud infrastructure (AWS/GCP/Azure). Bonus: Exposure to tools like Elasticsearch/Kibana, Hadoop/HBase, OpenSearch, or VPN/proxy architectures. Strong grasp of software security principles, system performance optimisation, and infrastructure reliability. Experience working on large-scale, production-grade systems with …
security principles, system performance optimisation, and infrastructure reliability. Experience working on large-scale, production-grade systems with distributed architectures. Nice to Have: Exposure to tools like Elasticsearch/Kibana, Hadoop/HBase, OpenSearch, or VPN/proxy architectures. Ideal Candidate will: Bring technical vision, initiative, and a passion for exploring and implementing emerging technologies. Be a natural technical leader …
Reading, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
excellent project management skills and technical experience to ensure successful delivery of complex data projects on time and within budget. Responsibilities: Lead and manage legacy data platform migration (Teradata, Hadoop), data lake build, and data analytics projects from initiation to completion. Develop comprehensive project plans, including scope, timelines, resource allocation, and budgets. Monitor project progress, identify risks, and implement …
.NET code Experience working on distributed systems Strong knowledge of Kubernetes and Kafka Experience with Git, and Deployment Pipelines Having worked with at least one of the following stacks: Hadoop, Apache Spark, Presto Experience profiling performance issues in database systems Ability to learn and/or adapt quickly to complex issues Happy to collaborate with a wide group of …
often (in days) to receive an alert: Create Alert Supermicro is a Top Tier provider of advanced server, storage, and networking solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, Hyperscale, HPC and IoT/Embedded customers worldwide. We are amongst the fastest growing company among the Silicon Valley Top 50 technology firms. Our unprecedented global …
Slough, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
backend focus Proficiency in Java (required) and at least one other server-side language Solid hands-on experience with microservices and distributed systems Familiarity with data technologies like MySQL, Hadoop, or Cassandra Experience working with AWS services (RDS, EC2, Step Functions, Kinesis) is a plus Strong background in testing, KPIs/SLOs, and performance optimization Prior exposure to compliance …
services. We serve clients in various sectors including Financial Services, Manufacturing, Life Sciences, Healthcare, Technology, Telecom, Retail, and Public Services. Our revenue exceeds $13 billion. Location: London Skills: Apache Hadoop We seek open-source contributors to Apache projects who have a deep understanding of the Apache ecosystem, experience with Cloudera or similar distributions, and extensive knowledge of big data technologies. … Requirements: Platform engineering and application engineering experience (hands-on) Design experience of open-source platforms based on Apache Hadoop Experience integrating Infra-as-Code in platforms (from scratch) Design and architecture experience for Apache platforms in hybrid cloud environments Ability to debug and fix code within the Apache ecosystem; individual contribution to open-source projects Job description: We need … source contributors, capable of troubleshooting complex issues, and supporting the migration and debugging of critical applications like RiskFinder. They must be experts in Big Data platform development using Apache Hadoop and supporting Hadoop implementations in various environments.