e.g., Kubernetes). Preferred Skills: Experience with feature stores (e.g., Feast, Tecton). Knowledge of distributed training (e.g., Horovod, distributed PyTorch). Familiarity with big data tools (e.g., Spark, Hadoop, Beam). Understanding of NLP, computer vision, or time series analysis techniques. Knowledge of experiment tracking tools (e.g., MLflow, Weights & Biases). Experience with model explainability techniques (e.g., SHAP More ❯
functional teams. Preferred Skills: High-Performance Computing (HPC) and AI workloads for large-scale enterprise solutions. NVIDIA CUDA, cuDNN, TensorRT experience for deep learning acceleration. Big Data platforms (Hadoop, Spark) for AI-driven analytics in professional services. Please share your CV at payal.c@hcltech.com More ❯
science Proven experience as a Data Scientist, with a focus on AI and machine learning, including hands-on experience with Gen AI technologies. Experience with big data technologies (e.g., Hadoop, Spark). Knowledge of cloud platforms (e.g., AWS, Azure, GCP). Knowledge of financial instruments, markets, and risk management. Excellent problem-solving skills and attention to detail. Strong communication More ❯
e.g., cloud, artificial intelligence, machine learning, mobile, etc.) Experience in Computer Science, Engineering, Mathematics, or a related field and expertise in technology disciplines Exposure to big data frameworks (Spark, Hadoop etc.) used for scalable distributed processing Ability to collaborate effectively with Data Scientists to translate analytical insights into technical solutions Preferred Qualifications, Capabilities, And Skills Familiarity with No SQL More ❯
Server, MySQL) and non-relational databases (e.g., MongoDB, Cassandra) Experience with AWS S3 and other AWS services related to big data solutions Hands-on experience with big data tooling (Hadoop, Spark, etc.) for processing large datasets In-depth understanding of data security best practices, including encryption, access controls, and compliance standards Familiarity with ETL frameworks and the ability to More ❯
Milton Keynes, England, United Kingdom Hybrid / WFH Options
Santander
to interact with team members, stakeholders and end users, conveying technical concepts in a comprehensible manner Skills across the following data competencies: SQL (AWS Athena/Hive/Snowflake); Hadoop/EMR/Spark/Scala; Data structures (tables, views, stored procedures); Data Modelling - star/snowflake schemas, efficient storage, normalisation; Data Transformation; DevOps - data pipelines; Controls - selection and More ❯
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
As a Lead Data Engineer or Architect at Made Tech, you'll play a pivotal role in helping public sector organisations become truly data-led, by equipping them with robust More ❯
orchestration tools like Apache Airflow or Cloud Composer. Hands-on experience with one or more of the following GCP data processing services: Dataflow (Apache Beam), Dataproc (Apache Spark/Hadoop), or Composer (Apache Airflow). Proficiency in at least one scripting/programming language (e.g., Python, Java, Scala) for data manipulation and pipeline development. Scala is mandated in some More ❯
evaluating exciting new technologies to design and build scalable real-time data applications. Spanning the full data lifecycle and experience using a mix of modern and traditional data platforms (e.g. Hadoop, Kafka, GCP, Azure, Teradata, SQL Server) you'll get to work building capabilities with horizon-expanding exposure to a host of wider technologies and careers in data. Helping in … and non-relational databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation Good knowledge of containers (Docker, Kubernetes etc More ❯
Guildford, Surrey, United Kingdom Hybrid / WFH Options
Actica Consulting Limited
build scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet … R; Collaborative, team-based development; Cloud analytics platforms e.g. relevant AWS and Azure platform services; Data tools hands-on experience with Palantir ESSENTIAL; Data science approaches and tooling e.g. Hadoop, Spark; Data engineering approaches; Database management, e.g. MySQL, Postgres; Software development methods and techniques e.g. Agile methods such as SCRUM; Software change management, notably familiarity with git; Public sector More ❯
Engineering, Mathematics, Finance, etc. Proficiency in Python, SQL , and one or more: R, Java, Scala Experience with relational/NoSQL databases (e.g., PostgreSQL, MongoDB) Familiarity with big data tools (Hadoop, Spark, Kafka), cloud platforms (Azure, AWS, GCP), and workflow tools (Airflow, Luigi) Bonus: experience with BI tools , API integrations , and graph databases Why Join Us? Work with large-scale More ❯
with SQL and database technologies (incl. various Vector Stores and more traditional technologies, e.g. MySQL, PostgreSQL, NoSQL databases). - Hands-on experience with data tools and frameworks such as Hadoop, Spark, or Kafka (advantage). - Familiarity with data warehousing solutions and cloud data platforms. - Background in building applications wrapped around AI/LLM/mathematical models. - Ability to scale up More ❯
Telford, Shropshire, West Midlands, United Kingdom Hybrid / WFH Options
JLA Resourcing Ltd
solving and communication skills, including the ability to convey complex concepts to non-technical stakeholders. Desirable (but not essential): Experience with SSIS, AWS or Azure Data Factory. Familiarity with Hadoop, Jenkins, or DevOps practices including CI/CD. Cloud certifications (Azure or AWS). Knowledge of additional programming languages or ETL tools. This is a fantastic opportunity to take More ❯
West Bromwich, England, United Kingdom Hybrid / WFH Options
ADLIB Recruitment | B Corp™
both on-premise and cloud-based data systems Clear communicator, able to translate complex data concepts to cross-functional teams Bonus points for experience with: Big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams Genomic data formats and tools Cold and hot storage management, ZFS/RAID systems, or tape storage AI/LLM tools to More ❯
Qualifications: PhD degree in Computer Science, Engineering, Mathematics, Physics or a related field. Hands-on experience with LLMs, RAG, LangChain, or LlamaIndex. Experience with big data technologies such as Hadoop, Spark, or Kafka. The estimated total compensation range for this position is $75,000 - $90,000 ( USD base plus bonus). Actual compensation for the position is based on More ❯
Cambridge, England, United Kingdom Hybrid / WFH Options
Bit Bio
AWS. Working with a variety of stakeholders and cross-functional teams, performing analysis of their data requirements and documenting it. Big data tools and stream-processing systems such as: Hadoop, Spark, Kafka, Storm, Spark-Streaming. Relational SQL and NoSQL databases, including Postgres and Cassandra. Experience designing and implementing knowledge graphs for data integration and analysis. Data pipeline and workflow More ❯
Greater Bristol Area, United Kingdom Hybrid / WFH Options
LHH
oriented with a strong focus on data accuracy, quality, and reliability Desirable (Nice to Have): Background in defence, government, or highly regulated sectors Familiarity with Apache Kafka, Spark, or Hadoop Experience with Docker and Kubernetes Use of monitoring/alerting tools such as Prometheus, Grafana, or ELK Understanding of machine learning algorithms and data science workflows Proven ability to More ❯
application Deep understanding of software architecture, object-oriented design principles, and data structures. Extensive experience developing microservices using Java and Python. Experience with distributed computing frameworks such as Hive/Hadoop and Apache Spark. Good experience in test-driven development and automating test cases using Java/Python. Experience in SQL/NoSQL (Oracle, Cassandra) database design. Demonstrated ability to be More ❯
the following architectural frameworks (TOGAF, ZACHMAN, FEAF) Cloud Experience: AWS or GCP preferred, particularly around migrations and cloud architecture Good technical knowledge and understanding of big data frameworks like Hadoop, Cloudera etc. Deep technical knowledge of database development, design and migration Experience of deployment in cloud using Terraform or CloudFormation Automation or Scripting experience using languages such as Python … monitoring of hybrid on-premise and cloud data solutions Working with a variety of enterprise level organisations to understand and analyse existing on-prem environments such as Oracle, Teradata & Hadoop etc., and be able to design and plan migrations to AWS or GCP Deep understanding of high and low level designs and architecture solutions Developing database scripts to migrate More ❯
Manchester, England, United Kingdom Hybrid / WFH Options
CMSPI
with innovative ideas or examples of coding challenges or competitions. Highly desirable skills: Familiarity with Agile practices in a collaborative team environment. Exposure to big data tools, such as Hadoop and Spark for handling large-scale datasets. Experience with cloud platforms like Microsoft Azure. Benefits Comprehensive, payments industry training by in-house and industry experts. Excellent performance-based earning More ❯
visualization tools such as Tableau, Power BI, or similar to effectively present validation results and insights. Nice-to-Have Requirements Familiarity with big data tools and technologies, such as Hadoop, Kafka, and Spark. Familiarity with data governance frameworks and validation standards in the energy sector. Knowledge of distributed computing environments and model deployment at scale. Strong communication skills, with More ❯