MLflow for ML lifecycle management and model versioning Hands-on experience with Databricks Model Serving for production ML deployments Proficiency with GenAI frameworks and tools, and technologies such as Apache Airflow, Spark, Flink, Kafka/Kinesis, Snowflake, and Databricks Demonstrable experience in parameter-efficient fine-tuning, model quantization, and quantization-aware fine-tuning of LLMs Hands-on knowledge More ❯
Lincolnshire, England, United Kingdom Hybrid / WFH Options
Circle Recruitment
performance, backups, and reporting (SSRS) Ensure database integrity and security using SSMS Handle SSL certificate management and implement web security best practices Deploy and maintain applications on Linux (Ubuntu, Apache, Nginx) and Windows (IIS) servers Provide occasional IT support alongside the wider team Benefits: 24 days annual leave + bank holidays, increasing with service Healthcare scheme including optical and More ❯
end languages and libraries (e.g. HTML/CSS, JavaScript - React) Proficient in the Python programming language and its libraries Familiarity with databases (e.g. Snowflake, MongoDB, Amazon DynamoDB), web servers (e.g. Apache) and UI/UX design Develop & deploy applications on AWS serverless services (Lambda, Step Functions, API Gateway) Excellent communication and teamwork skills Great attention to detail Organizational skills An analytical More ❯
tool sets including Bitbucket, Jenkins, JFrog Artifactory, Ansible Tower, Python, Bash scripting, etc. Agile tools (Bitbucket) Desired skills: 7+ years of strong experience with middleware including WebSphere, JBoss, Apache, IBM HTTP Server Experience with build and deploy support in a large, corporate environment - preferably within the financial industry Pay range: $55.06 - $63.06 Only candidates available and ready to work directly More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base salary More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Hunter Bond
in Python for data pipelines, transformation, and orchestration. Deep understanding of the Azure ecosystem (e.g., Data Factory, Blob Storage, Synapse). Proficiency in Databricks (or strong equivalent experience with Apache Spark). Proven ability to work within enterprise-level environments with a focus on clean, scalable, and secure data solutions. If you are the right fit - contact me directly More ❯
Springfield, Virginia, United States Hybrid / WFH Options
SecureVision
wide asynchronous messaging capability deployed across multiple security domains. The environment supports multiple tenants with a variety of different use cases. Specific Duties and Responsibilities: • O&M of existing Apache Pulsar services hosted on Red Hat OpenShift across multiple security domains in both cloud and datacenter (vSphere) environments. • Support deployment using Red Hat OpenShift, Keycloak, GitLab, GitLab CI, GitOps More ❯
through data science projects Awareness of data security best practices Experience in agile environments You would benefit from having: Understanding of data storage and processing design choices Familiarity with Apache Spark or Airflow Experience with parallel computing Candidates should be able to reliably commute or plan to relocate to Coventry before starting work. The role requires a Data Scientist More ❯
Data Explorer. • Experience working with data in a variety of structured and unstructured formats. • Experience with data visualization tools, computing platforms, and applications such as: Jupyter, Elasticsearch, Databricks, Apache Zeppelin, Kibana, and/or Tableau • Experience supporting the development of AI/ML algorithms, such as natural language processing, in a production environment • Experience configuring and utilizing data More ❯
as Python, Java, C/C++, Ruby, and JavaScript - Experience with distributed storage technologies such as NFS, HDFS, Ceph, and Amazon S3, as well as dynamic resource management frameworks (Apache Mesos, Kubernetes, YARN) - Proactive approach to identifying problems, performance bottlenecks, and areas for improvement - Agile/Scrum experience. More ❯
recommendations, and overseeing all aspects of project delivery and risk in lieu of a degree Nice If You Have: Experience in designing and developing ETL workflows using tools such as Apache Spark or AWS Glue Experience with different data storage technologies and databases, including Amazon S3 or Amazon Redshift Experience with supporting the IC- and national-level system security initiatives More ❯
and user needs Qualifications 5+ years of hands-on experience with Python, Java and/or C++ Development of distributed systems Kubernetes (K8s) AWS (SQS, DynamoDB, EC2, S3, Lambda) Apache Spark Performance testing Bonus Search system development (indexing/runtime/crawling) MLOps development and/or operations The cash compensation range for this role is More ❯
/ML technologies (e.g., TensorFlow, PyTorch, Scikit-learn, Keras) and understanding of deep learning and natural language processing (NLP). • Strong understanding of big data platforms such as Hadoop, Apache Spark, and data lakes. • Hands-on experience with cloud platforms (Azure, AWS, or Google Cloud) for building scalable data solutions. • Proficiency in data modeling, data warehousing, and ETL processes. More ❯