Oxford District, South East England, United Kingdom
Ellison Institute of Technology
challenges and advance biology engineering. EIT fosters a culture of collaboration, innovation, and resilience, valuing diverse expertise to drive sustainable solutions to humanity's enduring challenges. The High-Performance Computing (HPC) Engineer within GBI will play a pivotal role in designing, building, and maintaining advanced computational infrastructure to accelerate biological and biomedical discovery and translational research. Working within the … Scientific Computing Facility, the HPC Engineer will design, deploy, and optimise systems that enable large-scale data processing, AI-driven analytics, and simulation workloads, for example by deploying Kubernetes and Slurm to enable real-time data analysis from instruments, MLOps, or scientific workflow managers. We will be hiring at either the regular or senior level, depending on the applicant … position requires technical expertise in HPC system architecture, coupled with the ability to collaborate closely with data scientists, bioinformaticians, and software engineers to ensure seamless, high-performance access to computing resources that support GBI's research mission. At the senior level: this position requires deep technical expertise in HPC system architecture, coupled with a proven track record of collaborating closely …
Abingdon, Oxfordshire, United Kingdom Hybrid/Remote Options
NES Fircroft
implementation decisions. • Embrace Agile/Scrum methodologies, delivering and demonstrating working solutions at the end of each sprint. • Stay current with emerging technologies and trends in geophysical computing and software development. Required Education and Skills: • BS or MS degree in Computer Science, Geoscience, Applied Mathematics, or a related engineering discipline. • Minimum of 10 years of … • Knowledge of geoscience software tools and formats: SEG-Y, Landmark seismic BRICK, CMP, OpenVDS; DSG, Petrel, Kingdom, GeoFrame, or PaleoScan. • Familiarity with cloud platforms and distributed computing: RESTful API design and implementation; AWS and Azure; tools for scalable data processing (Kubernetes, Spark). • Experience with Java 2D graphics and 3D OpenGL … programming. • Experience with scientific computing libraries and frameworks: Python (NumPy, SciPy, Pandas, TensorFlow for ML/AI); C/Java (CUDA for GPU acceleration); Angular or React; microservices (Quarkus, Spring Boot, AWS API Gateway); Docker, Kubernetes. With over 90 years' combined experience, NES Fircroft (NES) is proud to be the world's leading engineering …
scale. You'll work across ClickHouse, Kafka, OpenSearch, and Kubernetes environments, ensuring everything runs smoothly, securely, and efficiently. If you enjoy solving complex technical challenges and optimizing large-scale distributed systems, this is the perfect opportunity for you. About Intapp: Intapp, based in Silicon Valley, is a leading Vertical AI SaaS company, collaborating with over 2,550 professional … You Will Do: Manage, monitor, and optimize ClickHouse clusters in production environments, including schema design, query tuning, replication setup, and capacity planning. Operate and maintain Kafka, OpenSearch, and other distributed systems, ensuring high performance, scalability, and reliability. Deploy, configure, and manage containerized applications and stateful workloads on Kubernetes, following best practices for security and resource efficiency. Implement and maintain … deployments through automation and version control. Design and operate comprehensive monitoring, logging, and alerting systems to enable proactive issue detection and fast resolution. Conduct performance analysis and optimization across distributed systems to enhance resilience and meet SLA targets. Develop and maintain clear technical documentation, runbooks, and operational procedures, collaborating closely with engineering teams to ensure smooth, reliable operations. What …
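The alerting duty described above usually reduces to comparing a tail-latency statistic against an SLA target. A minimal stand-alone sketch, with an illustrative threshold, sample data, and a naive nearest-rank percentile (production systems would use a monitoring stack such as Prometheus rather than hand-rolled code):

```python
# Hypothetical SLA-breach check: all names and numbers are illustrative,
# not taken from the listing above.
SLA_MS = 250.0  # assumed p99 latency target in milliseconds

def p99(samples):
    """Naive nearest-rank 99th percentile of a list of latency samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return ordered[idx]

latencies_ms = [12.0, 15.0, 14.0, 300.0, 13.0, 11.0, 16.0, 14.5, 13.2, 12.8]
breach = p99(latencies_ms) > SLA_MS
print(breach)  # True: the single 300 ms outlier sits at the p99 rank
```

In a real alerting pipeline this predicate would gate a notification rather than a print, and percentiles would be computed over a sliding time window.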
markets and quantitative modeling Preferred: Experience in front-office roles or collaboration with trading desks Familiarity with financial instruments across asset classes (equities, FX, fixed income, derivatives) Experience with distributed computing frameworks (e.g., Spark, Dask) and cloud-native ML pipelines Exposure to LLMs, graph learning, or other advanced AI methods Strong publication record or open-source contributions in …
data architecture. Required Skills & Experience Technical Expertise Extensive experience with Azure Data Services: Data Factory, Databricks, Synapse, Data Lake, Azure SQL. Strong understanding of data modeling, data warehousing, and distributed computing. Proficiency in Python, SQL, and Spark for data engineering tasks. Financial Services Domain Proven track record of delivering data solutions within banking, insurance, or investment sectors. Familiarity with …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
Synapse/SQL Pools Azure Key Vault Strong programming skills in Python and SQL. Experience building scalable, production-grade data pipelines. Understanding of data modelling, data warehousing concepts, and distributed computing. Familiarity with CI/CD, version control, and DevOps practices. Nice-to-Have: Experience with streaming technologies (e.g., Spark Structured Streaming, Event Hub, Kafka). Knowledge of MLflow …
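The "data pipelines" and "data warehousing concepts" asked for above are, at their core, extract-transform-load steps into a queryable store. A minimal local sketch using only the standard library's sqlite3 (table name, columns, and rows are illustrative assumptions; a real pipeline would target Databricks or Synapse, not SQLite):

```python
import sqlite3

# Hypothetical ETL sketch: extract raw string rows, transform types,
# load into a warehouse-style fact table, then query it.
raw_rows = [("2024-01-01", "GBP", "100.5"), ("2024-01-02", "GBP", "101.25")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_prices (trade_date TEXT, ccy TEXT, price REAL)")

# Transform: parse the price string into a float before loading.
transformed = [(d, c, float(p)) for d, c, p in raw_rows]
conn.executemany("INSERT INTO fact_prices VALUES (?, ?, ?)", transformed)

avg_price = conn.execute("SELECT AVG(price) FROM fact_prices").fetchone()[0]
print(round(avg_price, 3))  # 100.875
```

Production-grade versions of this pattern add schema validation, idempotent loads, and orchestration (e.g., Data Factory triggers), but the extract/transform/load shape is the same.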
teams. Excellent analytical, problem-solving, and communication skills. NICE TO HAVE: Hands-on experience with LLMs and Natural Language Processing (NLP), including fine-tuning or prompt engineering. Familiarity with distributed computing or parallel processing (Ray, Spark, etc.). Experience deploying models in production environments (Docker, cloud services). Exposure to data engineering or working alongside data pipeline teams.
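The parallel-processing familiarity mentioned above (Ray, Spark) is, at its simplest, a parallel map over a dataset. A minimal local sketch using only the standard library's concurrent.futures (the score function and worker count are illustrative; Ray and Spark generalise this pattern across machines rather than local cores):

```python
from concurrent.futures import ProcessPoolExecutor

def score(x: int) -> int:
    # Stand-in for an expensive per-item computation (e.g. model inference).
    return x * x

if __name__ == "__main__":
    items = range(8)
    # Parallel map across worker processes; results keep input order.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(score, items))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The same shape appears in Ray as remote tasks and in Spark as an RDD/DataFrame transformation; the frameworks add scheduling, data locality, and fault tolerance on top.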
for robotics. 7+ years' experience as a Senior Software Engineer or ML engineer. Demonstrated proficiency in Python, Julia, or R, and related frameworks (PyTorch, TensorFlow, Pandas, NumPy). Knowledge of distributed computing & big data technologies for large datasets (e.g., Spark) and building data pipelines. Knowledge of Deep Learning methodologies specific to Computer Vision, such as YOLO. Experience with annotation tools …
London, South East, England, United Kingdom Hybrid/Remote Options
Reed
AAD, Machine Learning). Candidate Profile Essential Skills & Experience: Strong programming skills in C++. Solid understanding of numerical methods such as Monte Carlo simulations and optimisation algorithms. Experience with: Distributed computing and inter-process communication; Multi-threaded programming; Microsoft Office, VC++, VBA; SQL databases (Access, Oracle); Web technologies (XML, XSLT). Proven ability to work independently and as part of a team …
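The Monte Carlo methods mentioned above can be illustrated with the classic example of pricing a European call by simulation. A minimal Python sketch (the listing's own stack is C++; all parameters here are illustrative, and terminal prices are drawn directly from the lognormal solution of geometric Brownian motion rather than path-stepped):

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Monte Carlo price of a European call under Black-Scholes dynamics.

    Simulates terminal prices S_T = S_0 * exp((r - sigma^2/2)t + sigma*sqrt(t)*Z)
    and returns the discounted mean payoff.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(st - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = mc_european_call(s0=100, k=100, r=0.05, sigma=0.2, t=1.0, n_paths=100_000)
print(round(price, 2))  # close to the Black-Scholes closed-form value of ~10.45
```

With 100,000 paths the standard error is a few cents; the fixed seed makes the run reproducible, which matters when validating pricing code against a closed-form benchmark.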
About this Job Scopeworker's software engineers are developing a next-generation enterprise platform. We are looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile. As a software engineer, you will work on a …
experience Additionally, it would be nice to have: Hands-on experience in implementing database internals. Experience with abstract syntax trees, lock-free programming & structures, interpreters and compilers, template metaprogramming, distributed computing. Solid understanding of graph theory. About Memgraph: Memgraph is an open-source graph database built for streaming and compatible with Neo4j. Being in-memory and built with …
frequency market data, and alternative data. Implement data validation, monitoring, and access layers for research/production use. Scale up and productionize research model pipelines into large-scale, reliable distributed compute jobs. Manage distributed compute workflows with Dask, Ray, and other frameworks. Develop Jupyter tooling, templates, and widgets for strategy prototyping and parameter tuning. Implement performance attribution, PnL … An understanding of crypto or TradFi markets and trading concepts. Nice to Have: Experience with crypto exchanges and market microstructure. Hands-on with Bokeh or other interactive viz libraries. Distributed compute (Dask, Ray) experience. ML stack (JAX, PyTorch, TensorFlow, XGBoost, etc.) experience. Experience with compilers and code generation. Benefits: International environment (English is the main language); Pension; 100% health …
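The performance attribution and PnL work mentioned in the listing reduces, in its simplest form, to decomposing total profit and loss by instrument from positions and price changes. A minimal stand-alone sketch (symbols, positions, and price moves are illustrative, not real data):

```python
# Hypothetical per-instrument PnL attribution: PnL_i = position_i * price_move_i,
# summed for the portfolio total. All figures below are made up.
positions = {"BTC": 2.0, "ETH": -5.0}        # units held (negative = short)
price_moves = {"BTC": 150.0, "ETH": -20.0}   # price change over the period

pnl_by_asset = {sym: positions[sym] * price_moves[sym] for sym in positions}
total_pnl = sum(pnl_by_asset.values())
print(pnl_by_asset, total_pnl)  # {'BTC': 300.0, 'ETH': 100.0} 400.0
```

Real attribution frameworks extend this decomposition with fees, funding, slippage, and factor exposures, but each extra term is attributed the same way and the terms must still sum back to the total PnL.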