data solutions that support diverse trading activities. Take full ownership of data products, guiding them from initial concept through to stable production. Design, build, and maintain systems for both batch processing and real-time streaming of time series datasets, ensuring high data quality and reliability. Develop APIs and data access methods for fast, intuitive retrieval of historical and … Lake, Apache Iceberg), and relational databases. Have a Bachelor's or advanced degree in Computer Science, Mathematics, Statistics, Physics, Engineering, or equivalent work experience. For more information about DRW's processing activities and our use of job applicants' data, please view our Privacy Notice at .
teams • Strong hands-on expertise in cloud platforms (GCP, AWS, or Azure) with emphasis on cloud-native architectures • Deep understanding of data engineering - ETL/ELT, real-time/batch processing, data lakes/warehouses, and data governance • Track record of delivering complex, scalable systems that drive measurable business impact • Strong software engineering foundation across different languages and …
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom Hybrid / WFH Options
Atom Bank
sustainable, and agile data analytics platform for a data-driven digital bank. You will be using a range of database technologies and a combination of event-driven techniques and batch processing methodologies. In addition, you will be expanding and optimizing our cloud-based data & data pipelines as part of a cross-functional team. As a Data Engineer at …
systems are written in Elixir, but where necessary, we use small amounts of Python and Java where vendor SDKs require it. Designing, developing and maintaining real-time data streaming and batch processing workloads. Providing on-call support as part of our team-wide rotation. Our on-call rotation is split across US and UK time zones, ensuring coverage whilst keeping … algorithms. Analysis of concurrency and parallelism for speed/space performance tradeoffs. Bonus Experience: Exchange-traded financial instruments. Problem-solving and proof construction. For more information about DRW's processing activities and our use of job applicants' data, please view our Privacy Notice at .
imaging specialists, building robust data pipelines that power AI workflows and integrate seamlessly with our robotic systems Key Responsibilities Design and implement scalable data pipelines for real-time and batch processing Build and maintain data lakes, warehouses, and streaming platforms Develop architectures that support AI workflows and integration with clinical/robotic software Data Operations and Quality Ensure …
team with suggestions on how to improve and optimize processes Responsibilities: Responsible for the support & integration of the Guidewire application to business estates Identify opportunities and propose integration solutions including batch processing and message queues Manage and provide technical support for Guidewire applications in production. Liaise & interact with 3rd party vendors Engage with technical teams and business analysts to achieve …
Chester, Cheshire West and Chester, Cheshire, United Kingdom
Ascendion
5. Support internal and external audits, ensuring compliance and documentation. 6. Collaborate with business and operations teams to prioritize defect fixes and enhancements. 7. Handle event-driven and scheduled batch processes, ensuring smooth operations. 8. Troubleshoot technical issues related to Java, Middleware, Unix/Windows, and databases. 9. Participate in on-call support for real-time issue triaging and …
solutions for medium-to-high complexity projects Analyse data flows across integrated systems and ensure changes are properly assessed and coordinated Build and maintain Apex, Visualforce, Lightning Components, and batch processes with scalability in mind Lead integrations with external systems via REST and SOAP APIs Optimise and refactor existing code for performance and maintainability Contribute to DevOps, version control …
performance spikes and deadlocks Develop BI reports and semantic models (Power BI/DAX) Govern data assets in Azure and ensure compliance (GDPR) Tech Environment: High-availability, distributed deployment Batch processing + real-time inserts/updates .NET applications using ORM and stored procedures Azure SQL + Storage Containers Looking for someone who thrives in fast-paced environments …
applications both in Production and UAT, with a primary focus on Portfolio Management, OMS and bespoke investment applications, all trade flow and pricing of Insight's investment positions, including overnight batch processing. You will also support associated applications used by the Front Office, Middle Office, Performance, Client Services and Risk to manage Insight's book of OTC …
working with modern tools in a fast-moving, high-performance environment. Your responsibilities may include: Build and maintain scalable, efficient ETL/ELT pipelines for both real-time and batch processing. Integrate data from APIs, streaming platforms, and legacy systems, with a focus on data quality and reliability. Design and manage data storage solutions, including databases, warehouses, and lakes. … Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads. Operations & Tooling Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and security measures in line with best practices and regulatory standards. Develop observability and anomaly detection tools to support Tier … maintaining scalable ETL/ELT pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical …
system changes, coordinate UAT, and ensure solutions meet objectives. Support system updates, patches, and upgrades with minimal disruption. Administer applications, maintaining security, integrity, and performance, and monitor integrations and batch processes. Promote knowledge sharing within the team to prevent single points of failure. Keep technical skills up to date and support ad hoc initiatives as needed. Qualifications & Experience: Proven …
full-time, remote UK-based role. ***Unfortunately we can't consider candidates that require sponsorship*** Key Responsibilities: Research and deploy LLM-based solutions (e.g., LangChain, Mastra.ai, Pydantic) for document processing, summarization, and clinical Q&A systems. Develop and optimize predictive models using scikit-learn, PyTorch, TensorFlow, and XGBoost. Design robust data pipelines using tools like Spark and Kafka for … real-time and batch processing. Manage ML lifecycle with tools such as Databricks, MLflow, and cloud-native platforms (Azure preferred). Collaborate with engineering teams to ensure scalable, secure ML infrastructure aligned with compliance standards (e.g., ISO27001). Ensure data governance, particularly around sensitive healthcare data. Share best practices and stay current with developments in AI, ML, and LLMs.