Plymouth, South West England, United Kingdom Hybrid / WFH Options
JammJar
features in Next.js/TypeScript. Working with AI integrations (OpenAI, Anthropic, etc.). Making architecture decisions that shape the future of the product. Jumping on AWS infra tasks (EC2, S3, RDS, IAM, CI/CD). Shipping fast. Shipping well. Shipping often. Why Join Us Ownership from day one. Your code, your impact, your wins. £60–70k salary with …
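As a concrete aside, the "AI integrations" line usually boils down to a single SDK call. A minimal sketch (shown in Python for brevity, though the role itself is Next.js/TypeScript; the model name is an assumption, and the Anthropic SDK follows a similar shape):

```python
# Hedged sketch of a basic AI integration: one chat completion via the
# OpenAI Python SDK. Model name is an assumption; swap in whatever your
# account has access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user", "content": "Draft a release note for v1.2"}],
)
print(resp.choices[0].message.content)
```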
Cambridgeshire, England, United Kingdom Hybrid / WFH Options
KDR Talent Solutions
You'll Need: Proven, deep commercial experience with Databricks. You must have hands-on expertise with Delta Lake and the Lakehouse paradigm. Strong expertise in the AWS data ecosystem (e.g., S3, AWS Glue, Kinesis, IAM) and a deep understanding of how to build, secure, and optimise a Databricks platform within it. Expert-level Python and SQL skills, specifically for data …
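For context on what "hands-on with Delta Lake" looks like in practice, a minimal PySpark sketch, assuming hypothetical bucket and table paths:

```python
# Minimal Delta Lake sketch (bucket and table paths are hypothetical;
# requires the delta-spark package, plus the S3A connector on the
# classpath for s3:// paths; a local path works for trying it out).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-demo")
    # Register Delta's SQL extension and catalog so the "delta" format works.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Land raw events as an ACID, schema-enforced Delta table.
df = spark.createDataFrame([(1, "signup"), (2, "login")], ["user_id", "event"])
df.write.format("delta").mode("append").save("s3://example-lake/bronze/events")

# Read back with full Delta semantics (time travel etc. also available).
spark.read.format("delta").load("s3://example-lake/bronze/events").show()
```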
ML About the candidate: 1-2 years of hands-on experience in data science or applied machine learning in an enterprise setting Strong understanding of AWS services, particularly SageMaker, S3, and Bedrock Proficiency in Python with experience using NumPy, pandas, scikit-learn, and one deep learning framework (PyTorch or TensorFlow) Experience working with structured and unstructured data, using SQL …
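A hedged sketch of the Bedrock side of that stack, invoking a foundation model through boto3 (the model ID and prompt shape are assumptions; check which models are enabled in your account and region):

```python
# Sketch: call a foundation model through Amazon Bedrock with boto3.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarise this churn analysis."}],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    body=body,
)
# The Anthropic messages format returns a list of content blocks.
print(json.loads(response["body"].read())["content"][0]["text"])
```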
of highly technically skilled engineers who are enhancing Creditsafe's data offerings with high throughput and scalability as primary goals. The data delivery platform is built upon AWS Redshift and S3 cloud storage. The platform manages billions of objects, with daily increments in the 10s of millions. Our ETL processes are built in Python and Rust, efficiently transforming incoming data … for high-throughput, low-latency delivery from S3 via transactional and bulk APIs. Job Profile Join us to take on the project described above: redesigning the Creditsafe platform for the cloud. You will be expected to work with technologies such as Python, Rust, Linux, EC2, ECS, S3, Glue, Athena, Lambda, Step Functions, API Gateway, DynamoDB, RDS, Terraform, CI … Actively contribute to the codebase and participate in peer reviews. Design and build a metadata-driven, event-based distributed data processing platform using technologies such as Python, Rust, EC2, ECS, S3, Glue, Athena, Lambda and Step Functions. Work collaboratively in the design, development, testing and deployment of our business-critical system. Building and scaling Creditsafe APIs to securely support over …
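For illustration only (not Creditsafe's actual code), an S3-triggered Lambda handler of the sort such an event-based processing platform is built from; bucket and key handling follow the standard S3 event notification shape:

```python
# Sketch: Lambda handler that reacts to new S3 objects.
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Process each new object announced by an S3 event notification."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        payload = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Transform and hand off, e.g. to a Step Functions state machine
        # or a downstream queue; here we just report what we saw.
        print(f"processed s3://{bucket}/{key} ({len(payload)} bytes)")
    return {"status": "ok"}
```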
suppliers, consultants. Knowledge and Skills: Knowledge - Broad technical data management knowledge, enabling work across the full data cycle. - Proven experience working with AWS data technologies (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation), GitHub, CI/CD - Coding experience in Apache Spark, Iceberg or Python (pandas) - Experience in change and release management. - Experience in Database … Essentials: - Expertise across data warehouse and ETL/ELT development in AWS preferred, with experience in the following: - Strong experience with AWS services such as Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation, CodeBuild, CI/CD, GitHub, IAM, SQS, SNS, Aurora DB - Good experience with DBT, Apache Iceberg, Docker, Microsoft BI stack (nice …
deep understanding of the language and its core libraries Spring Boot – Expert in building RESTful APIs and microservices using Spring Boot AWS – Strong experience with core AWS services (EC2, S3, Lambda, IAM, etc.) Kafka/Messaging Systems – Expertise in building event-driven architectures SQL – Strong experience with relational databases and SQL query optimization REST/JSON APIs – Designing, developing … for local and cloud environments Excellent communication skills for conveying complex ideas effectively Nice-to-Have Skills JavaScript & ReactJS – Experience in building or integrating with front-end applications Amazon Aurora/RDS – Familiarity with managed database solutions on AWS JUnit & Testing – Unit and integration testing best practices GDS …
of the wider organization. They must be an expert in continuous integration, automated deployment, testing, and the relevant tooling (Git/GitLab, Jenkins, Ansible, Terraform, Linux, AWS EC2, S3 and EKS are essential) Key responsibilities include: Design and implement automated build and deployment solutions for Java-based microservice applications utilizing Atlassian Jira/GitLab/Jenkins/… to create reusable pipelines across projects Understanding of the Linux Operating System, standard network protocols and security hardening. Proven experience using AWS Cloud Solutions and services such as EC2, S3, Lambda, EKS, API Gateway, ALB, Auto Scaling, etc. In-depth knowledge of infrastructure-as-code tools (such as Ansible, Terraform, etc.) Mandatory experience in creating automation frameworks using …
platform. You'll work closely with Strategy Managers and Researchers to develop tools & frameworks that facilitate the day-to-day operation of trading pipelines. They use Amazon S3 for data storage, Python libraries (e.g. pandas) for data manipulation, and Kafka for event-based data transformations, all deployed using Octopus Deploy. Familiarity with some or all of these …
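A small sketch of the S3-plus-pandas portion of that stack, with a hypothetical bucket, key, and column schema:

```python
# Sketch: pull a CSV from S3 and manipulate it with pandas.
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-trading-data", Key="prices/2024-01-02.csv")
prices = pd.read_csv(io.BytesIO(obj["Body"].read()), parse_dates=["timestamp"])

# Typical manipulation step: resample tick data to one-minute OHLC bars.
bars = prices.set_index("timestamp")["price"].resample("1min").ohlc()
print(bars.head())
```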
analysis to support the launch of a data-science-driven, model-based decisioning pilot test to optimize email marketing. Mine and process complex, high-volume data from Amazon S3, OneLake and Snowflake to support data pipelines, analytics and model development. Conduct in-depth analysis of business processes to gather requirements for data-driven decision-making across marketing campaigns …
Better Placed Ltd - A Sunday Times Top 10 Employer!
Responsibilities Data Engineering & SQL Development Own and develop the data architecture, ensuring performance and scalability. Design, develop, and optimise SQL queries, stored procedures, and ETL pipelines. Work with AWS S3 data lakes and ERP systems to extract, transform, and load data. Ensure data accuracy and integrity across multiple sources. Integrate services via RESTful APIs and manage structured/unstructured …
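A hedged sketch of the extract-transform-load loop that description implies; the bucket, connection string, and table names are placeholders:

```python
# Sketch: pull a file from an S3 data lake, clean it with pandas,
# and load it into a SQL database via SQLAlchemy.
import io
import boto3
import pandas as pd
from sqlalchemy import create_engine

s3 = boto3.client("s3")
raw = s3.get_object(Bucket="example-data-lake", Key="erp/orders.csv")
orders = pd.read_csv(io.BytesIO(raw["Body"].read()))

# Transform: enforce types and drop rows failing basic integrity checks.
orders["order_date"] = pd.to_datetime(orders["order_date"], errors="coerce")
orders = orders.dropna(subset=["order_id", "order_date"])

# Load into the analytical database (any SQLAlchemy-supported engine;
# this one needs psycopg2 installed).
engine = create_engine("postgresql+psycopg2://user:pass@host/analytics")
orders.to_sql("orders_staging", engine, if_exists="append", index=False)
```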
5+ years' experience building ETL/ELT pipelines using Python and pandas within a financial environment. Strong knowledge of relational databases and SQL. Familiarity with various technologies, such as S3, Kafka, Airflow, Iceberg. Proficiency working with large financial datasets from various vendors. A commitment to engineering excellence and pragmatic technology solutions. A desire to work in an operational role …
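Since Airflow appears in that list, a minimal TaskFlow-style sketch of such a scheduled ETL pipeline (the DAG id, schedule, and task bodies are illustrative assumptions):

```python
# Sketch: a daily extract/transform/load DAG using Airflow's TaskFlow API.
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def market_data_etl():
    @task
    def extract() -> list[dict]:
        # Stand-in for a vendor API or S3 pull.
        return [{"symbol": "ABC", "price": 101.527}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        df = pd.DataFrame(rows)
        df["price"] = df["price"].round(2)
        return df.to_dict("records")

    @task
    def load(rows: list[dict]) -> None:
        print(f"loading {len(rows)} rows")  # stand-in for a warehouse write

    load(transform(extract()))

market_data_etl()
```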
and master data management. Track record of delivering data analytics and AI/ML-enabling solutions across complex environments. Hands-on experience with cloud data platforms, ideally AWS (S3, Kinesis, Glue, Redshift, Lambda, EMR). Experience with Azure technologies (ADF, Synapse, Fabric, Azure Functions) is also valued. Strong understanding of modern data lakehouse architectures, such as Databricks, Snowflake …
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
role is ideal for someone with strong experience in data migration, ETL (both batch and real-time), and data warehouse development. What you'll need: DataStage Redshift QuickSight AWS S3 Java SQL Relational databases GitHub/GitLab experience Nice to have: Data quality expertise XML knowledge AWS Data Speciality certification Reasonable Adjustments: Respect and equality are core values to …
PostgreSQL. Key Responsibilities: Design and maintain data models that meet business requirements, ensuring scalability, consistency, and accuracy. Build and manage ETL/ELT pipelines to integrate data from S3 and various sources into the analytical layer. Administer and optimise MSSQL Server and PostgreSQL databases for performance and reliability. Collaborate with internal teams to improve data availability, reliability, and …
Reston, Virginia, United States Hybrid / WFH Options
ALTA IT Services
What You'll Do: Design and define AI/ML solution architecture, including LLMs, vector databases, and prompt engineering strategies. Select and integrate the right AWS services (e.g., Bedrock, SageMaker, Kendra, Aurora, S3, Athena) for scalable AI workloads. Collaborate with cross-functional teams to align solutions with enterprise standards. Stay ahead of Generative and Agentic AI trends and recommend practical applications. Serve …
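To ground the vector-database piece, a toy sketch of the retrieval step: embed documents, embed the query, rank by cosine similarity. The random vectors are stand-ins for real embeddings (e.g. one of the models served via Bedrock); a production system would use a managed vector store rather than in-memory numpy:

```python
# Sketch: the cosine-similarity ranking at the core of vector retrieval.
import numpy as np

rng = np.random.default_rng(0)
doc_texts = ["refund policy", "onboarding guide", "security controls"]
doc_vecs = rng.normal(size=(len(doc_texts), 384))  # assumed embedding dim

def cosine_top_k(query_vec: np.ndarray, matrix: np.ndarray, k: int = 2):
    """Return indices of the k rows of `matrix` most similar to `query_vec`."""
    sims = matrix @ query_vec / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec)
    )
    return np.argsort(sims)[::-1][:k]

query_vec = rng.normal(size=384)
for idx in cosine_top_k(query_vec, doc_vecs):
    print(doc_texts[idx])  # candidates to stuff into the LLM prompt
```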