London, South East England, United Kingdom Hybrid/Remote Options
LocalStack
…on experience with cloud data platforms such as Snowflake, Redshift, Athena, or BigQuery, including optimization techniques and custom parsers/transpilers. Practical knowledge of distributed and analytical engines (e.g., Apache Spark, Trino, PostgreSQL, DuckDB) with skills in query engines, performance tuning, and integration in local and production environments. Experience building developer tooling such as CLI tools, SDKs, and database …
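As a hedged illustration of the dialect-transpiling work this posting describes, here is a minimal sketch using the open-source sqlglot library (the query and table names are invented, and sqlglot is one possible tool, not necessarily the one the employer uses):

import sqlglot  # pip install sqlglot

# Transpile a Snowflake-dialect query into DuckDB's dialect.
snowflake_sql = "SELECT IFF(amount > 100, 'high', 'low') AS bucket FROM orders"
duckdb_sql = sqlglot.transpile(snowflake_sql, read="snowflake", write="duckdb")[0]
print(duckdb_sql)  # Snowflake's IFF() is rewritten to an equivalent DuckDB expression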
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
…customer data. Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation. Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming. Strong programming skills in Python with experience in software engineering principles, version control, unit testing and …
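A minimal sketch of the Delta Lake streaming concepts mentioned above (paths and the filter are placeholders; assumes a Spark runtime with the Delta Lake package available, e.g. a Databricks cluster):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-stream").getOrCreate()

purchases = (
    spark.readStream.format("delta")
    .load("/data/bronze/events")                     # placeholder source table path
    .filter("event_type = 'purchase'")
)

(
    purchases.writeStream.format("delta")
    .option("checkpointLocation", "/chk/purchases")  # enables restartable, exactly-once writes
    .outputMode("append")
    .start("/data/silver/purchases")                 # placeholder target table path
)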
London, South East, England, United Kingdom Hybrid/Remote Options
Arc IT Recruitment
…AWS Platform Build: Demonstrable experience designing and building modern data platforms in AWS. ETL/Orchestration Expertise: Expertise in ETL/ELT design and data orchestration, specifically with Apache Airflow. SQL Mastery: Strong SQL skills with significant experience in query tuning and performance optimisation. Programming Proficiency: Proficiency in Python and Bash (for data processing, scripting, and automation). …
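To illustrate the Airflow orchestration this posting calls for, a minimal TaskFlow-style DAG sketch (assumes a recent Airflow 2.x; the task bodies are placeholders, not the employer's pipelines):

from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list[int]:
        return [1, 2, 3]                   # stand-in for pulling rows from a source

    @task
    def load(rows: list[int]) -> None:
        print(f"loaded {len(rows)} rows")  # stand-in for a warehouse write

    load(extract())  # dependency: extract runs before load

example_etl()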
…and Responsibilities: While in this position your duties may include, but are not limited to: Support the design, development, and maintenance of scalable data pipelines using tools such as Apache Airflow, dbt, or Azure Data Factory. Learn how to ingest, transform, and load data from a variety of sources, including APIs, databases, and flat files. Assist in the setup …
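A toy example of the ingest-transform-load pattern described here (file and table names are invented; assumes a users.csv with id and email columns exists locally):

import csv
import sqlite3

conn = sqlite3.connect("staging.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER, email TEXT)")

# Ingest a flat file, normalise the email field, then load it.
with open("users.csv", newline="") as f:
    rows = [(int(r["id"]), r["email"].strip().lower()) for r in csv.DictReader(f)]

conn.executemany("INSERT INTO users VALUES (?, ?)", rows)
conn.commit()
conn.close()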
…at least one cloud data platform (e.g. AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark). · Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. · Good knowledge of stream and batch processing solutions like Apache Flink, Apache Kafka · Good knowledge of log management, monitoring, and analytics …
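A minimal stream-consumption sketch with the kafka-python client, to illustrate the Kafka processing mentioned (broker address and topic are placeholders):

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "events",                           # placeholder topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.topic, message.offset, message.value[:80])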
…scalable pipelines, data platforms, and integrations, while ensuring solutions meet regulatory standards and align with architectural best practices. Key Responsibilities: Build and optimise scalable data pipelines using Databricks and Apache Spark (PySpark). Ensure performance, scalability, and compliance. Collaborate on requirements, design, and backlog refinement. Promote engineering best practices including CI/CD, code reviews, and testing. Research and …
London, South East, England, United Kingdom Hybrid/Remote Options
Additional Resources Ltd
…of Kubernetes, Docker, and cloud-native data ecosystems. Demonstrable experience with Infrastructure as Code tools (Terraform, Ansible). Hands-on experience with PostgreSQL and familiarity with lakehouse technologies (e.g. Apache Parquet, Delta Tables). Exposure to Spark, Databricks, and data lake/lakehouse environments. Understanding of Agile development methods, CI/CD pipelines, GitHub, and automated testing. Practical experience …
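For the Parquet familiarity this posting mentions, a minimal write/read sketch with pyarrow (file name and columns are invented):

import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"id": [1, 2, 3], "amount": [9.5, 12.0, 3.25]})
pq.write_table(table, "orders.parquet", compression="snappy")  # columnar, compressed on disk
print(pq.read_table("orders.parquet").to_pydict())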
…of automation IT WOULD BE NICE FOR THE SENIOR SOFTWARE ENGINEER TO HAVE.... Cloud based experience Microservice architecture or serverless architecture Big Data/Messaging technologies such as Apache NiFi/MiNiFi/Kafka TO BE CONSIDERED.... Please either apply by clicking online or emailing me directly to . For further information please call me on . I can …
London, South East, England, United Kingdom Hybrid/Remote Options
Lorien
…data storytelling and operational insights. Optimise data workflows across cloud and on-prem environments, ensuring performance and reliability. Skills & Experience: Strong experience in ETL pipeline development using tools like Apache Airflow, Informatica, or similar. Advanced SQL skills and experience with large-scale relational and cloud-based databases. Hands-on experience with Tableau for data visualisation and dashboarding. Exposure to …
…Solid understanding of DevOps principles and agile delivery. Excellent problem-solving skills and a proactive, team-oriented approach. Confident client-facing communication skills. Desirable Skills & Experience: Experience with Apache NiFi and Node.js. Familiarity with JSON, XML, XSD, and XSLT. Knowledge of Jenkins, Maven, Bitbucket, and Jira. Exposure to AWS and cloud technologies. Experience working within …
…teams to build scalable data pipelines and contribute to digital transformation initiatives across government departments. Key Responsibilities: Design, develop and maintain robust data pipelines using PostgreSQL and Airflow or Apache Spark. Collaborate with frontend/backend developers using Node.js or React. Implement best practices in data modelling, ETL processes and performance optimisation. Contribute to containerised deployments (Docker/Kubernetes …
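To illustrate the PostgreSQL side of such pipelines, a small idempotent-upsert sketch with psycopg2 (connection settings and the table are placeholders):

import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(host="localhost", dbname="pipeline", user="etl", password="etl")
with conn, conn.cursor() as cur:
    # Upserts keep incremental loads idempotent across reruns.
    cur.execute(
        """
        INSERT INTO daily_counts (day, total)
        VALUES (%s, %s)
        ON CONFLICT (day) DO UPDATE SET total = EXCLUDED.total
        """,
        ("2024-01-01", 42),
    )
conn.close()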
London, South East, England, United Kingdom Hybrid/Remote Options
Harnham - Data & Analytics Recruitment
Proven experience designing and implementing end-to-end MLOps processes in a production environment. Cloud ML Stack: Expert proficiency with Databricks and MLflow. Big Data/Coding: Expert Apache Spark and Python engineering experience on large datasets. Core Engineering: Strong experience with Git for version control and building CI/CD/release pipelines. Data Fundamentals: Excellent SQL …
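A minimal MLflow experiment-tracking sketch, for the tooling named here (experiment name and values are placeholders; assumes the default local tracking store):

import mlflow  # pip install mlflow

mlflow.set_experiment("churn-model")
with mlflow.start_run():
    mlflow.log_param("max_depth", 6)  # hyperparameter for this run
    mlflow.log_metric("auc", 0.87)    # evaluation result for this run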
…including monitoring, alerting, and automated checks. Optimise data workflows for performance, cost-efficiency, and maintainability, using platforms such as Azure Data Factory, AWS Data Pipeline, Glue, Lambda, Databricks, and Apache Spark. Support the integration of transformed data into visualisation and analytical platforms, including Power BI, ServiceNow, and Amazon QuickSight. Ensure compliance with data governance, security, and privacy standards across …
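For the AWS Glue workflows mentioned, a minimal boto3 sketch that triggers a job run (job name and region are placeholders; assumes AWS credentials are configured):

import boto3  # pip install boto3

glue = boto3.client("glue", region_name="eu-west-2")
response = glue.start_job_run(JobName="nightly-transform")  # placeholder job name
print(response["JobRunId"])  # poll with get_job_run() to track the run's status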
…of containerisation and orchestration (e.g., Docker, Kubernetes, OpenShift). Experience with CI/CD pipelines (e.g., Jenkins, TeamCity, Concourse). Familiarity with web/application servers such as NGINX, Apache, or JBoss. Exposure to monitoring and logging tools (ELK, Nagios, Splunk, DataDog, New Relic, etc.). Understanding of security and identity management (OAuth2, SSO, ADFS, Keycloak, etc.). Experience …
…to learn new technologies IT WOULD BE NICE FOR THE DATA ENGINEER TO HAVE.... Cloud based architectures Microservice architecture or serverless architecture Messaging/routing technologies such as Apache NiFi/RabbitMQ TO BE CONSIDERED.... Please either apply by clicking online or emailing me directly to . For further information please call me on . I can make myself …
…integration of software IT WOULD BE NICE FOR THE BIG DATA ENGINEER TO HAVE.... Cloud based architectures Microservice architecture or serverless architecture Messaging/routing technologies such as Apache NiFi/RabbitMQ Experience of DevSecOps automated deployment tools such as Jenkins, Ansible, Docker TO BE CONSIDERED.... Please either apply by clicking online or emailing me directly to . For …
…monitoring processes to maintain data integrity and reliability. * Optimise data workflows for performance, cost-efficiency, and maintainability using tools such as Azure Data Factory, AWS Data Pipeline, Databricks, or Apache Spark. * Integrate and prepare data for Tableau dashboards and reports, ensuring optimal performance and alignment with business needs. * Collaborate with visualisation teams to develop, maintain, and enhance Tableau …
…for hardware and Linux-based systems. Strong AWS experience – management and deployment. Experience working in a SaaS environment, ideally CRM. Full Software Development Lifecycle experience. DevOps skills with Linux, Apache and MySQL. Agile project methodologies. The role can pay up to c. £85k basic plus a bonus of c. £5k. Max package will be £90k. There is parking onsite. 25 days …
Fareham, Hampshire, South East, United Kingdom Hybrid/Remote Options
Richmond Square Consulting Limited
…both desk-side and remotely). EUD provisioning, maintenance, patching and hardening. SSO concepts and operation. Understanding of working in protectively marked environments. Any experience with DevOps, Docker, Kubernetes, Apache (NiFi, Kafka) would be a nice to have, but non-essential. Current and active SC Clearance (willingness to undergo DV at some point may be advantageous). If you feel …
…tree protocol, link aggregation for performance (MTU settings) and reliability requirements. - Linux administration skills, including Bash scripting, ACLs, users, groups, filesystems, package management, common daemons (SSH servers, NTP, SSSD, Apache, Nginx, HAProxy, MySQL, PostgreSQL), etc. Desirable Criteria: As with the essential skillset, the list below is desirable; the right candidate should know about and be proficient in a few of …
London, South East England, United Kingdom Hybrid/Remote Options
YLD
…following skills and experience: Experience with modern frameworks across the stack (FastAPI, Gin, Express, etc.); Experience building modern data pipelines using dbt, Kafka, Spark, AWS Kinesis, AWS Lambda, and Apache Airflow (or similar); Experience working with data lakes; experience with Spark or Databricks; Comprehensive testing experience (unit, integration, e2e, security) and performance optimization; Scalable system design patterns, load balancing and high-availability architectures; Database design and data modelling (both relational and NoSQL); Understanding of common data transformation and storage formats, e.g. Apache Parquet; API development (REST, GraphQL, gRPC) and event-driven architectures; Caching strategies and message queue systems (Redis, Kafka, RabbitMQ); Cloud platforms (AWS, Azure, GCP) with containerisation (Docker, Kubernetes); CI/CD pipelines and Infrastructure as Code …
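A small cache-aside sketch for the caching strategies this posting lists (Redis connection settings and the lookup are placeholders):

import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_profile(user_id: int) -> str:
    key = f"profile:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return cached                # cache hit
    profile = f"user-{user_id}"      # stand-in for a database lookup
    r.setex(key, 300, profile)       # cache for 5 minutes
    return profile

print(get_profile(42))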
…Azure, AWS, GCP). Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools. Experience in data platforms on premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery …
…clustering, upgrades, installation, and scripting. Windows Server administration and Microsoft enterprise software. Database management: DB2, PostgreSQL. IBM Middleware: WebSphere Application Server, MQ (Nice to Have). Open-source software configuration: Apache and similar packages. Cloud platforms: Azure, AWS, GCP. About Us: Responsiv build distinctive business solutions that are simple and effective. Our expertise spans cloud computing, digital transformation, business process …