London, South East, England, United Kingdom Hybrid / WFH Options
Involved Solutions
…customer data. Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation.

Essential Skills for the Senior Data Engineer:
- Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming
- Strong programming skills in Python with experience in software engineering principles, version control, unit testing and …
…frameworks, and clear documentation within your pipelines.

Experience in the following areas is not essential but would be beneficial:
- Data Orchestration Tools: Familiarity with modern workflow management tools like Apache Airflow, Prefect, or Dagster
- Modern Data Transformation: Experience with dbt (Data Build Tool) for managing the transformation layer of the data warehouse
- BI Tool Familiarity: An understanding of how …
…Amazon EKS, Amazon S3, AWS Glue, Amazon RDS, Amazon DynamoDB, Amazon Aurora, Amazon SageMaker, Amazon Bedrock (including LLM hosting and management).
- Expertise in workflow orchestration tools such as Apache Airflow
- Experience implementing DataOps best practices and tooling, including DataOps.Live
- Advanced skills in data storage and management platforms like Snowflake
- Ability to deliver insightful analytics via business intelligence tools …
…teams to build scalable data pipelines and contribute to digital transformation initiatives across government departments.

Key Responsibilities:
- Design, develop and maintain robust data pipelines using PostgreSQL and Airflow or Apache Spark (see the illustrative sketch after this listing)
- Collaborate with frontend/backend developers using Node.js or React
- Implement best practices in data modelling, ETL processes and performance optimisation
- Contribute to containerised deployments (Docker/Kubernetes) …
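As a rough illustration of the first responsibility above, the sketch below shows a minimal Airflow DAG driving a PostgreSQL transformation. It assumes Airflow 2.x with the Postgres provider package installed; the DAG id, connection name, schema and table names are placeholders, not details from the listing.

```python
# Minimal sketch, assuming apache-airflow-providers-postgres is installed
# and an Airflow connection named "warehouse_db" points at the database.
from datetime import datetime

from airflow import DAG
from airflow.providers.postgres.operators.postgres import PostgresOperator

with DAG(
    dag_id="daily_orders_refresh",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Rebuild a small reporting table from a raw staging table in PostgreSQL.
    refresh_report = PostgresOperator(
        task_id="refresh_orders_report",
        postgres_conn_id="warehouse_db",
        sql="""
            DROP TABLE IF EXISTS reporting.daily_orders;
            CREATE TABLE reporting.daily_orders AS
            SELECT order_date, count(*) AS order_count
            FROM staging.orders
            GROUP BY order_date;
        """,
    )
```

In practice a DAG like this would be extended with sensors, retries and alerting, but the structure (a scheduled DAG wrapping idempotent SQL tasks) is the core pattern the listing refers to.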
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom
Opus Recruitment Solutions Ltd
SC cleared Software developers (Python & AWS) to join a contract till April 2026.
- Inside IR35
- SC cleared
- Weekly travel to Newcastle
- Around £400 per day
- Contract till April 2026
Skills: Python, AWS Services, Terraform, Apache Spark, Airflow, Docker
…optimizing scalable data solutions using the Databricks platform.

Key Responsibilities:
- Lead the migration of existing AWS-based data pipelines to Databricks
- Design and implement scalable data engineering solutions using Apache Spark on Databricks (see the illustrative sketch after this listing)
- Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines
- Optimize performance and cost-efficiency of Databricks workloads
- Develop and maintain best practices for data governance, security, and access control within Databricks
- Provide technical mentorship and guidance to junior engineers

Must-Have Skills:
- Strong hands-on experience with Databricks and Apache Spark (preferably PySpark)
- Proven track record of building and optimizing data pipelines in cloud environments
- Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM …
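The kind of migrated pipeline described above often boils down to reading data the old AWS jobs landed in S3 and rewriting it as Delta tables. The sketch below is one minimal example under assumed names: the bucket, column names and target table are hypothetical, and it presumes a Databricks runtime where Delta Lake is available and S3 access is already configured.

```python
# Minimal PySpark-on-Databricks sketch; paths and table names are placeholders.
from pyspark.sql import SparkSession, functions as F

# On Databricks this returns the cluster's existing session.
spark = SparkSession.builder.appName("daily_event_counts").getOrCreate()

# Extract: read raw events previously landed in S3 by the AWS-based pipeline.
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Transform: light cleanup plus a daily aggregate.
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .count()
)

# Load: write a managed Delta table so downstream jobs can query it.
(daily.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("analytics.daily_event_counts"))
```

Overwriting a managed Delta table keeps the job idempotent, which is usually the first optimisation target when moving Glue-era pipelines onto Databricks.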
To be successful, you will have the following experience:
- Extensive AI & data development background
- Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred)
- Strong experience with data management and processing pipelines
- Algorithm development and knowledge of graphs will be beneficial
- SC Clearance is essential

Within this role, you will be responsible for:
- Supporting the development and delivery of AI solutions to a Government customer
- Designing, developing, and maintaining data processing pipelines using Apache Spark
- Implementing ETL/ELT workflows to extract, transform and load large-scale datasets efficiently (see the illustrative sketch after this listing)
- Developing and optimising Python-based applications for data ingestion
- Collaborating on development of machine learning models
- Ensuring data quality, integrity …
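To illustrate the ETL/ELT and data-quality responsibilities above, here is a small PySpark sketch. The input and output paths, the "customer_id" column and the 95% quality threshold are assumptions for the example, not requirements taken from the role description.

```python
# Minimal PySpark ETL sketch with a simple data-quality gate; all names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_customers").getOrCreate()

# Extract: load a large CSV drop into a DataFrame.
customers = (spark.read
                  .option("header", True)
                  .csv("s3://example-bucket/landing/customers/"))

# Transform: deduplicate, drop obviously bad records, stamp the ingestion time.
clean = (
    customers.dropDuplicates(["customer_id"])
             .filter(F.col("customer_id").isNotNull())
             .withColumn("ingested_at", F.current_timestamp())
)

# Basic data-quality check: fail the run if too many rows were rejected.
total, kept = customers.count(), clean.count()
if total and kept / total < 0.95:
    raise ValueError(f"Data quality check failed: only {kept}/{total} rows passed")

# Load: write partitioned-friendly Parquet for downstream model training.
clean.write.mode("overwrite").parquet("s3://example-bucket/curated/customers/")
```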
…services and deep expertise in monitoring, diagnostics, and performance optimization.

Key Responsibilities:
- Design and implement observability solutions across web applications, servers, and network infrastructure
- Monitor and support Apache HTTP Server, Linux/UNIX systems, and web servers
- Collaborate with IT operations, support, and security teams to ensure system reliability and compliance
- Administer infrastructure components including … NAC, and network security tools
- Develop and maintain telemetry pipelines for real-time insights and alerting
- Support application performance and troubleshoot issues across environments

Required Skills:
- Apache HTTP Server, Linux, UNIX, Web Server & Web Application Support
- Infrastructure & Server Administration
- IT Operations, Support & Security
- Network Access Control & Security
- System Administration & Software Development