Permanent 'Apache' Job Vacancies

326 to 350 of 357 Permanent Apache Jobs

Senior Data Engineer

Slough, South East England, United Kingdom
Hybrid / WFH Options
Acquired Talent Ltd
Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform Data Engineer (Outside IR35 Contract role) Determination: Outside IR35 Day Rate: Up to £575 per day Location: Hybrid Zone 1 Duration: 3 months (initial) Job Title: Data Engineer About the role: We're on the lookout for an experienced Data Engineer … for good space. You'll be involved in the full end-to-end process, building data pipelines and dashboards. Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform Requirements: 5+ years' experience with PostgreSQL, SQL & Terraform Demonstrable experience with building data pipelines from scratch 3+ years' Dashboarding/Building Dashboards, (Apache … an experienced data engineer with experience building data pipelines please apply, or send your CV directly to callum@acquiredtalent.co.uk Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform More ❯
Posted:
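The listing above does not specify the pipeline logic behind the Superset dashboards it mentions, so the following is only a rough illustration: a minimal Python sketch of a PostgreSQL-to-reporting-table job of the kind a Superset dataset might sit on. The connection string, schema, table, and column names are all hypothetical.

```python
# Minimal sketch of a PostgreSQL -> reporting-table pipeline of the kind the
# listing describes. Connection string, table names, and columns are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/analytics")

def build_daily_orders_summary() -> None:
    # Extract: pull raw orders from an assumed source table.
    orders = pd.read_sql(
        "SELECT order_id, customer_id, order_date, amount FROM raw.orders",
        engine,
    )

    # Transform: aggregate to one row per day, a shape a dashboard chart can consume.
    summary = (
        orders.assign(order_date=pd.to_datetime(orders["order_date"]).dt.date)
        .groupby("order_date", as_index=False)
        .agg(order_count=("order_id", "count"), revenue=("amount", "sum"))
    )

    # Load: overwrite a reporting table that a Superset dataset could point at.
    summary.to_sql("daily_orders_summary", engine, schema="reporting",
                   if_exists="replace", index=False)

if __name__ == "__main__":
    build_daily_orders_summary()
```

In practice a scheduler (and, per the listing, Terraform-managed infrastructure) would run a job like this; the sketch only shows the extract-transform-load shape, not any particular team's setup.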

Senior Data Engineer

London (City of London), South East England, United Kingdom
Hybrid / WFH Options
Acquired Talent Ltd
Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform Data Engineer (Outside IR35 Contract role) Determination: Outside IR35 Day Rate: Up to £575 per day Location: Hybrid Zone 1 Duration: 3 months (initial) Job Title: Data Engineer About the role: We're on the lookout for an experienced Data Engineer … for good space. You'll be involved in the full end-to-end process, building data pipelines and dashboards. Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform Requirements: 5+ years' experience with PostgreSQL, SQL & Terraform Demonstrable experience with building data pipelines from scratch 3+ years' Dashboarding/Building Dashboards, (Apache … an experienced data engineer with experience building data pipelines please apply, or send your CV directly to callum@acquiredtalent.co.uk Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform More ❯
Posted:

Databricks Data Architect x2 - UK Wide Hybrid Working

England, United Kingdom
Hybrid / WFH Options
Adecco
an Azure and Databricks focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Duties Design and build high-performance … data platforms: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Design and oversee the delivery of secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Ability to design, build and deploy AI/ML models … to ensure successful data platform implementations. Your Skills and Experience Solid experience as a Data Architect with experience in designing, developing and implementing Databricks solutions Proven expertise in Databricks, Apache Spark, and data platforms with a strong understanding of data warehousing concepts and practices. Experience with Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks, and Azure Data More ❯
Employment Type: Permanent
Salary: GBP 80,000 - 90,000 Annual
Posted:
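To make the extract-transform-load pattern in the listing above concrete, here is a minimal PySpark sketch assuming a Databricks-style environment where Delta Lake is available. The storage account, container, schema, and table names are hypothetical.

```python
# Minimal PySpark sketch of the ETL pattern described above: read raw files from
# Azure Data Lake Storage, apply basic quality checks, and write a Delta table.
# Paths, columns, and the target table name are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse-etl-sketch").getOrCreate()

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/"
curated_table = "curated.sales_daily"

# Extract: raw CSV landed in the data lake.
raw = spark.read.option("header", True).csv(raw_path)

# Transform: basic typing plus a simple data-quality filter.
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("sale_date", F.to_date("sale_date"))
       .filter(F.col("amount").isNotNull() & (F.col("amount") >= 0))
)

daily = cleaned.groupBy("sale_date").agg(
    F.count("*").alias("transactions"),
    F.sum("amount").alias("revenue"),
)

# Load: Delta is the default lakehouse table format on Databricks.
daily.write.format("delta").mode("overwrite").saveAsTable(curated_table)
```

The data models, quality checks, and governance practices the ad refers to would sit around a job like this; the sketch only illustrates the core pipeline step.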

Databricks Data Architect x2 - UK Wide Hybrid Working

Nationwide, United Kingdom
Hybrid / WFH Options
Adecco
an Azure and Databricks focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Duties Design and build high-performance … data platforms: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Design and oversee the delivery of secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Ability to design, build and deploy AI/ML models … to ensure successful data platform implementations. Your Skills and Experience Solid experience as a Data Architect with experience in designing, developing and implementing Databricks solutions Proven expertise in Databricks, Apache Spark, and data platforms with a strong understanding of data warehousing concepts and practices. Experience with Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks, and Azure Data More ❯
Employment Type: Permanent
Salary: £80000 - £90000/annum + Benefits
Posted:

AI/Data Developer (SC Cleared)

Newport, Wales, United Kingdom
Experis UK
In order to be successful, you will have the following experience: Extensive AI & Data Development background Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred) Strong experience with data management and processing pipelines Algorithm development and knowledge of graphs will be beneficial SC Clearance is essential Within this role, you will … be responsible for: Supporting the development and delivery of AI solutions to a Government customer Design, develop, and maintain data processing pipelines using Apache Spark Implement ETL/ELT workflows to extract, transform and load large-scale datasets efficiently Develop and optimize Python-based applications for data ingestion Collaborate on development of machine learning models Ensure data quality, integrity More ❯
Posted:
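As a rough illustration of the ETL/ELT workflows this role (and its identical postings below) describes, here is a minimal PySpark ingestion sketch. The input path, field names, and output location are hypothetical.

```python
# Minimal sketch of an ingestion workflow of the kind described above: read raw
# JSON events with PySpark, apply light cleansing, and write partitioned Parquet
# for downstream ML work. Paths and field names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

# Extract: raw event files.
events = spark.read.json("/data/raw/events/*.json")

# Transform: type the timestamp, derive a partition column, drop duplicates
# and obviously bad records.
features = (
    events.withColumn("event_time", F.to_timestamp("event_time"))
          .withColumn("event_date", F.to_date("event_time"))
          .dropDuplicates(["event_id"])
          .filter(F.col("user_id").isNotNull())
)

# Load: partitioned Parquet that model-training code can read efficiently.
(features.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/events/"))
```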

AI/Data Developer (SC Cleared)

Greater Bristol Area, United Kingdom
Experis UK
In order to be successful, you will have the following experience: Extensive AI & Data Development background Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred) Strong experience with data management and processing pipelines Algorithm development and knowledge of graphs will be beneficial SC Clearance is essential Within this role, you will … be responsible for: Supporting the development and delivery of AI solutions to a Government customer Design, develop, and maintain data processing pipelines using Apache Spark Implement ETL/ELT workflows to extract, transform and load large-scale datasets efficiently Develop and optimize Python-based applications for data ingestion Collaborate on development of machine learning models Ensure data quality, integrity More ❯
Posted:

AI/Data Developer (SC Cleared)

Bath, South West England, United Kingdom
Experis UK
In order to be successful, you will have the following experience: Extensive AI & Data Development background Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred) Strong experience with data management and processing pipelines Algorithm development and knowledge of graphs will be beneficial SC Clearance is essential Within this role, you will … be responsible for: Supporting the development and delivery of AI solutions to a Government customer Design, develop, and maintain data processing pipelines using Apache Spark Implement ETL/ELT workflows to extract, transform and load large-scale datasets efficiently Develop and optimize Python-based applications for data ingestion Collaborate on development of machine learning models Ensure data quality, integrity More ❯
Posted:

AI/Data Developer (SC Cleared)

Bradley Stoke, South West England, United Kingdom
Experis UK
In order to be successful, you will have the following experience: Extensive AI & Data Development background Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred) Strong experience with data management and processing pipelines Algorithm development and knowledge of graphs will be beneficial SC Clearance is essential Within this role, you will … be responsible for: Supporting the development and delivery of AI solutions to a Government customer Design, develop, and maintain data processing pipelines using Apache Spark Implement ETL/ELT workflows to extract, transform and load large-scale datasets efficiently Develop and optimize Python-based applications for data ingestion Collaborate on development of machine learning models Ensure data quality, integrity More ❯
Posted:

AI/Data Developer (SC Cleared)

Bristol, Gloucestershire, United Kingdom
Experis UK
In order to be successful, you will have the following experience: Extensive AI & Data Development background Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred) Strong experience with data management and processing pipelines Algorithm development and knowledge of graphs will be beneficial SC Clearance is essential Within this role, you will … be responsible for: Supporting the development and delivery of AI solutions to a Government customer Design, develop, and maintain data processing pipelines using Apache Spark Implement ETL/ELT workflows to extract, transform and load large-scale datasets efficiently Develop and optimize Python-based applications for data ingestion Collaborate on development of machine learning models Ensure data quality, integrity More ❯
Posted:

Databricks Engineer

Glasgow, Scotland, United Kingdom
Capgemini
optimizing scalable data solutions using the Databricks platform. YOUR PROFILE Lead the migration of existing AWS-based data pipelines to Databricks. Design and implement scalable data engineering solutions using Apache Spark on Databricks. Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. Optimize performance and cost-efficiency of Databricks workloads. Develop and maintain … within Databricks. Provide technical mentorship and guidance to junior engineers. More ❯
Posted:

Databricks Engineer

Milton, Central Scotland, United Kingdom
Capgemini
optimizing scalable data solutions using the Databricks platform. YOUR PROFILE Lead the migration of existing AWS-based data pipelines to Databricks. Design and implement scalable data engineering solutions using Apache Spark on Databricks. Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. Optimize performance and cost-efficiency of Databricks workloads. Develop and maintain … within Databricks. Provide technical mentorship and guidance to junior engineers. More ❯
Posted:

Databricks Engineer

Paisley, Central Scotland, United Kingdom
Capgemini
optimizing scalable data solutions using the Databricks platform. YOUR PROFILE Lead the migration of existing AWS-based data pipelines to Databricks. Design and implement scalable data engineering solutions using Apache Spark on Databricks. Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. Optimize performance and cost-efficiency of Databricks workloads. Develop and maintain … within Databricks. Provide technical mentorship and guidance to junior engineers. More ❯
Posted:

Python Automation lead

Austin, Texas, United States
LTI Mindtree
Data Engineering (Python Automation Lead) Work Location: Austin, TX Job Description: Seeking a Senior Specialist with 7 to 11 years of experience in Python and data technologies including Flask, Apache Spark, Scala and Nginx to design and implement scalable, data-driven solutions. Develop and maintain high-performance data processing applications using Apache Spark and Scala. Build and deploy … rigorous testing and best practices. Stay updated with the latest trends and advancements in data engineering and Python ecosystems. Lead the design and development of data processing pipelines leveraging Apache Spark and Scala. Architect and implement backend services and APIs using Flask to support data-driven applications. Manage deployment and configuration of Nginx servers to ensure high availability and More ❯
Employment Type: Permanent
Salary: USD Annual
Posted:
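The listing above pairs Flask-based APIs with Spark batch processing and Nginx. Purely as an illustration of that shape, here is a minimal Flask sketch that serves pre-computed pipeline results; the in-memory metrics dictionary, endpoint name, and values are all hypothetical stand-ins.

```python
# Minimal sketch of a Flask backend of the kind the listing describes: a small
# API serving results a batch job would have written. The metrics store below
# is an in-memory stand-in; in practice this would read output produced by
# Spark jobs and run behind Nginx. Endpoint and field names are hypothetical.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for results a Spark/Scala batch job would have persisted.
DAILY_METRICS = {
    "2024-01-01": {"rows_processed": 1250000, "failures": 3},
    "2024-01-02": {"rows_processed": 1310000, "failures": 0},
}

@app.route("/metrics/<day>")
def get_metrics(day: str):
    metrics = DAILY_METRICS.get(day)
    if metrics is None:
        abort(404, description=f"no metrics recorded for {day}")
    return jsonify(date=day, **metrics)

if __name__ == "__main__":
    # In production this would run under a WSGI server (e.g. gunicorn) behind Nginx.
    app.run(port=5000)
```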

Spark/Scala Developer

London Area, United Kingdom
Capgemini
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. Key Responsibilities: Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and More ❯
Posted:

Spark/Scala Developer

City of London, London, United Kingdom
Capgemini
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. Key Responsibilities: Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and More ❯
Posted:

Spark/Scala Developer

London, South East England, United Kingdom
Capgemini
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. Key Responsibilities: Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and More ❯
Posted:

Spark/Scala Developer

Slough, South East England, United Kingdom
Capgemini
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. Key Responsibilities: Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and More ❯
Posted:

Spark/Scala Developer

London (City of London), South East England, United Kingdom
Capgemini
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. Key Responsibilities: Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and More ❯
Posted:

Scala Developer

Northampton, England, United Kingdom
Capgemini
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. YOUR PROFILE Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and More ❯
Posted:

Scala Developer

Kettering, Midlands, United Kingdom
Capgemini
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. YOUR PROFILE Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and More ❯
Posted:

Scala Developer

Milton Keynes, South East England, United Kingdom
Capgemini
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. YOUR PROFILE Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and More ❯
Posted:

Senior Application Developer

United Kingdom
Leap29
XML and related technologies. Proficiency in XML, XSLT, JSON, CSV, and EDI data formats. Strong experience with DataStage Designer or other data integration platforms like Talend, Informatica, SSIS, or Apache NiFi. Expertise in data mapping, data modelling (conceptual and logical), and data transformation. Excellent problem-solving and analytical skills, with a strong attention to detail. Strong communication and collaboration … with cross-functional teams. Fluent in English with excellent verbal and written communication skills. Nice to Have: Experience with ETL tools or data integration platforms (e.g., DataStage, Talend, Informatica, Apache NiFi, Boomi). More ❯
Posted:
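The data-mapping work in the listing above (XML, JSON, CSV, EDI) is not tied to a specific document type, so the following is only a small Python sketch of the general pattern: flattening an XML document into JSON-ready records. The element and attribute names are hypothetical.

```python
# Minimal sketch of XML-to-structured-data mapping of the kind described above:
# parse an XML order document and flatten it into JSON-ready records.
# The sample document, element names, and attributes are hypothetical.
import json
import xml.etree.ElementTree as ET

SAMPLE_XML = """
<orders>
  <order id="1001"><customer>Acme Ltd</customer><total currency="GBP">250.00</total></order>
  <order id="1002"><customer>Globex</customer><total currency="GBP">99.50</total></order>
</orders>
"""

def map_orders(xml_text: str) -> list[dict]:
    root = ET.fromstring(xml_text)
    records = []
    for order in root.findall("order"):
        records.append({
            "order_id": order.get("id"),
            "customer": order.findtext("customer"),
            "total": float(order.findtext("total")),
            "currency": order.find("total").get("currency"),
        })
    return records

if __name__ == "__main__":
    print(json.dumps(map_orders(SAMPLE_XML), indent=2))
```

In a DataStage, Talend, or NiFi flow the same source-to-target mapping would typically be captured in the tool's own mapping documents rather than hand-written code.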

Junior Data Scientist

London, South East, England, United Kingdom
Hybrid / WFH Options
Robert Half
monitor machine learning models for anomaly detection and failure prediction. Analyze sensor data and operational logs to support predictive maintenance strategies. Develop and maintain data pipelines using tools like Apache Airflow for efficient workflows. Use MLflow for experiment tracking, model versioning, and deployment management. Contribute to data cleaning, feature engineering, and model evaluation processes. Collaborate with engineers and data … science libraries (Pandas, Scikit-learn, etc.). Solid understanding of machine learning concepts and algorithms . Interest in working with real-world industrial or sensor data . Exposure to Apache Airflow and/or MLflow (through coursework or experience) is a plus. A proactive, analytical mindset with a willingness to learn and collaborate. Why Join Us Work on meaningful More ❯
Employment Type: Full-Time
Salary: £30,000 - £50,000 per annum
Posted:
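The listing above mentions Apache Airflow for pipelines and MLflow for experiment tracking. As a rough sketch of what a daily pipeline might look like (assuming a recent Airflow 2.x installation), here is a minimal DAG with placeholder task bodies; the DAG id, task names, and callables are hypothetical.

```python
# Minimal sketch of an Apache Airflow DAG of the kind the listing mentions:
# a daily pipeline that pulls sensor readings and then trains/logs a model.
# Task bodies are placeholders; in practice the second task would log the run
# to MLflow. Names and schedules are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_readings(**context):
    # Placeholder: would read sensor data for the run date (context["ds"]).
    print("extracting readings for", context["ds"])

def train_and_log(**context):
    # Placeholder: would engineer features, train a model, and log it with MLflow.
    print("training model for", context["ds"])

with DAG(
    dag_id="predictive_maintenance_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_readings", python_callable=extract_readings)
    train = PythonOperator(task_id="train_and_log", python_callable=train_and_log)

    extract >> train
```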

Principal Data Architect

Central London, London, United Kingdom
Aker Systems Limited
guidance to cross-functional teams, ensuring best practices in data architecture, security and cloud computing Proficiency in data modelling, ETL processes, data warehousing, distributed systems and metadata systems Utilise Apache Flink and other streaming technologies to build real-time data processing systems that handle large-scale, high-throughput data Ensure all data solutions comply with industry standards and government … but not limited to EC2, S3, RDS, Lambda and Redshift. Experience with other cloud providers (e.g., Azure, GCP) is a plus In-depth knowledge and hands-on experience with Apache Flink for real-time data processing Proven experience in mentoring and managing teams, with a focus on developing talent and fostering a collaborative work environment Strong ability to engage More ❯
Employment Type: Permanent
Posted:
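For the real-time processing side of the listing above, here is a minimal Apache Flink sketch using the PyFlink DataStream API. The in-memory collection stands in for a real high-throughput source such as Kafka, and the event fields are hypothetical.

```python
# Minimal PyFlink sketch of a streaming aggregation of the kind described above:
# a keyed running sum per user. The in-memory collection is a stand-in for a
# real source (e.g. Kafka); field names and values are hypothetical.
from pyflink.common import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# Stand-in source: (user_id, amount) events.
events = env.from_collection(
    [("alice", 12.5), ("bob", 3.0), ("alice", 7.25)],
    type_info=Types.TUPLE([Types.STRING(), Types.DOUBLE()]),
)

# Keyed running sum per user -- the shape of a simple real-time aggregation.
totals = (
    events.key_by(lambda e: e[0])
          .reduce(lambda a, b: (a[0], a[1] + b[1]))
)

totals.print()
env.execute("streaming-aggregation-sketch")
```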

Principal Data Architect

United Kingdom
Aker Systems
guidance to cross-functional teams, ensuring best practices in data architecture, security and cloud computing Proficiency in data modelling, ETL processes, data warehousing, distributed systems and metadata systems Utilise Apache Flink and other streaming technologies to build real-time data processing systems that handle large-scale, high-throughput data Ensure all data solutions comply with industry standards and government … but not limited to EC2, S3, RDS, Lambda and Redshift. Experience with other cloud providers (e.g., Azure, GCP) is a plus In-depth knowledge and hands-on experience with Apache Flink for real-time data processing Proven experience in mentoring and managing teams, with a focus on developing talent and fostering a collaborative work environment Strong ability to engage More ❯
Posted:
Apache Salary Percentiles
10th Percentile: £48,875
25th Percentile: £60,188
Median: £90,000
75th Percentile: £120,000
90th Percentile: £135,000