Data Pipeline Jobs in Kettering


Data AWS Engineer

Kettering, Midlands, United Kingdom
Hybrid / WFH Options
Atrium (EMEA)
Contract Role – Data AWS Engineer – Northampton/Hybrid – 6 months – Inside IR35

7+ years of experience in designing, building, and maintaining data pipelines and architectures on the Amazon Web Services (AWS) cloud platform. Skilled in delivering scalable, reliable, and efficient data solutions using AWS services such as S3, Redshift, EMR, Glue, and Kinesis. This involves designing ETL processes, ensuring data security, and collaborating with other teams on data analysis and business requirements.

Role Overview:
Job Title: Data AWS Engineer
Location: Northampton (hybrid: 2-3 days in office)
Contract Type: Contract
Duration: 6 months

Key Responsibilities:
Designing and Building Data Pipelines: Creating and implementing data pipelines to move data between different systems and applications on AWS.
Data Warehouse Management: Designing, building, and maintaining data warehouses using AWS services such as Redshift.
ETL Process Development: Developing and maintaining Extract, Transform, Load (ETL) processes to move and transform data.
Data Governance and Security: Implementing data governance and security policies.
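To make the ETL responsibility concrete, here is a minimal Spark/Scala sketch of the kind of S3-to-Redshift load step the role describes. The bucket, cluster, and table names, the environment variables, and the object name are illustrative assumptions; a real Glue or EMR job would add the Redshift JDBC driver dependency, IAM-based credentials, and error handling.

// Hypothetical ETL step: read raw events from S3, clean them, load to Redshift.
// All names (paths, table, JDBC URL, env vars) are placeholders for the sketch.
import org.apache.spark.sql.{SparkSession, functions => F}

object S3ToRedshiftEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("s3-to-redshift-etl")
      .getOrCreate()

    // Extract: read partitioned Parquet from an S3 data lake location
    val raw = spark.read.parquet("s3a://example-data-lake/raw/orders/")

    // Transform: drop malformed rows, normalise types, derive a load date
    val cleaned = raw
      .filter(F.col("order_id").isNotNull)
      .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
      .withColumn("load_date", F.current_date())

    // Load: append to a Redshift staging table over JDBC
    // (high-volume jobs usually unload to S3 and use Redshift COPY instead)
    cleaned.write
      .format("jdbc")
      .option("url", "jdbc:redshift://example-cluster:5439/analytics")
      .option("dbtable", "staging.orders")
      .option("user", sys.env("REDSHIFT_USER"))
      .option("password", sys.env("REDSHIFT_PASSWORD"))
      .mode("append")
      .save()

    spark.stop()
  }
}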

Scala Developer

Kettering, Midlands, United Kingdom
Capgemini
At Capgemini, we unlock the value of technology and build a more sustainable, more inclusive world.

YOUR ROLE
We are looking for a skilled Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making.

YOUR PROFILE
Develop, optimize, and maintain data pipelines and ETL processes using Apache Spark and Scala.
Design scalable and robust data processing solutions for batch and real-time data.
Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
Perform data ingestion, transformation, and cleansing from various structured and unstructured sources.
Monitor and troubleshoot Spark jobs, ensuring high performance and reliability.
Write clean, maintainable, and well-documented code.
Participate in code reviews, design discussions, and agile ceremonies.
Implement data quality and governance best practices.
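As a rough illustration of the batch ingestion, cleansing, and data-quality work this profile lists, the following Spark/Scala sketch reads raw JSON, cleanses it, applies a simple quality gate, and writes curated Parquet. The paths, column names, and the 5% reject threshold are assumptions made for the example, not part of the role.

// Illustrative batch pipeline: ingest raw JSON, cleanse, gate on data quality,
// and publish partitioned Parquet. All names and thresholds are hypothetical.
import org.apache.spark.sql.{DataFrame, SparkSession, functions => F}

object CustomerEventsPipeline {

  // Cleansing: normalise strings, drop duplicates and rows missing key fields
  def cleanse(df: DataFrame): DataFrame =
    df.withColumn("email", F.lower(F.trim(F.col("email"))))
      .dropDuplicates("event_id")
      .na.drop(Seq("event_id", "event_time"))

  // Minimal data-quality gate: abort the run if too many rows were rejected
  def qualityGate(rawCount: Long, cleanCount: Long, maxRejectRate: Double): Unit = {
    val rejectRate = 1.0 - cleanCount.toDouble / rawCount.max(1L)
    require(rejectRate <= maxRejectRate,
      f"Reject rate $rejectRate%.2f exceeds threshold $maxRejectRate%.2f")
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("customer-events").getOrCreate()

    val raw = spark.read.json("s3a://example-landing/customer_events/")
    val clean = cleanse(raw)

    qualityGate(raw.count(), clean.count(), maxRejectRate = 0.05)

    // Partition by event date so downstream consumers can prune efficiently
    clean.withColumn("event_date", F.to_date(F.col("event_time")))
      .write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-curated/customer_events/")

    spark.stop()
  }
}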