London, South East England, United Kingdom (Hybrid / WFH Options)
Robert Half
Robert Half Technology are assisting a cutting-edge AI organisation to recruit a Data Engineer on a contract basis (remote working, UK based): a Junior Data Engineer who's excited to help build the data foundations that power cutting-edge AI solutions. You'll join a high-impact team working at the intersection of data, analytics, and machine learning, designing pipelines and infrastructure that make innovation possible at scale.

Role
- Design, build, and maintain scalable data pipelines that fuel AI and analytics initiatives.
- Partner closely with data scientists, analysts, and engineers to deliver clean, structured, and reliable data.
- Develop robust data transformations in Python and SQL, ensuring performance and accuracy (see the sketch below).
- Work hands-on with Snowflake to model, optimise, and manage data flows.
- Continuously improve data engineering practices, from automation to observability.
- Bring ideas to the table: help shape how data is collected, processed, and leveraged across the business.

Profile
The Data Engineer will ideally have 2-5 years of experience in data engineering or …
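As a rough illustration of the Python-and-SQL transformation work described in the Role section above, here is a minimal sketch. The table names, credentials, and warehouse details are placeholders invented for the example, not details from the posting; it assumes the snowflake-connector-python and pandas packages.

```python
# Minimal sketch: pull raw rows from Snowflake, reshape them in pandas.
# All names and credentials below are hypothetical placeholders.
import pandas as pd
import snowflake.connector

conn = snowflake.connector.connect(
    user="YOUR_USER",          # placeholder credentials
    password="YOUR_PASSWORD",
    account="YOUR_ACCOUNT",
    warehouse="ANALYTICS_WH",  # hypothetical warehouse
    database="RAW",            # hypothetical database
    schema="EVENTS",           # hypothetical schema
)

# Push the heavy filtering into SQL; keep pandas for the reshaping.
query = """
    SELECT user_id, event_type, event_ts
    FROM raw_events
    WHERE event_ts >= DATEADD(day, -7, CURRENT_TIMESTAMP())
"""
df = pd.read_sql(query, conn)

# Example transformation: one row per user per day with an event count.
df["event_date"] = pd.to_datetime(df["event_ts"]).dt.date
daily = (
    df.groupby(["user_id", "event_date"])
      .size()
      .reset_index(name="event_count")
)

print(daily.head())
conn.close()
```

Filtering in SQL before pulling rows into pandas keeps the in-memory dataset small, which is the usual way to balance the two tools in a pipeline like the one described.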
London (City of London), South East England, United Kingdom
Vallum Associates
Energy domain experience is a must. Under 7 years of experience only. Permanent.

Key Responsibilities
- Design, develop, and maintain data ingestion pipelines using open-source frameworks and tools
- Build and optimise ETL/ELT processes to handle small to large-scale data processing requirements
- Develop data models and schemas that support analytics, business intelligence, and product needs
- Monitor, troubleshoot, and optimise data pipeline performance and reliability
- Collaborate with stakeholders, analysts, and the product team to understand data requirements
- Implement data quality checks and validation processes to ensure data integrity (see the sketch below)
- Participate in architecture decisions and contribute to technical roadmap planning

Technical Skills
- Strong SQL skills with experience in complex query optimisation
- Strong Python programming skills with experience in data processing libraries (pandas, NumPy, Apache Spark)
- Hands-on experience building and maintaining data ingestion pipelines
- Proven track record of optimising queries, code, and system performance
- Experience with open-source data processing frameworks (Apache Spark, Apache Kafka, Apache Airflow)
- Knowledge of distributed computing concepts and big data technologies
- Experience with version control systems (Git)
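As a hedged illustration of the "data quality checks and validation processes" responsibility above, here is a minimal, self-contained pandas sketch. The dataset, column names (meter readings, in keeping with the energy domain), and validation rules are invented for the example, not taken from the posting.

```python
# Minimal sketch of batch data-quality checks in pandas.
# The columns and rules below are hypothetical examples.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures."""
    failures = []

    # Completeness: key fields must not be null.
    for col in ("meter_id", "reading_kwh", "read_at"):
        nulls = df[col].isna().sum()
        if nulls:
            failures.append(f"{col}: {nulls} null value(s)")

    # Uniqueness: at most one reading per meter per timestamp.
    dupes = df.duplicated(subset=["meter_id", "read_at"]).sum()
    if dupes:
        failures.append(f"{dupes} duplicate (meter_id, read_at) row(s)")

    # Validity: consumption should be non-negative.
    negative = (df["reading_kwh"] < 0).sum()
    if negative:
        failures.append(f"{negative} negative reading_kwh value(s)")

    return failures

if __name__ == "__main__":
    # Tiny synthetic batch containing two deliberate defects.
    batch = pd.DataFrame({
        "meter_id": ["M1", "M1", "M2", "M2"],
        "reading_kwh": [12.5, 12.5, -3.0, 7.1],
        "read_at": pd.to_datetime(
            ["2024-01-01", "2024-01-01", "2024-01-01", "2024-01-02"]
        ),
    })
    for problem in validate(batch):
        print("FAIL:", problem)
```

In a production pipeline, a check like this would typically run as a gating task (for example, an Airflow task ahead of the load step), failing the run whenever the returned list is non-empty.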