and the broader Azure ecosystem.
Requirements:
- Proven experience as a Data Engineer working with Microsoft Fabric or related Azure data services.
- Knowledge of using PySpark in notebooks for data analysis and manipulation (see sketch below).
- Strong proficiency with SQL and data modelling.
- Experience with modern ELT/ETL tools within the Microsoft …
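For context on the PySpark-in-notebooks requirement above, here is a minimal sketch of exploratory analysis as it might look in a Fabric notebook. The lakehouse table, column names, and aggregation are illustrative assumptions, not taken from the listing.

```python
# Minimal sketch: exploratory analysis with PySpark in a notebook.
# The table name and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # Fabric notebooks provide a session

# Read a lakehouse table (name is an assumption)
orders = spark.read.table("lakehouse.sales_orders")

# Typical notebook work: filter, aggregate, inspect
daily_revenue = (
    orders
    .filter(F.col("status") == "complete")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"),
         F.countDistinct("customer_id").alias("customers"))
    .orderBy("order_date")
)
daily_revenue.show(10)
```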
Bedford, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
- Proficiency in Azure data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/OneLake, and Azure Data Factory.
- Spark/PySpark or Python skills are a bonus, or a willingness to develop them.
- Experience with monitoring and failure recovery in data pipelines (see sketch below).
- Excellent problem-solving skills …
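The monitoring and failure-recovery requirement usually means making pipeline steps retryable and observable. A minimal sketch in plain Python follows; `run_copy_activity` is a hypothetical stand-in for any pipeline step, and the retry policy is an assumption.

```python
# Minimal sketch of pipeline failure recovery: retry a flaky step with
# exponential backoff and log each failure for monitoring.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retries(step, max_attempts=3, base_delay=5.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            log.exception("attempt %d/%d failed", attempt, max_attempts)
            if attempt == max_attempts:
                raise  # let the orchestrator mark the run as failed
            time.sleep(base_delay * 2 ** (attempt - 1))

def run_copy_activity():
    ...  # hypothetical: trigger a pipeline run and poll its status

run_with_retries(run_copy_activity)
```

In practice much of this is delegated to the orchestrator's own retry settings (for example, activity retries in Azure Data Factory); hand-rolled retries like this are for steps the orchestrator cannot see.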
added flexibility for diverse migration and integration projects.
- Prior experience with tools such as MuleSoft, Boomi, Informatica, Talend, SSIS, or custom scripting (Python, PySpark, SQL) for data extraction and transformation.
- Prior experience with data warehousing and data modelling (star schema or snowflake schema; see sketch below).
- Skilled in security frameworks …
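Star-schema modelling, mentioned above, organises a warehouse around a central fact table joined to dimension tables by surrogate keys. A minimal PySpark sketch follows; all table and column names (`fact_sales`, `dim_date`, `dim_product`) are hypothetical.

```python
# Minimal sketch of querying a star schema: one fact table joined to
# its dimensions by surrogate keys. Names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    SELECT d.calendar_date,
           p.product_name,
           SUM(f.quantity)   AS units_sold,
           SUM(f.net_amount) AS revenue
    FROM   fact_sales  f                 -- one row per sale (the grain)
    JOIN   dim_date    d ON f.date_key    = d.date_key
    JOIN   dim_product p ON f.product_key = p.product_key
    GROUP BY d.calendar_date, p.product_name
""").show()
```

Keeping measures on the fact table and descriptive attributes on dimensions is what makes this query pattern (join on keys, group by attributes) uniform across the warehouse.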
at scale.
- Hands-on expertise in core GCP data services such as BigQuery, Composer, Dataform, Dataproc, and Pub/Sub (see sketch below).
- Strong programming skills in PySpark, Python, and SQL.
- Proficiency in ETL processes, data mining, and data storage principles.
- Experience with BI and data visualisation tools, such as Looker or …
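As a concrete taste of the BigQuery skill named above, a minimal sketch using the official google-cloud-bigquery Python client; the project, dataset, and table names are hypothetical.

```python
# Minimal sketch: run a query against BigQuery with the official client.
# Authenticates via Application Default Credentials.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `my_project.analytics.events`   -- hypothetical table
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 7
"""
# client.query() submits the job; .result() blocks until it completes
for row in client.query(sql).result():
    print(row.event_date, row.events)
```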
Python, SQL)
- Data modelling and data warehousing
- Extract-Transform-Load (ETL) (see sketch below)
- Creating data pipelines
- Ethics in data and AI
- Cloud and big data technologies (PySpark, AWS)
- Working with unstructured data
Consulting skills:
- Effective teamwork
- Business acumen
- Time management
- Stakeholder management
- Presentations
- Agile methodology
- Being a brand ambassador
- Being a …
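The ETL and pipeline items above are often assessed with a small end-to-end exercise. A minimal extract-transform-load sketch in PySpark follows; the S3 buckets, schema, and partitioning choice are assumptions for illustration.

```python
# Minimal extract-transform-load sketch in PySpark on AWS.
# Bucket names and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw CSV landed in an S3 bucket
raw = spark.read.option("header", True).csv("s3a://raw-bucket/customers/")

# Transform: type casting, deduplication, basic quality filter
clean = (
    raw.withColumn("signup_date", F.to_date("signup_date"))
       .dropDuplicates(["customer_id"])
       .filter(F.col("email").isNotNull())
)

# Load: partitioned Parquet in the curated zone
clean.write.mode("overwrite").partitionBy("signup_date") \
     .parquet("s3a://curated-bucket/customers/")
```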
- Familiarity with AWS and cloud-based development, or at least some professional exposure.
- Knowledge of data engineering tools and technologies such as MySQL, Snowflake, PySpark, Pandas, and data warehouses/data lakes (see sketch below).
- Understanding of software design principles and best practices.
- Strong problem-solving and analytical skills.
- Excellent communication and …
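For the Pandas and MySQL items above, a minimal sketch of pulling relational data into pandas for analysis; the connection string, table, and columns are hypothetical.

```python
# Minimal sketch: load warehouse rows into pandas and aggregate by month.
# The connection string and table are illustrative only.
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("mysql+pymysql://user:pass@host/db")

df = pd.read_sql(
    "SELECT order_id, amount, created_at FROM orders",
    engine,
    parse_dates=["created_at"],
)

# Monthly revenue, grouped on calendar month
monthly = df.groupby(df["created_at"].dt.to_period("M"))["amount"].sum()
print(monthly.tail())
```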
Brentwood, Essex, United Kingdom Hybrid / WFH Options
Harvey Nash Group
line operations and maintain platform health. Document support processes and provide technical guidance where needed.
Required Skills & Experience:
- Strong experience in Python 3 and PySpark
- Hands-on with Elasticsearch, including tuning and data discovery (see sketch below)
- Proficient with distributed datasets (HDFS, CephFS)
- Frontend development with React, NextJS, MUI
- Familiarity with Scala …
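For the Elasticsearch requirement above, a minimal data-discovery sketch with the official Python client (8.x-style API); the endpoint, index, and field names are assumptions.

```python
# Minimal sketch: search plus a terms aggregation against Elasticsearch.
# Endpoint, index, and fields are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="app-logs",
    query={"match": {"message": "timeout"}},
    aggs={"by_service": {"terms": {"field": "service.keyword", "size": 5}}},
    size=3,
)

for hit in resp["hits"]["hits"]:
    print(hit["_source"]["message"])
for bucket in resp["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```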
Cambridge, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
- Strong hands-on experience with ML frameworks (PyTorch, TensorFlow, Keras).
- Proficiency in Python and C/C++.
- Experience with scalable data tools (e.g., PySpark, Kubernetes, Databricks, Apache Arrow).
- Proven ability to manage GPU-intensive data processing jobs (see sketch below).
- 4+ years of applied research or industry experience.
- Creative problem …
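For the GPU-intensive processing item above, a minimal PyTorch sketch that places the model and batches on a GPU when one is available; the model shape and the random data are purely illustrative.

```python
# Minimal sketch: a training loop that runs on GPU when available.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(100):                           # dummy training loop
    x = torch.randn(32, 128, device=device)   # stand-in for real batches
    y = torch.randn(32, 1, device=device)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"final loss on {device}: {loss.item():.4f}")
```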