Slough, South East England, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base …
London, South East England, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base …
London (City of London), South East England, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base …
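For context on the Spark and PySpark proficiency the Hexegic role asks for, below is a minimal sketch of validating a model output table and checking its data health. The path, column names, and thresholds are invented for illustration, not details from the posting.

```python
# Minimal sketch: validate a model output and check data health with PySpark.
# The input path, columns, and thresholds are illustrative assumptions.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("output-validation").getOrCreate()

# Hypothetical model output written by an upstream job.
df = spark.read.parquet("/data/outputs/risk_scores")  # assumed path

# Basic health checks: row volume, null rate on the key column, score range.
row_count = df.count()
null_ids = df.filter(F.col("entity_id").isNull()).count()
out_of_range = df.filter((F.col("score") < 0) | (F.col("score") > 1)).count()

checks = {
    "row_count_ok": row_count > 0,
    "null_ids_ok": null_ids == 0,
    "score_range_ok": out_of_range == 0,
}
failed = [name for name, ok in checks.items() if not ok]
if failed:
    # In a real pipeline this would page an alerting/monitoring system.
    raise ValueError(f"Data health checks failed: {failed}")
print("All data health checks passed")
```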
logic SQL (PostgreSQL/SQL Server) for data querying and pipelines React (TypeScript) for intuitive, modern UIs Exposure to cloud platforms (AWS/Azure), Docker, or streaming tools (Kafka, Spark, etc.) is a plus Ideal Profile: 1-2 years' experience in a commercial software or data engineering role Strong coding skills in Python, SQL, and React Keen to work …
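To ground the "SQL for data querying and pipelines" requirement above, here is a tiny sketch of a query-and-aggregate step in Python, using the standard library's sqlite3 as a stand-in for PostgreSQL/SQL Server; the schema and data are invented.

```python
# Sketch: a tiny query-and-transform step, with sqlite3 standing in for
# PostgreSQL/SQL Server. Schema and rows are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "UK", 120.0), (2, "UK", 80.0), (3, "DE", 200.0)],
)

# Aggregate of the kind a pipeline might feed to a React (TypeScript) dashboard.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
for region, total in rows:
    print(f"{region}: {total:.2f}")
```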
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Deloitte LLP
solutions from structured and unstructured data. Build data pipelines, models, and AI applications, using cloud platforms and frameworks such as Azure AI/ML Studio, AWS Bedrock, GCP Vertex, Spark, TensorFlow, PyTorch, etc. Build and deploy production-grade fine-tuned LLMs and complex RAG architectures. Create and manage complex, robust prompts across the GenAI solutions. Communicate effectively …
Cambridge, Cambridgeshire, United Kingdom Hybrid / WFH Options
Deloitte LLP
solutions from structured and unstructured data. Build data pipelines, models, and AI applications, using cloud platforms and frameworks such as Azure AI/ML Studio, AWS Bedrock, GCP Vertex, Spark, TensorFlow, PyTorch, etc. Build and deploy production-grade fine-tuned LLMs and complex RAG architectures. Create and manage complex, robust prompts across the GenAI solutions. Communicate effectively …
Milton Keynes, Buckinghamshire, United Kingdom Hybrid / WFH Options
Deloitte LLP
solutions from structured and unstructured data. Build data pipelines, models, and AI applications, using cloud platforms and frameworks such as Azure AI/ML Studio, AWS Bedrock, GCP Vertex, Spark, TensorFlow, PyTorch, etc. Build and deploy production-grade fine-tuned LLMs and complex RAG architectures. Create and manage complex, robust prompts across the GenAI solutions. Communicate effectively …
Guildford, Surrey, United Kingdom Hybrid / WFH Options
Deloitte LLP
solutions from structured and unstructured data. Build data pipelines, models, and AI applications, using cloud platforms and frameworks such as Azure AI/ML Studio, AWS Bedrock, GCP Vertex, Spark, TensorFlow, PyTorch, etc. Build and deploy production-grade fine-tuned LLMs and complex RAG architectures. Create and manage complex, robust prompts across the GenAI solutions. Communicate effectively …
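The "complex RAG architectures" named in the Deloitte postings reduce to a retrieve-then-generate loop. The sketch below shows only the retrieval step, with a toy bag-of-words similarity standing in for a real embedding model and vector store; the documents, query, and prompt template are invented, and the final LLM call is left as a comment.

```python
# Toy sketch of the retrieval step in a RAG pipeline. The documents and
# the bag-of-words "embedding" are stand-ins for a real vector store and
# embedding model; the generation call is left as a placeholder.
from collections import Counter
import math

docs = [
    "Azure AI Studio supports prompt flow orchestration.",
    "AWS Bedrock hosts foundation models behind a single API.",
    "GCP Vertex AI offers model tuning and pipelines.",
]

def embed(text: str) -> Counter:
    # Bag-of-words term counts as a crude stand-in for dense embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = "Which platform hosts foundation models?"
q_vec = embed(query)
best = max(docs, key=lambda d: cosine(q_vec, embed(d)))

# The retrieved context would be spliced into a managed prompt template
# and sent to the chosen LLM (Bedrock, Vertex, Azure OpenAI, etc.).
prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```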
Data Engineer Sr - Informatica ETL Expert. Locations: Two PNC Plaza (PA374), Birmingham - Brock (AL112), Dallas Innovation Center - Luna Rd (TX270), Strongsville Technology Center (OH537). Time type: Full time. Posted on …
experience deploying and maintaining machine learning models (transformer-based models in production is a plus), including identifying the right KPIs and objective functions Experience working with big data systems (Spark, EMR, S3, Airflow) and programming languages (Java, Python, C++) Experience building in-production NLU and/or ASR systems Bachelor's degree required; MS in Computer Science or a Ph.D. …
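For the point above about identifying the right KPIs for in-production ASR systems, word error rate (WER) is the standard headline metric. Below is a self-contained sketch of the usual word-level edit-distance computation; the transcript pair is invented.

```python
# Sketch: word error rate (WER), the standard KPI for ASR systems,
# computed as word-level edit distance divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i ref words into the first j hyp words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Invented transcript pair: one deletion plus one substitution -> 2/6.
print(wer("turn the living room lights off", "turn living room light off"))
```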
Relevant experience in delivery of AI design, build, deployment or management Proficiency or certification in Microsoft Office tools, as well as relevant technologies such as Python, TensorFlow, Jupyter Notebook, Spark, Azure Cloud, Git, Docker and/or any other relevant technologies Strong analytical and problem-solving skills, with the ability to work on complex projects and deliver actionable insights …
Oversee pipeline performance, address issues promptly, and maintain comprehensive data documentation. What You'll Bring Technical Expertise: Proficiency in Python and SQL; experience with data processing frameworks such as Airflow, Spark, or TensorFlow. Data Engineering Fundamentals: Strong understanding of data architecture, data modelling, and scalable data solutions. Backend Development: Willingness to develop proficiency in backend technologies (e.g., Python with Django … to support data pipeline integrations. Cloud Platforms: Familiarity with AWS or Azure, including services like Apache Airflow, Terraform, or SageMaker. Data Quality Management: Experience with data versioning and quality assurance practices. Automation and CI/CD: Knowledge of build and deployment automation processes. Experience within MLOps A 1st class Data degree from one of the UK's top 15 Universities …
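Since the posting names Airflow for orchestration and stresses data quality, here is a minimal Airflow 2.x-style DAG sketch with a quality gate after ingestion. The DAG id, task bodies, and schedule are placeholders, not this employer's pipeline.

```python
# Minimal Airflow 2.x DAG sketch: ingest, then run a data-quality gate.
# DAG id, schedule, and task bodies are placeholders for illustration.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("extract + load step would run here")

def quality_check():
    # e.g. assert row counts, null rates, and schema expectations;
    # raising an exception here fails the run and blocks downstream tasks.
    print("data-quality assertions would run here")

with DAG(
    dag_id="example_quality_gated_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    check_task = PythonOperator(task_id="quality_check", python_callable=quality_check)
    ingest_task >> check_task
```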
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
ability to explain technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala (minimum of 2). Extensive Data Engineering hands-on experience (coding, configuration, automation, delivery, monitoring, security). ETL Tools such as Azure Data Factory (ADF … live in the UK, and you MUST have the Right to Work in the UK long-term without the need for Company Sponsorship. KEYWORDS Senior Data Engineer, Coding Skills, Spark, Java, Python, PySpark, Scala, ETL Tools, Azure Data Factory (ADF), Databricks, HDFS, Hadoop, Big Data, Cloudera, Data Lakes, Azure Data, Delta Lake, Data Lake, Databricks Lakehouse, Data Analytics, SQL …
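As a concrete anchor for the Spark/PySpark ETL skills this role lists, a compact extract-transform-load sketch in PySpark follows; the input path, schema, and cleaning rules are invented for illustration.

```python
# Compact ETL sketch in PySpark: read raw CSV, clean, write Parquet.
# Paths, columns, and business rules are invented for illustration.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: raw landing-zone file (assumed location and header row).
raw = spark.read.option("header", True).csv("/landing/transactions.csv")

# Transform: dedupe, enforce types, drop bad rows, stamp the load date.
cleaned = (
    raw.dropDuplicates(["transaction_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
    .withColumn("load_date", F.current_date())
)

# Load: a date-partitioned write keeps downstream lake reads cheap.
cleaned.write.mode("overwrite").partitionBy("load_date").parquet("/curated/transactions")
```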
data architects, analysts, and stakeholders, you'll help unlock the value of data across the organisation. Key Responsibilities: Develop and optimise data pipelines using Azure Data Factory, Databricks, and Spark Design and implement scalable data solutions in Azure cloud environments Collaborate with cross-functional teams to understand data requirements Ensure data quality, integrity, and security across platforms Support the … models and advanced analytics Monitor and troubleshoot data workflows and performance issues Requirements: Proven experience with Azure Data Services (Data Factory, Databricks, Synapse) Strong knowledge of Python, SQL, and Spark Experience with data modelling, ETL/ELT processes, and pipeline orchestration Familiarity with CI/CD and DevOps practices in a data engineering context Excellent communication and stakeholder engagement …
Liverpool, Merseyside, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
data architects, analysts, and stakeholders, you'll help unlock the value of data across the organisation. Key Responsibilities: Develop and optimise data pipelines using Azure Data Factory, Databricks, and Spark Design and implement scalable data solutions in Azure cloud environments Collaborate with cross-functional teams to understand data requirements Ensure data quality, integrity, and security across platforms Support the … models and advanced analytics Monitor and troubleshoot data workflows and performance issues Requirements: Proven experience with Azure Data Services (Data Factory, Databricks, Synapse) Strong knowledge of Python, SQL, and Spark Experience with data modelling, ETL/ELT processes, and pipeline orchestration Familiarity with CI/CD and DevOps practices in a data engineering context Excellent communication and stakeholder engagement …
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
data architects, analysts, and stakeholders, you'll help unlock the value of data across the organisation. Key Responsibilities: Develop and optimise data pipelines using Azure Data Factory, Databricks, and Spark Design and implement scalable data solutions in Azure cloud environments Collaborate with cross-functional teams to understand data requirements Ensure data quality, integrity, and security across platforms Support the … models and advanced analytics Monitor and troubleshoot data workflows and performance issues Requirements: Proven experience with Azure Data Services (Data Factory, Databricks, Synapse) Strong knowledge of Python, SQL, and Spark Experience with data modelling, ETL/ELT processes, and pipeline orchestration Familiarity with CI/CD and DevOps practices in a data engineering context Excellent communication and stakeholder engagement …
you prefer Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with …
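On the "low-latency APIs and asynchronous processes for high-volume data" responsibility, the sketch below shows one common Python pattern: a producer/consumer stage with backpressure via a bounded asyncio queue. The event source, queue size, and work step are illustrative stand-ins (e.g. for a Kafka consumer writing to a store).

```python
# Sketch: asynchronous high-volume processing with backpressure via a
# bounded queue. Event source, sizes, and work step are illustrative.
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    for i in range(100):  # stand-in for a Kafka-style event stream
        await queue.put({"event_id": i})  # blocks when the queue is full
    await queue.put(None)  # sentinel: no more events

async def consumer(queue: asyncio.Queue) -> None:
    while True:
        event = await queue.get()
        if event is None:
            break
        await asyncio.sleep(0)  # stand-in for async I/O (DB write, API call)
        print(f"processed event {event['event_id']}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=10)  # bounded = backpressure
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```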
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
ability to explain technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant … you MUST have the Right to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Factory, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake Please note that due to …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
leading innovative technical projects. As part of this role, you will be responsible for some of the following areas: Design and build distributed data pipelines using languages such as Spark, Scala, and Java Collaborate with cross-functional teams to deliver user-centric solutions Lead on the design and development of relational and non-relational databases Apply Gen AI tools … scale data collection processes Support the deployment of machine learning models into production To be successful in the role you will have: Experience creating scalable ETL jobs using Scala and Spark Strong understanding of data structures, algorithms, and distributed systems Experience working with orchestration tools such as Airflow Familiarity with cloud technologies (AWS or GCP) Hands-on experience with Gen …