or CPG with a focus on NLP and AI. Advanced, hands-on experience using: Python, Databricks, Azure ML, Azure Cognitive Services, Ads Data Hub, BigQuery, SAS, R, SQL, PySpark, NumPy, Pandas, scikit-learn, TensorFlow, PyTorch, AutoTS, Prophet, NLTK. Experience with Azure Cloud technologies including Azure DevOps, Azure Synapse, MLOps, GitHub. Solid experience working with large datasets and developing …
multiple workstreams towards a scalable, reliable, and high-quality data platform. Strong knowledge of data compliance frameworks. Proven experience migrating from legacy SQL-based systems to modern technologies like PySpark, Databricks, Terraform, and Pandas. Hands-on ability to analyse and understand complex data sets. Ideally, experience with multi-cloud environments (Databricks on Azure and AWS). A product-minded leader …
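To illustrate the kind of migration described, here is a minimal sketch of a legacy SQL aggregation next to its dataframe-style equivalent. This is an assumption-laden illustration, not any employer's actual code: `sqlite3` stands in for the legacy SQL system, plain Python stands in for the PySpark target, and all table and column names are hypothetical.

```python
import sqlite3
from collections import defaultdict

# Hypothetical sales records: (product, amount).
rows = [("widgets", 10.0), ("widgets", 5.0), ("gadgets", 7.5)]

# Legacy SQL-based system: aggregate inside the database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (product TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)", rows)
sql_totals = dict(con.execute(
    "SELECT product, SUM(amount) FROM sales GROUP BY product"))

# Modern dataframe-style equivalent: the same group-by-and-sum expressed in
# code, as one would write it in PySpark with
# df.groupBy("product").sum("amount").
py_totals = defaultdict(float)
for product, amount in rows:
    py_totals[product] += amount
```

Both paths produce the same totals; the migration work lies in proving that equivalence at scale and moving the logic into version-controlled, testable pipeline code.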
Eastleigh, Hampshire, United Kingdom Hybrid / WFH Options
Spectrum IT Recruitment
operations and performance. Work with Azure DevOps to manage and track project work. About You, Essential Skills & Experience: Proficiency in cloud-based tools (ADF, Synapse, S3, Lambda). Experience using PySpark for ELT pipelines. Strong analytical and problem-solving mindset. Able to work independently and collaboratively across teams. Confident communicator with strong documentation skills. Experience in a data engineering role …
environment. Experience in the Utilities sector. Experience leading technical projects. Skills & Technologies required: Proficiency in cloud-based data engineering tools (ADF, Synapse Analytics, S3, Lambda). Proficiency in using PySpark notebooks for ELT processes. Ability to foster and cultivate a culture of best practices. Strong analytical and problem-solving skills. Ability to work independently and within cross-functional teams. …
Requirements: • 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark • Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) • Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables • Solid understanding of data warehousing principles, ETL/ELT processes, data … data engineering, and cloud technologies to continuously improve tools and approaches. Technologies: AI, AWS, Azure, CI/CD, Cloud, Databricks, DevOps, ETL, GCP, Support, Machine Learning, Power BI, Python, PySpark, SQL, Spark, Terraform, Unity, GameDev, Looker, SAP. More: NETCONOMY has grown over the past 20 years from a startup to a 500-person team working across 10 European locations …
Experience in Cloud Data Pipelines: building cloud data pipelines involves using Azure-native programming techniques such as PySpark or Scala on Databricks. These pipelines are essential for tasks like sourcing, enriching, and maintaining structured and unstructured data sets for analysis and reporting. They are also crucial for secondary tasks such as flow pipelines, streamlining AI model performance, and enhancing …
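The sourcing-and-enriching step such pipelines perform can be sketched in a few lines. This is a framework-free illustration only: plain dictionaries stand in for Spark DataFrames, and the lookup table, field names, and quality rule are all hypothetical.

```python
# Minimal ELT-style enrichment step: take raw records, enrich them against a
# reference lookup, and drop rows failing a basic quality check.
# In Databricks this logic would typically be a PySpark transformation
# (e.g. a join plus a filter); plain Python is used here for illustration.

REGION_LOOKUP = {"UK": "EMEA", "DE": "EMEA", "US": "AMER"}  # hypothetical reference data

def enrich(records):
    """Attach a region to each record and drop rows with no amount."""
    enriched = []
    for rec in records:
        if rec.get("amount") is None:   # basic data-quality filter
            continue
        out = dict(rec)
        out["region"] = REGION_LOOKUP.get(rec.get("country"), "UNKNOWN")
        enriched.append(out)
    return enriched

raw = [
    {"id": 1, "country": "UK", "amount": 120.0},
    {"id": 2, "country": "US", "amount": None},   # fails the quality check
    {"id": 3, "country": "FR", "amount": 40.5},   # no lookup match
]
clean = enrich(raw)
```

The same shape recurs whether the enrichment is a dictionary lookup, a broadcast join, or a call to an external service; the pipeline framework mainly changes how the step is scheduled and scaled.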
Belfast, County Antrim, Northern Ireland, United Kingdom
McGregor Boyall
Execution & Transformation - Data Acquisition Team at a leading Investment Bank. You'll work across regulatory and transformation initiatives that span multiple trading desks, functions, and stakeholders. You'll build PySpark and SQL queries to interrogate, reconcile and analyse data, contribute to Hadoop data architecture discussions, and help improve reporting processes and data quality. You'll be hands-on across …
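A reconciliation of the kind described can be sketched as a key comparison between two sources. This is an illustrative sketch only: in practice it would be a PySpark or SQL anti-join over much larger data, and the system names, keys, and amounts below are hypothetical.

```python
# Reconcile two datasets by key: report records present in one source but not
# the other, plus value mismatches on shared keys. This is the core of the
# anti-join a PySpark/SQL reconciliation query runs.
source_a = {101: 250.0, 102: 300.0, 103: 99.9}   # trade_id -> amount (system A)
source_b = {101: 250.0, 103: 100.0, 104: 75.0}   # trade_id -> amount (system B)

missing_in_b = sorted(set(source_a) - set(source_b))   # in A, absent from B
missing_in_a = sorted(set(source_b) - set(source_a))   # in B, absent from A

# Amount mismatches ("breaks") on the keys both systems share.
breaks = {k: (source_a[k], source_b[k])
          for k in source_a.keys() & source_b.keys()
          if source_a[k] != source_b[k]}
```

In Spark SQL the same checks would typically be expressed as `LEFT ANTI JOIN`s for the missing keys and an inner join with a `WHERE a.amount <> b.amount` filter for the breaks.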
a separate team to release all development through Azure DevOps pipelines, maintaining a strong understanding of Git and code-release best practices. Technology Requirements: - Proficient in Python 3 and PySpark 3/4. - Experience with Python Behave for Behaviour-Driven Development and testing. - Familiarity with Python Coverage for code coverage analysis. - Strong knowledge of Databricks, specifically with Delta Parquet …
understanding of AI/ML/DL and statistics, as well as coding proficiency using related open-source libraries and frameworks. Significant proficiency in SQL and languages like Python, PySpark and/or Scala. Able to lead and work independently, as well as play a key role in a team. Good communication and interpersonal skills for working in a multicultural work …
and complexity projects • Bachelor's degree in Computer Science or related field (or 4 additional years of relevant SWE experience) • Strong background in analytics development • Proficiency in Pig and PySpark. Desired: • Experience with patch management and IAVA tracking • Programming skills in Python, Java, or Scala • Familiarity with NiFi and Ansible • Experience working in Agile environments. Security Clearance Required: TS …
on experience or strong interest in working with Foundry as a core platform. Forward-Deployed Engineering – delivering real-time solutions alongside users and stakeholders. Broader Skillsets of Interest: Python & PySpark – for data engineering and workflow automation. Platform Engineering – building and maintaining scalable, resilient infrastructure. Cloud (AWS preferred) – deploying and managing services in secure environments. Security Engineering & Access Control – designing …
South Yorkshire, England, United Kingdom Hybrid / WFH Options
Erin Associates
SQL Server. Understanding of applying master data management principles, data quality frameworks and data governance best practices. Understanding of Azure Data Factory, Fabric and similar technologies. Tech Stack – Python, PySpark, SQL, XPath, XML, Azure-based Data Science tools, BI tools, Data Visualisation, Agile. The company have an excellent reputation within their sector and have shown consistent growth year-on …
Data Scientist, you will work using data engineering, statistical, and ML/AI approaches to uncover data patterns and build models. We use the Microsoft tech stack, including Azure Databricks (PySpark, Python), and we are expanding our data science capabilities. To be successful in the role, you will need to have extensive experience in data science projects and have built …
tooling, and engineering standards. We'd love to talk to you if you have: Proven experience delivering scalable, cloud-based data platforms using tools such as Databricks, Spark or PySpark, and services from AWS, Azure, or GCP. Experience in line management and people development. You've supported engineers with regular 1:1s, development planning, and performance conversations, and are …
experience in data science, machine learning, and business analytics. Extensive experience with generative AI and machine learning frameworks (GANs, VAEs, transformer-based architectures, LLMs). Programming experience with Python, PySpark, microservices, and data modeling. Proficient in LLM orchestration on platforms such as OpenAI on Azure, AWS Bedrock, GCP Vertex AI, or Gemini AI. Architecture & Deployment: designing, deploying, and scaling …
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
Days per Week in Office) I am currently seeking a Contract AWS Data Engineer for a scale-up company with several upcoming greenfield projects. Tech Stack: AWS, Databricks Lakehouse, PySpark, SQL, ClickHouse/MySQL/DynamoDB. If you are interested, please click "Apply" with an updated copy of your CV, and I will contact you to discuss further.
understanding of data integration, data quality, and data governance. Extensive experience working with big data technology tools and platforms such as Microsoft Azure Data Factory, Databricks, Unity Catalog, PySpark, Power BI, Synapse, SQL Server, Cosmos DB, Python. Understanding and application of cloud architectures and microservices in big data solutions. Understanding of the commodities industry. Rate/Duration: …
WE NEED THE PYTHON/DATA ENGINEER TO HAVE: Current DV Security Clearance (Standard or Enhanced). Experience with big data tools such as Hadoop, Cloudera or Elasticsearch. Python/PySpark experience. Experience with Palantir Foundry is nice to have. Experience working in an Agile Scrum environment with tools such as Confluence/Jira. Experience in design, development, test and …
Inside IR35. Start Date: ASAP. Key Skills Required: Azure Data Factory, Azure Functions, SQL, Python. Desirable: Experience with Copilot Studio. Experience designing, developing, and deploying AI solutions. Familiarity with PySpark, PyTorch, or other ML frameworks. Exposure to M365, D365, and low-code/no-code Azure AI tools. If interested, please send a copy of your most recent CV …