London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
IR35. Start Date: ASAP.
Key Skills Required: Azure Data Factory, Azure Functions, SQL, Python.
Desirables: experience with Copilot or Copilot Studio; experience designing, developing, and deploying AI solutions; familiarity with PySpark, PyTorch, or other ML frameworks; exposure to M365, D365, and low-code/no-code Azure AI tools.
If interested, please send a copy of your most recent CV.
…prompt engineering, vector databases, or RAG pipelines. Proven experience with A/B testing, experimentation design, or causal inference to guide product decisions. Exposure to Databricks, MLflow, AWS, and PySpark (or similar technologies) is a plus. Excitement about Ophelos' mission to support households and businesses in breaking the vicious debt cycle.
About Our Team: Ophelos launched in June of …
East London, London, England, United Kingdom Hybrid / WFH Options
Fusion People Ltd
UK role (WFH) - 6 months initial contract - Top rates - Major consultancy urgently requires a Microsoft Fabric specialist with in-depth experience of MS Fabric (tech stack is Microsoft Fabric, PySpark/RSpark and GitHub) for an initial 6-month contract (WFH) who is passionate about building new capabilities from the ground up and wants to help explore the full …
…a bonus (but not essential) if you bring experience in some of the following areas:
- Cloud analytics: Exposure to cloud-based data platforms such as Microsoft Azure or Databricks.
- PySpark: Any hands-on experience using PySpark for data processing.
- Azure services: Familiarity with Azure tools like Data Factory.
- Data fundamentals: Awareness of data structures, algorithms, data quality, governance … skills as you advance in your career with us.
What success would look like:
- Building Reliable Data Pipelines: Consistently delivering well-tested and robust data pipelines using Python and PySpark on Databricks, adhering to established coding standards and software engineering best practices.
- Growing Technical Proficiency: Rapidly developing your skills in our core technologies (Python, PySpark, Databricks, SQL, Git …
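As an editorial aside on the "well-tested data pipelines" point above: one common pattern, sketched here under assumed names (clean_orders and its columns are invented, not taken from the advert), is to keep PySpark transformations as pure functions so they can be unit-tested against a local SparkSession.

```python
# Minimal sketch: a pure PySpark transformation kept separate from I/O so it
# can be unit-tested locally. Function and column names are hypothetical.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def clean_orders(orders: DataFrame) -> DataFrame:
    """Drop rows without an order id and normalise the currency code."""
    return (
        orders
        .filter(F.col("order_id").isNotNull())
        .withColumn("currency", F.upper(F.col("currency")))
    )


def test_clean_orders() -> None:
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    raw = spark.createDataFrame([(1, "gbp"), (None, "usd")], ["order_id", "currency"])
    result = clean_orders(raw).collect()
    assert len(result) == 1            # the null order_id row is removed
    assert result[0]["currency"] == "GBP"
```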
…related field.
• Demonstrated experience with intermediate to advanced full-stack development.
• Demonstrated experience consuming complex REST APIs.
• Demonstrated experience with tools, languages, and frameworks: Python, PySpark, JavaScript, Vue, Nuxt.js, and Viz.js.
• Demonstrated experience with AWS services: S3 and EC2.
• Demonstrated experience with databases: relational, graph (Neo4j/Graph-Tool), and NoSQL/document (MongoDB).
Manchester, North West, United Kingdom Hybrid / WFH Options
Gerrard White
Knowledge of the technical differences between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL). A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science). Experience of WTW's Radar software is preferred. Proficient at communicating results in a concise …
London, South East, England, United Kingdom Hybrid / WFH Options
WüNDER TALENT
Senior Data Engineer | AWS/Databricks/PySpark | London/Glasgow (Hybrid) | August Start
Role: Senior Data Engineer
Location: This is a hybrid engagement represented by 2 days/week onsite, either in Central London or Glasgow.
Start Date: Must be able to start mid-August.
Salary: £80k-£90k (Senior) | £90k-£95k (Lead)
About The Role: Our partner is … decisions, peer reviews and solution design.
Requirements:
- Proven experience as a Data Engineer in cloud-first environments.
- Strong commercial knowledge of AWS services (e.g. S3, Glue, Redshift).
- Advanced PySpark and Databricks experience (Delta Lake, Unity Catalog, Databricks Jobs etc.).
- Proficient in SQL (T-SQL/SparkSQL) and Python for data transformation and scripting.
- Hands-on experience with … engagement represented by 2 days/week onsite, either in Central London or Glasgow. You must be able to start in August.
Senior Data Engineer | AWS/Databricks/PySpark | London/Glasgow (Hybrid) | August Start
…teams as part of a wider trading project. The initial work on the project will involve abstracting code from these product teams into a shared, common Python library leveraging PySpark/DataFrames. You will then serve as an extension of these product teams, building microservices and libraries to solve the common needs. Skills: • Experience with Unit Testing • Preferably …
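To make the "shared, common library" idea above concrete, here is a hypothetical sketch (the helper name and columns are invented): logic that several product teams each re-implement, such as keeping the latest record per key, is pulled into one tested PySpark function that they all import.

```python
# Hypothetical shared-library helper: product teams import this instead of
# re-implementing the same dataframe logic. All names are illustrative.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F
from pyspark.sql.window import Window


def latest_per_key(df: DataFrame, key: str, ts_col: str) -> DataFrame:
    """Keep only the most recent row per key value."""
    w = Window.partitionBy(key).orderBy(F.col(ts_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )
```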
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Vermelo RPO
…ideally with a focus in Motor. Experience and detailed technical knowledge of GLMs/Elastic Nets, GBMs, GAMs, Random Forests, and clustering techniques. Experience in programming languages (e.g. Python, PySpark, R, SAS, SQL). Proficient at communicating results in a concise manner, both verbally and in writing. Behaviours: motivated by technical excellence; team player; self-motivated with a drive to learn …
Manchester, North West, United Kingdom Hybrid / WFH Options
Gerrard White
…ideally with a focus in Motor. Experience and detailed technical knowledge of GLMs/Elastic Nets, GBMs, GAMs, Random Forests, and clustering techniques. Experience in programming languages (e.g. Python, PySpark, R, SAS, SQL). Proficient at communicating results in a concise manner, both verbally and in writing. Behaviours: motivated by technical excellence; team player; self-motivated with a drive to learn …
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
Proven experience as a Programme or Delivery Manager on data-centric programmes. Solid understanding of data ingestion processes and Snowflake data warehousing. Familiarity with AWS Glue, S3, DBT, SnapLogic, PySpark (not hands-on, but able to converse technically). Strong governance and delivery background in a data/tech environment. Excellent communication and stakeholder management skills (must be assertive). Pharma …
…across various platforms. This position is essential for ensuring the integrity, reliability, and accessibility of our data, which supports critical business decisions and drives insights.
**Required Skills**
- **Proficiency in PySpark and AWS:** You should have a strong command of both PySpark for data processing and AWS (Amazon Web Services) for cloud-based solutions.
- **ETL Pipeline Development:** Demonstrated experience … ETL (Extract, Transform, Load) pipelines is crucial. You will be responsible for moving and transforming data from various sources to data warehouses.
- **Programming Expertise:** A solid understanding of Python, PySpark, and SQL is required to manipulate and analyze data efficiently.
- **Knowledge of Spark and Airflow:** In-depth knowledge of Apache Spark for big data processing and Apache Airflow for …
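For readers unfamiliar with the ETL bullet above, a minimal, hypothetical PySpark extract-transform-load step might look like the following; the bucket paths and column names are placeholders, and in practice a job like this would typically be scheduled as an Airflow task.

```python
# Illustrative ETL sketch (paths and columns are placeholders, not from the advert).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: raw JSON events landed in S3.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: deduplicate, derive a partition date, drop malformed rows.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_type").isNotNull())
)

# Load: partitioned Parquet for the warehouse to consume.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```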
Birmingham, West Midlands, England, United Kingdom Hybrid / WFH Options
Client Server Ltd
Data Software Engineer (Python/PySpark) Remote UK to £95k. Are you a data-savvy Software Engineer with strong Python coding skills? You could be progressing your career in a senior, hands-on Data Software Engineer role as part of a friendly and supportive international team at a growing and hugely successful European car insurance tech company as they expand … on your location/preferences.
About you:
- You are degree educated in a relevant discipline, e.g. Computer Science, Mathematics.
- You have a software engineering background with advanced Python and PySpark coding skills.
- You have experience in batch, distributed data processing and near real-time streaming data pipelines with technologies such as Kafka.
- You have experience of Big Data Analytics …
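As a hedged illustration of the streaming requirement above (the broker address and topic name are invented), a near real-time pipeline in PySpark Structured Streaming reading from Kafka might look like this:

```python
# Hypothetical near real-time pipeline: PySpark Structured Streaming over Kafka.
# Requires the spark-sql-kafka connector on the classpath; broker and topic
# names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-stream").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "quotes")
          .load())

# Kafka values arrive as bytes; decode to strings before further parsing.
decoded = events.select(F.col("value").cast("string").alias("json"))

# Write to the console sink purely for demonstration.
query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```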
London, South East, England, United Kingdom Hybrid / WFH Options
Involved Solutions
Key Responsibilities - Azure Data Engineer:
- Design, build and maintain scalable and secure data pipelines on the Azure platform.
- Develop and deploy data ingestion processes using Azure Data Factory, Databricks (PySpark), and Azure Synapse Analytics.
- Optimise ETL/ELT processes to improve performance, reliability and efficiency.
- Integrate multiple data sources including Azure Data Lake (Gen2), SQL-based systems and APIs.
… GDPR and ISO standards).
Required Skills & Experience - Azure Data Engineer:
- Proven commercial experience as a Data Engineer delivering enterprise-scale solutions in Azure
- Azure Data Factory
- Azure Databricks (PySpark)
- Azure Synapse Analytics
- Azure Data Lake Storage (Gen2)
- SQL & Python
- Understanding of CI/CD in a data environment, ideally with tools like Azure DevOps.
- Experience working within consultancy …
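As a sketch of the ingestion responsibility above (the storage account, container and table names are invented, and `spark` is the session Databricks provides in a notebook), a Databricks PySpark step landing raw files from Azure Data Lake Gen2 into a Delta table might look like:

```python
# Hypothetical Databricks ingestion step: ADLS Gen2 CSV files into a Delta table.
# In a Databricks notebook the `spark` session is provided by the runtime.
from pyspark.sql import functions as F

src = "abfss://raw@examplestorage.dfs.core.windows.net/sales/"

df = (spark.read.format("csv")
      .option("header", "true")
      .load(src)
      .withColumn("ingested_at", F.current_timestamp()))

df.write.format("delta").mode("append").saveAsTable("bronze.sales")
```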
In detail, the position encompasses duties and responsibilities as follows: an experienced Data Engineer is required for the Surveillance IT team to develop ingestion pipelines and frameworks across the application portfolio, supporting Trade Surveillance analysts with strategy and decision-making.
…mathematical models, methods, and/or techniques (e.g. algorithm development) to study issues and solve problems; engineering (electrical or computer); and/or high-performance computing. Preferred: Python & PySpark experience.
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Bowerford Associates
…technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team.
Essential Skills:
- Programming languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2).
- Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary.
- Significant AWS or Azure …
…the Right to Work in the UK long-term as our client is NOT offering sponsorship for this role.
KEYWORDS: Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Fabric, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake.
Please note that due to a high level …
Employment Type: Permanent
Salary: £75,000 - £80,000/annum, plus Pension, Good Holiday, Healthcare
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
…technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team.
Essential skills:
- Programming languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2)
- Extensive Data Engineering and Data Analytics hands-on experience
- Significant AWS hands-on experience
- Technical Delivery Manager skills
- Geospatial Data experience (including QGIS …
…support your well-being and career growth.
KEYWORDS: Principal Geospatial Data Engineer, Geospatial, GIS, QGIS, FME, AWS, On-Prem Services, Software Engineering, Data Engineering, Data Analytics, Spark, Java, Python, PySpark, Scala, ETL Tools, AWS Glue.
Please note, to be considered for this role you MUST reside/live in the UK, and you MUST have the Right to Work …
The Software Engineer will run, build and work on enterprise-grade software systems using a modern tech stack including PySpark with Databricks for data engineering tasks, infrastructure as code with AWS CDK, and GraphQL. As a Software Engineer, you are expected to work with architects to design clean, decoupled solutions; create automated tests in support of continuous delivery; adopt … scientific degree or equivalent professional experience. Some level of professional working experience; more if no relevant degree. OO and functional programming experience, design patterns, SOLID principles. Experience in Python, PySpark and/or SQL is preferred. Experience with Scrum, TDD, BDD, pairing, pull requests, Continuous Integration & Delivery. Continuous Integration tools - GitHub, Azure DevOps, Jenkins or similar. Infrastructure as code …
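On the infrastructure-as-code element above, a minimal AWS CDK (Python) sketch is shown here; the stack and bucket names are made up for illustration and are not from the advert.

```python
# Minimal AWS CDK v2 (Python) sketch: one stack provisioning a versioned S3
# bucket for pipeline output. All names are hypothetical.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3


class DataStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(self, "CuratedBucket", versioned=True)


app = cdk.App()
DataStack(app, "ExampleDataStack")
app.synth()
```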
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer). Our … modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark and SQL experience, possess a clear understanding of Databricks, as well as a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and No … Benefits. To apply for this position, please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data …
Cardiff, South Glamorgan, Wales, United Kingdom Hybrid / WFH Options
Octad Recruitment Consultants (Octad Ltd)
…and clients.
Required Skills & Experience - Must-Haves:
- 3+ years of hands-on Azure engineering experience (IaaS → PaaS), including Infra as Code.
- Strong SQL skills and proficiency in Python or PySpark.
- Built or maintained data lakes/warehouses using Synapse, Fabric, Databricks, Snowflake, or Redshift.
- Experience hardening cloud environments (NSGs, identity, Defender).
- Demonstrated automation of backups, CI …
…their Azure data lake using Synapse, Fabric, or an alternative strategy.
- Ingest data from core platforms: NetSuite, HubSpot, and client RFP datasets.
- Automate data pipelines using ADF, Fabric Dataflows, PySpark, or SQL.
- Publish governed datasets with Power BI, enabling row-level security (RLS).
By Year-End: Deliver a production-ready lakehouse powering BI and ready for AI …