and data stores to support organizational requirements. Skills/Qualifications: 4+ years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. 3+ years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines 3+ years of experience working with Snowflake or similar cloud More ❯
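The Pandas/NumPy pipeline work this role describes typically involves staging transforms like the one below — a minimal, illustrative sketch (the schema, column names, and FX rate are invented, not from any specific employer's stack):

```python
import numpy as np
import pandas as pd

def clean_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Typical warehouse-staging transform: type coercion,
    validation, deduplication, and a derived column."""
    df = raw.copy()
    df["order_date"] = pd.to_datetime(df["order_date"])
    # Coerce bad values to NaN, then drop them.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df.dropna(subset=["amount"]).drop_duplicates(subset=["order_id"])
    df["amount_gbp"] = np.round(df["amount"] * 0.79, 2)  # illustrative FX rate
    return df

raw = pd.DataFrame({
    "order_id": [1, 1, 2],
    "order_date": ["2024-01-05", "2024-01-05", "2024-01-06"],
    "amount": ["100.0", "100.0", "bad"],
})
print(len(clean_orders(raw)))  # 1 — duplicate and invalid rows removed
```

The same pattern scales up in PySpark with `withColumn` and `dropDuplicates`; the logic is what interviewers probe, not the library.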
West London, London, England, United Kingdom Hybrid / WFH Options
Young's Employment Services Ltd
on knowledge of tools such as Apache Spark, Kafka, Databricks, DBT or similar Experience building, defining, and owning data models, data lakes, and data warehouses Programming proficiency in Python, PySpark, Scala or Java. Experience operating in a cloud-native environment (e.g. Fabric, AWS, GCP, or Azure). Excellent stakeholder management and communication skills. A strategic mindset, with a practical More ❯
Champion clean code, data lifecycle optimisation, and software engineering best practices What We're Looking For Proven hands-on experience with Databricks platform and orchestration Strong skills in Python, PySpark, and SQL, with knowledge of distributed data systems Expertise in developing full lifecycle data pipelines across ingestion, transformation, and serving layers Experience with data lakehouse architecture, schema design, and More ❯
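The "full lifecycle data pipelines across ingestion, transformation, and serving layers" mentioned above is usually described as a medallion (bronze/silver/gold) architecture. A toy, library-free sketch — layer names follow the common convention, but the record shape is invented for illustration:

```python
from collections import defaultdict

# Bronze: raw events exactly as ingested (some malformed).
bronze = [
    {"user": "a", "value": "10"},
    {"user": "a", "value": "5"},
    {"user": "b", "value": "oops"},  # bad record, filtered at silver
]

def to_silver(records):
    """Silver layer: validated, typed records."""
    out = []
    for r in records:
        try:
            out.append({"user": r["user"], "value": int(r["value"])})
        except (KeyError, ValueError):
            continue  # a real pipeline would route this to a quarantine table
    return out

def to_gold(records):
    """Gold layer: aggregates ready for serving / BI."""
    totals = defaultdict(int)
    for r in records:
        totals[r["user"]] += r["value"]
    return dict(totals)

print(to_gold(to_silver(bronze)))  # {'a': 15}
```

On Databricks each layer would be a Delta table and the functions would be PySpark jobs, but the layered contract — raw in, validated middle, aggregated out — is the same.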
data engineering and reporting. Including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, Pyspark, PowerBI etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured, maintainable systems. Strong communication skills More ❯
development) Strong experience with CI/CD tools and pipelines for data science Solid understanding of AWS services (e.g. EC2, S3, Lambda, Glue) and CDK Proficient in Python and PySpark; SQL fluency Experience with MLflow or other model lifecycle tools Effective communicator and trainer - able to help others upskill Comfortable building internal tools and documentation Nice to Have: Experience More ❯
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/One Lake, and Azure Data Factory. Understanding of data extraction from vendor REST APIs. Spark/PySpark or Python skills are a bonus, as is a willingness to develop them. Experience with monitoring and failure recovery in data pipelines. Excellent problem-solving skills and attention to detail. More ❯
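"Data extraction from vendor REST APIs" in practice usually means walking a paginated endpoint until it is exhausted. A hedged sketch — the stubbed `fetch_page` stands in for a real HTTP call (e.g. something like `requests.get(...).json()`), and the `results`/`next` envelope is a common but hypothetical response shape:

```python
def fetch_page(page: int) -> dict:
    """Stub standing in for a real HTTP call to a vendor API.
    Serves two pages of fake records, then signals completion."""
    data = {1: [{"id": 1}, {"id": 2}], 2: [{"id": 3}]}
    return {
        "results": data.get(page, []),
        "next": page + 1 if page in data else None,
    }

def extract_all(fetch) -> list:
    """Generic paginated-extraction loop: follow 'next' until None."""
    records, page = [], 1
    while page is not None:
        body = fetch(page)
        records.extend(body["results"])
        page = body["next"]
    return records

print(len(extract_all(fetch_page)))  # 3
```

Injecting the fetch function keeps the loop unit-testable without network access — the same structure then drops into an Azure Data Factory custom activity or a notebook.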
Experience with Agile/Scrum Framework. Excellent problem-solving and analytical skills. Excellent communication skills, both at a deep technical level and stakeholder level. Expert experience with Databricks (PySpark). Experience building and maintaining complex ETL Projects, end-to-end (ingestion, processing, storage). Expert knowledge and experience with data modelling, data access, and data storage techniques. Experience More ❯
Azure. Especially Synapse, ADF and Power BI (Datasets and Reports). Ideally SSIS, SSRS, SSAS with some understanding of Power App design and delivery Expert in SQL and Python (PySpark); any other object-oriented language skills would be a benefit Expert in data modelling and data architecture concepts Experience of setup and management of code management & deployment tools More ❯
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
MYO Talent
Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks Microsoft SQL Server Lakehouse, Delta Lake Data Warehousing ETL CDC Stream Processing Database Design ML Python/PySpark Azure Blob Storage Parquet Azure Data Factory Desirable: Any exposure working in a software house, consultancy, retail or retail automotive sector would be beneficial but not essential. More ❯
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Amtis Professional Ltd
Troubleshoot issues and continuously improve data infrastructure Explore AI-driven enhancements to boost data accuracy and productivity Requirements: Strong experience with: Azure Databricks, Data Factory, Blob Storage Python/PySpark SQL Server, Parquet, Delta Lake Deep understanding of: ETL/ELT, CDC, stream processing Lakehouse architecture and data warehousing Scalable pipeline design and database optimisation A proactive mindset, strong More ❯
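The CDC (change data capture) skills listed above come down to applying a stream of insert/update/delete events to a target table — the same operation Delta Lake's `MERGE INTO` performs declaratively at scale. A minimal in-memory sketch; the event shape (`op`/`pk`/`row`) is invented for illustration:

```python
def apply_cdc(target: dict, events: list) -> dict:
    """Apply CDC events keyed by primary key.
    op codes: 'I' insert, 'U' update, 'D' delete."""
    table = dict(target)  # copy so the input table is untouched
    for e in events:
        key = e["pk"]
        if e["op"] == "D":
            table.pop(key, None)
        else:  # 'I' and 'U' are both upserts
            table[key] = e["row"]
    return table

events = [
    {"op": "I", "pk": 1, "row": {"name": "alice"}},
    {"op": "U", "pk": 1, "row": {"name": "alice-updated"}},
    {"op": "I", "pk": 2, "row": {"name": "bob"}},
    {"op": "D", "pk": 2, "row": None},
]
print(apply_cdc({}, events))  # {1: {'name': 'alice-updated'}}
```

Note that event order matters: replaying the same events out of order gives a different table, which is why real CDC pipelines carry sequence numbers or timestamps.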
Atherstone, Warwickshire, West Midlands, United Kingdom Hybrid / WFH Options
Aldi Stores
end-to-end ownership of demand delivery Provide technical guidance for team members Providing 2nd or 3rd level technical support About You Experience using SQL, SQL Server DB, Python & PySpark Experience using Azure Data Factory Experience using Databricks and Cloudsmith Data Warehousing Experience Project Management Experience The ability to interact with the operational business and other departments, translating More ❯
Leeds, West Yorkshire, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
architecture Ensuring best practices in data governance, security, and performance tuning Requirements: Proven experience with Azure Data Services (ADF, Synapse, Data Lake) Strong hands-on experience with Databricks (including PySpark or SQL) Solid SQL skills and understanding of data modelling and ETL/ELT processes Familiarity with Delta Lake and lakehouse architecture A proactive, collaborative approach to problem-solving More ❯
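The ELT pattern behind the ADF/Databricks/Synapse stack above — load raw data first, then transform with SQL inside the engine — can be sketched with stdlib `sqlite3` standing in for the warehouse (table and column names are invented):

```python
import sqlite3

# "Load": land raw rows untransformed, as an ADF copy activity would.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_sales VALUES (?, ?)",
    [("uk", 10.0), ("uk", 5.0), ("fr", 7.0)],
)

# "Transform": aggregate inside the engine into a serving table.
conn.execute("""
    CREATE TABLE sales_by_region AS
    SELECT region, SUM(amount) AS total
    FROM raw_sales
    GROUP BY region
""")
rows = conn.execute(
    "SELECT region, total FROM sales_by_region ORDER BY region"
).fetchall()
print(rows)  # [('fr', 7.0), ('uk', 15.0)]
conn.close()
```

The design point is that transformation logic lives in version-controlled SQL run where the data already sits, rather than in an external ETL tool — the premise behind Synapse SQL, Databricks SQL, and dbt alike.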
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base salary of More ❯
Strong analytical and troubleshooting skills. Desirable Skills Familiarity with state management libraries (MobX, Redux). Exposure to financial data or market analytics projects. Experience with data engineering tools (DuckDB, PySpark, etc.). Knowledge of automated testing frameworks (Playwright, Cypress). Experience of WebAssembly. Python programming experience for data manipulation or API development. Use of AI for creating visualisations. Soft More ❯
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
term data strategy with a strong focus on data integrity and GDPR compliance To be successful in the role you will have Hands-on coding experience with Python or PySpark Proven expertise in building data pipelines using Azure Data Factory or Fabric Pipelines Solid experience with Azure technologies like Lakehouse Architecture, Data Lake, Delta Lake, and Azure Synapse Strong More ❯
data modeling, and software architecture Data Science Library Knowledge: Deep understanding of key Data Science and Machine Learning libraries (e.g., pandas, NumPy, scikit-learn, TensorFlow), with a preference for PySpark experience Model Productionisation: Experience in taking Machine Learning models from development to production CI/CD and MLOps Experience : Familiarity with Continuous Integration and Continuous Deployment pipelines, especially in More ❯
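"Taking Machine Learning models from development to production", as required above, at minimum means serialising the trained artefact and reloading it in a separate serving process. A toy sketch with a hand-rolled mean predictor standing in for a real scikit-learn estimator (MLflow's `log_model`/`load_model` play the roles of the pickle calls here):

```python
import pickle

class MeanModel:
    """Trivial 'model' that predicts the training mean —
    a deliberate stand-in for a real estimator."""
    def fit(self, ys):
        self.mean_ = sum(ys) / len(ys)
        return self

    def predict(self):
        return self.mean_

# Development: train and persist the artefact.
model = MeanModel().fit([1.0, 2.0, 3.0])
blob = pickle.dumps(model)  # in practice: mlflow.sklearn.log_model(...)

# Production: load the artefact in the serving process and score.
served = pickle.loads(blob)
print(served.predict())  # 2.0
```

The CI/CD angle the ad mentions is then about versioning `blob` (model registry), testing `predict` against known inputs, and promoting artefacts between environments — not about the model code itself.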
with cross-functional teams, including technical and non-technical stakeholders Passion for learning new skills and staying up-to-date with ML algorithms Bonus points Experience with Databricks and PySpark Experience with deep learning & large language models Experience with traditional, semantic, and hybrid search frameworks (e.g. Elasticsearch) Experience working with AWS or another cloud platform (GCP/Azure) Additional More ❯
work with multi-functional teams, including technical and non-technical stakeholders Passion for learning new skills and staying up-to-date with ML algorithms Bonus points Experience with Databricks and PySpark Experience with deep learning & large language models Experience with traditional, semantic, and hybrid search frameworks (e.g. Elasticsearch) Experience working with AWS or another cloud platform (GCP/Azure) Additional More ❯
classifiers, deep learning, or large language models Experience with experiment design and conducting A/B tests Experience building shared or platform-style ML systems Experience with Databricks and PySpark Experience working with AWS or another cloud platform (GCP/Azure) Additional Information Health + Mental Wellbeing PMI and cash plan healthcare access with Bupa Subsidised counselling and coaching More ❯