Proven experience with ETL/ELT, including Lakehouse, Pipeline Design, and Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, and Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, and Data Lineage.
London, South East England, United Kingdom. Hybrid / WFH Options
Radley James
working in cloud-native environments (AWS preferred). Strong proficiency with Python and SQL. Extensive hands-on experience with AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., Terraform, CloudFormation). Solid understanding …
machine learning, with a strong portfolio of relevant projects. Proficiency in Python with libraries such as TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools such as Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL …
London, South East England, United Kingdom. Hybrid / WFH Options
Peaple Talent
Strong experience designing and delivering data solutions in Databricks. Proficient with SQL and Python. Experience using Big Data technologies such as Apache Spark or PySpark. Great communication skills, engaging effectively with senior stakeholders. Nice to haves: Azure/AWS Data Engineering certifications; Databricks certifications. What's in it for …
System Integration, Application Development, or Data Warehouse projects, across technologies used in the enterprise space. Software development experience using object-oriented languages (e.g., Python, PySpark) and frameworks. Stakeholder management. Expertise in relational and dimensional modelling, including big data technologies. Exposure across the full SDLC process, including testing and deployment.
our ambitious data initiatives and future projects. IT Manager - Microsoft Azure - The Skills You'll Need to Succeed: Mastery of Databricks, Python/PySpark, and SQL/Spark SQL. Experience in Big Data/ETL (Spark and Databricks preferred). Expertise in Azure. Proficiency with version control (Git …
London, South East England, United Kingdom. Hybrid / WFH Options
83zero
similar tools is valuable (Collibra, Informatica Data Quality/MDM/Axon, etc.). Data Architecture experience is a bonus. Python, Scala, Databricks Spark, and PySpark, with Data Engineering skills. Ownership and the ability to drive implementation/solution design.
practices in data management and governance, guide in structuring cloud environments, and support data initiatives and future projects. Qualifications: Proficiency in Databricks, Python/PySpark, and SQL/Spark SQL. Experience with Big Data/ETL processes, preferably Spark and Databricks. Expertise in the Azure cloud platform. Knowledge of version control …
valuable insights. Desirable: Experience working with enterprise data warehouse or data lake platforms. Experience working with a cloud platform such as AWS. Have used PySpark for data manipulation. Previous exposure to game or IoT telemetry events and how such data is generated. Knowledge of best practices involving data governance …
good communicator with your team. Key Skills: Python at the software engineering level, including unit and integration test experience. Distributed computing knowledge covered by PySpark or Scala; can debug issues in the Spark UI and knows how to optimise for this purpose. AWS experience. Good understanding of data modelling, change data …
London, South East England, United Kingdom. Hybrid / WFH Options
La Fosse
Collaborate across technical and non-technical teams. Troubleshoot issues and support wider team adoption of the platform. What You’ll Bring: Proficiency in Python, PySpark, Spark SQL, or Java. Experience with cloud tools (Lambda, S3, EKS, IAM). Knowledge of Docker, Terraform, GitHub Actions. Understanding of data quality frameworks. Strong …
coding languages, e.g. Python, R, Scala, etc. (Python preferred). Proficiency in database technologies, e.g. SQL, ETL, NoSQL, DW, and Big Data technologies, e.g. PySpark, Hive, etc. Experience working with structured and also unstructured data, e.g. text, PDFs, JPGs, call recordings, video, etc. Knowledge of machine learning modelling techniques …
London (City of London), South East England, United Kingdom
Wilson Brown
key role in building their scalable data capabilities: designing ingestion pipelines, high-performance APIs, and real-time data processing systems. Key Responsibilities. Stack: Python, PySpark, Linux, PostgreSQL, SQL Server, Databricks, Azure, AWS. Design and implement large-scale data ingestion, transformation, and analysis solutions. Model datasets and develop indicators to …
queries for huge datasets. Has a solid understanding of blockchain ecosystem elements such as DeFi, exchanges, wallets, smart contracts, mixers, and privacy services. Databricks and PySpark. Analysing blockchain data. Building and maintaining data pipelines. Deploying machine learning models. Use of graph analytics and graph neural networks. If this sounds like …
skills: Typical Data Engineering Experience required (3+ years). Strong knowledge and experience: Azure Data Factory and Synapse data solution provision; Azure DevOps; Power BI; Python; PySpark (preference will be given to those who hold relevant certifications). Proficient in SQL. Knowledge of Terraform. Ability to develop and deliver complex visualisation, reporting …
Engineering Experience required. ACTIVE SC is mandatory. Essential requirements: Azure Data Factory and Synapse data solution provision; Azure DevOps; Microsoft Azure; Power BI; Python; PySpark; Dimensional Data Model; Semantic Data Models, including integration with Power BI. Data Engineering Capabilities. Business analysis to understand service needs and document them accurately …
Newbury, Berkshire, United Kingdom. Hybrid / WFH Options
Intuita - Vacancies
effectiveness, including Azure DevOps. Considerable experience designing and building operationally efficient pipelines, utilising core Azure components such as Azure Data Factory, Azure Databricks, and PySpark. Proven experience in modelling data through a medallion-based architecture, with curated dimensional models in the gold layer built for analytical use. Strong …
in Computer Science, Statistics, Data Science, or a related field. Extensive experience in clinical data programming and data visualization. Strong experience with Python programming (PySpark, Databricks, pandas, NumPy) is mandatory for this role. Good SDTM knowledge. Strong SAS/R programming and SQL expertise. Proficiency in CDISC SDTM standards.
Knowledge of Data Warehouse/Data Lake architectures and technologies. Strong working knowledge of a language for data analysis and scripting, such as Python, PySpark, R, Java, or Scala. Experience with any of the following would be desirable but not essential: Microsoft's Fabric data platform; experience with ADF …
Analyst: Must hold experience with data analytic techniques, ideally applied to real-life objects. Must hold a year's professional experience using Python/PySpark/Pandas. Experience with processing data and working with databases/data lakes (SQL). Strong understanding of data manipulation, analysis, and processing. Ability to …
London, South East England, United Kingdom. Hybrid / WFH Options
Trust In SODA
following areas: Data Warehousing (Databricks); Data Modelling (Medallion Architecture, Facts/Dimensions); Azure Data Stack (Data Factory/Synapse); Visualisation (Power BI); Coding Best Practice (Python, PySpark); Real-Time Processing (SQL); Insurance/Finance experience; Startup experience; Leadership/Line Management experience. What’s in it for you? Remote-First Working …
sprint planning sessions. Monitor data pipeline executions and investigate test failures or anomalies. Document test results, defects, and quality metrics. Preferred qualifications: Experience with PySpark or notebooks in Databricks. Exposure to Azure DevOps, unit testing frameworks, or Great Expectations for data testing. Knowledge of data warehousing or medallion architecture …
data technologies. Experience in designing, managing, and overseeing task assignment for technical teams. Mentoring data engineers. Strong exposure to SQL, Azure Data Factory, Databricks, and PySpark is a must-have. Experience in Medallion Silver Layer modelling. Experience in an Agile project environment. Insurance experience: Policy and Claims. Understanding of DevOps, continuous …
Tunbridge Wells, Kent, United Kingdom. Hybrid / WFH Options
ADLIB
record of delivering machine learning or AI projects end-to-end. Hands-on skills in Python, with frameworks such as Scikit-learn, TensorFlow, PyTorch, or PySpark. Deep understanding of data science best practices, including MLOps. Strong stakeholder communication skills, able to translate complex insights into business impact. Experience working in …
Kent, South East, United Kingdom. Hybrid / WFH Options
ADLIB Recruitment
record of delivering machine learning or AI projects end-to-end. Hands-on skills in Python, with frameworks such as Scikit-learn, TensorFlow, PyTorch, or PySpark. Deep understanding of data science best practices, including MLOps. Strong stakeholder communication skills, able to translate complex insights into business impact. Experience working in cross …