Banking and Financial Services sector is advantageous. • Deep knowledge of, or experience with, as many of the following as possible: Azure Cloud Data Components | Databricks | Python | PySpark | Terraform | APIs | Lakehouse | Data Mesh | NoSQL DBs | GitHub. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. Proven experience as a Data more »
Penrith, Cumbria, United Kingdom Hybrid / WFH Options
Computer Futures
Knowledge of Spark architecture and modern Data Warehouse/Data Lake/Lakehouse techniques. Build transformation tables using SQL. Moderate-level knowledge of Python/PySpark or an equivalent programming language. Power BI Data Gateways and Dataflows, permissions. Creation, utilisation, optimisation and maintenance of relational SQL and NoSQL databases. Experienced working with more »
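The "transformation tables using SQL" requirement above can be sketched with a minimal, hypothetical example. SQLite stands in here for a warehouse or lakehouse SQL engine, and the table and column names are invented for illustration only:

```python
import sqlite3

# In-memory database as a stand-in for a warehouse/lakehouse SQL engine.
conn = sqlite3.connect(":memory:")

# Hypothetical raw source table (names are illustrative, not from any real schema).
conn.execute("CREATE TABLE raw_orders (order_id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "UK", 100.0), (2, "UK", 50.0), (3, "DE", 75.0)],
)

# Build a transformation table: aggregate raw rows into a reporting-friendly shape
# using CREATE TABLE ... AS SELECT (the common pattern for SQL transformation tables).
conn.execute("""
    CREATE TABLE orders_by_region AS
    SELECT region, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM raw_orders
    GROUP BY region
""")

rows = conn.execute(
    "SELECT region, order_count, total_amount FROM orders_by_region ORDER BY region"
).fetchall()
print(rows)  # [('DE', 1, 75.0), ('UK', 2, 150.0)]
```

In a Spark SQL or Databricks setting the same pattern would typically be a `CREATE TABLE ... AS SELECT` over Delta tables; that mapping is an assumption, not something the listing specifies.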
Broad knowledge of different technical areas (DevOps, Engineering, Architecture, various datastores, etc.). Strong proficiency in SQL & Python. Experience working with Spark (PySpark), data streaming, and big data solutions. Experience with Databricks would be great. Designing and building data warehouses & data lakes in BigQuery, including data modeling more »
Data governance (Purview, Unity Catalog) Databricks Delta Lake Storage Azure DevOps DESIRED SKILLS Advanced Analytics Data Technologies Databricks, Delta Lake, Synapse Spark SQL, PySpark Azure Data Explorer Logic Apps, Key Vault Semi-structured data processing Integration Runtime Coding experience: Python, C#, Java for data analysis purposes One of more »
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Damia Group Ltd
over 150 PB of data. As a Spark Scala Engineer, you will have the responsibility to refactor legacy ETL code (for example, DataStage) into PySpark using Prophecy low-code/no-code and available converters. Converted code is causing failures/performance issues. Your responsibilities: As a Spark Scala Engineer more »
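The refactoring work described above usually means replacing row-at-a-time procedural ETL logic with a set-based, declarative transform. A heavily simplified, hypothetical sketch in plain Python (PySpark itself is omitted so the example stays self-contained; the field names and business rule are invented):

```python
# Hypothetical legacy-style transform: row-at-a-time with mutable state,
# mirroring the shape of a procedural ETL stage (e.g. a DataStage job).
def legacy_transform(rows):
    out = []
    for row in rows:
        if row["status"] == "ACTIVE":
            out.append({"id": row["id"], "amount": row["amount"] * 1.2})
    return out

# Set-based rewrite: the same logic as a single declarative expression,
# which is the style a PySpark conversion targets (filter + column expression).
def refactored_transform(rows):
    return [
        {"id": r["id"], "amount": r["amount"] * 1.2}
        for r in rows
        if r["status"] == "ACTIVE"
    ]

sample = [
    {"id": 1, "status": "ACTIVE", "amount": 10.0},
    {"id": 2, "status": "CLOSED", "amount": 5.0},
]
assert legacy_transform(sample) == refactored_transform(sample)
print(refactored_transform(sample))  # [{'id': 1, 'amount': 12.0}]
```

In PySpark the refactored form would correspond roughly to `df.filter(...)` followed by a column expression; whether a converter such as Prophecy emits exactly that shape is an assumption about the target codebase.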
Azure Search, Azure Stream Analytics Delta Lake and Data Lakes Apache Spark Pools, SQL Pools (dpools and spools) Experience in Python, C# coding, Spark, PySpark, and Unix shell/Perl scripting. Experience in API data sourcing using REST, SOAP, and other API methodologies. Experience working with structured and unstructured more »
Guildford, Surrey, South East, United Kingdom Hybrid / WFH Options
Allianz Insurance Plc
issues. Familiarity with monitoring tools to track model performance, resource utilization, and system health. Proficiency in programming languages such as Python, and knowledge of PySpark and Spark pool clusters as well as ML libraries and frameworks. Proficiency with observability tools such as Prometheus and Grafana. Infrastructure-as-Code (IaC more »
Preferred Experience in Microsoft Azure services and Databricks Spark, Redshift, Hadoop MapReduce or other Big Data frameworks Code management tools (Git, sbt, Maven) PySpark, Scala or other functional programming languages Analytics tools such as R or SPSS Any understanding of Web Analytics data such as Adobe Analytics or more »
Knutsford, England, United Kingdom Hybrid / WFH Options
Capgemini
warehousing solutions. Familiarity with DBT (Data Build Tool) for managing transformations in the data pipeline. Strong programming skills in technologies like Python, Scala, Spark, PySpark or Ab Initio, Glue, Starburst, Snowflake, Redshift, hybrid (on-prem to cloud) for building data pipelines and ETL processes. Experience with data pipeline more »
Northamptonshire, England, United Kingdom Hybrid / WFH Options
Capgemini
warehousing solutions. Familiarity with DBT (Data Build Tool) for managing transformations in the data pipeline. Strong programming skills in technologies like Python, Scala, Spark, PySpark or Ab Initio, Glue, Starburst, Snowflake, Redshift, hybrid (on-prem to cloud) for building data pipelines and ETL processes. Experience with data pipeline more »
GIS) Experience with cloud infrastructure Understanding of NLP algorithms and techniques Experience with Large Language Models (fine-tuning, RAG, agents) Our technology stack: Python, PySpark for processing big data; AWS: EMR, ECS, Athena, etc.; DevOps: Terraform, Docker, Airflow, MLflow Additional Information Why should you jump on board? We pay more »
considering applicants with experience in Java, Scala, C# or C++. Practical experience with distributed processing of large amounts of data. Hands-on experience with PySpark/Spark. Experience working according to DevOps best practices (CI/CD, testing, familiarity with GitHub/GitLab). Good knowledge of at least more »
Banking and Financial Services sector is advantageous. Deep knowledge of, or experience with, as many of the following as possible: Azure Cloud Data Components | Databricks | Python | PySpark | Terraform | APIs | Lakehouse | Data Mesh | NoSQL DBs | GitHub Person Specification Self-motivator with a desire to learn new skills and embrace new technologies in more »
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Viqu Limited
a Senior Data Engineer with a strong focus on Databricks. Proficiency in Python and SQL for data processing and analysis. Spark Python API (PySpark). Hands-on experience with AWS services related to data storage and processing (e.g., S3, Redshift, Glue). In-depth knowledge of Databricks Delta Lake more »
role is Inside IR35 and will be a couple of days per month onsite in London. Skills Required: Essential experience in Databricks, ADF, SQL, PySpark, CI/CD. Strong design and coding skills (e.g. Python, Scala, JavaScript). Experience with Microsoft or AWS data stack e.g. Microsoft Azure Data more »
Basingstoke, England, United Kingdom Hybrid / WFH Options
Blatchford
with various databases e.g. MS SQL, Azure Cosmos DB. Skilled at optimizing large and complex SQL statements. Proficiency in Python and experience with PySpark. Experience using CI/CD, Microsoft Azure, and Azure DevOps in an agile environment. Knowledge of Azure ETL services, i.e. Data Factory, Synapse more »
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment Skilled in at least one of Python, PySpark, SQL or similar Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive more »
experience Experienced with AWS and services like S3. Experienced with Kafka for data streaming. Familiarity with BI reporting tools. Good working experience with Airflow, PySpark and Apache Beam. Worked with Java for building data applications – advantageous. Worked within the commodities space – advantageous. Not quite right for you? Refer a more »
cross-functional teams entrusted with business-critical platforms. Desirable skills & experience: Working to an Agile methodology and familiarity with Azure DevOps. Deep automation knowledge with Python. Skilled in PySpark and Synapse. Experience with data modelling and visualisation in Power BI (or alternative). A strong understanding of architecting data platforms, BI, MI, or analytics solutions. Strong more »
Exeter, Devon, South West, United Kingdom Hybrid / WFH Options
Staffworx Limited
Ability to optimise workflows and analysis for MapReduce processing Experience with BI software (Power BI, Tableau, Qlik Sense) Any experience with data engineering, PySpark, Databricks, Delta Lakes beneficial Confident presenting complex problems in ways suitable to the target audience Experience leading or managing a small analysis team Familiarity working more »
data warehouse design. Cloud data products such as: Data Factory, Event Hubs, Data Lake, Synapse, Azure SQL Server. Experience developing in Databricks and coding with PySpark and Spark SQL. Proficient in ETL coding standards. Data encryption techniques and standards. Knowledge of relevant legislation such as: Data Protection Act, EU Procurement more »
Saffron Walden, Essex, South East, United Kingdom Hybrid / WFH Options
EMBL-EBI
expertise and requirements. You have A BSc or MSc in computer science or related fields. Expertise in Python, including popular Python libraries: NumPy, Pandas, PySpark and frameworks: Django, Django REST Framework, FastAPI. Hands-on experience with both relational (e.g. PostgreSQL) and non-relational databases (e.g. Elasticsearch, Redis). Strong more »
workflows, DLT pipelines, Unity Catalog 3+ years working with data warehouses, relational databases and query languages 2+ years building data pipelines in Databricks using PySpark, Scala and/or Spark SQL and ability to work across structured, semi-structured and unstructured data 2+ years data modeling (e.g., data vault more »
London (Hybrid working) and is paying up to £100,000 per annum Key Skills Required Tools - Data Factory/Databricks/Python/PySpark/SQL/Power BI Commercial experience of Kimball/Inmon data modelling Knowledge of London Market Insurance is Highly Desirable Experience of Synapse beneficial more »