that suit you. About you: You are an experienced Data Engineer with advanced Python coding skills. You have experience with Spark and/or PySpark. You have experience of working for a technology/SaaS/software product company. You have a strong understanding of RDBMS, Data Warehousing, Data more »
Chelmsford, Essex, United Kingdom Hybrid / WFH Options
Senitor Associates Ltd
pools. Develop, maintain, and optimize semantic data models using Azure Synapse Analytics/Fabric, Spark notebooks. Ensure data model accuracy, scalability, and performance. Use PySpark within Azure notebooks to extract, transform, and load (ETL/ELT) data from raw formats (e.g. Delta, Parquet, CSV) stored in ADLS Gen2. Implement more »
Newton, England, United Kingdom Hybrid / WFH Options
ARM
PhD degree in a relevant field, such as computer science, physics, statistics, maths, etc. Experience working with imbalanced datasets. Experience with distributed compute using PySpark. Experience with databases (NoSQL, SQL). Experience serving models through a service-oriented architecture. Experience serving ML models in a streaming environment. Experience more »
City Of London, England, United Kingdom Hybrid / WFH Options
Premier Group Recruitment
data modelling. Technical Skills: Proficient in SQL Server and relational database management. Experience with cloud platforms like Azure (Synapse, Data Lake). Programming in Python, PySpark, and T-SQL. Additional Skills: Familiarity with data analysis tools and workflows. Strong communication and collaboration skills. Experience in fast-paced, team-oriented environments more »
SR3, New Silksworth, Sunderland, Tyne & Wear, United Kingdom Hybrid / WFH Options
Avanti Recruitment
record in data warehousing, ETL processes, and data modelling. Azure certifications in Database Administration or Data Engineering. Nice to have: Experience with Python/PySpark and MySQL on Azure. Additional Azure certifications (e.g., Fabric Analytics Engineer). Knowledge of data visualization platforms (Tableau/Power BI). This is an exciting opportunity more »
data integration pipelines, transformations, pipeline scheduling, Ontology, and applications in Palantir Foundry. Design, develop and deploy data solutions in Palantir with excellent skills in PySpark and Spark SQL for data transformations. Experience in designing and building interactive data applications working with Ontology, actions, functions, object views, automate, indexing, data more »
Liverpool, England, United Kingdom Hybrid / WFH Options
Maxwell Bond
level database development and admin in both Azure cloud and on-prem. Proven working experience with Azure Data Warehouse, Databricks, SQL and Python/PySpark. ETL & ETL frameworks. Experience of consuming and combining data for near real-time answers. This position does not offer Visa Sponsorship; please refrain more »
City of London, London, United Kingdom Hybrid / WFH Options
Nigel Frank International
Requirements Excellent SQL and Python scripting skills Experience designing and developing data warehouses and data lakes/lakehouses Experience designing solutions involving databricks and PySpark Experience with Azure technologies including Data Lake, Data Factory, and Synapse Experience with data visualisation tools such as Power BI Knowledge of Agile methodology more »
and Synapse SQL pools. Build and maintain dimensional and relational models based on the business requirements. Ensure data model accuracy, scalability, and performance. Use PySpark within Azure Synapse notebooks to extract, transform, and load (ETL/ELT) data from raw formats (e.g., Delta, Parquet, CSV) stored in ADLS Gen2. more »
Newcastle upon Tyne, Tyne and Wear, Tyne & Wear, United Kingdom Hybrid / WFH Options
Nigel Frank International
Working with and modelling data warehouses. Skills & Qualifications: Strong technical expertise using SQL Server and Azure Data Factory for ETL. Solid experience with Databricks, PySpark, etc. Understanding of Agile methodologies, including use of Git. Experience with Python and Spark. Benefits: £60,000 - £65,000. This role is an urgent more »
lifecycle management using Azure Databricks. Good to have: skills in containerization like Docker and ACR. High-level expertise in the Python programming language, including PySpark. Proficiency in utilising machine learning libraries and frameworks like PyTorch, ONNX, and XGBoost. Strong understanding of software testing and CI/CD principles and more »
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Bowerford Associates
less experienced members of the team. Essential skills: ETL Tools - Azure Data Fabric, Azure Data Factory and Databricks or similar. Languages - Python, Spark, Java, PySpark, Spark SQL or similar. Lakes - Azure Data, Delta Lake, Data Lake or Databricks Lakehouse. SQL and/or data warehousing design patterns and implementation. … Data Architectures, Architecture Patterns, Cloud Data Migration, Data Governance, ETL, Databricks, Azure Data Fabric, Azure Data Factory, Azure Data, Delta Lake, Databricks Lakehouse, Python, PySpark, Spark SQL, SQL, Data Warehousing. Please note that due to a high level of applications, we can only respond to applicants whose skills and more »
Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such as PySpark, Python (with Pandas if no PySpark), T-SQL, and Spark SQL. Strong understanding of data modelling, ETL processes, and data warehousing concepts. Knowledge of more »
techniques, Data Connection, and Security setup. Proficiency in developing data integration pipelines, transformations, pipeline scheduling, Ontology, and applications in Palantir Foundry. Excellent skills in PySpark and Spark SQL for data transformations. Experience in designing and building interactive data applications and developing parameterized, interactive dashboards in Quiver. Desirable skills include more »
Leeds, England, United Kingdom Hybrid / WFH Options
Fruition IT
technical stakeholders. Experience Required: Extensive experience in training, deploying and maintaining Machine Learning models. Data Warehousing and ETL tools. Python and surrounding ML tech: PySpark, Snowflake, scikit-learn, TensorFlow, PyTorch, etc. Infrastructure as Code - Terraform, Ansible. Stakeholder Management - Tech and Non-Technical. The Offer: Base Salary more »
between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL). A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science). Experience of WTW’s Radar software is preferred. Proficient at more »
GBMs, Elastic Net GLMs, GAMs, Decision Trees, Random Forests, Neural Nets and Clustering. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL). A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science). Experience of WTW’s Radar software is preferred. Proficient at more »
Leeds, England, United Kingdom Hybrid / WFH Options
Mastek
OBIEE, Workato and PL/SQL. Design and build data solutions on Azure, leveraging Databricks, Data Factory, and other Azure services. Utilize Python and PySpark for data transformation, analysis, and real-time streaming. Collaborate with cross-functional teams to gather requirements, design solutions, and deliver insights. Implement and maintain … Technologies: Databricks, Data Factory: Expertise in data engineering and orchestration. DevOps, Storage Explorer, Data Studio: Competence in deployment, storage management, and development tools. Python, PySpark: Advanced coding skills, including real-time data streaming through Autoloader. Development Tools: VS Code, Jira, Confluence, Bitbucket. Service Management: Experience with ServiceNow. API Integration more »
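The "real-time data streaming through Autoloader" item refers to Databricks Auto Loader. A configuration sketch only: it runs on Databricks, where a `spark` session and the `cloudFiles` streaming source are provided, and all paths and table names below are hypothetical.

```python
# Databricks Auto Loader sketch -- incrementally ingest new files landing
# in a raw folder into a bronze Delta table. Assumes a Databricks runtime;
# `spark` is the ambient session there.
stream = (spark.readStream
    .format("cloudFiles")                     # Auto Loader source
    .option("cloudFiles.format", "json")      # format of the incoming files
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders/_schema")
    .load("/mnt/raw/orders/"))                # hypothetical landing path

(stream.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/orders")
    .trigger(availableNow=True)               # process backlog, then stop
    .toTable("bronze.orders"))                # hypothetical target table
```

The `availableNow` trigger gives batch-style scheduled runs over a streaming source; dropping it yields a continuously running ingest.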
Dubai? Senior Data Engineer – Relocate to Dubai! – Full Time – Hybrid Working Are you a skilled Senior Data Engineer with expertise in Python, SQL, and PySpark? Are you excited to work in a fast-paced financial sector with opportunities for growth and cutting-edge projects? This is your chance to … high-impact data projects and opportunities for professional advancement. What We’re Looking For: Experience: Data Engineer, with proven expertise in Python, SQL, and PySpark. Technical Skills: Strong experience in building and maintaining data pipelines and ETL processes. Familiarity with cloud-based environments (AWS, Azure, GCP) is a more »
scalable, automated ETL pipelines in an AWS cloud environment using AWS S3 Cloud Object Storage. Strong coding skills using Hive SQL, Spark SQL, Python, PySpark and Bash. Experience of working with a wide variety of structured and unstructured data. You and your role As a Data Engineer at DWP … you'll be executing code across a full tech stack, including Azure, Databricks, PySpark and Pandas, helping the department to move towards a cloud computing environment, working with huge data sets as part of our DataWorks platform - a system that provides Universal Credit data to our Data Science team more »