SR3, New Silksworth, Sunderland, Tyne & Wear, United Kingdom Hybrid / WFH Options
Avanti Recruitment
… record in data warehousing, ETL processes, and data modelling
- Azure certifications in Database Administration or Data Engineering
Nice to have:
- Experience with Python/PySpark and MySQL on Azure
- Additional Azure certifications (e.g., Fabric Analytics Engineer)
- Knowledge of data visualization platforms (Tableau/Power BI)
This is an exciting opportunity …
Sunderland, Tyne and Wear, Tyne & Wear, United Kingdom
Nigel Frank International
Working with and modelling data warehouses.
Skills & Qualifications:
- Strong technical expertise using SQL Server and Azure Data Factory for ETL
- Solid experience with Databricks, PySpark etc.
- Understanding of Agile methodologies, including use of Git
- Experience with Python and Spark
Benefits: £60,000 - £65,000
To apply for this role …
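As a concrete illustration of the stack this listing names (SQL Server as the source, Azure Data Factory orchestrating, Databricks/PySpark transforming), here is a minimal sketch of the extract-transform-load step an ADF pipeline might trigger. The server address, credentials, table names and output path are all hypothetical placeholders, not taken from the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("warehouse-etl").getOrCreate()

# Extract: read a source table from SQL Server over JDBC
# (requires the SQL Server JDBC driver on the cluster classpath;
# host, database, table and credentials are placeholders).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-host:1433;databaseName=sales")
    .option("dbtable", "dbo.Orders")
    .option("user", "etl_user")
    .option("password", "<secret>")
    .load()
)

# Transform: a simple daily aggregation as the modelling step.
daily_totals = (
    orders.withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Load: write to Delta, the usual target on Databricks.
daily_totals.write.format("delta").mode("overwrite").save("/mnt/warehouse/daily_totals")
```

In practice Azure Data Factory would run something like this as a Databricks notebook or job activity, with credentials pulled from a secret scope rather than hard-coded.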
Newcastle upon Tyne, Tyne and Wear, Tyne & Wear, United Kingdom Hybrid / WFH Options
Nigel Frank International
Working with and modelling data warehouses.
Skills & Qualifications:
- Strong technical expertise using SQL Server and Azure Data Factory for ETL
- Solid experience with Databricks, PySpark etc.
- Understanding of Agile methodologies, including use of Git
- Experience with Python and Spark
Benefits: £60,000 - £65,000
This role is an urgent …
Newcastle upon Tyne, Tyne and Wear, Tyne & Wear, United Kingdom
Nigel Frank International
To be successful in the role you will have:
- Azure Data Platform experience
- Understanding of Databricks (beneficial)
- Coding experience with both Python/PySpark and SQL
- Experience working with Azure Data Factory for creating ETL solutions
This is just a brief overview of the role. For the full …
Newcastle upon Tyne, Tyne and Wear, Tyne & Wear, United Kingdom
Nigel Frank International
To be successful in the role you will have:
- Strong Azure Data Platform experience
- Strong understanding of Databricks
- Coding experience with both Python/PySpark and SQL
- Experience working with Azure Data Factory for creating ETL solutions
This is just a brief overview of the role. For the full …
… will have:
- Strong experience in a Data Engineering capacity working with SQL Server and Azure Databricks
- Experience working with Python/Spark/PySpark
- Experience creating data pipelines with Azure Data Factory
This is just a brief overview of the role. For the full information, simply apply to …
Sunderland, Tyne and Wear, Tyne & Wear, United Kingdom
Nigel Frank International
… will have:
- Strong experience in a Data Engineering capacity working with SQL Server and Azure Databricks
- Experience working with Python/Spark/PySpark
- Experience creating data pipelines with Azure Data Factory
This is just a brief overview of the role. For the full information, simply apply to …
Leeds, England, United Kingdom Hybrid / WFH Options
Fruition IT
… technical stakeholders.
Experience Required:
- Extensive experience in training, deploying and maintaining Machine Learning models
- Data Warehousing and ETL tools
- Python and surrounding ML tech: PySpark, Snowflake, scikit-learn, TensorFlow, PyTorch etc.
- Infrastructure as Code: Terraform, Ansible
- Stakeholder Management: technical and non-technical
The Offer: Base Salary …
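This listing asks for experience training, evaluating and persisting machine learning models with the scikit-learn family of tools. A minimal sketch of that workflow, using synthetic stand-in data; the features, metric threshold and file name are illustrative, not from the listing.

```python
import numpy as np
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical training data: replace with features pulled from the warehouse.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                         # stand-in feature matrix
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # stand-in label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate before deploying; AUC is one common promotion gate.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")

# Persist the fitted model so a serving job can load and maintain it.
joblib.dump(model, "model.joblib")
```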
… GBMs, Elastic Net GLMs, GAMs, Decision Trees, Random Forests, Neural Nets and Clustering
- Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL)
- A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science)
- Experience of WTW’s Radar and Emblem software is preferred …
… science
- Experience and detailed technical knowledge of GLMs/Elastic Nets, GBMs, GAMs, Random Forests, and clustering techniques
- Experience in programming languages (e.g. Python, PySpark, R, SAS, SQL)
- Proficient at communicating results in a concise manner, both verbally and in writing
Behaviours:
- Motivated by technical excellence
- Team player
- Self-motivated …
… predictive modelling techniques: Logistic Regression, GBMs, Elastic Net GLMs, GAMs, Decision Trees, Random Forests, Neural Nets and Clustering
- Experience in programming languages (e.g. Python, PySpark, SAS, SQL)
- A good quantitative degree in, but not limited to: Mathematics, Statistics, Engineering, Physics, Computer Science
- Proficient at communicating results in a concise …
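The "Elastic Net GLM" these pricing listings ask for maps directly onto scikit-learn's logistic regression with a mixed L1/L2 penalty. A minimal sketch on synthetic data; the penalty strength, l1_ratio and data shape are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic binary target standing in for, say, claims data.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))
y = (0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.logistic(size=5000) > 0).astype(int)

# Elastic Net GLM: logistic regression with a blended L1/L2 penalty.
# l1_ratio mixes lasso (1.0) and ridge (0.0); the saga solver supports it.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
model.fit(X, y)

# Sparse-ish coefficients are the point of the L1 component.
print(model.named_steps["logisticregression"].coef_)
```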
… have the following skills and experience:
- Ability and experience interacting with key stakeholders
- Cloud experience, Azure preferred
- Containerisation experience, Kubernetes preferred
- Prior experience with PySpark
- Understanding of IaC/Terraform
THE BENEFITS: You will receive a salary dependent on experience, up to £80,000. On top of …
… data and BI software tools and systems
- Designing, coding, testing and documenting all new or modified applications and programs, using languages such as Python, PySpark, SQL, Java and other industry-standard tools
- Analysing user requirements and, based on findings, designing functional specifications for front-end applications
- Modelling a collaborative …
- Extensive knowledge of integration and transformation within Cloud Data Warehouses (e.g. Azure, AWS, GCP)
Experience Essential:
- Experience with programming languages such as Python, SQL, PySpark, Java
- Experience of using data engineering expertise to architect the most appropriate solution design to suit a particular requirement or set of requirements
- Experience …
Leeds, England, United Kingdom Hybrid / WFH Options
Mastek
… OBIEE, Workato and PL/SQL.
- Design and build data solutions on Azure, leveraging Databricks, Data Factory, and other Azure services
- Utilize Python and PySpark for data transformation, analysis, and real-time streaming
- Collaborate with cross-functional teams to gather requirements, design solutions, and deliver insights
- Implement and maintain …
Technologies:
- Databricks, Data Factory: expertise in data engineering and orchestration
- DevOps, Storage Explorer, Data Studio: competence in deployment, storage management, and development tools
- Python, PySpark: advanced coding skills, including real-time data streaming through Autoloader
- Development tools: VS Code, Jira, Confluence, Bitbucket
- Service management: experience with ServiceNow
- API Integration …
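"Real-time data streaming through Autoloader" refers to Databricks Auto Loader, which incrementally ingests new files from cloud storage via the cloudFiles stream source. A minimal sketch; this runs only on Databricks, and every path here is a placeholder.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Auto Loader: incrementally picks up new files as they land in storage,
# inferring and tracking the schema at the given schemaLocation.
events = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events_schema")
    .load("/mnt/landing/events/")
)

# Stream the raw events into a Delta table, with progress tracked
# in a checkpoint so the job can restart where it left off.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .outputMode("append")
    .start("/mnt/bronze/events")
)
```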
Lead Data Engineer: We need some strong Lead Data Engineer profiles… they need good experience with Python, SQL and ADF, and preferably Azure Databricks experience.
Job description: Building new data pipelines and optimizing data flows using the Azure cloud stack. Building …
… scalable, automated ETL pipelines in an AWS cloud environment using AWS S3 Cloud Object Storage
- Strong coding skills using Hive SQL, Spark SQL, Python, PySpark and Bash
- Experience of working with a wide variety of structured and unstructured data
You and your role: As a Data Engineer at DWP … you'll be executing code across a full tech stack, including Azure, Databricks, PySpark and Pandas, helping the department to move towards a cloud computing environment, working with huge data sets as part of our DataWorks platform - a system that provides Universal Credit data to our Data Science team …
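For the S3-plus-Spark-SQL stack this DWP listing (and its identical sister postings below) describes, here is a minimal sketch of one automated ETL step in PySpark; the bucket layout, table and column names are invented for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-etl").getOrCreate()

# Read raw Parquet files from S3 (s3a:// is the Hadoop S3 connector scheme;
# assumes the cluster has the hadoop-aws connector and credentials configured).
claims = spark.read.parquet("s3a://example-bucket/raw/claims/")
claims.createOrReplaceTempView("claims")

# Express the transformation in Spark SQL, as the listing's stack suggests.
monthly = spark.sql("""
    SELECT date_trunc('month', claim_date) AS month,
           count(*)                        AS n_claims,
           sum(amount)                     AS total_amount
    FROM claims
    GROUP BY date_trunc('month', claim_date)
""")

# Write the curated output back to S3 for downstream readers.
monthly.write.mode("overwrite").parquet("s3a://example-bucket/curated/monthly_claims/")
```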
Blackpool, Lancashire, Marton, United Kingdom Hybrid / WFH Options
Department of Work & Pensions
… scalable, automated ETL pipelines in an AWS cloud environment using AWS S3 Cloud Object Storage
- Strong coding skills using Hive SQL, Spark SQL, Python, PySpark and Bash
- Experience of working with a wide variety of structured and unstructured data
You and your role: As a Data Engineer at DWP … you'll be executing code across a full tech stack, including Azure, Databricks, PySpark and Pandas, helping the department to move towards a cloud computing environment, working with huge data sets as part of our DataWorks platform - a system that provides Universal Credit data to our Data Science team …
Newcastle upon Tyne, Tyne and Wear, High Heaton, Tyne & Wear, United Kingdom Hybrid / WFH Options
Department of Work & Pensions
… scalable, automated ETL pipelines in an AWS cloud environment using AWS S3 Cloud Object Storage
- Strong coding skills using Hive SQL, Spark SQL, Python, PySpark and Bash
- Experience of working with a wide variety of structured and unstructured data
You and your role: As a Data Engineer at DWP … you'll be executing code across a full tech stack, including Azure, Databricks, PySpark and Pandas, helping the department to move towards a cloud computing environment, working with huge data sets as part of our DataWorks platform - a system that provides Universal Credit data to our Data Science team …
Sheffield, South Yorkshire, United Kingdom Hybrid / WFH Options
Department of Work & Pensions
… scalable, automated ETL pipelines in an AWS cloud environment using AWS S3 Cloud Object Storage
- Strong coding skills using Hive SQL, Spark SQL, Python, PySpark and Bash
- Experience of working with a wide variety of structured and unstructured data
You and your role: As a Data Engineer at DWP … you'll be executing code across a full tech stack, including Azure, Databricks, PySpark and Pandas, helping the department to move towards a cloud computing environment, working with huge data sets as part of our DataWorks platform - a system that provides Universal Credit data to our Data Science team …
Leeds, West Yorkshire, Richmond Hill, United Kingdom Hybrid / WFH Options
Department of Work & Pensions
… scalable, automated ETL pipelines in an AWS cloud environment using AWS S3 Cloud Object Storage
- Strong coding skills using Hive SQL, Spark SQL, Python, PySpark and Bash
- Experience of working with a wide variety of structured and unstructured data
You and your role: As a Data Engineer at DWP … you'll be executing code across a full tech stack, including Azure, Databricks, PySpark and Pandas, helping the department to move towards a cloud computing environment, working with huge data sets as part of our DataWorks platform - a system that provides Universal Credit data to our Data Science team …
… and over 150 PB of data. As a Spark Architect, you will be responsible for refactoring legacy ETL code (for example, DataStage) into PySpark using Prophecy's low-code/no-code tooling and available converters; converted code is currently causing failures and performance issues. The End Client Account is looking for … an enthusiastic Spark Architect with a deep understanding of the components around Spark data integration (PySpark, scripting, variable setting etc.), Spark SQL and Spark explain plans; also able to analyse Spark code failures through Spark plans and make corrective recommendations, and to review PySpark and Spark SQL jobs and make performance improvement … a Spark Architect who can demonstrate deep knowledge of how Cloudera Spark is set up and how the runtime libraries are used by PySpark code.
Your benefits: As the Spark Architect, you will have the opportunity to work with one of the biggest IT landscapes in the world. …
You need to have the below skills:
· At least 12+ years of IT experience, with a deep understanding of the components around Spark data integration (PySpark, scripting, variable setting etc.), Spark SQL and Spark explain plans.
· Spark SME – be able to analyse Spark code failures through Spark plans and make correcting … explain the architecture you have been a part of and why any particular tool/technology was used.
· Spark SME – be able to review PySpark and Spark SQL jobs and make performance improvement recommendations.
· Spark SME – be able to understand DataFrames/Resilient Distributed Datasets and understand … tools such as Grafana to see whether there are cluster-level failures.
· Cloudera (CDP) Spark and how the runtime libraries are used by PySpark code.
· Prophecy – high-level understanding of low-code/no-code Prophecy set-up and its use to generate PySpark code.
· Ready to work …
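Most of the plan-analysis work this Spark Architect role describes starts from Spark's explain output. A minimal sketch of generating a formatted physical plan for review; the toy job and column names are invented, and the interesting part is the printed plan, not the result.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("plan-review").getOrCreate()

# A toy join-and-aggregate job standing in for a converted ETL step.
left = spark.range(1_000_000).withColumn("key", F.col("id") % 1000)
right = (
    spark.range(1000)
    .withColumnRenamed("id", "key")
    .withColumn("label", F.lit("x"))
)
joined = left.join(right, "key").groupBy("label").count()

# Formatted explain plan (Spark 3.x): lists the physical operators,
# e.g. whether the join was planned as a broadcast hash join or a
# shuffle sort-merge join - the usual starting point when reviewing
# Spark SQL jobs for performance improvement.
joined.explain(mode="formatted")
```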