…modeling) YOUR SKILLS AND EXPERIENCE: …to 4 years of experience required. Degree in Statistics, Maths, Physics, Economics, or a similar field. Strong programming skills in Python and SQL; experience with PySpark is recommended. Proficiency in analytical techniques and technology. Experience with, and passion for, connecting your data science work directly to the customer experience, making a real and tangible impact. Logical thinking…
Lead Data Engineer: We need strong Lead Data Engineer profiles… candidates need good experience with Python, SQL and ADF, and preferably Azure Databricks. Job description: Building new data pipelines and optimising data flows using the Azure cloud stack. Building…
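A minimal PySpark sketch of the kind of Azure pipeline work this listing describes, assuming a Databricks-style environment; the paths, columns and table names below are invented for illustration, not taken from the ad:

```python
from pyspark.sql import SparkSession, functions as F

# Sketch of a batch pipeline: read raw data landed by an ADF copy
# activity, apply a transformation, and write a curated output.
# All paths and column names are hypothetical placeholders.
spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")

curated = (
    raw.filter(F.col("status") == "COMPLETE")
       .withColumn("order_date", F.to_date("order_timestamp"))
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("daily_revenue"))
)

curated.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders/")
```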
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
DWP Digital
…scalable, automated ETL pipelines in an AWS cloud environment using AWS S3 cloud object storage. Strong coding skills in Hive SQL, Spark SQL, Python, PySpark and Bash. Experience of working with a wide variety of structured and unstructured data. You and your role: As a Data Engineer at DWP … you'll be executing code across a full tech stack, including Azure, Databricks, PySpark and Pandas, helping the department move towards a cloud computing environment and working with huge data sets as part of our DataWorks platform, a system that provides Universal Credit data to our Data Science team…
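As a hedged illustration of the Spark SQL / PySpark / S3 combination the ad lists, one common pattern is to register S3-backed data as a temporary view and query it with SQL; the bucket, paths and schema here are hypothetical:

```python
from pyspark.sql import SparkSession

# Register Parquet data from S3 as a view, aggregate with Spark SQL,
# and write the result back to a curated S3 prefix. Names are invented.
spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

claims = spark.read.parquet("s3a://example-bucket/landing/claims/")
claims.createOrReplaceTempView("claims")

monthly = spark.sql("""
    SELECT date_trunc('month', claim_date) AS month,
           count(*)                        AS claim_count
    FROM claims
    GROUP BY date_trunc('month', claim_date)
""")

monthly.write.mode("overwrite").parquet("s3a://example-bucket/curated/monthly_claims/")
```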
City of London, London, United Kingdom Hybrid / WFH Options
Nigel Frank International
Requirements: Excellent SQL and Python scripting skills. Experience designing and developing data warehouses and data lakes/lakehouses. Experience designing solutions involving Databricks and PySpark. Experience with Azure technologies including Data Lake, Data Factory, and Synapse. Experience with data visualisation tools such as Power BI. Knowledge of Agile methodology…
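A short sketch of the lakehouse pattern these requirements point at, assuming a Databricks runtime where the Delta format is available; all names are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

# Lakehouse-style flow: read raw files from the data lake, clean them,
# and publish a Delta table that a tool like Power BI can query.
# Assumes the "curated" schema already exists; paths are invented.
spark = SparkSession.builder.getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/datalake/raw/sales/")

clean = (
    raw.dropDuplicates(["sale_id"])
       .withColumn("amount", F.col("amount").cast("double"))
)

clean.write.format("delta").mode("overwrite").saveAsTable("curated.sales")
```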
Newcastle upon Tyne, Tyne and Wear, Tyne & Wear, United Kingdom Hybrid / WFH Options
Nigel Frank International
Working with and modelling data warehouses. Skills & Qualifications: Strong technical expertise using SQL Server and Azure Data Factory for ETL. Solid experience with Databricks, PySpark etc. Understanding of Agile methodologies, including use of Git. Experience with Python and Spark. Benefits: £60,000 - £65,000. This role is an urgent…
…stakeholders to translate insights into actionable strategies. Create clear, insightful reports and dashboards for operational teams. Key Skills: Proficient in Azure Databricks, SQL, Python, PySpark, and Power BI. Strong data analysis, statistical modelling, and machine learning skills. Experience in supply chain optimisation and automation is a plus. If interested…
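For illustration only, a PySpark aggregation of the sort a supply-chain Power BI dashboard might be built on; the tables and columns are invented, not taken from the listing:

```python
from pyspark.sql import SparkSession, functions as F

# Compute a late-delivery rate per warehouse and carrier, then publish
# it as a reporting table for a dashboard. All names are hypothetical.
spark = SparkSession.builder.getOrCreate()

shipments = spark.table("logistics.shipments")

late_rate = (
    shipments.withColumn(
        "is_late", (F.col("delivered_at") > F.col("promised_at")).cast("int")
    )
    .groupBy("warehouse", "carrier")
    .agg(F.avg("is_late").alias("late_rate"), F.count("*").alias("shipments"))
)

late_rate.write.mode("overwrite").saveAsTable("reporting.carrier_late_rates")
```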
[The same DWP Digital Data Engineer listing is repeated for Blackpool, Sheffield, Leeds, Birmingham and Manchester, all with Hybrid / WFH options; see the Newcastle entry above for the full text.]
…to a MOD cloud data platform. We are looking for a candidate who understands and knows how to code using Apache Spark technology, with PySpark as a minimum. They are looking for you to have experience in the following: previous experience as a data engineer or in a similar…
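A self-contained example of the baseline PySpark fluency such a listing implies; the data is inline so the snippet runs with no external dependencies:

```python
from pyspark.sql import SparkSession, functions as F

# Create a session, build a small DataFrame, and run a simple grouped
# aggregation - the minimum working knowledge of the DataFrame API.
spark = SparkSession.builder.appName("pyspark-basics").getOrCreate()

df = spark.createDataFrame(
    [("alpha", 3), ("beta", 7), ("alpha", 5)],
    ["category", "value"],
)

df.groupBy("category").agg(F.sum("value").alias("total")).show()
```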
Brentford, England, United Kingdom Hybrid / WFH Options
DiverseJobsMatter
…years’ experience in developing machine learning models. Knowledge and experience of data science libraries such as pandas, scikit-learn, matplotlib, seaborn, TensorFlow, Torch, PySpark ML, and MLflow or other model lifecycle management tools. Experience in developing ML models and understanding of key concepts including regression, clustering, classification and architectures such…
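A toy example combining two of the named libraries, scikit-learn for a regression model and MLflow for lifecycle logging; the data is synthetic and the hyperparameter choice is arbitrary:

```python
import mlflow
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Fit a ridge regression on synthetic data and log the parameter and
# metric to MLflow, sketching the model-lifecycle workflow above.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = Ridge(alpha=1.0).fit(X_train, y_train)
    score = r2_score(y_test, model.predict(X_test))
    mlflow.log_param("alpha", 1.0)
    mlflow.log_metric("r2", score)
```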
…Power BI. Create clear, actionable reports and dashboards for stakeholders. Key Skills: Expertise in pricing models and optimisation. Proficient with Azure, Databricks, SQL, Python, PySpark, and Power BI. Strong data analysis, modelling, and presentation skills. If interested in this role, please do apply (latest 20th September) and submit your…
…understanding of the links between data and the empirical world. Explaining technical concepts to non-technical audiences. Machine learning experience. Analysing large databases from various data sources using PySpark. Agile management of data science projects. Day to day: developing solutions using a range of analytical techniques; working with…
…data and BI software tools and systems. Designing, coding, testing and documenting all new or modified applications and programs, using languages such as Python, PySpark, SQL, Java and other industry-standard tools. Analysing user requirements and, based on findings, designing functional specifications for front-end applications. Modelling a collaborative … Extensive knowledge of integration and transformation within cloud data warehouses (e.g. Azure, AWS, GCP). Experience, essential: Experience with programming languages such as Python, SQL, PySpark, Java. Experience of using data engineering expertise to architect the most appropriate solution design to suit a particular requirement or set of requirements. Experience…
** Software Engineer - Alternatives Technology - Buy-Side - Hybrid Perm Opportunity ** My client is a leading global investment management business looking to hire a Software Engineer within their Alternatives Technology team. The team is responsible for the development and maintenance of applications…
You need to have the below skills:
· At least 12+ years of IT experience, with a deep understanding of the components around Spark data integration (PySpark, scripting, variable setting etc.), Spark SQL, and Spark explain plans.
· Spark SME – be able to analyse Spark code failures through Spark plans and make correcting … explain the architecture you have been part of and why any particular tool/technology was used.
· Spark SME – be able to review PySpark and Spark SQL jobs and make performance-improvement recommendations.
· Spark SME – be able to understand DataFrames/Resilient Distributed Datasets and understand … tools such as Grafana to see whether there are cluster-level failures.
· Cloudera (CDP) Spark, and how the runtime libraries are used by PySpark code.
· Prophecy – a high-level understanding of the Low-Code/No-Code Prophecy set-up and its use to generate PySpark code.
· Ready to work…
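A small example of inspecting a Spark explain plan, the skill the SME bullets centre on; the DataFrame is inline, and the comment reflects the usual shape of a grouped aggregation (scans, hash aggregates, and an exchange, i.e. a shuffle, which is what a tuning review looks for):

```python
from pyspark.sql import SparkSession, functions as F

# Print the formatted physical plan for a grouped aggregation.
# "formatted" mode (Spark 3.0+) separates the plan tree from node details.
spark = SparkSession.builder.getOrCreate()

orders = spark.createDataFrame(
    [(1, "UK", 10.0), (2, "FR", 20.0)], ["id", "country", "amount"]
)

agg = orders.groupBy("country").agg(F.sum("amount").alias("total"))
agg.explain("formatted")  # scan -> partial hash aggregate -> exchange -> final hash aggregate
```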
Requirements: Advanced expertise in SQL across multiple platforms (e.g., Snowflake, MSSQL, Databricks). Exceptional technical ability, with proficiency in DBT and Python libraries such as PySpark, Pandas, and Snowpark. Experience with cloud-based data warehousing, particularly Snowflake, and ELT tools like Fivetran. Strong knowledge of API integration and JSON transformation…
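One common shape of the JSON transformation work mentioned above, sketched in PySpark with an invented API payload: from_json to parse a string column against a schema, explode to flatten the nested array.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

# Parse a JSON payload column and flatten its nested order list.
# The payload shape and field names are invented for the example.
spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("customer", StringType()),
    StructField("orders", ArrayType(StringType())),
])

raw = spark.createDataFrame(
    [('{"customer": "acme", "orders": ["o-1", "o-2"]}',)], ["payload"]
)

flat = (
    raw.withColumn("parsed", F.from_json("payload", schema))
       .select("parsed.customer", F.explode("parsed.orders").alias("order_id"))
)
flat.show()
```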
…occur. Contribute to the continuous improvement of the team. Required skills: Proven experience within a Lead Data Engineering role. Excellent understanding of Databricks and PySpark. Strong knowledge of Azure cloud services. Excellent understanding of SQL. Good exposure to Azure Data Lake technologies such as ADF, HDFS and Synapse. Good…
…oriented team culture. Assist in developing reporting dashboards and applications as needed. Experience: At least 4 years' experience with Python, SQL and/or PySpark for data solutions, ideally in insurance. BSc/MSc in a numerical field, preferably Computer Science or Data Engineering, or equivalent experience. Proven expertise…
Birmingham, England, United Kingdom Hybrid / WFH Options
Harnham
…continual improvement of their performance. REQUIRED SKILLS AND EXPERIENCE: Proficiency in Python, including its associated data and machine learning packages such as NumPy, pandas, PySpark, matplotlib, scikit-learn, Keras, TensorFlow, and more. Experience working with object-oriented programming languages. Experience in forecasting, pricing and A/B testing. Leadership expertise in…
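A hedged A/B-testing sketch in the spirit of the skills above, comparing conversion rates of a control and a variant on simulated data; a proportions z-test would be the more conventional choice, but a two-sided t-test on the binary outcomes keeps the example short:

```python
import numpy as np
from scipy import stats

# Simulate a control group and a variant, then test whether the
# difference in conversion rate is statistically significant.
rng = np.random.default_rng(42)
control = rng.binomial(1, 0.10, size=5000)  # 10% baseline conversion
variant = rng.binomial(1, 0.12, size=5000)  # 12% under the new pricing

t_stat, p_value = stats.ttest_ind(variant, control)
print(f"uplift={variant.mean() - control.mean():.4f}, p={p_value:.4f}")
```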
…team of top-notch data scientists, software engineers, and research scientists. They love what they do, and the team has an exciting tech stack: PySpark, Databricks, Python, and so on. Corporate Social Responsibility: they are incredibly proud of their contribution to society, and working with them gives you…