Manchester, North West, United Kingdom Hybrid / WFH Options
DWP Digital
currently on-prem, but the direction of travel is cloud engineering, and you'll be executing code across the following tech stack: Azure, Databricks, PySpark and Pandas. DWP Digital is a great place to work; we offer a supportive and inclusive environment where you can grow your career and … make a real difference. Essential criteria: Commercial experience of Databricks, PySpark and Pandas. Commercial experience of Azure data engineering tools such as Azure Data Factory, dedicated SQL pools and ADLS Gen 2. Experience of working with data lakes. An understanding of dimensional modelling. Details. Wages. Perks. You'll join
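The "dimensional modelling" criterion above refers to star-schema design: fact tables holding measures and foreign keys, joined to dimension tables holding descriptive attributes. A minimal plain-Python sketch of the idea (table and field names are invented for illustration, not taken from the listing):

```python
# Star-schema sketch: the fact table references a dimension by
# surrogate key; analysis joins the two and rolls up a measure.
# All names here are hypothetical examples.
from collections import defaultdict

# Dimension table: surrogate key -> descriptive attributes
dim_product = {
    1: {"name": "widget", "category": "hardware"},
    2: {"name": "ebook", "category": "digital"},
}

# Fact table: one row per sale, with measures and a dimension key
fact_sales = [
    {"product_key": 1, "quantity": 3, "revenue": 30.0},
    {"product_key": 2, "quantity": 1, "revenue": 5.0},
    {"product_key": 1, "quantity": 2, "revenue": 20.0},
]

def revenue_by_category(facts, dim):
    """Join facts to the product dimension and roll up revenue."""
    totals = defaultdict(float)
    for row in facts:
        category = dim[row["product_key"]]["category"]
        totals[category] += row["revenue"]
    return dict(totals)

print(revenue_by_category(fact_sales, dim_product))
# {'hardware': 50.0, 'digital': 5.0}
```

In a warehouse the same shape appears as a SQL join between the fact and dimension tables with a GROUP BY on the dimension attribute.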
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Damia Group Ltd
over 150 PB of data. As a Spark Scala Engineer, you will be responsible for refactoring legacy ETL code (for example, DataStage) into PySpark using Prophecy low-code/no-code and the available converters. Converted code is causing failures/performance issues. Your responsibilities: as a Spark Scala Engineer
City of London, London, United Kingdom Hybrid / WFH Options
TALENT INTERNATIONAL UK LTD
Azure services such as Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Create and optimize data processing workflows in Databricks using PySpark and Spark SQL. Ensure ETL coding standards are met, including self-documenting code and reliable testing. Apply best practice data encryption techniques and standards … design. Extensive experience with Azure data products including Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Proficient in developing with Databricks, PySpark, and Spark SQL. Strong understanding of ETL coding standards, including standardized, self-documenting code and reliable testing. Knowledge of data encryption techniques and standards.
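One common reading of the "self-documenting code and reliable testing" standard mentioned above is to keep transformation logic in small, pure, documented functions that can be unit-tested without a cluster, then wire them into the Databricks/PySpark workflow. A generic sketch under that assumption (function and field names are hypothetical):

```python
# A pure transform that is easy to document and unit-test in
# isolation before it is used inside a PySpark pipeline.
# Names are hypothetical, not taken from the listing.
from datetime import datetime

def normalise_event(raw: dict) -> dict:
    """Lower-case the event name and parse the ISO-8601 timestamp.

    Keeping the logic free of Spark APIs means it can be covered by
    plain unit tests, then applied per-row or re-expressed in Spark
    SQL inside the workflow.
    """
    return {
        "event": raw["event"].strip().lower(),
        "occurred_at": datetime.fromisoformat(raw["occurred_at"]),
    }

row = normalise_event({"event": "  PageView ", "occurred_at": "2024-05-01T12:00:00"})
print(row["event"])  # pageview
```

The same function doubles as documentation: the docstring states the contract, and a test asserting the contract is the "reliable testing" half of the standard.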
solutions in a production setting. Knowledge of developing real-time data stream systems (ideally Kafka). Proven track record in developing data systems using PySpark and Apache Spark for batch processing. Capable of managing data intake from various sources, including data streams, unstructured data, relational databases, and NoSQL databases.
role is Inside IR35 and will be a couple of days per month onsite in London. Skills Required: Essential experience in Databricks, ADF, SQL, PySpark, CI/CD. Strong design and coding skills (e.g. Python, Scala, JavaScript). Experience with Microsoft or AWS data stack e.g. Microsoft Azure Data
Basingstoke, England, United Kingdom Hybrid / WFH Options
Blatchford
with various databases e.g. MS SQL, Azure Cosmos DB. Skilled at optimizing large and more complicated SQL statements. Proficiency in Python and experience with PySpark. Experience using CI/CD, Microsoft Azure, and Azure DevOps in an agile environment. Knowledge of Azure ETL services, i.e. Data Factory, Synapse
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment. Skilled in at least one of Python, PySpark, SQL or similar. Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive
experience Experienced with AWS and services like S3. Experienced with Kafka for data streaming. Familiarity with BI reporting tools. Good working experience with Airflow, PySpark and Apache Beam. Worked with Java for building data applications – advantageous. Worked within the commodities space – advantageous. Not quite right for you? Refer a
South Harting, England, United Kingdom Hybrid / WFH Options
Adecco
degrees (MSc or PhD) are preferred. Technical Skills: Expertise in applied machine learning, probability, statistics, and quantitative risk modelling. Strong proficiency in Python, SQL, PySpark, and Databricks. Experience with big data technologies and tools. Familiarity with modern NLP techniques and tools. Proven ability to create and manage data quality
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
that are comfortable with terms like: Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLM), NLP, AI, Machine Learning, MLOps, Python, PySpark, and Azure. Flexible Working: The role is 60% work from home. Sponsorship: Sadly, sponsorship isn't available for these roles. About You: Bachelor's degree
a complete greenfield re-architecture from the ground up in Microsoft Azure. The tech you'll be playing with: Azure Data Factory, Azure Databricks, PySpark, SQL, DBT. What you need to bring: 1-3 years' experience in building data pipelines. SQL experience in data warehousing. Python experience would
Greater Manchester, England, United Kingdom Hybrid / WFH Options
MRJ Recruitment
10+ years' experience in a Lead Data Engineer role. Educated to degree level in a QS top 100 university. Proven experience delivering scalable data pipelines. PySpark, SQL, DevOps/DataOps/CI/CD. Expertise in designing, constructing, administering, and maintaining data warehouses and data lakes. Data Modelling/Data Architecture. Data Migration
customer modelling but not required. Candidates should be looking to work in a fast-paced, startup-feel environment. Tech across: Python, SQL, AWS, Databricks, PySpark, AB Testing, MLFlow, APIs. Apply below
on very complex systems. Strong experience with computer vision. Longevity in their previous roles. Experience with Remote Sensing highly desirable. Stack: Python, PyTorch, Airflow, PySpark (equivalent tools are fine)
experience as a Data Engineer. Strong proficiency in AWS and relevant technologies. Good communication skills. Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform. Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY: Please register your interest by
I have placed quite a few candidates with this organisation now, and all have given glowing reviews about it; they're the leader in legal representation comparison and have rather interesting, unique data to work with. They are using modern Azure
months to begin with, and it's extendable. Location: Leeds, UK (min 3 days onsite). Context: Legacy ETL code (for example, DataStage) is being refactored into PySpark using Prophecy low-code/no-code and the available converters. Converted code is causing failures/performance issues. Skills: Spark Architecture – component understanding around Spark … Data Integration (PySpark, scripting, variable setting etc.), Spark SQL, Spark Explain plans. Spark SME – be able to analyse Spark code failures through Spark Plans and make correcting recommendations. Spark SME – be able to review PySpark and Spark SQL jobs and make performance improvement recommendations. Spark SME – be able … are cluster-level failures. Cloudera (CDP) – knowledge of how Cloudera Spark is set up and how the runtime libraries are used by PySpark code. Prophecy – high-level understanding of low-code/no-code Prophecy set-up and its use to generate PySpark code.
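One concrete example of the kind of performance recommendation described above: when reviewing a Spark plan, a join where one side is small enough to fit under `spark.sql.autoBroadcastJoinThreshold` (10 MB by default) should appear as a broadcast hash join rather than a shuffle-based sort-merge join. A plain-Python sketch of that decision rule, as a deliberate simplification of what Spark's planner actually does (the real planner also weighs hints, statistics and join type):

```python
# Simplified sketch of Spark's broadcast-join decision: if one side
# of an equi-join is estimated below the broadcast threshold, ship it
# to every executor instead of shuffling both sides. Mirrors the
# spirit of spark.sql.autoBroadcastJoinThreshold; not Spark's real
# planner logic.
DEFAULT_THRESHOLD_BYTES = 10 * 1024 * 1024  # Spark's default: 10 MB

def recommend_join_strategy(left_bytes: int, right_bytes: int,
                            threshold: int = DEFAULT_THRESHOLD_BYTES) -> str:
    smaller = min(left_bytes, right_bytes)
    if smaller <= threshold:
        return "broadcast-hash-join"  # small side fits on every executor
    return "sort-merge-join"          # both sides large: shuffle + sort

# A 50 GB fact table joined to a 2 MB dimension table:
print(recommend_join_strategy(50 * 1024**3, 2 * 1024**2))  # broadcast-hash-join
```

In PySpark itself the check is done by calling `df.explain()` on the job and looking for `BroadcastHashJoin` versus `SortMergeJoin` in the physical plan.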
Preferred: Experience in Microsoft Azure services and Databricks. Spark, Redshift, Hadoop MapReduce or other Big Data frameworks. Code management tools (Git, sbt, Maven). PySpark, Scala or other functional programming languages. Analytics tools such as R or SPSS. Any understanding of Web Analytics data such as Adobe Analytics or
and industry standards for the organization. Strong experience with Azure cloud services such as ADF, ADLS and Synapse. Proficiency in querying languages such as SQL, PySpark and Python, and familiarity with data visualization tools (e.g. Power BI). Strong communication skills to gather business requirements from stakeholders and propose best
STEM subject e.g. Mathematics, Statistics or Computer Science. Experience in personalisation, segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. NEXT STEPS: If this role looks of interest, please reach out to Joseph Gregory.
experience-related problems such as workforce management, demand forecasting, or root cause analysis. Strong visualisation skills, including experience with Tableau. Familiarity with Databricks and PySpark for data manipulation and analysis. Familiarity with Git-based source control methodologies, including branching and pull requests. A self-starter, passionate about converting data
Greater London, England, United Kingdom Hybrid / WFH Options
Agora Talent
come with scaling a company • The ability to translate complex and sometimes ambiguous business requirements into clean and maintainable data pipelines • Excellent knowledge of PySpark, Python and SQL fundamentals • Experience in contributing to complex shared repositories. What’s nice to have: • Prior early-stage B2B SaaS experience involving client
experience in Power BI would also be useful. You will be an Engineer with past experience with Java, data, and infrastructure (DevOps). Java, Python, PySpark. Mechanisms: MongoDB, Redshift, AWS S3. Environments/Infra: AWS (required), [AWS Lambda, Terraform] (nice to have). Platforms: Creating data pipelines within Databricks or equivalent
vision to convert data requirements into logical models and physical solutions. Experience with data warehousing solutions (e.g. Snowflake) and data lake architecture, Azure Databricks/PySpark & ADF. Retail data model standards - ADRM. Communication skills, organisational and time management skills. Ability to innovate, support change and problem-solve. A positive attitude, with the ability