Leicester, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Management experience. BENEFITS: Pension scheme, gym membership, share options, bonus, hybrid working. HOW TO APPLY: Register …
I have placed quite a few candidates with this organisation now; all have given glowing reviews of it. They're the leader in legal representation comparison and have rather interesting, unique data to work with. They are using modern Azure …
Banking and Financial Services sector is advantageous. Deep knowledge of, or experience with, as many of the following as possible: Azure Cloud Data Components | Databricks | Python | PySpark | Terraform | APIs | Lakehouse | Data Mesh | NoSQL DBs | GitHub. Person Specification: Self-motivator with a desire to learn new skills and embrace new technologies in …
end data solutions, delivering best-in-class experiences for their external clients. Technical Background: SAS (SAS Base), Azure, AWS or GCP, Python/PySpark. Proficiency in SQL and/or similar data technologies. Familiarity with data pipeline tools and ETL processes. Knowledge of cloud platforms and data architecture …
Saffron Walden, Essex, South East, United Kingdom Hybrid / WFH Options
EMBL-EBI
expertise and requirements. You have: A BSc or MSc in computer science or a related field. Expertise in Python, including popular libraries (NumPy, Pandas, PySpark) and frameworks (Django, Django Rest Framework, FastAPI). Hands-on experience with both relational (e.g. PostgreSQL) and non-relational (e.g. Elasticsearch, Redis) databases. Strong …
solutions in a production setting. Knowledge of developing real-time data stream systems (ideally Kafka). Proven track record in developing data systems using PySpark and Apache Spark for batch processing. Capable of managing data intake from various sources, including data streams, unstructured data, relational databases, and NoSQL databases.
role is Inside IR35 and will be a couple of days per month onsite in London. Skills Required: Essential experience in Databricks, ADF, SQL, PySpark, CI/CD. Strong design and coding skills (e.g. Python, Scala, JavaScript). Experience with the Microsoft or AWS data stack, e.g. Microsoft Azure Data …
Basingstoke, England, United Kingdom Hybrid / WFH Options
Blatchford
with various databases, e.g. MS SQL, Azure Cosmos DB. Skilled at optimizing large and complex SQL statements. Proficiency in Python and experience with PySpark. Experience using CI/CD, Microsoft Azure, and Azure DevOps in an agile environment. Knowledge of Azure ETL services, e.g. Data Factory, Synapse …
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment. Skilled in at least one of Python, PySpark, SQL or similar. Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive …
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
that are comfortable with terms like: Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLMs), NLP, AI, Machine Learning, MLOps, Python, PySpark, and Azure. Flexible Working: The role is 60% work from home. Sponsorship: Sadly, sponsorship isn't available for these roles. About You: Bachelor's degree …
a complete greenfield re-architecture from the ground up in Microsoft Azure. The tech you'll be playing with: Azure Data Factory, Azure Databricks, PySpark, SQL, DBT. What you need to bring: 1-3 years' experience in building data pipelines. SQL experience in data warehousing. Python experience would …
Greater Manchester, England, United Kingdom Hybrid / WFH Options
MRJ Recruitment
10+ years' experience in a Lead Data Engineer role. Educated to degree level at a QS top-100 university. Proven experience delivering scalable data pipelines. PySpark, SQL, DevOps/DataOps/CI/CD. Expertise in designing, constructing, administering, and maintaining data warehouses and data lakes. Data Modelling/Data Architecture. Data Migration …
customer modelling but not required. Candidates should be looking to work in a fast-paced, startup-feel environment. Tech across: Python, SQL, AWS, Databricks, PySpark, A/B Testing, MLflow, APIs. Apply below
Strong experience in data pipelines and deploying ML models. Preference for experience in retail/marketing but not required. Tech across: Python, AWS, Databricks, PySpark, A/B Testing, MLflow, APIs. Apply below
experience as a Data Engineer. Strong proficiency in AWS and relevant technologies. Good communication skills. Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform. Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY: Please register your interest by …
months to begin with & it's extendable. Location: Leeds, UK (min 3 days onsite). Context: Legacy ETL code (for example, DataStage) is being refactored into PySpark using Prophecy low-code/no-code tooling and the available converters. The converted code is causing failures/performance issues. Skills: Spark Architecture – component understanding around Spark … Data Integration (PySpark, scripting, variable setting etc.), Spark SQL, Spark explain plans. Spark SME – able to analyse Spark code failures through Spark plans and make correcting recommendations. Spark SME – able to review PySpark and Spark SQL jobs and make performance-improvement recommendations. Spark SME – able … are cluster-level failures. Cloudera (CDP) – knowledge of how Cloudera Spark is set up and how the runtime libraries are used by PySpark code. Prophecy – high-level understanding of the low-code/no-code Prophecy set-up and its use to generate PySpark code.
Senior Databricks Migration Consultant – Remote. This role demands in-depth knowledge of data engineering, cloud technologies (preferably AWS), and a successful record in enterprise-level data migrations into Databricks. You will ensure the efficient and secure transition of our data …
Manchester, North West, United Kingdom Hybrid / WFH Options
Viqu Limited
Engineer/Data Engineer with a strong focus on Databricks. Proficiency in Python and SQL for data processing and analysis. Spark Python API/PySpark. Hands-on experience with AWS services related to data storage and processing (e.g., S3, Redshift, Glue). In-depth knowledge of Databricks Delta Lake …
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Viqu Limited
a Senior Data Engineer with a strong focus on Databricks. Proficiency in Python and SQL for data processing and analysis. Spark Python API/PySpark. Hands-on experience with AWS services related to data storage and processing (e.g., S3, Redshift, Glue). In-depth knowledge of Databricks Delta Lake …
North West London, London, United Kingdom Hybrid / WFH Options
Viqu Limited
a Senior Data Engineer with a strong focus on Databricks. Proficiency in Python and SQL for data processing and analysis. Spark Python API/PySpark. Hands-on experience with AWS services related to data storage and processing. In-depth knowledge of Databricks Delta Lake and Delta Live Tables. Familiarity with …
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Glue and SageMaker. Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation). Data processing frameworks such as pandas, Spark and PySpark. Machine learning concepts like model training, model registry, model deployment and monitoring. Development and CI/CD tools (we use GitHub, CodePipeline and CodeBuild) …