cloud data products like Data Factory, Event Hubs, Data Lake, Synapse and Azure SQL Server. Knowledge of developing in Databricks and experience coding with PySpark and Spark SQL. Experience in the design and development of complex data and analytics solutions, delivered iteratively, for large enterprise business/data warehouse
a complete greenfield re-architecture from the ground up in Microsoft Azure. The tech you'll be playing with: Azure Data Factory, Azure Databricks, PySpark, SQL, dbt. What you need to bring: 1-3 years' experience in building data pipelines; SQL experience in data warehousing; Python experience would
and industry standards for the organization. Strong experience with Azure cloud services such as ADF, ADLS and Synapse. Proficiency in query languages such as SQL, plus PySpark and Python, and familiarity with data visualization tools (e.g. Power BI). Strong communication skills to gather business requirements from stakeholders and propose best
Manchester, North West, United Kingdom Hybrid / WFH Options
Viqu Limited
Engineer/Data Engineer with a strong focus on Databricks. Proficiency in Python and SQL for data processing and analysis. Spark Python API/PySpark. Hands-on experience with AWS services related to data storage and processing (e.g., S3, Redshift, Glue). In-depth knowledge of Databricks Delta Lake
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Glue and SageMaker Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation) Data processing frameworks such as pandas, Spark and PySpark Machine learning concepts like model training, model registry, model deployment and monitoring Development and CI/CD tools (we use GitHub, CodePipeline and CodeBuild
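As a rough illustration of what "data processing frameworks such as pandas, Spark and PySpark" involves in practice, here is a minimal, self-contained pandas sketch; the column names and data are invented for the example:

```python
import pandas as pd

# Hypothetical event data; in a real pipeline this might be read from S3
# via a Glue job rather than built inline.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2, 3],
    "event": ["click", "view", "click", "click", "view", "view"],
})

# Count events per user and event type, then pivot into a wide table,
# filling users with no events of a given type with 0.
summary = (
    events.groupby(["user_id", "event"])
          .size()
          .unstack(fill_value=0)
)
print(summary)
```

The same groupby-then-pivot shape translates fairly directly to PySpark's `groupBy(...).pivot(...).count()` on a Spark DataFrame.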
City of London, London, United Kingdom Hybrid / WFH Options
Develop
Modeling within a cloud-based data platform. Strong experience with the SQL Server/Azure data engineering stack, including Azure Synapse and Azure Data Lake; Python, PySpark and T-SQL. In return you will be offered a competitive salary and benefits package, remote working options and an opportunity to work with
end data solutions, delivering best-in-class experiences for their external clients. Technical Background: SAS and SAS Base; Azure, AWS or GCP; Python/PySpark; proficiency in SQL and/or similar data technologies; familiarity with data pipeline tools and ETL processes; knowledge of cloud platforms and data architecture
of Python Experience developing in the cloud (AWS preferred) Solid understanding of libraries like pandas and NumPy Experience with data warehousing tools like Snowflake, PySpark and Databricks Commercial experience with performant database programming in SQL Ability to solve complex technical issues, anticipating risks before they arise. Please apply today
Greater Manchester, England, United Kingdom Hybrid / WFH Options
MRJ Recruitment
10+ years' experience in a Lead Data Engineer role Educated to degree level at a QS top-100 university Proven experience delivering scalable data pipelines PySpark, SQL, DevOps/DataOps/CI/CD Expertise in designing, constructing, administering and maintaining data warehouses and data lakes Data Modelling/Data Architecture Data Migration
experience leading a data engineering team. Key Tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - dbt (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL This is a 6-month initial contract with a trusted client of ours.
for data engineering, e.g. Azure Functions * Core skills in coding with SQL, Python and Spark * Proven experience using Databricks, e.g. lakehouse, Delta Live Tables, PySpark, etc.
data integration pipelines, transformations, pipeline scheduling, Ontology, and applications in Palantir Foundry Design, develop and deploy data solutions in Palantir with excellent skills in PySpark and Spark SQL for data transformations Experience in designing and building interactive data applications working with Ontology, actions, functions, object views, automate, indexing, data
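To give a concrete flavour of the "Spark SQL for data transformations" part of such a role, the sketch below shows a typical aggregate-by-group SQL transformation. It uses Python's built-in sqlite3 so it runs without a Spark cluster; in a Spark environment the same statement would be passed to `spark.sql()`. The table and column names are invented for the example.

```python
import sqlite3

# Stand-in source table; in Foundry this would be a dataset registered
# in the pipeline rather than an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EMEA", 100.0), ("EMEA", 50.0), ("APAC", 75.0)],
)

# Aggregate revenue per region -- the shape of a typical SQL transformation
# step in a data pipeline.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue FROM orders "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 75.0), ('EMEA', 150.0)]
```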
STEM subject, e.g. Mathematics, Statistics or Computer Science Experience in personalisation and segmentation with a focus on CRM Retail experience is a bonus Experience with PySpark/Azure/Databricks is a bonus Experience of management NEXT STEPS: If this role is of interest, please reach out to Joseph Gregory.
Leicester, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science Experience in personalisation and segmentation with a focus on CRM Retail experience is a bonus Experience with PySpark/Azure/Databricks is a bonus Experience of management BENEFITS: Pension scheme Gym membership Share options Bonus Hybrid working HOW TO APPLY: Register
Nottingham, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science Experience in personalisation and segmentation with a focus on CRM Retail experience is a bonus Experience with PySpark/Azure/Databricks is a bonus Experience of management BENEFITS: Pension scheme Gym membership Share options Bonus Hybrid working HOW TO APPLY: Register
customer modelling but not required. Candidates should be looking to work in a fast-paced, startup-feel environment. Tech across: Python, SQL, AWS, Databricks, PySpark, A/B testing, MLflow, APIs. Apply below
experience-related problems such as workforce management, demand forecasting, or root cause analysis Strong visualisation skills including experience with Tableau Familiarity with Databricks and PySpark for data manipulation and analysis Familiarity with Git-based source control methodologies, including branching and pull requests A self-starter, passionate about converting data
Leeds, England, United Kingdom Hybrid / WFH Options
Damia Group
over 150 PB of data. As a Spark Scala Engineer, you will be responsible for refactoring legacy ETL code (for example, DataStage) into PySpark using Prophecy low-code/no-code tooling and available converters. Converted code is causing failures/performance issues. Your responsibilities: As a Spark Scala Engineer
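For a concrete sense of the refactoring work described above, here is a hedged sketch of the join-filter-aggregate shape that a legacy ETL job typically implements, written with pandas so it is runnable standalone; the PySpark rewrite would express the same logic with `pyspark.sql` DataFrame operations (`join`, `filter`, `groupBy`). All table, column names and data are illustrative, not taken from the role.

```python
import pandas as pd

# Illustrative stand-ins for two legacy ETL inputs.
customers = pd.DataFrame({"cust_id": [1, 2, 3], "country": ["UK", "FR", "UK"]})
orders = pd.DataFrame({"cust_id": [1, 1, 2, 3], "amount": [10.0, 20.0, 5.0, 7.5]})

# Join orders to customers, keep UK customers only, and total spend per
# customer -- the kind of logic a DataStage job (or its PySpark rewrite)
# performs stage by stage.
result = (
    orders.merge(customers, on="cust_id")
          .query("country == 'UK'")
          .groupby("cust_id", as_index=False)["amount"].sum()
)
print(result)
```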
Bournemouth, Dorset, United Kingdom Hybrid / WFH Options
Whitehall Resources Ltd
a secure, stable, and scalable manner. Skills and Experience: Formal training or certification on software engineering concepts and advanced applied experience Advanced in Python & PySpark Experience in API development and exposure to Python application development Hands-on practical experience delivering system design, application development, testing, and operational stability Proficient
City Of London, England, United Kingdom Hybrid / WFH Options
RJC Group
experience Data access methods (SQL, GraphQL, APIs) Beneficial Requirements: Experience with data science tools and algorithms Manipulation technologies (e.g., WebSockets, Kafka, Spark) TensorFlow, pandas, PySpark and scikit-learn would be great Salary up to £75K + 20% bonus and benefits package. We have interview slots lined up for later
Greater London, England, United Kingdom Hybrid / WFH Options
Agora Talent
come with scaling a company • The ability to translate complex and sometimes ambiguous business requirements into clean and maintainable data pipelines • Excellent knowledge of PySpark, Python and SQL fundamentals • Experience in contributing to complex shared repositories. What's nice to have: • Prior early-stage B2B SaaS experience involving client
start interviewing ASAP. Responsibilities: Azure cloud data engineering using Azure Databricks; data warehousing; data engineering. Very strong with the Microsoft stack. ESSENTIAL: knowledge of PySpark clusters; Python & C# scripting experience; experience of message queues (Kafka); experience of containerization (Docker); FINANCIAL SERVICES EXPERIENCE (energy/commodities trading). If you have
preferably GCP | Expertise in event-driven data integrations and click-stream ingestion | Proven ability in stakeholder management and project leadership | Proficiency in SQL, Python, PySpark | Solid background in data pipeline orchestration, data access, and retention tooling | Demonstrable impact on infrastructure scalability and data privacy initiatives | Collaborative spirit | Innovative problem
experience in Power BI would also be useful. You will be an Engineer with past experience with Java, data, and infrastructure (DevOps). Java, Python, PySpark Mechanisms: MongoDB, Redshift, AWS S3 Environments/Infra: AWS (required), [AWS Lambda, Terraform] (nice to have) Platforms: creating data pipelines within Databricks or equivalent