Glasgow, Scotland, United Kingdom Hybrid/Remote Options
NLB Services
years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. · 3+ years hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines · 3+ years of proficiency in working with Snowflake or similar cloud-based data warehousing solutions · 3+ years of experience in data development and More ❯
both business partners and with technical staff. Minimum Experience: Azure/AWS/GCP 3+ years (Azure preferred) o Azure Cloud Services, Kubernetes (AKS, ACA), Kafka, Apache Spark o CosmosDB, Databricks, GraphQL o Service Mesh/Orchestration o Security Vaults, Tokens, Okta, IAM, Azure App Service (Easy Auth) Java 1-3+ years Oracle/Postgres 1-3 years NoSQL More ❯
ownership/stewardship, data quality, data security, and data architecture. Experience in the energy trading sector or similarly data-rich environments. Experience with data platforms and tools (e.g., Azure, Databricks, MSSQL, Kafka). Hands-on experience developing conceptual, logical, and physical data models. Interest in the latest technologies and automation, with a curiosity to research and innovate Person Specification Taking More ❯
design and architecting solutions. • Hands-on experience in technology consulting, enterprise and solutions architecture and architectural frameworks, data modelling, and ERWIN modelling. • Hands-on experience in ADF, Azure Databricks, Azure Synapse, Spark, PySpark, Python/Scala, SQL. • Hands-on experience in designing and building a data lake from multiple source systems/data providers. • Experience in data modelling, architecture, implementation More ❯
7+ years of experience in technology consulting, enterprise and solutions architecture and architectural frameworks, data modelling, and ERWIN modelling. 7+ years of hands-on experience in ADF, Azure Databricks, Azure Synapse, Spark, PySpark, Python/Scala, SQL. Hands-on experience in designing and building a data lake from multiple source systems/data providers. Experience in data modelling, architecture, implementation More ❯
environment like on docker). Testing tools such as JUnit, Spock or Clojure.test. Scripting skills in Bash, Ruby, or Python. Some technologies and processes we use: Clojure, Go, Python; Databricks, PySpark; Kafka, Kafka Streams; Relational DB, Document Stores (e.g. ElasticSearch); GCP, AWS; Kubernetes, Cloud Native; CI/CD, CircleCI, GitOps, FluxCD More ❯
GCP/Azure). Hands-on experience with Unix-based command line and DevOps tools (Git, Docker, Kubernetes). Hands-on experience with big data technologies (e.g. Spark, Hadoop, Databricks). Experience with coaching/mentoring other engineers. Prior experience in Management Consulting is a strong plus. Willingness to travel and work at local and international clients. Fluency in spoken More ❯
with cloud based infrastructures (AWS/GCP/Azure). Knowledge of Unix command line and DevOps tools (Git, Docker, Kubernetes). Experience with big data technologies (Spark, Hadoop, Databricks). Experience coaching/mentoring other engineers. Prior experience in management consulting is a strong plus. Willingness to travel and work at local and international clients. Fluency in English and More ❯
languages such as Python or R, with extensive experience with LLMs, ML algorithms, and models. Experience with cloud services like Azure ML Studio, Azure Functions, Azure Pipelines, MLflow, Azure Databricks, etc., is a plus. Experience working in Azure/Microsoft environments is considered a real plus. Proven understanding of data science methods for analyzing and making sense of research data More ❯
Reigate, Surrey, England, United Kingdom Hybrid/Remote Options
esure Group
Skilled in team collaboration and exhibits good interpersonal skills. Able to prioritise, multi-task, and deliver at pace with an iterative mindset. Experience with modern data platforms such as Databricks or Snowflake is advantageous; LLM API experience is a bonus. Additional Information What’s in it for you?: Competitive salary that reflects your skills, experience and potential. Discretionary bonus scheme More ❯
with Python data science stack Knowledge of OO programming and software design (e.g., SOLID principles) and testing practices. Knowledge and working experience of AGILE methodologies. Proficient with SQL. Familiarity with Databricks, Spark, geospatial data/modelling and insurance is a plus. Exposure to MLOps, model monitoring principles, CI/CD and associated tech (e.g., Docker, MLflow, k8s, FastAPI) is desirable More ❯
disciplines : Cloud Engineering Data Engineering (not building pipelines but designing and building the framework) DevOps MLOps/LLMOps Often work with the following technologies : Azure, AWS, GCP Airflow, dbt, Databricks, Snowflake, etc. GitHub, Azure DevOps and related developer tooling and CI/CD platforms, Terraform or other Infra-as-Code MLflow, AzureML or similar for MLOps; LangSmith, Langfuse and similar More ❯
design, access patterns, query performance optimization, etc. Experience with data pipeline technologies like AWS Glue, Airflow, Kafka, or other cloud-based equivalents. Experience with ETL and data warehousing platforms like Databricks, Snowflake, or equivalent. Container-based deployment experience using Docker and Kubernetes. Strong verbal and written communication skills. NICE TO HAVE: Experience working with, or data modeling for, graph databases like More ❯
etc.) 5+ years leveraging statistical or algorithmic modeling methods to answer questions and solve measurement and prediction problems. Experience working with modern data lakehouse software and structures such as Databricks and/or Snowflake. Ability to break down multiple complex asks and deliver solutions with agility. Experience designing and deploying data products as part of a modern analytics cloud platform. Experience More ❯
At Databricks, our core principles are at the heart of everything we do; creating a culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone. We aim to inspire our customers to make informed decisions that push their business forward. We provide a user-friendly and … relationships with clients throughout your assigned territory to provide technical and business value in collaboration with an Account Executive and a Senior Solutions Architect. Generate excitement among clients about Databricks through hands-on evaluation and Spark programming, integrating with the wider cloud ecosystem and 3rd-party applications. Contribute to building the Databricks technical community through engagement at workshops, seminars, and … exposure to advanced proofs-of-concept and an understanding of a major public cloud platform. Experience diving deeper into solution architecture and Data Engineering. Fluent Dutch and English. About Databricks: Databricks is the data and AI company. More than 10,000 organizations worldwide - including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 - rely on the Databricks Data More ❯
passionate about clean design, collaboration, and the pursuit of data excellence. Role and Responsibility In this role, you'll develop and maintain robust data models and transformation pipelines using Databricks, Azure, and Power BI to turn complex datasets into reliable, insight-ready assets. You'll apply strong skills in SQL, Python, and PySpark to build efficient ELT workflows and ensure More ❯
SQL Development capacity Highly Desirable - Experience of Insolvency or Financial Services sectors. Technical Competencies Required: Advanced T-SQL, MS SQL Server, ETL and migration development, SSIS, Azure Data Factory, Databricks, Microsoft Fabric, Advanced Microsoft Excel. Highly Desirable: Azure DevOps, Git source control, Jira, Confluence, MySQL. Desirable: SSRS & Power BI, SSAS, SQL utilities such as Redgate. Other development experience includes C# More ❯
London, South East, England, United Kingdom Hybrid/Remote Options
Robert Half
technologies in data engineering, and continuously improve your skills and knowledge. Profile The Data Engineer will have mastery of data management and processing tools, including Power BI, Data Factory, Databricks, SQL Server, and Oracle. Proficient in SQL and experienced in database administration. Familiarity with cloud platforms such as Azure and AWS. Excellent problem-solving and analytical skills, with strong attention More ❯
computer programming, data modeling, and performance optimization techniques to model and access all data types (structured and unstructured data). Experience with large-scale data sets, semantic layer solutions (Databricks, Snowflake, or similar), AWS, and Azure. Master's degree (or foreign equivalent) in computer science, electronic engineering, or related field, plus 5 years of experience in the job offered or a similar occupation. More ❯
OneLake). Advanced proficiency in Power BI, including DAX, Power Query (M), and data modelling. Deep understanding of data warehousing, ETL, and data lakehouse concepts. Strong working knowledge of Databricks, including Delta Lake and notebooks. Strong interpersonal skills with the ability to influence and communicate complex data topics clearly. Excellent analytical, organisational, and problem-solving abilities. Experience leading or mentoring More ❯