… to architectural decisions, and support the migration of reporting services to Azure. Key Responsibilities: Design, build, and maintain ETL pipelines using Azure Data Factory, Azure Data Lake, Synapse, and Databricks. Design and build a greenfield Azure data platform to support business-critical data needs. Collaborate with stakeholders across the organization to gather and define data requirements. Assist in the …
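For context on what this kind of pipeline work typically looks like, here is a minimal PySpark sketch of an extract-transform-load step as it might run in a Databricks notebook. The storage paths, container names, and columns are invented for illustration, not taken from the listing.

```python
# Minimal ETL sketch for a Databricks notebook; all paths and column
# names below are hypothetical examples, not from the job listing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSVs landed in an Azure Data Lake container.
raw = spark.read.option("header", True).csv(
    "abfss://raw@examplelake.dfs.core.windows.net/sales/"
)

# Transform: deduplicate, type the date column, drop incomplete rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("amount").isNotNull())
)

# Load: write a curated copy for Synapse or reporting consumers.
clean.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/sales/"
)
```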
… databases including PostgreSQL. Integrate and automate workflows using DevOps tools (e.g., CI/CD, Jenkins, Git). Collaborate on cloud-based data initiatives using AWS (S3) and Azure Databricks. Contribute to Agile/Scrum development processes and ensure timely delivery of project milestones. Work closely with cross-functional teams across multiple locations and time zones. Mentor team members …
Chicago, Illinois, United States (Hybrid/Remote Options)
Newcastle Associates, Inc
… kinds of data, and collaborating with both technical and non-technical teammates, this role is for you. What You'll Do: Build and manage data pipelines in Azure (Data Factory, Databricks, Synapse, etc.). Pull in data from different sources: APIs, databases, cloud apps, even streaming data. Organize, clean, and transform data so it's ready for reporting, dashboards, or advanced analytics. Keep …
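As a small illustration of the "pull in and clean" step this listing describes, the sketch below fetches records from a REST API and tidies them with pandas. The endpoint URL and field names are made up; a real source would also need authentication and pagination.

```python
# Toy example of pulling API data and cleaning it for reporting.
# The URL and field names are hypothetical; real APIs need auth/paging.
import requests
import pandas as pd

resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()

# Assumes the endpoint returns a JSON list of flat records.
df = pd.DataFrame(resp.json())

# Clean and transform: parse dates, drop incomplete rows, tidy names.
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df = df.dropna(subset=["order_id", "order_date"])
df.columns = [c.strip().lower() for c in df.columns]

# Now ready for a dashboard, warehouse load, or further analytics.
df.to_parquet("orders_clean.parquet", index=False)
```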
… Experience with orchestration tools (Airflow/Prefect) and cloud platforms (AWS preferred). Proven experience handling large-scale biological or multi-omics datasets. Bonus: exposure to distributed computing (Spark, Databricks, Kubernetes) or data cataloguing systems. You Are: Curious and scientifically minded, with a strong understanding of biological data workflows. Collaborative and able to communicate effectively across computational and experimental teams. …
City of London, London, United Kingdom (Hybrid/Remote Options)
Robert Half
… Familiarity with Snowflake cost monitoring, governance, replication, and environment management. Strong understanding of data modelling (star/snowflake schemas, SCDs, lineage). Proven Azure experience (Data Factory, Synapse, Databricks) for orchestration and integration. Proficient in SQL for complex analytical transformations and optimisations. Comfortable working in agile teams and using Azure DevOps for CI/CD workflows. Prior experience in …
… data quality, governance, and compliance processes. Skills & experience required: • Proven background leading data engineering teams or projects in a technology-driven business • Expert knowledge of modern cloud data platforms (Databricks, Snowflake, ideally AWS) • Advanced Python programming skills and fluency with the wider Python data toolkit • Strong capability with SQL, Spark, Airflow, Terraform, and workflow orchestration tools • Solid understanding of CI/CD …
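Since Airflow comes up repeatedly across these requirements, here is a bare-bones DAG sketch showing what workflow orchestration looks like in practice. It assumes Airflow 2.4 or later (for the schedule argument); the task bodies and DAG name are placeholders.

```python
# Minimal Airflow 2.x DAG sketch; task logic and names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def transform():
    print("apply business rules and load curated tables")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Dependency: transform only runs after a successful extract.
    extract_task >> transform_task
```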
… experience of: delivering high-quality, complex technology solutions in commercial and government organizations; delivering data analytics platforms and enterprise applications; working with SQL, SSRS, SSIS, SSAS, Azure Data Factory, Databricks, Python, and DAX; senior-level understanding of health care management systems; and working with senior executives. Deep expertise in one or more major DBMS platforms (e.g., Microsoft SQL Server, Oracle, PostgreSQL …
City of London, London, United Kingdom (Hybrid/Remote Options)
Higher - AI recruitment
… and SQL. Hands-on experience with Data Architecture, including: Cloud platforms and orchestration tools (e.g. Dagster, Airflow). AI/MLOps: model deployment, monitoring, lifecycle management. Big Data Processing: Spark, Databricks, or similar. Bonus: Knowledge Graph engineering, graph databases, ontologies. Located in London. And ideally you... Are a zero-to-one builder who thrives on autonomy and ambiguity. Are a strong …
… basis your varied role will include, but will not be limited to: Design, build, and optimize high-performance data pipelines and ETL workflows using tools like Azure Synapse, Azure Databricks, or Microsoft Fabric. Implement scalable solutions to ingest, store, and transform vast datasets, ensuring data availability and quality across the organization. Write clean, efficient, and reusable Python code tailored to …
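On the "clean, efficient, and reusable Python code" point, one common pattern is to package transformations as small, composable PySpark functions that can be unit-tested and chained. Below is a sketch under that assumption, with invented column names.

```python
# Sketch of reusable, testable PySpark transforms; column names invented.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def standardise_timestamps(df: DataFrame, col: str) -> DataFrame:
    """Parse a string column into timestamps, nulling unparseable values."""
    return df.withColumn(col, F.to_timestamp(F.col(col)))

def deduplicate_latest(df: DataFrame, key: str, order_col: str) -> DataFrame:
    """Keep only the most recent record per key."""
    w = Window.partitionBy(key).orderBy(F.col(order_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )

# Usage: chain the small functions rather than writing one monolithic job.
# events = deduplicate_latest(
#     standardise_timestamps(raw_events, "event_time"), "event_id", "event_time"
# )
```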
Letchworth Garden City, Hertfordshire, United Kingdom (Hybrid/Remote Options)
Willmott Dixon Group
… with hands-on experience in relational and dimensional data modelling. Modern Data Engineering: Proven ability to design and deliver scalable solutions using tools like Microsoft Fabric (strongly preferred), Synapse, Databricks, or similar. Supporting Know-How: Solid grasp of data architecture, governance and security. DevOps & Cloud Fluency: Practical experience with CI/CD pipelines, APIs, and cloud tooling (e.g. Azure DevOps …
… 12+ years of experience in a customer-facing technical role and working experience in: Distributed systems and massively parallel processing technologies and concepts such as Snowflake, Teradata, Spark, Databricks, Hadoop, Oracle, SQL Server, and performance optimisation. Data strategies and methodologies such as Data Mesh, Data Vault, Data Fabric, Data Governance, Data Management, and Enterprise Architecture. Data organisation and modelling concepts …
… Automation Scripting Languages (Python, PowerShell). Reporting tools (Power BI, SSRS or Tableau). Microsoft Fabric/Data Engineer: modern data architectures, e.g. lakehouse, data mesh. ETL/ELT Tools (dbt, Databricks). Source Control (GitHub, TortoiseSVN). Batch Scheduling (Control-M, Autosys). CI/CD. Education/Qualifications: Degree educated and/or equivalent experience. Personal Requirements: Excellent communication skills. Results driven, with …
Data Engineer (Databricks) – AI/Data Consulting Firm About Us: We are an ambitious consulting firm focused on delivering cutting-edge solutions in data and AI. Our mission is to empower organisations to unlock the full potential of their data by leveraging platforms like Databricks alongside other emerging technologies. As a Data Engineer, you will play a crucial role in … building and optimising data solutions, ensuring scalability, performance, and reliability for our clients' complex data challenges. The Role: As a Data Engineer (Databricks), you will be responsible for designing, implementing, and optimising large-scale data processing systems. You will work closely with clients, data scientists, and solution architects to ensure efficient data pipelines, reliable infrastructure, and scalable analytics capabilities. This … practices, coding standards, and documentation to improve data engineering processes. Mentor junior engineers and support knowledge-sharing across teams. Key Responsibilities: Design, build, and maintain scalable data pipelines using Databricks, Spark, and Delta Lake. Develop efficient ETL/ELT workflows to process large volumes of structured and unstructured data. Implement data governance, security, and compliance standards. Work with cloud platforms …
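The Delta Lake pipelines this role centres on are often built around idempotent MERGE upserts. Below is a minimal sketch of that pattern as it might appear in a Databricks notebook (where a spark session is pre-provided); the table and column names are illustrative only.

```python
# Sketch of an idempotent Delta Lake upsert; table/column names are
# illustrative. Assumes a Databricks notebook where `spark` is provided
# and Delta Lake is preconfigured.
from delta.tables import DeltaTable

# Hypothetical staging data landed by an upstream ingest step.
updates = spark.read.parquet("/mnt/landing/customers/")

target = DeltaTable.forName(spark, "curated.customers")

# MERGE keeps reruns safe: matched rows are updated in place,
# new rows are inserted, and nothing is duplicated.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```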
Overview: Databricks is the data and AI company. We enable data teams to solve bold problems by building and running a data and AI infrastructure platform used by thousands of customers. The Lakeflow Jobs team focuses on data-aware orchestration within the Databricks Data Intelligence Platform, powering ETL, AI/ML, BI, and streaming workloads with high reliability. As a … of major orchestration tools (Airflow, Dagster, Prefect, etc.) Technical understanding of data pipelines (Spark, dbt, Lakeflow Pipelines) Strong data analysis and operationalization skills (SQL, Python, building operational dashboards) About Databricks: Databricks is a data and AI company trusted by more than 10,000 organizations worldwide. We unify and democratize data, analytics, and AI through the Databricks Data Intelligence Platform. We … are headquartered in San Francisco with offices globally. We were founded by the creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits: Databricks strives to provide comprehensive benefits and perks that meet the needs of our employees. For region-specific details, please visit our benefits page. Our Commitment to Diversity and Inclusion: Databricks is committed to fostering a diverse …
… for designing, developing, and implementing data solutions on the Microsoft Azure platform, including data pipelines and architectures. Azure services: Expertise in Azure services such as Azure Data Factory, Azure Databricks, and Azure SQL Database. Data pipeline development: Experience with designing and building ETL/ELT pipelines. Projects include: AI, Dashboards, Cloud Infrastructure, Data pipelines, Data Lake, Data models. Experience …
… with Palantir Foundry is a must • Strong background in Python, Java, or Scala. • Proficiency in SQL, data processing, and ETL/ELT design. • Experience with cloud-native data platforms (Databricks, Spark, etc.). • Familiarity with CI/CD pipelines and automated testing. The Package: • £70,000 – £120,000 base • Bonus scheme • Pension & benefits package. To hear more about this Data …
Glasgow, Scotland, United Kingdom (Hybrid/Remote Options)
NLB Services
… years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. · 3+ years hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines · 3+ years of proficiency in working with Snowflake or similar cloud-based data warehousing solutions · 3+ years of experience in data development and …
… both business partners and with technical staff. Minimum Experience: Azure/AWS/GCP 3+ years (Azure preferred): Azure Cloud Services: Kubernetes (AKS, ACA), Kafka, Apache Spark; CosmosDB, Databricks, GraphQL; Service Mesh/Orchestration; Security: Vaults, Tokens, Okta, IAM, Azure App Service (Easy Auth). Java 1-3+ years. Oracle/Postgres 1-3 years. NoSQL …
… ownership/stewardship, data quality, data security, and data architecture. Experience in the energy trading sector or similarly data-rich environments. Experience with data platforms and tools (e.g., Azure, Databricks, MSSQL, Kafka). Hands-on experience developing conceptual, logical, and physical data models. Interest in the latest technologies and automation, with a curiosity to research and innovate. Person Specification: Taking …
… design and architecting solutions. • Hands-on experience in technology consulting, enterprise and solutions architecture and architectural frameworks, data modelling; experience with ERwin modelling. • Hands-on experience in ADF, Azure Databricks, Azure Synapse, Spark, PySpark, Python/Scala, SQL. • Hands-on experience in designing and building data lakes from multiple source systems/data providers. • Experience in data modelling, architecture, implementation …