or cloud development. Strong hands-on experience with AWS services. Proficiency in Databricks, Apache Spark, SQL, and Python. Experience with data modeling, data warehousing, and DevOps practices. Familiarity with Delta Lake, Unity Catalog, and Databricks REST APIs. Excellent problem-solving and communication skills. About Capgemini Capgemini is a global business and technology transformation partner, helping organizations to accelerate …
London (City of London), South East England, United Kingdom
Capgemini
initiative for the business, directly impacting customer experience and long-term data strategy. Key Responsibilities: Design and build scalable data pipelines and transformation logic in Databricks. Implement and maintain Delta Lake physical models and relational data models. Contribute to design and coding standards, working closely with architects. Develop and maintain Python packages and libraries to support engineering work. … Participate in Agile ceremonies (stand-ups, backlog refinement, etc.). Essential Skills: PySpark and SparkSQL. Strong knowledge of relational database modelling. Experience designing and implementing in Databricks (DBX notebooks, Delta Lake). Azure platform experience. ADF or Synapse pipelines for orchestration. Python development. Familiarity with CI/CD and DevOps principles. Desirable Skills: Data Vault 2.0. Data Governance & Quality …
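The Delta Lake pipeline work described above typically centres on idempotent upserts. In Databricks this would be a SparkSQL `MERGE INTO` statement or `DeltaTable.merge`; the sketch below imitates the same semantics with plain dictionaries as a stand-in for Delta tables, so the table and column names are illustrative only.

```python
# Minimal sketch of Delta-style MERGE (upsert) semantics, using plain
# Python dictionaries as a stand-in for Delta tables. Illustrative only.
def merge_upsert(target, updates, key):
    """Update rows whose key matches; insert rows that are new."""
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return list(merged.values())

customers = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bo"}]
changes = [{"id": 2, "name": "Beau"}, {"id": 3, "name": "Cy"}]
result = merge_upsert(customers, changes, "id")
```

The same logic in Databricks would match on the key column and combine `WHEN MATCHED THEN UPDATE` with `WHEN NOT MATCHED THEN INSERT`, which is what makes re-running the pipeline safe.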
unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of the lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the …
teams, manage dependencies, and drive alignment among cross-functional stakeholders. Technical Skills: Cloud Architecture: Deep expertise in Microsoft Azure, including: Azure Functions, Azure AI Search, API Management, Azure Data Lake, Azure Service Bus, (link removed) NET Frameworks, Azure AD/Entra ID, Azure App Insights, Azure DevOps and CI/CD pipelines. Databricks: Hands-on experience with: Data engineering … workflows, Delta Lake and Spark optimization, Integration with Azure services, Unity Catalog, Data Serving strategies. Frontend Development: Proficiency in React, Angular, JavaScript and related frameworks. Experience with enterprise-grade front-end applications. Integration of frontend apps with Azure-hosted APIs. Responsibilities: Architect & review scalable and secure cloud solutions using Azure. Lead design and implementation of data pipelines using …
that supports future data engineering and analytics needs. Essential Skills & Experience Proven experience as a Senior Data Engineer or Databricks Engineer with strong hands-on expertise in Databricks (Spark, Delta Lake). Deep knowledge of data engineering best practices, data pipelines, and ETL/ELT processes. Strong background in performance tuning and cost optimisation within Databricks. Experience leading … architecture reviews and delivering redesigned solutions at scale. Solid understanding of modern data platform components: Azure Data Lake, SQL, CI/CD, DevOps tooling. …
primarily in Python and SQL) in cloud environments, with an emphasis on scalability, code clarity, and long-term maintainability. Hands-on experience with Databricks and/or Spark, especially Delta Lake, Unity Catalog, and MLflow. Deep familiarity with cloud platforms, particularly AWS and Google Cloud. Proven ability to manage data architecture and production pipelines in a fast-paced …
Dauphin, Pennsylvania, United States Hybrid / WFH Options
Lambdanets Services LLC
of data designs, logical and physical data models, data dictionaries, and metadata repositories. REQUIRED SKILLS: The Data Modeler/Architect can design, develop, and implement data models and data lake architecture to provide reliable and scalable applications and systems to meet the organization's objectives and requirements. The Data Modeler is familiar with a variety of database technologies, environments … facts, dimensions, and star schema concepts and terminology. Strong proficiency with SQL Server, T-SQL, SSIS, stored procedures, ELT processes, scripts, and complex queries. Working knowledge of Azure Databricks, Delta Lake, Synapse and Python. Experience with evaluating implemented data systems for variances, discrepancies, and efficiency. Experience with Azure DevOps and Agile/Scrum development methods. Auditing databases to …
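The star-schema concepts this role asks for reduce to fact tables joined to dimension tables. A minimal, runnable sketch using Python's built-in `sqlite3` (table and column names are invented for illustration, not taken from the listing):

```python
import sqlite3

# Hedged sketch of Kimball-style star-schema querying: one fact table
# joined to a dimension table, then aggregated by a dimension attribute.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "books"), (2, "games")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 7.5)])
rows = cur.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
# rows -> [('books', 15.0), ('games', 7.5)]
```

The same pattern scales to T-SQL on SQL Server or SparkSQL in Databricks: facts hold measures and foreign keys, dimensions hold the descriptive attributes you group and filter by.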
Ashburn, Virginia, United States Hybrid / WFH Options
Adaptive Solutions, LLC
prototype to production • Minimum of 3 years' experience building and deploying scalable, production-grade AI/ML pipelines in AWS and Databricks • Practical knowledge of tools such as MLflow, Delta Lake, and Apache Spark for pipeline development and model tracking • Experience architecting end-to-end ML solutions, including feature engineering, model training, deployment, and ongoing monitoring • Familiarity with …
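The model-tracking requirement above is what MLflow's tracking API (`mlflow.log_param`, `mlflow.log_metric`) addresses: every training run records its parameters and metrics so models can be compared later. The class below is a hypothetical stand-in for that pattern, not the MLflow API itself:

```python
# Hypothetical stand-in for MLflow-style experiment tracking: each run
# logs its hyperparameters and metrics; the best run is picked by metric.
class RunTracker:
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric):
        return max(self.runs, key=lambda r: r["metrics"][metric])

tracker = RunTracker()
tracker.log_run({"max_depth": 3}, {"auc": 0.81})
tracker.log_run({"max_depth": 5}, {"auc": 0.86})
best = tracker.best_run("auc")
```

In a real Databricks pipeline the tracking store is managed for you, and the "best run" query becomes a search over the experiment's runs.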
Hebburn, Tyne and Wear, Tyne & Wear, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
strong potential for extension Required Skills: * Proven experience with the Azure Data Stack (Azure Data Factory, Azure Synapse, Azure SQL, ADLS Gen2) * Strong hands-on expertise with Databricks (Spark, Delta Lake, notebooks, orchestration) * Experience building and optimising data pipelines for analytics and reporting * Ability to work collaboratively with cross-functional teams and stakeholders * Strong communication and problem-solving …
with DoD programs, national security systems, or classified deployments (IL4-IL6). • Familiarity with ATO processes, RMF, NIST 800-53, and DISA STIGs. • Knowledge of data lakehouse architectures (Iceberg, Delta Lake), real-time streaming (Kafka, Pulsar, Redpanda), and modern data stacks. • Understanding of machine learning workflows and MLOps practices. • Experience with React/TypeScript, GraphQL APIs, or …
and others. • Proficiency in technologies in the Apache Hadoop ecosystem, especially Hive, Impala and Ranger • Experience working with open file and table formats such as Parquet, AVRO, ORC, Iceberg and Delta Lake • Extensive knowledge of automation and software development tools and methodologies • Excellent working knowledge of Linux and good working knowledge of networking • Ability to gain customer trust, ability to plan …
technical execution. Required Qualifications Proven experience as a Solutions Architect or Resident Architect with a focus on data and AI solutions. Deep knowledge of lakehouse architectures (e.g., Databricks, Snowflake, Delta Lake). Expertise in AI/ML frameworks (TensorFlow, PyTorch, Scikit-learn, etc.). Demonstrated success with data migration projects and modernization efforts. Strong understanding of data governance …
Data Pipelines Experience with modern data stacks is essential for building robust, scalable data pipelines across cloud and hybrid platforms. Key technologies include: Spark, Databricks, Python/Scala, SQL, Delta Lake, and Airflow. Experience with Containerization and Orchestration Proficiency in containerization tools like Docker and Kubernetes, as well as understanding orchestration workflows, is highly beneficial for Data Architects. …
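The orchestration side of that stack (Airflow in particular) boils down to running tasks in dependency order. A hedged sketch of that ordering using the standard library's `graphlib`; the task names are invented for illustration:

```python
from graphlib import TopologicalSorter

# Sketch of the dependency ordering an orchestrator such as Airflow
# enforces: each task maps to the set of upstream tasks it waits on.
dag = {
    "load_raw": set(),
    "clean": {"load_raw"},
    "aggregate": {"clean"},
    "publish": {"aggregate", "clean"},
}
order = list(TopologicalSorter(dag).static_order())
```

Airflow expresses the same graph with operators and `>>` dependencies, and adds scheduling, retries, and backfills on top of this core ordering.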
Welwyn Garden City, Hertfordshire, South East, United Kingdom
La Fosse
record of curating and synchronising metadata across distributed platforms. Platform & Tool Expertise Alation power user: Expert in Stewardship Workbench, OCF connectors, and governance workflows. Databricks ecosystem: Unity Catalog administration, Delta Lake governance, lakehouse patterns, Genie AI/BI spaces. Hands-on with data quality tools such as Great Expectations, Ataccama, Monte Carlo (or similar). Experience with metadata …
methodologies Experience designing models across raw, business, and consumption layers Solid understanding of metadata, cataloguing, and data governance practices Working knowledge of modern data platforms such as Databricks, Azure, Delta Lake, etc. Excellent communication and stakeholder engagement skills Data Architect - Benefits: Competitive base salary with regular reviews Car allowance - circa £5k per annum Discretionary company bonus Enhanced pension …
processing systems. Familiarity with DevOps tools such as GitHub Actions or Jenkins. Solid grounding in modern engineering principles and full-stack development. Bonus Skills: Airflow, Kafka/Kafka Connect, Delta Lake, JSON/XML/Parquet/YAML, cloud-based data services. Why Apply? Work for a global payments innovator shaping the future of commerce. Join a highly …
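The format-handling skills listed (JSON, XML, Parquet, YAML) usually mean flattening nested payloads into columnar-friendly records. A small stdlib-only sketch, parsing a JSON event of the kind a Kafka consumer might receive; all field names here are assumptions:

```python
import json

# Illustrative sketch: flatten a nested JSON event into a flat record,
# the shape columnar formats such as Parquet store most efficiently.
payload = '{"order_id": 42, "customer": {"id": 7, "country": "GB"}, "total": 19.99}'
event = json.loads(payload)
record = {
    "order_id": event["order_id"],
    "customer_id": event["customer"]["id"],
    "country": event["customer"]["country"],
    "total": event["total"],
}
```

In production the flattened records would be batched and written with a Parquet library (or by Spark), but the flattening step itself looks much like this.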
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
BIOMETRIC TALENT
in a complex organisation Expertise in Kimball and Data Vault 2.0 methodologies Strong grasp of data modelling, metadata, and data governance Hands-on experience with modern data platforms (Databricks, Delta Lake, Unity Catalog, Azure) Ability to define and drive architecture principles, patterns, and best practices Excellent communication and stakeholder management skills Retail industry experience is a bonus but …
Stevenage, Hertfordshire, South East, United Kingdom Hybrid / WFH Options
IET
You'll oversee the modernisation of our Azure-based data environment, champion scalable and resilient data pipelines, and drive adoption of tools such as Fabric, Data Factory, Databricks, Synapse, and Delta Lake. Working across all areas of the IET, you'll ensure our data is secure, accessible, and delivering maximum value for analytics, business intelligence, and operational excellence. You'll foster a … Lead and modernise the IET's data architecture, ensuring alignment with best practice and strategic goals. Drive adoption of Azure data technologies such as Fabric, Data Factory, Databricks, Synapse, and Delta Lake. Develop and maintain scalable, resilient data pipelines to support analytics and reporting. Stay hands-on in solving technical challenges, implementing solutions, and ensuring the reliability of our data …