Pennsylvania office. What You'll Do As part of the Federated Hermes Global Technology Organization, you will: design, build, and maintain scalable data pipelines on Databricks (using Spark, Delta Lake, etc.); write clean, efficient, and maintainable PySpark or SQL code for data transformation; design robust data models for analytics and reporting; and ensure data quality, consistency, and governance.
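For illustration only, here is a minimal PySpark sketch of the kind of Databricks/Delta Lake pipeline step this role describes; the table names, columns, and quality rules below are hypothetical, not taken from the posting.

```python
# Minimal, illustrative Databricks pipeline step (hypothetical tables and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.table("raw.trades")  # hypothetical bronze/source table

clean = (
    raw.dropDuplicates(["trade_id"])                      # basic data-quality rule
       .withColumn("trade_date", F.to_date("trade_ts"))   # derive a reporting column
       .filter(F.col("amount").isNotNull())                # enforce a consistency rule
)

# Persist the transformed data as a Delta Lake table for analytics and reporting.
clean.write.format("delta").mode("overwrite").saveAsTable("silver.trades_clean")
```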
Dauphin, Pennsylvania, United States Hybrid / WFH Options
Lambdanets Services LLC
of data designs, logical and physical data models, data dictionaries, and metadata repositories. REQUIRED SKILLS: The Data Modeler/Architect can design, develop, and implement data models and data lake architecture to provide reliable and scalable applications and systems to meet the organization's objectives and requirements. The Data Modeler is familiar with a variety of database technologies, environments … facts, dimensions, and star schema concepts and terminology; strong proficiency with SQL Server, T-SQL, SSIS, stored procedures, ELT processes, scripts, and complex queries; working knowledge of Azure Databricks, Delta Lake, Synapse, and Python; experience with evaluating implemented data systems for variances, discrepancies, and efficiency; experience with Azure DevOps and Agile/Scrum development methods; auditing databases to …
primarily in Python and SQL) in cloud environments, with an emphasis on scalability, code clarity, and long-term maintainability; hands-on experience with Databricks and/or Spark, especially Delta Lake, Unity Catalog, and MLflow; deep familiarity with cloud platforms, particularly AWS and Google Cloud; proven ability to manage data architecture and production pipelines in a fast-paced …
Ashburn, Virginia, United States Hybrid / WFH Options
Adaptive Solutions, LLC
prototype to production • Minimum of 3 years' experience building and deploying scalable, production-grade AI/ML pipelines in AWS and Databricks • Practical knowledge of tools such as MLflow, Delta Lake, and Apache Spark for pipeline development and model tracking • Experience architecting end-to-end ML solutions, including feature engineering, model training, deployment, and ongoing monitoring • Familiarity with …
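Purely as an illustrative sketch of the MLflow model-tracking workflow referenced above (the experiment name, model, parameters, and metric are hypothetical, not part of this posting):

```python
# Illustrative MLflow tracking run on a toy model (all names hypothetical).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)  # toy training data

mlflow.set_experiment("demo-ml-pipeline")  # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(C=0.5).fit(X, y)
    mlflow.log_param("C", 0.5)                               # track a hyperparameter
    mlflow.log_metric("train_accuracy", model.score(X, y))   # track a metric
    mlflow.sklearn.log_model(model, "model")                 # log the model artifact
```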
with DoD programs, national security systems, or classified deployments (IL4-IL6). • Familiarity with ATO processes, RMF, NIST 800-53, and DISA STIGs. • Knowledge of data lakehouse architectures (Iceberg, Delta Lake), real-time streaming (Kafka, Pulsar, Redpanda), and modern data stacks. • Understanding of machine learning workflows and MLOps practices. • Experience with React/TypeScript, GraphQL APIs, or …
methodologies; experience designing models across raw, business, and consumption layers; solid understanding of metadata, cataloguing, and data governance practices; working knowledge of modern data platforms such as Databricks, Azure, Delta Lake, etc.; excellent communication and stakeholder engagement skills. Data Architect - Benefits: competitive base salary with regular reviews; car allowance - circa £5k per annum; discretionary company bonus; enhanced pension …
technical execution. Required Qualifications: Proven experience as a Solutions Architect or Resident Architect with a focus on data and AI solutions. Deep knowledge of Lakehouse architectures (e.g., Databricks, Snowflake, Delta Lake). Expertise in AI/ML frameworks (TensorFlow, PyTorch, Scikit-learn, etc.). Demonstrated success with data migration projects and modernization efforts. Strong understanding of data governance …
unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs …
Stevenage, Hertfordshire, South East, United Kingdom Hybrid / WFH Options
IET
You'll oversee the modernisation of our Azure-based data environment, champion scalable and resilient data pipelines, and drive adoption of tools such as Fabric, Data Factory, Databricks, Synapse, and Delta Lake. Working across all areas of the IET, you'll ensure our data is secure, accessible, and delivering maximum value for analytics, business intelligence, and operational excellence. You'll foster a … Lead and modernise the IET's data architecture, ensuring alignment with best practice and strategic goals. Drive adoption of Azure data technologies such as Fabric, Data Factory, Databricks, Synapse, and Delta Lake. Develop and maintain scalable, resilient data pipelines to support analytics and reporting. Stay hands-on in solving technical challenges, implementing solutions, and ensuring the reliability of our data …
Herefordshire, West Midlands, United Kingdom Hybrid / WFH Options
IO Associates
and maintain platform software, libraries, and dependencies. Set up and manage Spark clusters, including migrations to new platforms. Manage user accounts and permissions across identity platforms. Maintain the Delta Lake and ensure platform-wide security standards. Collaborate with the wider team to advise on system design and delivery. What we're looking for: Strong Linux engineering …
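As a rough illustration of routine Delta Lake maintenance of the kind this role covers (the table name and retention window below are hypothetical assumptions, not details from the posting):

```python
# Illustrative Delta Lake table maintenance (hypothetical table and retention period).
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

tbl = DeltaTable.forName(spark, "analytics.events")  # hypothetical table name
tbl.optimize().executeCompaction()  # compact small files into larger ones
tbl.vacuum(168)                     # remove stale files older than 168 hours (7 days)
```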
and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems. Technologies we use (experience not required): AWS serverless architectures, Kubernetes, Spark, Flink, Databricks, Parquet, Iceberg, Delta Lake, Paimon, Terraform, GitHub (including GitHub Actions), Java, PostgreSQL. About Chainalysis Blockchain technology is powering a growing wave of innovation. Businesses and governments around the world are using …