Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, dbt, Docker, Python, SQL, and PySpark. We work across a variety of data architectures, such as data mesh, lakehouse, data vault, and data warehouse. Our data engineers create pipelines that support …
related field (or equivalent experience). 3-5 years of experience in data engineering (healthcare/medical devices preferred but not required). Strong Python programming and data engineering skills (Pandas, PySpark, Dask). Proficiency with databases (SQL/NoSQL), ETL processes, and modern data frameworks (Apache Spark, Airflow, Kafka). Solid experience with cloud platforms (AWS, GCP, or Azure) and CI/CD …
client value and broaden relationships at senior levels with current and prospective clients. Our Tech Stack — Cloud: Azure, sometimes GCP & AWS. Data Platform: Databricks, Snowflake, BigQuery. Data Engineering tools: PySpark, Polars, DuckDB, Malloy, SQL. Infrastructure-as-code: Terraform, Pulumi. Data Management and Orchestration: Airflow, dbt. Databases and Data Warehouses: SQL Server, PostgreSQL, MongoDB, Qdrant, Pinecone. GenAI: OpenAI APIs, HuggingFace …
delivering enterprise-grade data platforms on GCP, AWS, or Azure Deep expertise in data modeling, data warehousing, distributed systems, and modern data lake architectures Advanced proficiency in Python (including PySpark) and SQL, with experience building scalable data pipelines and analytics workflows Strong background in cloud-native data infrastructure (e.g., BigQuery, Redshift, Snowflake, Databricks) Demonstrated ability to lead teams, set …
exposure to Natural Gas and Power markets, balancing mechanisms, and regulatory frameworks (e.g., REMIT, EMIR). Expert in Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask). Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as …
Required Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …
engagement. * Drive innovation through advanced analytics and research-based problem solving. To be successful you should have: 10 years of hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Previous experience in implementing best practices for data engineering, including data governance, data quality, and data security. Proficiency …
on experience with the Azure Data Stack, critically ADF and Synapse (experience with Microsoft Fabric is a plus). Highly developed Python and data pipeline development knowledge, including substantial PySpark experience. Demonstrable DevOps and DataOps experience with an understanding of best practices for engineering, test and ongoing service delivery. An understanding of Infrastructure as Code concepts (demonstrable Terraform experience …
For further details or to enquire about other roles, please contact Nick Mandella at Harnham. KEYWORDS Python, SQL, AWS, GCP, Azure, Cloud, Databricks, Docker, Kubernetes, CI/CD, Terraform, PySpark, Spark, Kafka, machine learning, statistics, Data Science, Data Scientist, Big Data, Artificial Intelligence, private equity, finance.
complex data sets. Collaborate with data scientists to deploy machine learning models. Contribute to strategy, planning, and continuous improvement. Required Experience: Hands-on experience with AWS data tools: Glue, PySpark, Athena, Iceberg, Lake Formation. Strong Python and SQL skills for data processing and analysis. Deep understanding of data governance, quality, and security. Knowledge of market data and its business …
Enforce GDPR-compliant governance and security * Optimize performance and cost of data workflows * Collaborate with teams to deliver clean, structured data Key Skills Required: * Azure data services, Python/PySpark/SQL, data modelling * Power BI (preferred), legal system familiarity (bonus) * Strong grasp of UK data regulations Certifications: * Microsoft certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect) desirable …
personalized digital interactions. Some Technologies We Work With: Python primarily, with bits and pieces of TypeScript and Scala; GCP, AWS, Azure (in this order of relevance); GitHub, Docker, GitHub Actions, Terraform, Kubernetes; Pandas, PySpark and Spark; Vertex AI, Azure OpenAI for LLMs. Job Responsibilities: Lead the execution of projects in a high-performing data science team, fostering professional growth and creating an inclusive and …
and stakeholder engagement abilities. Strategic mindset with a focus on risk, governance, and transformation. Proven ability to lead projects and coach others. Technical skills to include: Python or R; PySpark; experience of deploying models; AWS cloud; experience of GenAI; experience of working in a large, complex organisation. Finance sector would be desirable, but not essential. Locations: London, Northampton, Manchester …
Databricks, Synapse (Azure SQL DW), Cosmos DB, and Azure Data Lake. Knowledge of data testing concepts and best practices within data warehouse environments. Hands-on experience with Python/PySpark scripting for test development. Exposure to data testing automation is highly advantageous. Experience in the insurance domain is preferred. Excellent communication skills, both written and verbal. Previous experience in …
Experience in Cloud Data Pipelines: Building cloud data pipelines involves using Azure-native programming techniques such as PySpark or Scala and Databricks. These pipelines are essential for tasks like sourcing, enriching, and maintaining structured and unstructured data sets for analysis and reporting. They are also crucial for secondary tasks such as flow pipelines, streamlining AI model performance, and enhancing …
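To make the source-enrich-maintain flow described in that listing concrete, here is a minimal PySpark sketch of such a pipeline. It is an illustration only: the lake paths, table locations, and column names (customer_id, event_ts, event_type) are assumptions, not details from the posting.

```python
# Minimal sketch of a source -> enrich -> maintain pipeline.
# All paths, table locations, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("enrichment-pipeline").getOrCreate()

# Source: semi-structured events landed in the raw zone of the data lake
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

# Enrich: join onto a curated reference table and derive reporting columns
customers = spark.read.format("delta").load("/mnt/curated/customers")
enriched = (
    raw.join(customers, on="customer_id", how="left")
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_type").isNotNull())
)

# Maintain: append to a partitioned Delta table used for analysis and reporting
(enriched.write
         .format("delta")
         .mode("append")
         .partitionBy("event_date")
         .save("/mnt/analytics/enriched_events"))
```

On Databricks, Delta is the default table format, which is why the sketch reads and writes Delta; on plain Spark the same flow would work with Parquet instead.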
business analytics. Practical experience in coding languages such as Python, R, Scala (Python preferred), and in database technologies including SQL, ETL, NoSQL, DW, and big data technologies like PySpark and Hive. Accenture is a global professional services company offering expertise in digital, cloud, and security solutions across various industries worldwide. 108 E 16th Street, New York, NY …
role for you. Key Responsibilities: Adapt and deploy a cutting-edge platform to meet customer needs. Design scalable generative AI workflows (e.g., using Palantir). Execute complex data integrations using PySpark and similar tools. Collaborate directly with clients to understand their priorities and deliver impact. Why Join? Be part of a mission-driven startup redefining how industrial companies operate. Work …
Technology Architect - Data Engineering (Hybrid - London) Contract: 6 months. Work Mode: Hybrid (12 days WFO/month). We are looking for an experienced Technology Architect with deep expertise in PySpark, ADF, and Databricks to lead and design data engineering solutions for our client. What You'll Do: Lead technical design using Medallion architecture and Azure Services. Create conceptual diagrams, source … pipelines. Collaborate effectively with team members and stakeholders. Optional: Work with Log Analytics and KQL queries. Must-Have Skills: 10+ years of experience in Data Engineering. Hands-on experience with PySpark, ADF, Databricks, SQL. Strong understanding of dimensional modeling, normalization, schema design, and data harmonization. Experience with Erwin and data modeling tools. Excellent communication, problem-solving, and client-facing skills. Why …
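For context on the Medallion architecture named in this posting, the sketch below shows the usual bronze → silver → gold layering in PySpark. The table paths, the orders/order_id/amount columns, and the cleansing rules are hypothetical, chosen only to illustrate the pattern.

```python
# Illustrative bronze -> silver -> gold (Medallion) layering in PySpark.
# Table paths, columns, and business rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw ingested records kept as delivered
bronze = spark.read.format("delta").load("/mnt/bronze/orders")

# Silver: cleansed and conformed - deduplicated, typed, filtered
silver = (
    bronze.dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("amount") > 0)
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")

# Gold: business-level aggregate ready for reporting
gold = (
    silver.groupBy(F.to_date("order_ts").alias("order_date"))
          .agg(F.sum("amount").alias("daily_revenue"))
)
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_revenue")
```

Each layer is persisted separately so that downstream consumers can pick the level of refinement they need, which is the core design choice of the Medallion pattern.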
role: Adapt and deploy a powerful data platform to solve complex business problems. Design scalable generative AI workflows using modern platforms like Palantir AIP. Execute advanced data integration using PySpark and distributed technologies. Collaborate directly with clients to understand priorities and deliver outcomes. What We're Looking For: Strong skills in PySpark, Python, and SQL. Ability to translate …