Lincoln, Lincolnshire, East Midlands, United Kingdom Hybrid / WFH Options
Frontier Agriculture Limited
… designing and implementing database structures
- Ability to create and maintain conceptual, logical, and physical data models
- Familiarity with data warehousing and ETL processes in Azure Data Factory and Azure Databricks
- Knowledge of industry best practices for data modelling and database design
- Ability to collaborate with cross-functional teams to gather and analyse data requirements
- Experience in performance tuning and optimization …
Employment Type: Permanent, Work From Home
Salary: Competitive + Benefits + 33 Days Holiday + Employee Assistance Program
Liverpool, Lancashire, United Kingdom Hybrid / WFH Options
USS Investment Management Limited
… for:
- Design and manage Azure infrastructure, including AD, VMs, networking, storage, and compliance
- Develop Infrastructure-as-Code using Terraform; automate tasks with PowerShell and Python
- Implement and support Azure Databricks and Microsoft Fabric for data and analytics workloads
- Monitor environments with Azure Monitor, Log Analytics, and Application Insights
- Lead technical handovers, mentoring teams and ensuring smooth transition to support
- Collaborate …
… algorithms
- Good social skills
- Experience working in a distributed team
- Background in software development or in a hands-on technical role (pipelines)
Desirable:
- Spark data processing
- Cloud technologies: Azure, Databricks, Kubernetes
- Software architectural skills
- DevOps
- Agile/Scrum certified
- Strong understanding of data security, including encryption/decryption
- Proven leadership skills with experience managing teams and collaborating with third-party …
- Experience working with common data transformation and storage formats, e.g. Apache Parquet, Delta tables
- Strong experience working with containerisation (e.g. Docker) and deployment (e.g. Kubernetes)
- Experience with Spark, Databricks, data lakes
- Highly proficient in working with version control and Git/GitHub
- Awareness of data standards such as GA4GH and FAIR
Competitive salary starting from £74,000. Generous pension …
City of London, London, United Kingdom Hybrid / WFH Options
Publicis Media
… users in line with security standards
Qualifications
✔ Experience in a data, operations, or analytics-focused role
✔ Familiarity with BI and reporting platforms such as Power BI, Datorama, Data Studio, Databricks, SQL
✔ Strong communication skills and a collaborative, solutions-focused mindset
✔ An organised, proactive approach - comfortable working across teams and timelines
✔ Excitement for new technology, data storytelling, and digital marketing innovation …
… or working sessions with other engineers across the organisation to help smooth the path to adoption
- Learn and explore new tools and cloud-native technologies (e.g. AWS, GCP, Snowflake, Databricks) under the guidance of senior team members
Key Skills and Experience required
- A strong academic background in Computer Science, Engineering, Data Science, or a related technical discipline
- Proficiency in Java …
… or internal tools
- Strong understanding of data lifecycle management: ingestion, transformation, modelling, governance, and analytics enablement
- Experience working with modern data infrastructure and tooling (e.g. Airflow, Azure Data Factory, Databricks, Snowflake, DBT, Redshift)
- Able to navigate technical discussions with engineers and translate them into clear requirements for delivery
- Familiar with data quality and monitoring practices, data observability, schema standardization, and …
Leverage advanced tools such as Power BI, DAX, and data engineering practices to build scalable and insightful analytics solutions. Drive the adoption and integration of modern data platforms like Databricks and Microsoft Fabric to enhance data accessibility, quality, and performance. Introduce and implement emerging technologies and best practices to elevate the analytics maturity of the Supply Chain organization. Cross-Functional …
… upskilling team members is critical for this role
- Expertise in data engineering - you love data modelling and data lifecycles!
- Advanced experience with cloud platforms and infrastructure as code (AWS, Terraform, Databricks)
- Proficiency in SQL and one or more of our core languages, Python 3/TypeScript; knowledge of core programming principles to adapt to other languages as needed
- Leading the team …
… to work with cross-functional teams, including technical and non-technical stakeholders
- Passion for learning new skills and staying up-to-date with ML algorithms
Bonus points
- Experience with Databricks and PySpark
- Experience with deep learning & large language models
- Experience with traditional, semantic, and hybrid search frameworks (e.g. Elasticsearch)
- Experience working with AWS or another cloud platform (GCP/Azure) …
… equivalent), CMS platforms (Strapi or equivalent)
- Working on Developer Portals or documentation relating to shared development services
- Experience of Docker and/or Kubernetes an advantage
- Working knowledge of Databricks/Spark (using Python and associated frameworks)
- Working knowledge of Azure Data Lake and Azure Blob storage
What we offer
This is a permanent role. The team is based from …
… as modern data platforms, data product engineering, data marketplace architecture, data developer portals, platform engineering
- Experience co-selling partner solutions with hyperscalers or platforms (e.g. AWS, Azure, GCP, Snowflake, Databricks)
- Outstanding communication skills - able to translate complex ideas for both technical and business audiences
- Demonstrated thought leadership in AI/ML, such as speaking at industry events, contributing to …
… for analyzing large structured and unstructured data sets. Hands-on experience with the set-up and maintenance of bronze, silver, and gold layers in a big data platform, ideally Databricks and Apache Spark, plus experience building and maintaining DBT models. A desire and passion for transforming raw data into structured tables that can answer common business questions. Proven experience working …
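To illustrate the bronze/silver/gold (medallion) pattern referred to in the listing above, here is a minimal PySpark sketch of a bronze-to-silver promotion. The table and column names are hypothetical, and a Delta-enabled environment such as Databricks is assumed.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw records landed as-is (hypothetical table name).
bronze = spark.read.table("bronze.raw_orders")

# Silver: cleaned and conformed - deduplicate, cast types,
# and drop records that fail a basic quality check.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)

# Persist the silver table; a gold layer would aggregate from here.
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```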
Demonstrated experience in identifying and testing core growth assumptions through comprehensive analysis of diverse data sources. Experience performing analysis on large data sets in a Big Data platform (e.g. Databricks or a similar platform). Comfort with SQL is crucial, as well as exposure to social media or creator analytics tools (e.g. Sprinklr, Brandwatch, CreatorIQ or similar). Power BI and ideally …
… ability to work with multi-functional teams, including technical and non-technical stakeholders
- Passion for learning new skills and staying up-to-date with ML algorithms
Bonus points
- Experience with Databricks and PySpark
- Experience with deep learning & large language models
- Experience with traditional, semantic, and hybrid search frameworks (e.g. Elasticsearch)
- Experience working with AWS or another cloud platform (GCP/Azure) …
… of course versed in SQL and its use in data pipelines.
- You have recent experience building, deploying, and running data pipelines or insights tooling on cloud services (AWS, Azure, Snowflake, Databricks)
- You're used to working in an agile, cross-functional team with industry-standard practices
- You're used to producing repeatable, automated tests for your own work
- You love working …
… maturity assessments and alignment of gaps to enabling technology solutions
- Appreciation of/experience in designing and governing data platform architectures (e.g. broad understanding of Informatica, Collibra, Ab Initio, Snowflake, Databricks)
- Appreciation of, and interest in attaining, end-to-end data skills, e.g. data quality, metadata, data mesh, data security, privacy & compliance
- Experience with Enterprise/platform/application (e.g. cloud …
… with LLMs - fine-tuning, prompt engineering, vector databases, or RAG pipelines
- Proven experience with A/B testing, experimentation design, or causal inference to guide product decisions
- Exposure to Databricks, MLflow, AWS, and PySpark (or similar technologies) is a plus
- Excitement about Ophelos' mission to support households and businesses in breaking the vicious debt cycle
About Our Team
Ophelos launched …
… individuals who want to drive value, work in a fast-paced environment, and solve real business problems. You are a coder who writes efficient and optimized code leveraging key Databricks features. You are a problem-solver who can deliver simple, elegant solutions as well as cutting-edge solutions that, regardless of complexity, your clients can understand, implement, and maintain. You …
- Strong written and verbal communication skills required
- Ability to manage an individual workstream independently
- 3+ years of experience developing and deploying ML models in any platform (Azure, AWS, GCP, Databricks, etc.)
- Ability to apply data science methodologies and principles to real-life projects
- Expertise in software engineering concepts and best practices
- Self-starter with excellent communication skills, able to work independently, and lead projects, initiatives, and/or people
- Willingness to travel
Want to stand out?
- Consulting experience
- Databricks Machine Learning Associate or Machine Learning Professional certification
- Familiarity with traditional machine learning tools such as Python, SKLearn, XGBoost, SparkML, etc.
- Experience with deep learning frameworks like TensorFlow or PyTorch
- Knowledge of ML model deployment options (e.g. Azure Functions, FastAPI, Kubernetes) …
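As a flavour of the "traditional machine learning tools" named above, here is a minimal, hypothetical scikit-learn sketch: train a model on synthetic data, evaluate it on a holdout set, and persist the artifact that a FastAPI or Azure Functions endpoint (two of the deployment options mentioned) could later load to serve predictions. The dataset and parameters are illustrative only.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
model.fit(X_train, y_train)
print(f"Holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

# Persist the fitted model for a downstream serving layer to load.
joblib.dump(model, "model.joblib")
```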
… role, you'll:
- Have experience developing ELT/ETL ingestion pipelines for structured and unstructured data sources
- Have experience with Azure cloud platform tools such as Azure Data Factory, Databricks, Logic Apps, Azure Functions, ADLS, SQL Server, and Unity Catalog
- Have a strong understanding of the Databricks platform, including managing workflows, jobs, and notebooks
- Be experienced in data modeling in …
… engineers and scientists working across Africa, Europe, the UK and the US.
ABOUT THE ROLE
Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures such as Data Mesh, lakehouse, data vault and data warehouses. Our data … data solutions.
RESPONSIBILITIES
- Data Pipeline Development: Design, implement, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources using tools such as Databricks, Python and PySpark.
- Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data.
- ETL Processes: Develop and automate ETL workflows …
- … or SQL for data manipulation and scripting
- Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling
- Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink
- Experience in using modern data architectures, such as lakehouse
- Experience with CI/CD pipelines and version control systems like Git
- Knowledge of ETL tools and …
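To give a flavour of the pipeline work described above, here is a minimal PySpark ETL sketch: read raw events, derive a daily revenue summary, and write it out as a partitioned Delta table. The source path, schema, and table name are hypothetical, and a Delta-enabled environment such as Databricks is assumed.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw JSON events from a landing zone (hypothetical path).
raw = spark.read.json("/mnt/landing/orders/")

# Transform: derive a date column and aggregate to daily revenue per country.
daily_revenue = (
    raw
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "country")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count("*").alias("orders"),
    )
)

# Load: write the summary as a partitioned Delta table for analytics.
(
    daily_revenue.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.daily_revenue")
)
```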