like S3, Lambda, BigQuery, or Databricks. Solid understanding of ETL processes, data modeling, and data warehousing. Familiarity with SQL and relational databases. Knowledge of big data technologies such as Spark, Hadoop, or Kafka is a plus. Strong problem-solving skills and the ability to work in a collaborative team environment. Excellent verbal and written communication skills. Bachelor's degree …
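Several of these listings pair ETL fundamentals with Spark. As a hedged illustration of what that combination looks like in practice, the sketch below is a minimal PySpark batch transform; the source path, column names, and warehouse table are hypothetical placeholders, not taken from any listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal ETL sketch: extract raw CSV, transform, load to a warehouse table.
# All paths and names below are illustrative assumptions.
spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw events from object storage (e.g., an S3 bucket).
raw = spark.read.option("header", True).csv("s3a://example-bucket/raw/events/")

# Transform: type the timestamp, drop malformed rows, aggregate per user per day.
daily = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropna(subset=["user_id", "event_ts"])
       .groupBy("user_id", F.to_date("event_ts").alias("event_date"))
       .count()
)

# Load: append to a partitioned warehouse table.
daily.write.mode("append").partitionBy("event_date").saveAsTable("analytics.daily_events")
```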
independently and as part of a team. Preferred Qualifications: Master's degree in Computer Science, Data Science, or a related field. Experience with big data technologies such as Hadoop, Spark, or Kafka. Experience with data visualization tools such as Power BI, Tableau, or Qlik. Certifications in Azure data and AI technologies. Benefits: We offer a competitive, market-aligned salary …
in data engineering, architecture, or platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design and implementation. Hands …
tools like Collibra, Alation, Microsoft Purview, or Informatica, including projects around lineage, cataloging, and quality rules. Strong hands-on development experience in SQL and Python, with working knowledge of Spark or other distributed data processing frameworks. Design, development and implementation of distributed data solutions using API and microservice-based architecture. Deep understanding of ETL/ELT architecture, streaming, and …
with SQL, NoSQL, and data visualization tools. Strong analytical and problem-solving skills. Experience with social media analytics and user behavior analysis. Knowledge of big data technologies like Hadoop, Spark, Kafka. Familiarity with AWS machine learning services such as SageMaker and Comprehend. Understanding of data governance and security in AWS. Excellent communication and teamwork skills. Attention to detail and …
with SQL, NoSQL, visualization tools, and statistical packages. Strong analytical and problem-solving skills. Experience with social media analytics and user behavior. Knowledge of big data technologies like Hadoop, Spark, Kafka. Familiarity with AWS ML services like SageMaker, Comprehend. Understanding of data governance and security in AWS. Excellent communication and collaboration skills. Attention to detail and quality delivery. Compensation …
Experience with SQL and NoSQL databases, visualization tools. Strong analytical and problem-solving skills. Experience with social media analytics and user behavior. Familiarity with big data technologies like Hadoop, Spark, Kafka. Knowledge of AWS machine learning services like SageMaker, Comprehend. Understanding of data governance and security in AWS. Excellent communication and teamwork skills. Attention to detail and ability to …
with the ability to influence others.
Skills and Abilities
Platforms & Tools
Languages: Python, SQL, T-SQL, SSIS
Methodologies: Agile, DevOps (must have)
Concepts: ELT/ETL, DWH, APIs (RESTful), Spark APIs, FTP protocols, SSL, SFTP, PKI (Public Key Infrastructure) and integration testing
Management Duties: Yes
We are an equal opportunity employer, and we are proud to share that …
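This listing's mix of Python, RESTful APIs, and ELT points at pipeline work that lands API data before transforming it in the warehouse. The sketch below, under assumed endpoint and field names, shows one common pattern: paginated extraction from a REST API into newline-delimited JSON for a later load step.

```python
import json
import requests

# Hedged sketch: pull paginated records from a hypothetical REST endpoint
# and stage them as newline-delimited JSON (a typical ELT "extract" step).
BASE_URL = "https://api.example.com/v1/orders"  # assumed endpoint

def extract_orders(out_path: str, page_size: int = 100) -> None:
    page = 1
    with open(out_path, "w", encoding="utf-8") as out:
        while True:
            resp = requests.get(
                BASE_URL,
                params={"page": page, "per_page": page_size},  # assumed pagination params
                timeout=30,
            )
            resp.raise_for_status()
            records = resp.json().get("data", [])  # assumed response envelope
            if not records:
                break  # no more pages
            for record in records:
                out.write(json.dumps(record) + "\n")
            page += 1

extract_orders("orders.ndjson")
```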
with Big Data. Must know modern cloud platforms, including Azure, GCP, etc., and technologies across traditional and contemporary software, with focus as below.
Expert understanding: Azure Data Factory, Databricks, Spark, Azure SQL Database, Azure DevOps/Git, Data Lake, Delta Lake/Lakehouse architecture, Power BI.
Working knowledge: Azure WebApp, Azure networking concepts.
Conceptual know-how (nice to have): Azure AI Services …
data requirements and deliver high-quality data solutions. Automation: Implement automation processes and best practices to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in …
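For the AWS/EMR/Glue/Spark stack this listing names, a Glue job typically wraps PySpark in Glue's job lifecycle. The skeleton below is a hedged sketch using Glue's standard boilerplate; the catalog database, table, and output path are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard AWS Glue job boilerplate; names below are illustrative.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database and table).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Drop rows with null order IDs via plain Spark, then write back as Parquet.
df = source.toDF().dropna(subset=["order_id"])
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```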
SQL and database technologies (incl. various vector stores and more traditional technologies, e.g. MySQL, PostgreSQL, NoSQL databases).
- Hands-on experience with data tools and frameworks such as Hadoop, Spark, or Kafka (an advantage).
- Familiarity with data warehousing solutions and cloud data platforms.
- Background in building applications wrapped around AI/LLM/mathematical models.
- Ability to scale up algorithms …
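Kafka recurs across these listings as the streaming skill. As a hedged sketch of the basics, the consumer below uses the kafka-python client; the broker address, topic name, and consumer group are assumptions.

```python
import json

from kafka import KafkaConsumer

# Hedged sketch of a Kafka consumer using kafka-python.
# Broker, topic, and group id are hypothetical placeholders.
consumer = KafkaConsumer(
    "user-events",                       # assumed topic
    bootstrap_servers="localhost:9092",  # assumed broker
    group_id="analytics-consumers",      # assumed consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Each message's value arrives as deserialized JSON; process records as they stream in.
for message in consumer:
    event = message.value
    print(f"partition={message.partition} offset={message.offset} event={event}")
```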
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
ability to explain technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: programming languages such as Spark, Java, Python, PySpark, Scala or similar (a minimum of 2). Extensive big data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant … MUST have the Right to Work in the UK long-term, as our client is NOT offering sponsorship for this role. KEYWORDS: Lead Data Engineer, Senior Lead Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, On-Prem, Cloud, ETL, Azure Data Fabric, ADF, Databricks, Azure Data, Delta Lake, Data Lake. Please note that due to a …
Milton Keynes, England, United Kingdom Hybrid / WFH Options
Santander
members, stakeholders and end users, conveying technical concepts in a comprehensible manner. Skills across the following data competencies:
- SQL (AWS Athena/Hive/Snowflake)
- Hadoop/EMR/Spark/Scala
- Data structures (tables, views, stored procedures)
- Data modelling: star/snowflake schemas, efficient storage, normalisation
- Data transformation
- DevOps: data pipelines
- Controls: selection and build
- Reference and metadata …
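The star-schema modelling and Athena/Hive-style SQL in this listing come together in queries that join a fact table to its dimensions. A hedged sketch in Spark SQL via PySpark, with hypothetical table and column names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

# Hedged star-schema query: join a fact table to two dimension tables
# and aggregate. All table and column names are illustrative assumptions.
monthly_sales = spark.sql("""
    SELECT d.year_month,
           p.product_category,
           SUM(f.sales_amount) AS total_sales
    FROM   fact_sales  f
    JOIN   dim_date    d ON f.date_key    = d.date_key
    JOIN   dim_product p ON f.product_key = p.product_key
    GROUP BY d.year_month, p.product_category
""")

monthly_sales.show()
```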
projects. Requirements: Expertise in development languages including but not limited to: Java/J2EE, Scala/Python, XML, JSON, SQL, Spring/Spring Boot. Expertise with RESTful web services, Spark, Kafka etc. Experience with relational SQL, NoSQL databases and cloud technologies such as AWS/Azure/Google Cloud Platform (GCP), Kubernetes, and Docker. Extensive knowledge of object-oriented …
technologies – Azure, AWS, GCP, Snowflake, Databricks. Must have hands-on experience on at least 2 hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills (at least 2 hyperscalers): GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Designing Databricks-based solutions for …
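Apache Beam, named alongside Spark here, expresses processing as a pipeline of transforms that can run on Dataflow among other runners. A hedged minimal sketch; the input path and record format are hypothetical.

```python
import apache_beam as beam

# Hedged Apache Beam sketch: count events per user from a text source.
# The bucket paths and the "user_id,..." CSV layout are assumptions.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv")
        | "KeyByUser" >> beam.Map(lambda line: (line.split(",")[0], 1))
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda user, n: f"{user},{n}")
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output/user_counts")
    )
```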
in Python with libraries like TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with cloud platforms (AWS, Azure, GCP) for deploying ML and data solutions. Knowledge of MLOps …
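Pipeline orchestration with Apache Airflow, as this listing asks for, usually means expressing extract and transform steps as dependent tasks in a DAG. A hedged sketch with hypothetical task callables, DAG id, and schedule (the `schedule` argument assumes Airflow 2.4+):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hedged Airflow sketch: two dependent tasks on a daily schedule.
# The callables and DAG id are hypothetical placeholders.

def extract() -> None:
    print("pulling raw data")  # stand-in for a real extract step

def transform() -> None:
    print("cleaning and aggregating")  # stand-in for a real transform step

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```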
and frameworks. Stakeholder management. Expertise in relational and dimensional modelling, including big data technologies. Exposure across the full SDLC process, including testing and deployment. Good knowledge of Python and Spark is required. Experience in ETL & ELT. Good understanding of one scripting language. Good understanding of how to enable analytics using cloud technology and MLOps. Experience in Azure infrastructure …
architecture patterns for LLMs, NLP, MLOps, RAG, APIs, and real-time data integration. Strong background in working with cloud platforms (GCP, AWS, Azure) and big data technologies (e.g., Kafka, Spark, Snowflake, Databricks). Demonstrated ability to work across matrixed organizations and partner effectively with IT, security, and business stakeholders. Experience collaborating with third-party tech providers and managing outsourced solution …
ability to explain complex data concepts to non-technical stakeholders. Preferred Skills: Experience with insurance platforms such as Guidewire, Duck Creek, or legacy PAS systems. Knowledge of Delta Lake, Apache Spark, and data pipeline orchestration tools. Exposure to Agile delivery methodologies and tools like JIRA, Confluence, or Azure DevOps. Understanding of regulatory data requirements such as Solvency II …
Reading, England, United Kingdom Hybrid / WFH Options
Areti Group | B Corp™
support for Data Analysts with efficient and performant queries.
• Skilled in optimizing data ingestion and query performance for MSSQL or other RDBMS.
• Familiar with data processing frameworks such as Apache Spark.
• Highly analytical and tenacious in solving complex problems. …