London, South East England, United Kingdom (Hybrid / WFH Options)
Peaple Talent
Azure or AWS. Strong experience designing and delivering data solutions in Databricks. Proficient with SQL and Python. Experience using big data technologies such as Apache Spark or PySpark. Great communication skills, engaging effectively with senior stakeholders. Nice to haves: Azure/AWS Data Engineering certifications, Databricks certifications. What …
Stroud, South East England, United Kingdom (Hybrid / WFH Options)
Data Engineer
excellence and be a person who actively looks for continual improvement opportunities. Knowledge and skills: experience as a Data Engineer or Analyst; Databricks/Apache Spark; SQL/Python; Bitbucket/GitHub. Advantageous: dbt, AWS, Azure DevOps, Terraform, Atlassian (Jira, Confluence). About Us: What's in it for …
London, South East England, United Kingdom (Hybrid / WFH Options)
JSS Search
the ability to work in a fast-paced, collaborative environment. Strong communication and interpersonal skills. Preferred Skills: Experience with big data technologies (e.g., Hadoop, Spark). Knowledge of machine learning and AI integration with data architectures. Certification in cloud platforms or data management.
Functions, and Synapse Analytics. Proficient in Python and advanced SQL, including query tuning and optimisation. Hands-on experience with big data tools such as Spark, Hadoop, and Kafka. Familiarity with CI/CD pipelines, version control, and deployment automation. Experience using Infrastructure as Code tools like Terraform. Solid understanding …
data technologies Technical Skills: Advanced machine learning and deep learning techniques. Natural language processing. Time series analysis and forecasting. Reinforcement learning. Big data technologies (Spark, Hadoop). Cloud infrastructure and containerization (Docker, Kubernetes). Version control and CI/CD practices.
London, South East England, United Kingdom (Hybrid / WFH Options)
Mastek
for this role. Knowledge of cloud-based database solutions (AWS, Azure, Google Cloud) is an advantage. Preferred Qualifications: Experience with big data technologies (Hadoop, Spark, Snowflake) is a plus. Certification in database technologies or testing methodologies. Knowledge of scripting languages like Python or Shell scripting for test automation. Work …
experience in machine learning frameworks, including architectural design and data platforms. Knowledge of cloud platforms (AWS, Azure, or GCP) and data engineering tools (e.g., Spark, Kafka). Exceptional communication skills, with the ability to influence technical and non-technical stakeholders alike.
Kinesis, Step Functions, Lake Formation and data lake design. Strong programming skills in Python and PySpark for data processing and automation. Extensive SQL experience (Spark SQL, MySQL, Presto SQL) and familiarity with NoSQL databases (DynamoDB, MongoDB, etc.). Proficiency in Infrastructure-as-Code (Terraform, CloudFormation) for automating AWS data …
London, South East England, United Kingdom (Hybrid / WFH Options)
eTeam
including OAuth, JWT, and data encryption. Fluent in English with strong communication and collaboration skills. Preferred Qualifications: Experience with big data processing frameworks like Apache Flink or Spark. Familiarity with machine learning models and AI-driven analytics. Understanding of front-end and mobile app interactions with backend services. Expertise …
to support business insights, analytics, and other data-driven initiatives. Job Specification (Technical Skills): Cloud Platforms: Expert-level proficiency in Azure (Data Factory, Databricks, Spark, SQL Database, DevOps/Git, Data Lake, Delta Lake, Power BI), with working knowledge of Azure WebApp and Networking. Conceptual understanding of Azure AI …
Design and Maintenance, Apps, Hive Metastore Management, Network Management, Delta Sharing, Dashboards, and Alerts. Proven experience working with big data technologies, i.e., Databricks and Apache Spark. Proven experience working with Azure data platform services, including Storage, ADLS Gen2, Azure Functions, Kubernetes. Background in cloud platforms and data architectures, such … experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, Data Lineage, Quality …
for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Must be able to program in languages such as SQL, Python, R … 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure …
classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau).
management, code repositories, and automation. Requirements: 5+ years' experience in the Data and Analytics domain. Previous management experience is preferred. Strong expertise in Databricks (Spark, Delta Lake, Notebooks). Advanced knowledge of SQL development. Familiarity with Azure Synapse for orchestration and analytics. Working experience with Power BI for reporting …
and classification techniques. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau). More ❯
London, South East England, United Kingdom (Hybrid / WFH Options)
Merlin Entertainments
Deep expertise with Databricks and modern data platforms in the cloud (Azure, AWS, or GCP). Strong technical background in big data frameworks (e.g., Spark, Kafka), distributed systems, and scalable data architectures. Excellent understanding of data governance, security, and privacy, with practical knowledge of GDPR compliance. Track record of …
Kotlin. Familiarity with Kotlin or willingness to learn. Industrial experience with AWS/GCP/Azure. Knowledge of common data products such as Hadoop, Spark, Airflow, PostgreSQL, S3, etc. Problem solving/troubleshooting skills and attention to detail. 👋 About Us: High-quality data access and provisioning shouldn't be …
London, South East England, United Kingdom (Hybrid / WFH Options)
Kantar Media
technologies. Experienced in writing and running SQL and Bash scripts to automate tasks and manage data. Skilled in installing, configuring, and managing Hive on Spark with HDFS. Strong analytical skills with the ability to troubleshoot complex issues and analyze large volumes of text or binary data in Linux or …
+ 10% bonus + benefits Purpose: Build and maintain large, scalable data lakes, processes and pipelines Tech: Python, Iceberg/Kafka, Spark/Glue, CI/CD Industry: Financial services/securities trading Immersum continue to support a leading SaaS securities trading platform, who are hiring their first Data … Infra tooling using Terraform, Ansible and Jenkins whilst automating everything with Python. Tech (experience in any listed is advantageous): Python; Cloud: AWS; Lakehouse: Apache Spark or AWS Glue; Cloud-native storage: Iceberg, RDS, Redshift, Kafka; IaC: Terraform, Ansible; CI/CD: Jenkins, GitLab. Other platforms such as …
grade ML models Solid grasp of MLOps best practices Confident speaking to technical and non-technical stakeholders 🛠️ Tech you'll be using: Python, SQL, Spark, R MLflow, vector databases GitHub/GitLab/Azure DevOps Jira, Confluence 🎓 Bonus points for: MSc/PhD in ML or AI Databricks ML …
Technology, or related field. Proficiency in software engineering with experience in Java & Spring or other major programming languages. Preferred Qualifications: Experience with Spring Boot, Spark (Big Data), and Message Bus Architecture. Familiarity with containerisation (e.g., Kubernetes), AWS Cloud, and CI/CD pipelines (Jenkins). If you meet the above criteria …
with attention to detail and accuracy. Adept at queries, report writing, and presenting findings. Experience working with large datasets and distributed computing tools (Hadoop, Spark, etc.). Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests, etc.). Experience with data profiling tools and processes. Knowledge …
etc. Cloud Computing: AWS, Azure, Google Cloud for scalable data solutions. API Strategy: Robust APIs for seamless data integration. Data Architecture: Finbourne LUSID, Hadoop, Spark, Snowflake for managing large volumes of investment data. Cybersecurity: Strong data security measures, including encryption and IAM. AI and Machine Learning: Predictive analytics, risk …
London, South East England, United Kingdom (Hybrid / WFH Options)
twentyAI
data lake solutions. Hands-on experience with data modelling, ETL/ELT pipelines, and data integration across multiple systems. Familiarity with tools like Kafka, Spark, and modern API-based architectures. Experience with relational databases such as Oracle and SQL Server. Knowledge of data governance platforms like Purview …