of professional experience in data engineering roles, preferably for a customer-facing data product. Expertise in designing and implementing large-scale data processing systems with data tooling such as Spark, Kafka, Airflow, dbt, Snowflake, Databricks, or similar. Strong programming skills in languages such as SQL, Python, Go or Scala. Demonstrable understanding and effective use of AI
means to solve challenges. Proficiency in a programming language (e.g., Scala, Python, Java, C#) with understanding of domain modelling and application development. Knowledge of data management platforms (SQL, NoSQL, Spark/Databricks). Experience with modern engineering tools (Git, CI/CD), cloud platforms (Azure, AWS), and Infrastructure as Code (Terraform, Pulumi). Familiarity with various frameworks across front
day ago London, England, United Kingdom 2 days ago Senior Lead Software Engineer - Team Lead - Accelerator Business London, England, United Kingdom 2 weeks ago Senior Software Engineer (Java, Spark) - SaaS Software (Trade Surveillance & Compliance) City Of London, England, United Kingdom 150,000 - 175,000 4 hours ago London, England, United Kingdom 3 months ago Principal Generative AI Software Engineer
internal and external training. What you'll bring Mandatory Proficient in either GCP (Google) or AWS cloud. Hands-on experience in designing and building data pipelines using Hadoop and Spark technologies. Proficient in programming languages such as Scala, Java, or Python. Experienced in designing, building, and maintaining scalable data pipelines and applications. Hands-on experience with Continuous Integration and
languages from 25 countries (i.e., if you're interested in learning to make Chicken Moambe, we're the place). About Us Taptap Send is backed by top VCs (Spark, Canaan, Reid Hoffman, Breyer Capital, etc.), rapidly growing and a great place for those looking for both impact and a fast-paced tech startup environment. Read more about the … leading cross-border fintech for emerging markets. And that's just the beginning. We charge fees for transfers on fixed exchange rate corridors (e.g., XOF, XAF). Our Investors Spark Capital, Canaan, Reid Hoffman, Breyer Capital (Jim Breyer), Unbound (Shravin Mittal), Wamda (Fadi Ghandour), Firstminute Capital, Slow Ventures (Sam Lessin), Helios Partners (Souleymane Ba), Crossbeam Ventures (Ali Hamed), Nikesh … data engineer or in a similar role. Technical expertise with data models. Great numerical and analytical skills. Experience with event-driven and streaming data architectures (using technologies such as Apache Spark, Flink or similar). Degree in Computer Science, IT, or similar field; a Master's is a plus, or four years' equivalent experience. Taptap Values Impact first Team
Agile projects Skills & Experience: Proven experience as a Lead Data Solution Architect in consulting environments Expertise in cloud platforms (AWS, Azure, GCP, Snowflake) Strong knowledge of big data technologies (Spark, Hadoop), ETL/ELT, and data modelling Familiarity with Python, R, Java, SQL, NoSQL, and data visualisation tools Understanding of machine learning and AI integration in data architecture Experience
ensure all data is accurate, accessible, and secure. To be successful in this role, you should have experience with: Cloud platforms (AWS/Azure) Data Engineering (Airflow/dbt, Spark) DevSecOps practices Additional highly valued skills include: Terraform Python/Java AWS/Azure Data Engineering Snowflake/Databricks You may be assessed on key skills relevant for success
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
e.g., Azure Data Factory, Synapse, Databricks, Fabric) Data warehousing and lakehouse design ETL/ELT pipelines SQL, Python for data manipulation and machine learning Big Data frameworks (e.g., Hadoop, Spark) Data visualisation (e.g., Power BI) Understanding of statistical analysis and predictive modelling Experience: 5+ years working with Microsoft data platforms 5+ years in a customer-facing consulting or professional
scripting languages like Python or KornShell. Knowledge of writing and optimizing SQL queries for large-scale, complex datasets. PREFERRED QUALIFICATIONS Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, or DataStage. We promote an inclusive culture that empowers Amazon employees to deliver the best results for our customers.
years in data architecture and solution design, and a history of large-scale data solution implementation. Technical Expertise: Deep knowledge of data architecture principles, big data technologies (e.g., Hadoop, Spark), and cloud platforms like AWS, Azure, or GCP. Data Management Skills: Advanced proficiency in data modelling, SQL/NoSQL databases, ETL processes, and data integration techniques. Programming & Tools: Strong
Bachelor's or Master's degree in Computer Science, Engineering, or relevant hands-on data engineering experience. Strong hands-on knowledge of data platforms and tools, including Databricks, Spark, and SQL. Experience designing and implementing data pipelines and ETL processes. Good knowledge of MLOps principles and best practices to deploy, monitor and maintain machine learning models in
in Python, and familiarity with ML frameworks like TensorFlow or PyTorch. You have exposure to cloud platforms (e.g., AWS, GCP), containerization (Docker, Kubernetes), and scalable data systems (e.g., Spark, Kafka). You are experienced or interested in ML model serving technologies (e.g., MLflow, TensorFlow Serving) and CI/CD tools (e.g., GitHub Actions). You understand ML algorithms
communication and stakeholder management skills when engaging with customers Significant experience of coding in Python and Scala or Java Experience with big data processing tools such as Hadoop or Spark Cloud experience; GCP specifically in this case, including services such as Cloud Run, Cloud Functions, BigQuery, GCS, Secret Manager, Vertex AI etc. Experience with Terraform Prior experience in a
troubleshooting of Production systems incidents in working hours and out of hours on a best endeavours basis. Skills/Qualifications Essential Skills Strong working knowledge of Azure Databricks/Spark (using Python and associated frameworks), Privacera and SQL Strong working knowledge of Azure Data Lake and Blob storage Strong experience of building data reporting and visualisations using Power BI and
learn etc.) Have experience productionising machine learning models Are an expert in at least one of: predictive modelling, classification, regression, optimisation or recommendation systems Have experience with Spark Have knowledge of DevOps technologies such as Docker and Terraform, and of MLOps practices and platforms like MLflow Have experience with agile delivery methodologies and CI/CD
environment (Python, Go, Julia etc.) •Experience with Amazon Web Services (S3, EKS, ECR, EMR, etc.) •Experience with containers and orchestration (e.g. Docker, Kubernetes) •Experience with Big Data processing technologies (Spark, Hadoop, Flink etc.) •Experience with interactive notebooks (e.g. JupyterHub, Databricks) •Experience with GitOps-style automation •Experience with *nix (e.g. Linux, BSD, etc.) tooling and scripting •Participated in projects
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
source systems into our reporting solutions. Pipeline Development: Develop and configure metadata-driven data pipelines using data orchestration tools such as Azure Data Factory and engineering tools like Apache Spark to ensure seamless data flow. Monitoring and Failure Recovery: Implement monitoring procedures to detect failures or unusual data profiles and establish recovery processes to maintain data integrity. … in Azure data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/OneLake, and Azure Data Factory. Understanding of data extraction from vendor REST APIs. Spark/PySpark or Python skills are a bonus, or a willingness to develop them. Experience with monitoring and failure recovery in data pipelines. Excellent problem-solving skills and attention
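The metadata-driven pattern mentioned in this posting keeps table names, targets, and load modes in configuration rather than hard-coded in each job. A minimal sketch of the idea, in plain Python (the config shape, table names, and `build_copy_plan` helper are all illustrative assumptions, not taken from the role description):

```python
# Hypothetical metadata describing what to copy. In practice this might live
# in a control table or JSON file read by Azure Data Factory or a Spark job.
pipeline_config = [
    {"source": "sales_orders", "target": "raw.sales_orders", "mode": "append"},
    {"source": "customers",    "target": "raw.customers",    "mode": "overwrite"},
]

def build_copy_plan(config):
    """Validate each metadata entry and emit (source, target, mode) steps."""
    allowed_modes = {"append", "overwrite"}
    plan = []
    for entry in config:
        if entry["mode"] not in allowed_modes:
            raise ValueError(f"unsupported load mode: {entry['mode']}")
        plan.append((entry["source"], entry["target"], entry["mode"]))
    return plan

if __name__ == "__main__":
    for step in build_copy_plan(pipeline_config):
        print(step)
```

Because the pipeline logic is generic, adding a new source table is a config change rather than a code change, which is the main appeal of the approach.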
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Fruition Group
best practices for data security and compliance. Collaborate with stakeholders and external partners. Skills & Experience: Strong experience with AWS data technologies (e.g., S3, Redshift, Lambda). Proficient in Python, Apache Spark, and SQL. Experience in data warehouse design and data migration projects. Cloud data platform development and deployment. Expertise across data warehouse and ETL/ELT development in
of large-scale distributed data processing. Experience with developing extract-transform-load (ETL). Experience with distributed messaging systems like Kafka and RabbitMQ. Experience with distributed computing frameworks like Apache Spark and Flink. Bonus Points Experience working with AWS or Google Cloud Platform (GCP). Experience in building a data warehouse and data lake. Knowledge of advertising platforms.
Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that
Farnborough, Hampshire, England, United Kingdom Hybrid / WFH Options
Eutopia Solutions ltd
with Microsoft Azure and Azure SQL Database Proficiency with Docker and containerisation tools Experience working with APIs for data extraction Desirable Skills Familiarity with big data technologies such as Spark and Kafka Experience with machine learning frameworks like TensorFlow or PyTorch Knowledge of data visualisation tools such as Power BI or Tableau Strong understanding of data modelling and database
require the following experience: Quantitative degree such as Maths, Physics, Computer Science, Engineering etc. Software Development experience in Python or Scala An understanding of Big Data technologies such as Spark, messaging services like Kafka or RabbitMQ, and workflow management tools like Airflow SQL & NoSQL expertise, ideally including Postgres, Redis, MongoDB etc. Experience with AWS, and with tools like Docker
with Azure Data Factory, Azure Functions, and Synapse Analytics. Proficient in Python and advanced SQL, including query tuning and optimisation. Hands-on experience with big data tools such as Spark, Hadoop, and Kafka. Familiarity with CI/CD pipelines, version control, and deployment automation. Experience using Infrastructure as Code tools like Terraform. Solid understanding of Azure-based networking and