pivotal role in designing, building, and maintaining their data infrastructure while collaborating closely with senior stakeholders across the organisation. Your expertise in Azure, Databricks, Spark, Python, and data modelling will be critical in driving the success of their data initiatives. Key Responsibilities: Lead the complete development cycle of data … comprehensive of data modelling, data warehousing principles, and the innovative Lakehouse architecture. Exceptional proficiency in ETL methodologies, preferably utilising Azure Databricks or equivalent technologies (Spark, Spark SQL, Python, SQL), including deep insight into ETL/ELT design patterns. Proficient in Databricks, SQL, and Python, with a robust understanding more »
a qualified Data Engineer to join our team, where your responsibilities will include: Designing, optimizing, and maintaining scalable data pipelines and ETL processes using Spark, ensuring streamlined data processing and integration. Collaborating cross-functionally to translate complex data requirements into actionable technical solutions that drive business objectives. Leveraging Microsoft … the Midlands. Ideal Candidate Profile: We are seeking an individual who has the following attributes: Proven expertise as a Data Engineer, demonstrating proficiency in Apache Spark and cloud-based technologies, particularly Microsoft Azure and Databricks. Strong programming skills, with a focus on Python, along with proficiency in ETL more »
Azure Solutions Architect Expert. Experience with other cloud platforms such as AWS or Google Cloud Platform. Knowledge of big data technologies such as Hadoop, Spark, etc. If you are passionate about leveraging Azure technologies to drive data-driven insights and solutions, we encourage you to apply for this exciting more »
skills include: Experience deploying, securing and supporting cloud infrastructure platforms Understanding of security frameworks/standards Understanding of data streaming and messaging frameworks (Kafka, Spark, etc.) and modern database technologies (Cockroach etc.) Understanding of distributed tracing and monitoring (Zipkin, OpenTracing, Prometheus, ELK stack, Micrometer metrics, etc.) Experience with containers more »
Kotlin. Familiarity with Kotlin or willingness to learn. Industrial experience with AWS/GCP/Azure. Knowledge of common data products such as Hadoop, Spark, Airflow, PostgreSQL, S3, etc. Problem solving/troubleshooting skills and attention to detail. 👋 About Us High-quality data access and provisioning shouldn't be more »
creates a sense of trust with stakeholders. Preferred qualifications, capabilities and skills Experience with deep learning frameworks (PyTorch, TensorFlow) Experience with big-data technologies (Spark, Hadoop) or distributed computation frameworks (Dask, Modin) Hands-on experience with Natural Language Processing (NLP) and Large Language Models (LLMs) Experience of creating and more »
Essential Skills Proven experience as a Data Engineer Well versed in the following: cloud-based data storage solutions, data lakes, customer data platforms. (Python, Spark, SQL, Cloud Data Environments such as AWS, GCP, Azure) Good understanding of data modelling methods and data partitioning and compaction methods in Data Lake more »
managers, to understand data requirements and deliver high-quality solutions as well as architecting data ingestion, transformation, and storage processes using tools such as Apache Spark, Azure Data Factory, and other similar technologies. Other core duties include optimizing data pipeline performance, ensuring data accuracy, reliability, and timely delivery. … Services Certifications in relevant technologies, such as Azure Data Engineer or Databricks Certified Developer Experience with real-time data processing and streaming technologies like Apache Kafka or Azure Event Hubs Knowledge of data visualization tools, such as Power BI or Tableau Contributions to open-source projects or active participation more »
engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to Have Experience working with projects in Apache Spark, Databricks or similar. Expert cloud platform knowledge, e.g. Azure What will be your key responsibilities? A technical expert and leader on the more »
programming language (Java, C++, Kotlin would be beneficial) Cloud experience (we use Azure; AWS or GCP welcome) Kafka, or exposure to ActiveMQ, RabbitMQ or Spark Orchestration and Containerisation experience (Kubernetes, Docker and Microservices) Creating greenfield microservices, this team plans to add a wealth of functionality to existing systems as more »
Bedford, England, United Kingdom Hybrid / WFH Options
Understanding Recruitment
etc.) MLOps experience Nice to have: Familiarity with Git or other Version Control Systems Computer Vision Library exposure Understanding of Big Data Technologies (Hadoop, Spark, etc.) Experience with Cloud platforms (AWS, GCP or Azure) This is a fully remote role, but may require very occasional travel (once a month more »
Northampton, Northamptonshire, East Midlands, United Kingdom Hybrid / WFH Options
Dupen Ltd
APIs, infrastructure design (load balancing, VMs, PostgreSQL, vector DBs). Senior ML Learning Engineer desirable skills: Version control (Git), computer vision libraries, Big Data (Hadoop, Spark), Cloud (AWS, Google Cloud, Azure), and a knowledge of secure coding techniques (PCI-DSS, PA-DSS, ISO27001). This is a fantastic opportunity to join more »
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra • MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases • Programming languages such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud and distributed processing technologies such as Hadoop Benefits : • Base Salary more »
to: Backend technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js) Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in more »
pipeline and workflow management and their tools such as Airflow. Strong understanding of relational SQL and NoSQL databases, including MongoDB and stream-processing systems: Spark Streaming, Kinesis, etc. Ability to understand any scripting language and tools. Rewards & Benefits TCS is consistently voted a Top Employer in the UK and more »
and coding environments. Bonus Skills: Python/PHP/Typescript/ReactJS AI/ML models and usage ETL pipelines in AWS (Glue/Apache Spark) API Load testing If you would like more information on the role or would like to apply, please send your CV more »
Data Analytics in Azure Synapse Analytics, Azure Analysis Services Data Ingestion and Storage including Azure Data Factory, Azure Databricks, Azure Data Lake, Kafka and Spark Streaming, Azure EventHub/IoT Hub, and Azure Stream Analytics Experience with Object-oriented/object function scripting languages: Python preferred more »
Milton Keynes, Buckinghamshire, South East, United Kingdom Hybrid / WFH Options
Dupen Ltd
Linux, APIs, infrastructure design (load balancing, VMs, PostgreSQL, vector DBs). ML Learning Engineer desirable skills: Version control (Git), computer vision libraries, Big Data (Hadoop, Spark), Cloud (AWS, Google Cloud, Azure), and a knowledge of secure coding techniques (PCI-DSS, PA-DSS, ISO27001). Note: as there are actually two roles more »
Engineering experience Demonstrate in-depth knowledge of large-scale data platforms (Databricks, Snowflake) and cloud-native tools (Azure Synapse, RedShift) Experience of analytics technologies (Spark, Hadoop, Kafka) Have familiarity with Data Lakehouse architecture, SQL Server, DataOps, and data lineage concepts Demonstrate in-depth knowledge of large-scale data platforms more »
mining, data analysis, and strong software engineering skills. Strong understanding of Data Engineering Proficiency in AWS, data warehousing (Snowflake, Databricks, Redshift), big data frameworks (Spark, Kafka), container orchestration platforms (Kubernetes), and data integration/ETL tools. Strong written and verbal communication skills, with the ability to explain technical concepts more »
CI/CD/YAML/ARM/Terraform MSBI Traditional Stack (SQL, SSAS, SSIS, SSRS) Azure Automation/PowerShell Azure Streaming Analytics/Spark Streaming Azure Functions/C# .NET PowerApps Data Science Master Data Management/MDS WHY ADATIS? There’s a long list of reasons, from more »
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
of OO programming, software design, i.e., SOLID principles, and testing practices. Knowledge and working experience of AGILE methodologies. Proficient with SQL. Familiarity with Databricks, Spark, geospatial data/modelling and insurance are a plus. Exposure to MLOps, model monitoring principles, CI/CD and associated tech, e.g., Docker, MLflow more »
Staines-Upon-Thames, England, United Kingdom Hybrid / WFH Options
IFS
solutions Proficiency in data pipeline orchestration across hybrid environments, leveraging the latest in Azure and allied technologies. Expertise in data processing with tools like Spark or Dask, and fluency in Python, Scala, C#, or Java. Expertise in DevOps and CI/CD automation, ensuring seamless deployment with tools like more »
Basingstoke, England, United Kingdom Hybrid / WFH Options
Intec Select
cross-functionally across the business to understand the requirements of the products Designing and implementing performance-related data ingestion pipelines from multiple sources using Apache Spark Integrating end-to-end data pipelines ensuring a high level of quality is maintained Working with an Agile delivery/DevOps methodology more »
management and data governance open source platform that we will teach you. Other technologies in use in our space: RESTful services, Maven/Gradle, Apache Spark, Big Data, HTML 5, AngularJS/ReactJS, IntelliJ, GitLab, Jira. Cloud Technologies: You’ll be involved in building the next generation of finance more »