creates a sense of trust with stakeholders. Preferred qualifications, capabilities and skills: experience with deep learning frameworks (PyTorch, TensorFlow); experience with big-data technologies (Spark, Hadoop) or distributed computation frameworks (Dask, Modin); hands-on experience with Natural Language Processing (NLP) and Large Language Models (LLMs); experience of creating and …
Essential Skills: proven experience as a Data Engineer; well versed in cloud-based data storage solutions, data lakes and customer data platforms (Python, Spark, SQL, cloud data environments such as AWS, GCP, Azure); good understanding of data modelling methods and of data partitioning and compaction methods in a Data Lake …
managers, to understand data requirements and deliver high-quality solutions, as well as architecting data ingestion, transformation, and storage processes using tools such as Apache Spark, Azure Data Factory, and other similar technologies. Other core duties include optimizing data pipeline performance and ensuring data accuracy, reliability, and timely delivery. … Services. Certifications in relevant technologies, such as Azure Data Engineer or Databricks Certified Developer; experience with real-time data processing and streaming technologies like Apache Kafka or Azure Event Hubs; knowledge of data visualization tools, such as Power BI or Tableau; contributions to open-source projects or active participation …
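As a rough illustration of the ingestion-and-transformation work listings like the one above describe, here is a minimal PySpark batch sketch; the paths, column names and partitioning scheme are hypothetical and not taken from any advert.

    # Illustrative PySpark batch pipeline: read raw CSV, clean it, and write
    # partitioned Parquet to a data lake. Paths and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

    raw = (spark.read
           .option("header", True)
           .csv("/landing/orders/*.csv"))        # hypothetical landing zone

    cleaned = (raw
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .dropDuplicates(["order_id"])
               .filter(F.col("amount").isNotNull()))

    # Partitioning by date keeps downstream reads and compaction manageable.
    (cleaned
     .withColumn("order_date", F.to_date("order_ts"))
     .write
     .mode("append")
     .partitionBy("order_date")
     .parquet("/lake/curated/orders"))           # hypothetical lake path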
engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to Have: experience working with projects in Apache Spark, Databricks or similar; expert cloud platform knowledge, e.g. Azure. What will be your key responsibilities? A technical expert and leader on the …
programming language (Java, C++, Kotlin would be beneficial); cloud experience (we use Azure; AWS or GCP welcome); Kafka, or exposure to ActiveMQ, RabbitMQ or Spark; orchestration and containerisation experience (Kubernetes, Docker and microservices); creating greenfield microservices, as this team plans to add a wealth of functionality to existing systems as …
Bedford, England, United Kingdom Hybrid / WFH Options
Understanding Recruitment
etc.); MLOps experience. Nice to have: familiarity with Git or other version control systems; computer vision library exposure; understanding of big data technologies (Hadoop, Spark etc.); experience with cloud platforms (AWS, GCP or Azure). This is a fully remote role, but may require very occasional travel (once a month …
Northampton, Northamptonshire, East Midlands, United Kingdom Hybrid / WFH Options
Dupen Ltd
APIs, infrastructure design (load balancing, VMs, PostgreSQL, vector DBs). Senior ML Learning Engineer desirable skills: version control (Git), computer vision libraries, big data (Hadoop, Spark), cloud (AWS, Google Cloud, Azure), and knowledge of secure coding techniques (PCI-DSS, PA-DSS, ISO 27001). This is a fantastic opportunity to join …
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra • MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases • Programming languages such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud, and distributed processing technologies such as Hadoop. Benefits: • Base salary …
to: Backend technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js). Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: a degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in …
success. 💼 What You Bring to the Table: Expertise in designing and deploying production data pipelines within a big data architecture using Java, Python, Scala, Spark, and SQL. Proven experience in tasks like scripting, API data extraction, and SQL queries. Collaboration with engineering teams and integration of data engineering components …
pipeline and workflow management and tools such as Airflow. Strong understanding of relational SQL and NoSQL databases, including MongoDB, and of stream-processing systems such as Spark Streaming, Kinesis etc. Ability to understand any scripting language and tools. Rewards & Benefits: TCS is consistently voted a Top Employer in the UK and …
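To make the workflow-management tooling mentioned above more concrete, here is a hedged sketch of a basic Airflow DAG (Airflow 2.x style); the DAG id, schedule and task callables are invented for illustration and do not come from the listing.

    # Illustrative Airflow DAG: a simple extract -> transform -> load chain.
    # Names, schedule and callables are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        ...  # e.g. pull from a source API or database

    def transform():
        ...  # e.g. clean and reshape the extracted data

    def load():
        ...  # e.g. write to a warehouse or data lake

    with DAG(
        dag_id="example_elt",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task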
and coding environments. Bonus Skills: Python/PHP/TypeScript/ReactJS; AI/ML models and usage; ETL pipelines in AWS (Glue/Apache Spark); API load testing. If you would like more information on the role or would like to apply, please send your CV …
Modelling. Experience with one or more of these programming languages: Python, Scala/Java. Experience with distributed data and computing tools, mainly Apache Spark & Kafka. Understanding of critical path approaches, how to iterate to build value, and engaging with stakeholders actively at all stages. Able to deal …
Data Analytics in Azure Synapse Analytics and Azure Analysis Services. Data ingestion and storage, including Azure Data Factory, Azure Databricks, Azure Data Lake, Kafka and Spark Streaming, Azure Event Hubs/IoT Hub, and Azure Stream Analytics. Experience with object-oriented/functional scripting languages: Python preferred …
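As an illustration of the Kafka-plus-Spark-Streaming ingestion named above, a minimal Spark Structured Streaming sketch follows; the broker address, topic name and paths are placeholder assumptions, and the spark-sql-kafka connector package is assumed to be on the classpath.

    # Illustrative Spark Structured Streaming job: consume a Kafka topic and
    # write the raw payloads to Parquet. Broker, topic and paths are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
              .option("subscribe", "telemetry")                   # hypothetical topic
              .option("startingOffsets", "latest")
              .load()
              .selectExpr("CAST(key AS STRING) AS key",
                          "CAST(value AS STRING) AS value",
                          "timestamp"))

    query = (events.writeStream
             .format("parquet")
             .option("path", "/lake/raw/telemetry")               # hypothetical sink
             .option("checkpointLocation", "/lake/_checkpoints/telemetry")
             .trigger(processingTime="1 minute")
             .start())

    query.awaitTermination()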
messaging frameworks and/or distributed tracing/monitoring, this will put you in a good position. The tech stack includes AWS, GCP, Azure, Kafka, Spark, Zipkin, OpenTracing, Prometheus, Grafana, the ELK stack, Micrometer metrics, Docker, Kubernetes, Helm, automating deployment, releases and testing in CI, and continuous delivery pipelines. …
frameworks (TensorFlow, PyTorch). Nice to have: familiarity with Git or other version control systems; computer vision library exposure; understanding of big data technologies (Hadoop, Spark etc.); experience with cloud platforms (AWS, GCP or Azure). This is a fully remote role, but may require very occasional travel (once a month …
Milton Keynes, Buckinghamshire, South East, United Kingdom Hybrid / WFH Options
Dupen Ltd
Linux, APIs, infrastructure design (load balancing, VMs, PostgreSQL, vector DBs). ML Learning Engineer desirable skills: version control (Git), computer vision libraries, big data (Hadoop, Spark), cloud (AWS, Google Cloud, Azure), and knowledge of secure coding techniques (PCI-DSS, PA-DSS, ISO 27001). Note: as there are actually two roles …
mining, data analysis, and strong software engineering skills. Strong understanding of data engineering. Proficiency in AWS, data warehousing (Snowflake, Databricks, Redshift), big data frameworks (Spark, Kafka), container orchestration platforms (Kubernetes), and data integration/ETL tools. Strong written and verbal communication skills, with the ability to explain technical concepts …
CI/CD/YAML/ARM/Terraform; MSBI traditional stack (SQL, SSAS, SSIS, SSRS); Azure Automation/PowerShell; Azure Stream Analytics/Spark Streaming; Azure Functions/C# .NET; PowerApps; Data Science; Master Data Management/MDS. WHY ADATIS? There’s a long list of reasons, from …
RDBMS environments: Sybase ASE/IQ, Oracle or DB2. It would be great if you have: experience in cluster computing and big data solutions (Spark, Hadoop, HDFS, XRS) using public cloud. Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work …
Staines-Upon-Thames, England, United Kingdom Hybrid / WFH Options
IFS
solutions. Proficiency in data pipeline orchestration across hybrid environments, leveraging the latest in Azure and allied technologies. Expertise in data processing with tools like Spark or Dask, and fluency in Python, Scala, C#, or Java. Expertise in DevOps and CI/CD automation, ensuring seamless deployment with tools like …
of AWS services, with the ability to demonstrate working on large engagements * Experience of AWS tools (e.g. Athena, Redshift, Glue, EMR) * Java, Scala, Python, Spark, SQL * Experience of developing enterprise-grade ETL/ELT data pipelines * Deep understanding of data manipulation/wrangling techniques * Demonstrable knowledge of applying Data … DB/Neo4j/Elastic, Google Cloud Datastore * Snowflake Data Warehouse/Platform * Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming * Experience of working with CI/CD technologies: Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible etc. * Experience building and deploying solutions to the cloud …
Swansea, Wales, United Kingdom Hybrid / WFH Options
CPS Group (UK) Limited
my client will train you): Knowledge of Microsoft SQL Server and packaged BI tools (SSAS and SSIS). Docker, Kubernetes and cloud computing technologies. Apache Kafka and data streaming. Familiarity with Apache Spark or similar data processing tools. Experience developing and maintaining CI/CD pipelines, particularly Azure DevOps …