value through improved data handling and analysis. Responsibilities: Build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop and other map-reduce tools); develop and productionize containerized algorithms for deployment in hybrid cloud environments (GCP, Azure); connect and blend data from …
pivotal role in designing, building, and maintaining their data infrastructure while collaborating closely with senior stakeholders across the organisation. Your expertise in Azure, Databricks, Spark, Python, and data modelling will be critical in driving the success of their data initiatives. Key Responsibilities: Lead the complete development cycle of data … comprehensive understanding of data modelling, data warehousing principles, and the Lakehouse architecture. Exceptional proficiency in ETL methodologies, preferably utilising Azure Databricks or equivalent technologies (Spark, Spark SQL, Python, SQL), including deep insight into ETL/ELT design patterns. Proficient in Databricks, SQL, and Python, with a robust understanding …
Azure Solutions Architect Expert. Experience with other cloud platforms such as AWS or Google Cloud Platform. Knowledge of big data technologies such as Hadoop, Spark, etc. If you are passionate about leveraging Azure technologies to drive data-driven insights and solutions, we encourage you to apply for this exciting …
managers, to understand data requirements and deliver high-quality solutions, as well as architecting data ingestion, transformation, and storage processes using tools such as Apache Spark, Azure Data Factory, and other similar technologies. Other core duties include optimizing data pipeline performance and ensuring data accuracy, reliability, and timely delivery. … Services Certifications in relevant technologies, such as Azure Data Engineer or Databricks Certified Developer. Experience with real-time data processing and streaming technologies like Apache Kafka or Azure Event Hubs. Knowledge of data visualization tools, such as Power BI or Tableau. Contributions to open-source projects or active participation …
Staines-Upon-Thames, England, United Kingdom Hybrid / WFH Options
IFS
solutions. Proficiency in data pipeline orchestration across hybrid environments, leveraging the latest in Azure and allied technologies. Expertise in data processing with tools like Spark or Dask, and fluency in Python, Scala, C#, or Java. Expertise in DevOps and CI/CD automation, ensuring seamless deployment with tools like …
and classification techniques, and algorithms. Fluency in a programming language (Python, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau, …)
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
working in the world of Data Science You're more than capable with SQL & Python You have exposure to big data technologies such as Spark Ideally you will have experience with statistical analysis, machine learning algorithms, and data mining techniques You have excellent communication skills and can communicate well …
platforms, Azure is desirable Software development experience is desirable Data architecture knowledge is desirable API design and deployment experience is desirable Big data (e.g. Spark) experience is desirable NoSQL DB experience is desirable Qualifications 2+ years of data science experience Right to work in the UK and/or …
Skills & Experience At least 10 years' experience working with JavaScript or Python/Java Previous experience deploying software into the cloud EKS, Docker, Kubernetes, Apache Spark or NiFi Microservice architecture experience Experience with AI/ML systems …
ll also get exposure to Python, lots of SQL (of course) and depending on your level of experience, data stream processing tools like Kafka, Spark, etc. As this company continues to build new platforms and modernise, you’ll also be exposed to the cloud and various other modern tools …
complex issues they are facing. Carry out data-driven analysis and craft solutions to resolve business problems. Artificial intelligence and data science approaches (Python, R, MATLAB, Spark etc.). Database technologies such as Hadoop. Tools that expand the company's tool kit, advancing their ability to serve clients. Experience needed in a …
EXPERIENCE: Experience using JavaScript or Python Experience deploying software into the cloud and on-premise Developing software products EKS, Kubernetes, OpenSearch/Elasticsearch, MongoDB, Spark or NiFi Experience with microservices architecture Experience with AI/ML systems TO BE CONSIDERED… Please either apply online or email …
Sheffield, England, United Kingdom Hybrid / WFH Options
Undisclosed
knowledge of security principles and best practices for cloud-based solutions. Preferred Skills: Certification in cloud platforms. Experience with big data technologies such as Apache Hadoop, Spark, or Kafka. Knowledge of data governance and compliance frameworks. Familiarity with DevOps practices and tools (e.g., Git, Jenkins, Terraform). HSBC …
ingestion pipelines. Requirements: Proven experience working with Python, Java or C# Experience working with ETL/ELT technologies such as Airflow, Argo, Dagster, Spark, Hive Strong technical expertise, especially in data processing and exploration, with a willingness to learn new technologies. A passion for automation and driving continual …
equity financing to mid-market and late-stage companies. Liquidity Group is backed by leading global financial institutions including Japan’s largest bank, MUFG, Spark Capital, and Apollo Asset Management. About the Role We are looking for highly motivated credit professionals who are able to work independently and in …
and Data Mart. Utilize vector databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages and tools including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement machine learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI … Azure Cloud environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with vector databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps …
Birmingham, England, United Kingdom Hybrid / WFH Options
Lorien
in SQL, NoSQL, Blob, Delta Lake, and other enterprise-scale data stores. Data Orchestration - enterprise-scale usage of technology such as Azure Data Factory, Apache Airflow, Logic Apps, dbt, SnapLogic, Spark or similar tools. Software Tooling - Git/GitHub, CI/CD, deployment tools like Octopus, Terraform infrastructure …
Luton, England, United Kingdom Hybrid / WFH Options
Ventula Consulting
science and analytics team in deploying pipelines. Coach and mentor the team to improve development standards. Key requirements: Strong hands-on experience with Databricks, Spark, SQL or Scala. Proven experience designing and building data solutions on a cloud-based, big data distributed system (AWS/Azure etc.) Hands-on … models and following best practices. The ability to develop pipelines using SageMaker, MLflow or similar frameworks. Strong experience with data programming frameworks such as Apache Spark. Understanding of common data science and machine learning models, libraries and frameworks. This role provides a competitive salary plus excellent benefits package. In …
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Version 1
REMOTE BASED WITH VERY OCCASIONAL TRAVEL TO CLIENT SITES AND OFFICE. Would you like the opportunity to expand your skillset across Java, Python, Spark, Hadoop, Trino & Airflow across the Banking & Financial Services industries? How about if you worked with an Innovation Partner of the Year Winner (2023 Oracle … date with the latest trends and best practices, and share knowledge with the team. Qualifications You will have expertise within the following: Java, Python, Spark, Hadoop (Essential); Trino, Airflow (Desirable). Architecture and capabilities. Designing and implementing complex solutions with a focus on scalability and security. Excellent communication and collaboration …
analytics. The client would also like to see experience of managing and leading a team of Data Scientists. You should have experience of Scala/Spark and Hadoop. Initially this is a 3-month contract assignment in Canary Wharf, with the likelihood that it will go on beyond that point. Location …
science, e.g., SQL, R and Python, alongside the ability to use tools and packages such as Alteryx, Jupyter Notebook, R Markdown, TensorFlow, Keras, PyTorch, Apache Spark etc. Practical proficiency in producing reproducible code and pipelines, including documentation, governance and assurance frameworks, automation and code review using tools such as …
Newcastle Upon Tyne, United Kingdom Hybrid / WFH Options
NHS Counter Fraud Authority
science, e.g., SQL, R and Python, alongside the ability to use tools and packages such as Alteryx, Jupyter Notebook, R Markdown, TensorFlow, Keras, PyTorch, Apache Spark etc. Practical proficiency in producing reproducible code and pipelines, including documentation, governance and assurance frameworks, automation and code review using tools such as …
working with AWS technologies such as Lambda, ECS Fargate, API Gateway, RDS, DynamoDB, EMR; building customer-facing applications and APIs; building data pipelines using Spark + Scala that process TB of data per day; working with customers to understand the business context of new features; participating in design reviews …
with JavaScript or Python Experience deploying software into the cloud and on-premise. Developing software products. Experience with EKS, Kubernetes, OpenSearch/Elasticsearch, MongoDB, Spark or NiFi. Experience with microservices architectures. Experience with AI/ML systems TO BE CONSIDERED… Please either apply online or email …