Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages and tools including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement Machine Learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML Algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with ML/OPS tools such more »
Leicestershire, England, United Kingdom Hybrid / WFH Options
Harnham
continual improvement of their performance. REQUIRED SKILLS AND EXPERIENCE Proficiency in Python, including its associated data and machine learning packages such as numpy, pandas, pyspark, matplotlib, scikit-learn, Keras, TensorFlow, and more. Experience working with object-oriented programming languages. Experience in Forecasting & A/B testing. Leadership. Expertise in utilizing SQL more »
systems and offer improvements that will help reduce technical/code/engineering debt. Key Skills: Extensive experience with Machine Learning and Spark/PySpark; recommendation systems, pattern recognition, data mining, artificial intelligence; modern parallel computing (distributed clusters, multicore servers, GPUs); experience with developing machine learning models at more »
United Kingdom Information Technology (IT) Group Functions Job Reference # 294845BR City London Job Type Full Time Your role Do you have a proven track record of building big data solutions? Are you confident at iteratively refining user requirements and removing more »
London, England, United Kingdom Hybrid / WFH Options
Harnham
continual improvement of their performance. REQUIRED SKILLS AND EXPERIENCE Proficiency in Python, including its associated data and machine learning packages such as numpy, pandas, pyspark, matplotlib, scikit-learn, Keras, TensorFlow, and more. Experience working with object-oriented programming languages. Experience in Forecasting and propensity. Expertise in utilizing SQL for more »
GitHub for version control, you will champion DevOps practices to ensure seamless collaboration and automation across the data engineering lifecycle. Your proficiency in SQL, PySpark, and Python will be helpful in transforming raw data into valuable insights, while your familiarity with Kafka will enable real-time data processing capabilities. … responsibilities: Lead the design, development, and maintenance of Azure-based data pipelines and analytical solutions using Databricks, Synapse, and other relevant services. Leverage SQL, PySpark, and Python to perform data transformations, aggregations, and analysis on large datasets. Architect data storage solutions using Azure SQL Database, Azure Data Lake Storage more »
Birmingham, England, United Kingdom Hybrid / WFH Options
Lorien
experience of this is a strong preference. However, other Cloud platforms like AWS/GCP are acceptable. Coding - Experience using Python with data (Pandas, PySpark) would be an advantage. Other languages such as C# would be beneficial but not essential. What’s next? If you believe you have the more »
Azure Cloud platform Knowledge on orchestrating workloads on cloud Ability to set and lead the technical vision while balancing business drivers Strong experience with PySpark, Python programming Proficiency with APIs, containerization and orchestration is a plus Qualifications: Bachelor's and/or master’s degree About you: You are more »
can offer you exposure to the latest technologies. We are looking for a senior Data Engineer who has solid Python skills as well as PySpark, Databricks and SQL, plus Data Modeling and Azure Data Factory. Azure DevOps would be a distinct advantage. Strong communication and business more »
Cycle · Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security Preferred qualifications, capabilities, and skills: · Skilled with Python or PySpark · Exposure to cloud technologies (Airflow, Astronomer, Kubernetes, AWS, Spark, Kafka) · Experience with Big Data solutions or Relational DBs · Experience in Financial Service Industry is more »
modern NLP methods required. Specifically: Transformer models (e.g., BERT), LLMs, RAG & Fine-Tuning, OpenAI Stack, LangChain etc. Experience with Big Data technologies a plus — PySpark, H2O.ai, Cloud AI platforms, Kubernetes. Must be able to translate business requirements into analytical problems. Must have proven ability to merge and transform disparate more »
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Anson McCade
Senior Data Engineer • Location: Belfast based – Hybrid - flexible working • Salary: £50,000 - £60,000 • Package: 10% bonus + 11% pension Overview One of the UK’s leading digital solution providers is searching for a Senior Data Engineer to join more »
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Anson McCade
Data Engineer • Location: Belfast based – Hybrid - flexible working • Salary: £42,500 - £50,000 • Package: 10% bonus + 11% pension Overview One of the UK’s leading digital solution providers is searching for a Data Engineer to join their practice more »
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment Skilled in at least one of Python, PySpark, SQL or similar Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive more »
Birmingham, England, United Kingdom Hybrid / WFH Options
Lorien
of this is a strong preference. However other Cloud platforms like AWS/GCP are acceptable. • Coding Languages - Experience using Python with data (Pandas, PySpark) would be an advantage. Other languages such as C# would be beneficial but not essential. Their lovely offices are based in the West Midlands more »
I have placed quite a few candidates with this organisation now, all have said glowing reviews about it, they're the leader in legal representation comparison and have rather interesting, unique data to work with. They are using modern Azure more »
months to begin with & it's extendable Location: Leeds, UK (min 3 days onsite) Context: Legacy ETL code (for example, DataStage) is being refactored into PySpark using Prophecy low-code/no-code tooling and available converters. Converted code is causing failures/performance issues. Skills: Spark Architecture – component understanding around Spark … Data Integration (PySpark, scripting, variable setting etc.), Spark SQL, Spark Explain plans. Spark SME – Be able to analyse Spark code failures through Spark Plans and make correcting recommendations. Spark SME – Be able to review PySpark and Spark SQL jobs and make performance improvement recommendations. Spark SME – Be able … are Cluster level failures. Cloudera (CDP) – Understanding of how Cloudera Spark is set up and how the runtime libraries are used by PySpark code. Prophecy – High-level understanding of low-code/no-code Prophecy setup and its use to generate PySpark code. more »
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
that are comfortable with terms like; Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLM), NLP, AI, Machine Learning, MLOps, Python, PySpark, and Azure. Flexible Working: The role is 60% work from home Sponsorship: Sadly sponsorship isn't available for these roles. About You: Bachelor's Degree more »
in Azure Cloud technologies e.g., MLOps, MLflow, Azure Data Factory, Azure Functions, Databricks, Event Hubs, Microservices/API, Python/PySpark/R, or SQL 2+ years of experience designing, developing, and implementing Big Data platforms using Azure Cloud architecture with structured and unstructured data more »