and quality rules What Makes a Great Candidate: Experience in a data specialist role and a passion for working with data and helping stakeholders. Highly proficient in SQL, Python, and Apache Spark, with demonstrable work experience using these tools in a production context. A minimum 2.1 degree in Computer Science or a related field, ideally from a Russell Group university. More ❯
London, England, United Kingdom Hybrid / WFH Options
FIND | Creating Futures
Engineering (open to professionals from various data eng. backgrounds — data pipelines, ML Eng, data warehousing, analytics engineering, big data, cloud, etc.). Technical Exposure: Experience with tools like SQL, Python, Apache Spark, Kafka, Cloud platforms (AWS/GCP/Azure), and modern data stack technologies. Formal or Informal Coaching Experience: Any previous coaching, mentoring, or training experience — formal or informal. More ❯
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
M-KOPA
engineering, for machine learning or general analytics use cases. Additionally, experience with Kubernetes or other platforms for containerized applications, as well as with orchestration systems such as Apache Airflow, is essential to succeed in this role. The ideal candidate for this role would need to have proficiency in programming languages (Python, C#, Java, etc.) as well More ❯
on real client use cases. Proficient in one of the deep learning stacks such as PyTorch or TensorFlow. Working knowledge of parallelisation and async paradigms in Python, Spark, Dask, or Ray. An awareness of and interest in economic, financial and general business concepts and terminology. Excellent written and verbal command of English. Strong problem-solving, analytical and quantitative skills. A More ❯
pipelines, and implementing scalable solutions that meet the evolving needs of the business. Utilise your strong background in data engineering, combined with your existing experience using SQL, Python and Apache Spark in production environments. The role requires strong problem-solving skills, attention to detail, and the ability to work independently while collaborating closely with internal and external stakeholders. More ❯
CD pipelines. Familiar with observability tools, logging frameworks, and performance monitoring. Background in serverless technologies (e.g., Lambda, Step Functions, API Gateway). Experience with data tools like EMR, Glue, or Apache Spark. Understanding of event-driven architecture (EventBridge, SNS, SQS). Knowledge of AWS database offerings including DynamoDB and RDS. Familiarity with multi-region deployments and failover strategies. AWS certifications (Solutions More ❯
are constantly looking for components to adopt in order to enhance our platform. What you'll do: Develop across our evolving technology stack - we're using Python, Java, Kubernetes, Apache Spark, Postgres, ArgoCD, Argo Workflows, Seldon, MLflow and more. We are migrating to the AWS cloud and adopting many of the services available in that environment. You will have the More ❯
users, and engineering colleagues across divisions to create end-to-end solutions. Learn from experts and mentor junior members. Leverage data-streaming technologies including Kafka CDC, Kafka topics, EMS, Apache Flink. Innovate and incubate new ideas. Work on a broad range of problems involving large data sets, real-time processing, messaging, workflow, and UI/UX. Drive the full More ❯
Jenkins, TeamCity. Scripting languages such as PowerShell, Bash. Observability/Monitoring: Prometheus, Grafana, Splunk. Containerisation tools such as Docker, K8s, OpenShift, EC, containers. Hosting technologies such as IIS, nginx, Apache, App Service, Lightsail. Analytical and creative approach to problem solving. We encourage you to apply, even if you don't meet all of the requirements. We value your growth More ❯
Required Skills: Proven experience managing Power BI deployments (including workspaces, datasets, and reports). Strong understanding of data pipeline deployment using tools like Azure Data Factory, AWS Glue, or Apache Airflow. Hands-on experience with CI/CD tools (Azure DevOps, GitHub Actions, Jenkins). Proficiency in scripting (PowerShell, Python, or Bash) for deployment automation. Experience with manual deployment More ❯
data: Kimball, Inmon, Data Vault 2.0. - Development of metadata-driven processes. - Knowledge of data integration tools such as dbt, Talend, etc. - Knowledge of orchestration tools such as Apache Airflow. - Knowledge of code management methodologies. Highly valued requirements: Familiarity with terms such as DataOps, Data Observability, Data Mesh, etc. Previous consulting experience within the world of More ❯
measure the quality. Experience working with cloud platforms like Azure, AWS, or GCP for machine learning workflows. Understanding of data engineering pipelines and distributed data processing (e.g., Databricks, Apache Spark). Strong analytical skills, with the ability to transform raw data into meaningful insights through AI techniques. Experience with LangChain/LlamaIndex, vector databases (e.g., FAISS), fine-tuning More ❯
science solutions in a commercial setting. MSc in Computer Science, Machine Learning, or a related field. Experience building data pipelines (real-time or batch) and ensuring data quality using a modern toolchain (e.g., Apache Spark, Kafka, Airflow, dbt). Strong foundational knowledge of machine learning and deep learning algorithms, including deep neural networks, supervised/unsupervised learning, predictive analysis, and forecasting. Expert-level More ❯
the development of and adherence to data governance standards. Data-Driven Culture Champion: Advocate for the strategic use of data across the organization. Skills-wise, you'll definitely need: Expertise in Apache Spark. Advanced proficiency in Python and PySpark. Extensive experience with Databricks. Advanced SQL knowledge. Proven leadership abilities in data engineering. Strong experience in building and managing CI/CD More ❯
Familiar with observability tools, logging frameworks, and performance monitoring. Desirable Skills: Background in serverless technologies (e.g. Lambda, Step Functions, API Gateway). Experience with data tools like EMR, Glue, or Apache Spark. Understanding of event-driven architecture (EventBridge, SNS, SQS). Knowledge of AWS database offerings including DynamoDB and RDS. Familiarity with multi-region deployments and failover strategies. AWS certifications (Solutions More ❯
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Noir
They’re Looking For: Experience in a data-focused role, with a strong passion for working with data and delivering value to stakeholders. Strong proficiency in SQL, Python, and Apache Spark, with hands-on experience using these technologies in a production environment. Experience with Databricks and Microsoft Azure is highly desirable. Financial Services experience is a plus but not More ❯
data pipelines, data integration, and ETL processes. Proficiency in production-grade coding, automated testing, and CI/CD deployment (e.g., GitHub Actions, Jenkins). Experience with distributed frameworks like Apache Spark, data modeling, database systems, and SQL optimization. Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Redshift, BigQuery. Additional desirable qualities: Knowledge of UK broadcast industry More ❯
London, England, United Kingdom Hybrid / WFH Options
Arreoblue
on knowledge of the following technologies: Databricks, Dedicated SQL Pools, Synapse Analytics, Data Factory. To set yourself up for success, you should have in-depth knowledge of Apache Spark, SQL, and Python, along with solid development practices. Additionally, you will be required to have in-depth knowledge of supporting Azure platforms such as Data Lake, Key Vault, DevOps More ❯
London, England, United Kingdom Hybrid / WFH Options
Arreoblue
or more of the following technologies: Databricks, Dedicated SQL Pools, Synapse Analytics, Data Factory. To set yourself up for success, you should have in-depth knowledge of Apache Spark, SQL, and Python, along with solid development practices. Additionally, you will be required to have in-depth knowledge of supporting Azure platforms such as Data Lake, Key Vault More ❯
Data Analytics, Snowflake, Talend, Databricks, Salesforce, HubSpot, SaaS, Data Lakes, APIs, AdTech, GDPR, CCPA, B2B, Sales Consultant, Account Manager, DataOps, CICD, DevOps, CI/CD, GenAI, RAG, AWS, Azure, Apache, Spark, Kafka – UK Wide - £80,000-£95,000 + OTE Our large Microsoft Partner client requires a Senior Data Analytics Sales Consultant to help spearhead their data sales strategy More ❯
bring to the table? Proven experience with SQL, including optimizing query performance. Hands-on experience working with large datasets and modern tooling (for example: Google BigQuery, AWS Redshift, Databricks, Apache Spark). Strong analytical skills, with the ability to explore data distributions, assess data quality and identify patterns or anomalies. Familiarity with data quality challenges, with the skillset to More ❯
future-proofing of the data pipelines. ETL and Automation Excellence: Lead the development of specialized ETL workflows, ensuring they are fully automated and optimized for performance using tools like Apache Airflow, Snowflake, and other cloud-based technologies. Drive improvements across all stages of the ETL cycle, including data extraction, transformation, and loading. Infrastructure & Pipeline Enhancement: Spearhead the upgrading of More ❯
London, England, United Kingdom Hybrid / WFH Options
Our Future Health
command line knowledge and Unix skills. Good understanding of cloud environments (ideally Azure), distributed computing and optimising workflows and pipelines. Understanding of common data transformation and storage formats, e.g. Apache Parquet, Delta tables. Understanding of containerisation (e.g. Docker) and deployment (e.g. Kubernetes). Working knowledge of Spark, Databricks, and Data Lakes. Follow best practices like code review, clean code and More ❯