…flows to Databricks for improved traceability
- Implement Unity Catalog for automated data lineage
- Deliver backlog items through Agile sprint planning
Skills & Experience:
- Strong hands-on experience with Databricks, Fabric, Apache Spark, Delta Lake
- Proficient in Python, SQL, and PySpark
- Familiar with Azure Data Factory, Event Hub, Unity Catalog
- Solid understanding of data governance and enterprise architecture
- Effective communicator with …
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom
IO Associates
…time and batch inference.
- Monitor and troubleshoot deployed models to ensure reliability and performance
- Stay updated with advancements in machine learning frameworks and distributed computing technologies
Requirements:
- Proficiency in Apache Spark and Spark MLlib for machine learning tasks
- Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering)
- Experience with distributed systems like Hadoop for data storage and …
…team, you'll deliver Linux infrastructure solutions and support for a diverse range of clients. Expect to work with:
- Linux distributions: Debian, Ubuntu, Red Hat Enterprise Linux
- Web stacks: Apache, Nginx, MySQL, PostgreSQL, PHP, Python
- Networking: static/dynamic routing, DNS, VPNs, and firewalls
- Containers & automation: Docker, Kubernetes, and CI/CD pipelines
- Cloud platforms: AWS, Azure, and Google …
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
KO2 Embedded Recruitment Solutions LTD
…audiences. Self-motivated and able to work independently.
Preferred Qualifications:
- Background in investment banking or financial services
- Hands-on experience with Hive, Impala, and the Spark ecosystem (e.g., HDFS, Apache Spark, Spark-SQL, UDFs, Sqoop)
- Proven experience building and optimizing big data pipelines, architectures, and data sets …
Central London, London, United Kingdom Hybrid / WFH Options
Singular Recruitment
…applications and high proficiency in SQL for complex querying and performance tuning.
- ETL/ELT Pipelines: proven experience designing, building, and maintaining production-grade data pipelines using Google Cloud Dataflow (Apache Beam) or similar technologies
- GCP Stack: hands-on expertise with BigQuery, Cloud Storage, Pub/Sub, and orchestrating workflows with Composer or Vertex Pipelines
- Data Architecture & Modelling: ability to …
Belfast, County Antrim, Northern Ireland, United Kingdom
Hays
…in Technical Data Analysis.
- Proficiency in SQL, Python, and Spark
- Experience within an investment banking or financial services environment
- Exposure to Hive, Impala, and Spark ecosystem technologies (e.g., HDFS, Apache Spark, Spark-SQL, UDF, Sqoop)
- Experience building and optimizing Big Data pipelines, architectures, and data sets
- Familiarity with Hadoop and Big Data ecosystems
- Strong knowledge of Data Warehouse …
Sheffield, Yorkshire, United Kingdom Hybrid / WFH Options
Reach Studios Limited
…Azure etc.)
What You'll Need
Must-haves:
- Comprehensive experience in a DevOps or SRE role, ideally in a multi-project environment
- Deep experience with web stacks: Nginx/Apache, PHP-FPM, MySQL, Redis, Varnish, Elasticsearch
- Proven expertise in managing and optimising Cloudflare across DNS, security, performance, and access
- Experience with Magento 2 infrastructure and deployment
- CI/CD …
…time and batch inference
- Monitor and troubleshoot deployed models to ensure reliability and performance
- Stay updated with advancements in machine learning frameworks and distributed computing technologies
Experience:
- Proficiency in Apache Spark and Spark MLlib for machine learning tasks
- Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering)
- Experience with distributed systems like Hadoop for data storage and processing …
…and infrastructure as code (Terraform) for rapid deployment.
Experience that will give you an edge:
- Experience using Python on Google Cloud Platform for Big Data projects, including BigQuery, Dataflow (Apache Beam), Cloud Run, Cloud Functions, Cloud Workflows, and Cloud Composer
- Strong SQL development skills
- Proven expertise in data modeling, ETL development, and data warehousing
- Knowledge of data management fundamentals …
…of large-scale distributed data processing.
- Experience with developing extract-transform-load (ETL) pipelines
- Experience with distributed messaging systems like Kafka and RabbitMQ
- Experience with distributed computing frameworks like Apache Spark and Flink
Bonus Points:
- Experience working with AWS or Google Cloud Platform (GCP)
- Experience in building a data warehouse and data lake
- Knowledge of advertising platforms
About …
…Keepalived, Cloud Load Balancing, etc.
- Scripting: Bash, Python
- Virtualisation and orchestration: Docker, Kubernetes
- Databases: cloud-managed MySQL
Desirable:
- OS administration: Linux, Debian-based flavours
- Cloud: Google Cloud
- Web servers: Apache
- Caching: Varnish
- Messaging queues: RabbitMQ
- CI/CD: Jenkins, HashiCorp toolkit
- Virtualisation and orchestration: VMware
- APIs: REST, JSON
We only invite applications in English.
Diversity & Inclusion at Sportserve: At …
…Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet …
…Terraform.
- Experience with observability stacks (Grafana, Prometheus, OpenTelemetry)
- Familiarity with Postgres
- Interest in data privacy, AdTech/MarTech or large-scale data processing
- Familiarity with Kafka, gRPC or Apache Spark
As well as working as part of an amazing, engaging and collaborative team, we offer our staff a wide range of benefits to motivate them to be the …
…knowledge and Unix skills.
- Highly proficient working with cloud environments (ideally Azure), distributed computing, and optimising workflows and pipelines
- Experience working with common data transformation and storage formats, e.g. Apache Parquet, Delta tables
- Strong experience working with containerisation (e.g. Docker) and deployment (e.g. Kubernetes)
- Experience with Spark, Databricks, data lakes
- Highly proficient in working with version control and …
…in a hybrid environment requiring clear and effective communication. Strong engineering fundamentals with a passion for simplicity and precision.
Ideal, But Not Required:
- Experience with database technologies (Postgres, DynamoDB, Apache Iceberg)
- Experience with serverless technologies (e.g. Lambda)
Required Experience:
- Prior industry experience with Python
- Prior industry experience with public cloud providers (preferably AWS)
Our Offer: Work with …
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
…source systems into our reporting solutions.
- Pipeline Development: develop and configure metadata-driven data pipelines using data orchestration tools such as Azure Data Factory and engineering tools like Apache Spark to ensure seamless data flow
- Monitoring and Failure Recovery: implement monitoring procedures to detect failures or unusual data profiles, and establish recovery processes to maintain data integrity
- Azure …