experience in data engineering. Experienced in building ETL data pipelines. Relational database experience with PostgreSQL. Understanding of the technologies within our stack: AWS, Apache Beam, Kafka. Experience with object-oriented programming. A desire to work in the commodities/trading sector. Permanent/full-time employment. Hybrid …
London, Liverpool, Merseyside, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions
Glue). Hands-on experience with Databricks for data processing and analytics. Proficient in Python programming for data manipulation and automation. Solid understanding of Apache Spark for big data processing. Strong SQL skills for data querying, transformation, and analysis. Excellent problem-solving abilities and attention to detail. Ability to …
Linux environments. Knowledge of data modeling, database design, and query optimization techniques. Experience with real-time data processing, streaming, and analytics technologies (e.g., Kafka, Apache Flink). Familiarity with financial markets, trading systems, and quantitative analysis is a plus. Excellent problem-solving, analytical, and communication skills, with the ability …
in Computer Science, Software Engineering, or a related field. Proven experience as a Senior Software Developer, with a strong background in LAMP stack applications (Linux, Apache, MySQL, PHP). Proficiency in front-end technologies such as HTML, CSS, JavaScript, and modern frameworks like React or Angular. Strong experience with database design and …
Python, bash scripting, React, Go. Experience with deploying, configuring, and managing cloud architecture and technologies in AWS environments. Experience with web application services such as NGINX, Apache, JBoss. Knowledge of OpenShift containerisation, RHEL 6/7/8, Docker and Kubernetes. Experience with monitoring systems, e.g. ELK, Nagios, New Relic, DataDog, Splunk. Working knowledge of …
working in a Product organisation, ideally in Fintech • Practical hands-on knowledge of the Java technology stack: J2EE/Spring etc. • Experience of working with Apache NiFi • Experienced in working with AWS, Docker, Jenkins etc., rolling out AWS environments and environment strategy • Sound database experience, preferably Oracle • Experience in …
data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Our Commitment to Diversity and Inclusion: At Databricks, we are committed …
engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to Have: Experience working with projects in Apache Spark, Databricks or similar. Expert cloud platform knowledge, e.g. Azure. What will be your key responsibilities? A technical expert and leader on the Petcare …
as Linux, Bash and PowerShell, with a good grounding in the Linux OS and IP networking. Application and web server knowledge with NGINX and Apache, and virtualisation knowledge with Docker and K8s. The majority of the applications are written in Java, with Terraform for cloud infrastructure provisioning and an APM tool …
master data and metadata management. Experience with Azure SQL Database, Azure Data Factory, Azure Storage, and Azure IaaS/PaaS-related database implementations. Experience with Apache Spark and the new Fabric framework would be a plus. …
in Computer Science, Engineering (or another related STEM subject). 5+ years' experience in data engineering; 2+ years in a leadership role. Experience working with Apache Spark, Azure Data Factory and other data pipeline tools. Strong programming skills. Impeccable communication skills. Precise attention to detail. Pioneering attitude. If you are …
one cloud platform (preferably GCP). BSc/MSc in computer science, maths, physics or another STEM subject. Basic knowledge of statistics and machine learning. Experience with Spark, Apache services, ETL tools, data visualization and dashboards. Experience with streamed data processing, parallel compute, and/or event-based architectures. Experience with web-scraping tools and …
testing, and maintenance of data pipelines and data storage systems on Google Cloud Platform (GCP). You will be working with technologies such as Apache Airflow, BigQuery, Python, and SQL to transform and load large data sets, ensuring high data quality and accessibility for business intelligence and analytics purposes.
and IAM. Experience with containerization and orchestration tools, particularly Kubernetes. Proficiency in infrastructure-as-code tools such as Terraform, Ansible, or CloudFormation. Experience with Apache Airflow, AWS Backup and S3 versioning. Solid understanding of CI/CD concepts and experience implementing CI/CD pipelines using tools like Jenkins and GitLab …
City of London, London, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks. Good knowledge of Azure DevOps Pipelines. Strong experience in the Apache Spark framework. Previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS). Knowledge of …
Azure data platform technologies, ideally including Data Lake Gen 2, Synapse, Analysis Services, and Power BI. Proven skills with Azure Data Factory and Databricks/Apache Spark, along with extensive SQL knowledge (Microsoft SQL Server 2005+) and SSIS experience. Proficiency in Microsoft DevOps, metadata management, data quality, and Data …
Agile software development and system architecture within the Telco OSS domain, with preferred experience in Network GIS (Hexagon, IQ Geo) and workflow tooling (Appian, Apache Airflow). Strong understanding of platform and product dynamics, including Platform Engineering and its relevance to OSS. Extensive background in DevOps practices, encompassing test …
Recent and proven experience of using Red Hat Linux (or other Unix flavours), including scripting, in a commercial environment. Experience supporting applications (Java, .NET, Apache, IIS). Desirable: Knowledge of Microsoft Windows Server. Experience of working within the financial industry. Experience of working within an ITIL framework. Experience of working with …
teams to support the orchestration of our ETL pipelines using Airflow, and manage our tech stack including Python, Next.js, Airflow, PostgreSQL, MongoDB, Kafka and Apache Iceberg. Optimize infrastructure costs and develop strategies for efficient resource utilization. Provide critical support by monitoring services and quickly resolving production issues. Contribute to …