equity financing to mid-market and late-stage companies. Liquidity Group is backed by leading global financial institutions including Japan’s largest bank, MUFG, Spark Capital, and Apollo Asset Management. About the role We're on the lookout for accomplished credit professionals to assume the role of Director within more »
London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
ETL processes, and data warehousing solutions. Programming: Utilize Python, Java, Scala, or GoLang to build and optimize data pipelines. Distributed Processing: Work with Hadoop, Spark, and other platforms for large-scale data processing. Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark. Database more »
within an e-commerce or online business context. Commercially minded, thinking about ways to increase revenue & profitability. Proficiency in data manipulation tools (Python, Pandas, Spark, SQL) and data visualization tools (Apache Superset, Tableau, Power BI, ggplot2) and MS Excel. Grasp of pricing strategies, market dynamics, and consumer behaviour more »
Location: Milton Keynes Key responsibilities: experience as a Kafka Developer (ideally Kafka Streams). Hands-on experience in Big Data technologies (Hadoop, Hue, Hive, Impala, Spark, etc.), with particular strength in Kafka. Experience using key-value databases. Experience developing microservices using Spring, and developing cloud-based data engineering … for different applications, typically 24/7 applications with demanding performance requirements. Key skills/knowledge/experience: Solid, advanced experience with Hadoop (Hive) and Spark/Scala. Able to test changes and issues properly, replicating the code functionality into SQL with Code Repositories as more »
Greater London, England, United Kingdom Hybrid / WFH Options
Hunter Bond
My client are looking for a talented and motivated Big Data Architect (Azure, Databricks, Spark) to be based in their London office. You'll be responsible for providing technical leadership in architecting and designing end-to-end solutions for the organisation's data lake initiatives, as they provide increasing numbers … improvements in design, processes, and implementation to improve operational management, scalability, and extensibility. The following skills/experience are essential: Strong implementation experience using Spark and Databricks Strong Cloud experience (ideally Azure) Previously heavily involved in an implementation programme Data Warehouse Strong stakeholder management experience Excellent IT background, ideally more »
City of London, London, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks Good knowledge of Azure DevOps Pipelines Strong experience in the Apache Spark framework Previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS) Knowledge more »
Manchester, North West, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks Good knowledge of Azure DevOps Pipelines Strong experience in the Apache Spark framework Previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS) Knowledge more »
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Redshift, DynamoDB, Glue and SageMaker Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation) Data processing frameworks such as pandas, Spark and PySpark Machine learning concepts like model training, model registry, model deployment and monitoring Development and CI/CD tools (we use GitHub, CodePipeline more »
network programming. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate and the like. Hands-on with infrastructure-as-code tools and automation, such as Terraform, Ansible, or Helm. The role Tech Lead responsible more »
or Rust. Experience in building and enhancing compute, storage, and data platforms with exposure to open-source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate, etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source more »
engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to Have Experience working with projects in Apache Spark, Databricks or similar. Expert cloud platform knowledge, e.g. Azure What will be your key responsibilities? A technical expert and leader on the more »
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra • MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases • Programming languages such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud and distributed processing technologies such as Hadoop Benefits : • Base Salary more »
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
of OO programming, software design, i.e., SOLID principles, and testing practices. Knowledge and working experience of AGILE methodologies. Proficient with SQL. Familiarity with Databricks, Spark, geospatial data/modelling and insurance are a plus. Exposure to MLOps, model monitoring principles, CI/CD and associated tech, e.g., Docker, MLflow more »
classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau more »
level within a typical retail trading environment is key. Experience required: A background in leveraging hands-on skills using tools such as Python, R, Spark, Hadoop, SQL and cloud-based platforms such as GCP, Azure and AWS to manipulate and analyse various data sets in large volumes Background in more »
dynamic. Knowledge and understanding of OTC products (Interest Rate Swaps, Variance Swaps, CDS, etc.) bookings. Familiarity with C++ and Big Data tools such as Spark, Kafka, Elastic. Join us and be part of a team that values innovation, collaboration, and excellence. Take your career to new heights with a more »
complex issues they are facing. Carry out data-driven analysis and craft solutions to resolve business problems. Artificial Intelligence and data science approaches (Python, R, Matlab, Spark, etc.). Database technologies such as Hadoop. Tools that expand the company's toolkit, advancing their ability to serve clients. Experience needed in a more »
product experimentation, Causal AI, and advanced statistical techniques. Deep knowledge of data science tools (e.g., scikit-learn, TensorFlow, PyTorch) and big data technologies (e.g., Spark). Proficiency in Python for data manipulation, model building, and scripting. Strong communication skills to present findings to both technical and non-technical audiences. more »
Lead Data Engineer (Director) - Individual contributor - Azure, Data Factory, Databricks, Apache Spark - London Based I am hiring for a Lead Data Engineer for a crucial role within one of my Investment Bank clients in London. This role is at Director level as they require a very senior candidate … Leading data engineering practices Support current applications Introduce AI practices to the team/project Communicate key successes with stakeholders Key Skills: Azure Databricks Apache Spark Data Science, AI, ML Certifications or continued upskilling/contribution to blog posts within Data & AI beneficial but not essential. This is a … without sponsorship, if you are interested please apply or email me directly - aaron.dhammi@nicollcurtin.com Lead Data Engineer (Director) - Individual contributor - Azure, Data Factory, Databricks, Apache Spark - London Based more »
Data Scientists and Service Engineering teams Experience with design, development and operations that leverages deep knowledge in the use of services like Amazon Kinesis, Apache Kafka, Apache Spark, Amazon Sagemaker, Amazon EMR, NoSQL technologies and other 3rd parties Develop and define key business questions and to build … a related field Experience of Data platform implementation, including 3+ years of hands-on experience in implementation and performance tuning of Kinesis/Kafka/Spark/Storm implementations Experience with analytic solutions applied to the Marketing or Risk needs of enterprises Basic understanding of machine learning fundamentals Ability to … take Machine Learning models and implement them as part of a data pipeline IT platform implementation experience Experience with one or more relevant tools (Flink, Spark, Sqoop, Flume, Kafka, Amazon Kinesis) Experience developing software code in one or more programming languages (Java, JavaScript, Python, etc.) Current hands-on implementation experience more »
Spark Architect/SME Contract Role - 6 months to begin with & extendable Location: Leeds, UK (min 3 days onsite) Context: Legacy ETL code (for example DataStage) is being refactored into PySpark using Prophecy's low-code/no-code tooling and available converters. Converted code is causing failures/performance issues. … Skills: Spark Architecture – component understanding around Spark Data Integration (PySpark, scripting, variable setting, etc.), Spark SQL, Spark Explain plans. Spark SME – Able to analyse Spark code failures through Spark plans and make correcting recommendations. Spark SME – Able to review PySpark … and Spark SQL jobs and make performance-improvement recommendations. Spark SME – Able to understand DataFrames/Resilient Distributed Datasets (RDDs), understand any memory-related problems and make corrective recommendations. Monitoring – Able to monitor Spark jobs using wider tools such as Grafana to see more »
data engineering or a similar role. > Proficiency in programming languages such as Python, Java, or Scala. > Strong experience with data processing frameworks such as Apache Spark, Apache Flink, or Hadoop. > Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure. > Experience with data warehousing more »
development (ideally AWS) Knowledge and ideally hands-on experience with data streaming, event-based architectures and Kafka Strong communication and interpersonal skills Experience with Apache Spark or Apache Flink would be ideal, but not essential Please note, this role is unable to provide sponsorship. If this role more »
Greater Bristol Area, United Kingdom Hybrid / WFH Options
Anson McCade
and product development, encompassing experience in both stream and batch processing. Designing and deploying production data pipelines, utilizing languages such as Java, Python, Scala, Spark, and SQL. In addition, you should have proficiency or familiarity with: Scripting and data extraction via APIs, along with composing SQL queries. Integrating data more »