experience with Python (2+ years). Experience working with REST microservices. Strong SQL. Experience working with very large data sets. Knowledge of big data tools (Spark, Kafka, etc.). Experience working in finance (preferred). Strong formal education, ideally in Computer Science. If this sounds of interest, then please do not hesitate to apply.
you'll spearhead backend and data engineering and mentor team members. Tech stack: Python, Flask, Redis, Postgres, React, Plotly, Docker, SQL, Athena & EMR Spark, ECS and Temporal. This is a 60/40 split between tech and leadership. Your background: 8+ years' coding experience, 4+ years' Python experience. … being the preference, not essential. Start-up experience is preferred but not essential. The extras: Experience working on AI-based products. Distributed computing experience (Spark, MPI, etc.). Experience orchestrating workflows, particularly within distributed system environments. Knowledge of MLOps principles and practices, especially in implementing them within production settings. …
Proficiency in working with large datasets and databases (e.g., SQL, NoSQL). Hands-on experience with data processing frameworks/libraries (e.g., Pandas, NumPy, Spark). Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Preferred Qualifications: Experience with distributed computing platforms (e.g., Hadoop, Apache …
Terraform/Docker/Kubernetes. Write software using either Java, Scala or Python. The following are nice to have, but not required: Apache Spark jobs and pipelines. Experience with any functional programming language. Database design concepts. Writing and analysing SQL queries. VIOOH: Our recruitment team …
skills include: Experience deploying, securing and supporting cloud infrastructure platforms. Understanding of security frameworks/standards. Understanding of data streaming and messaging frameworks (Kafka, Spark, etc.) and modern database technologies (CockroachDB, etc.). Understanding of distributed tracing and monitoring (Zipkin, OpenTracing, Prometheus, ELK stack, Micrometer metrics, etc.). Experience with containers …
EC3V 1LT. Working Arrangements: Hybrid, 2-3 days p/w in office. Salary: £75,000-£85,000. Industry: Insurance. Tech Stack: Python, SQL, Spark, Azure. 👩🏻💻 Great opportunity for a talented Engineer (Python, SQL, Spark, Azure) to join a market-leading cyber insurance company. The Company 🚀 Tech-driven … businesses around the world to provide unique, competitive and secure insurance packages. The Role ✨ They are seeking a highly pragmatic Engineer to help build out their new data platform. You will work closely with Architects, Data Scientists and Software Engineers … to build out a greenfield platform that can be used to provide business-critical insights. The ideal candidate will be comfortable working with non-technical colleagues to build a platform that benefits the entire business. Desired Skills ⚙️ Python, SQL (SQL Server, Azure SQL), Databricks, Airflow, Kafka …
GCP. Essential Skills and Experience: * AWS (e.g., Athena, Redshift, Glue, EMR) * Strong AWS Data Solution Architect experience on data-related projects * Java, Scala, Python, Spark, SQL * Experience of developing enterprise-grade ETL/ELT data pipelines * Deep understanding of data manipulation/wrangling techniques * Demonstrable knowledge of applying Data … Cloud Datastore. * BigQuery and Data Studio/Looker * Snowflake Data Warehouse/Platform * Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming * Experience of working with CI/CD technologies: Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible, etc. * Experience and knowledge of application containerisation: Docker, Kubernetes …
London, England, United Kingdom Hybrid / WFH Options
Ripple Labs Inc
powering machine-learning models. Have a strong background in developing distributed systems, with experience in scalable data pipelines. Familiar with big data technologies like Spark or Flink, and comfortable engineering data pipelines with big data technologies on financial datasets. Experience with RESTful APIs and server-side API integration …
Essential Skills: Proven experience as a Data Engineer. Well versed in the following: cloud-based data storage solutions, data lakes, customer data platforms (Python, Spark, SQL, cloud data environments such as AWS, GCP, Azure). Good understanding of data modelling methods, and of data partitioning and compaction methods in a Data Lake …
engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to Have: Experience working with projects in Apache Spark, Databricks or similar. Expert cloud platform knowledge, e.g. Azure. What will be your key responsibilities? A technical expert and leader on the …
Migration Strategy. FinOps. Cloud Governance. Experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). Experience of DevOps CI …
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra • MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases • Programming languages such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud, and distributed processing technologies such as Hadoop. Benefits: • Base Salary …
Platforms. Must have 8 years' experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). Must have 3 years …
of data structures, algorithms, and statistical techniques for machine learning. Experience with cloud platforms (e.g., AWS, Azure, GCP) and distributed computing technologies (e.g., Hadoop, Spark) is a plus. Excellent communication skills, with the ability to effectively collaborate with interdisciplinary teams and communicate technical concepts to non-technical stakeholders. If …
o Must have 8 years' experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). o Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). o Must have …
classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau) …
o Must have 8+ years' experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). o Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). o Must …
level within a typical retail trading environment is key. Experience required: A background in leveraging hands-on skills using tools such as Python, R, Spark, Hadoop, SQL and cloud-based platforms such as GCP, Azure and AWS to manipulate and analyse various data sets in large volumes. Background in …
dynamic. Knowledge and understanding of OTC products (Interest Rate Swaps, Variance Swaps, CDS, etc.) and their bookings. Familiarity with C++ and Big Data tools such as Spark, Kafka, Elastic. Join us and be part of a team that values innovation, collaboration, and excellence. Take your career to new heights with a …
Platforms. Must have 8+ years' experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). Must have 3+ …
complex issues they are facing. Carrying out data-driven analysis, crafting solutions to resolve business problems. Artificial Intelligence and data science approaches (Python, R, Matlab, Spark, etc.). Database technologies such as Hadoop. Tools that expand the company's toolkit, advancing their ability to serve clients. Experience needed in a …
of databases. Snowflake is widely used, as are Docker and Kubernetes for containerisation. ETL and ELT tech are also used every day, primarily Airflow, Spark, Hive and a lot more. You'll need to come from a strong academic background with some commercial experience in a data-heavy software …
product experimentation, Causal AI, and advanced statistical techniques. Deep knowledge of data science tools (e.g., scikit-learn, TensorFlow, PyTorch) and big data technologies (e.g., Spark). Proficiency in Python for data manipulation, model building, and scripting. Strong communication skills to present findings to both technical and non-technical audiences.