Good work ethic, self-starter, and results-oriented. Additional Preferred Qualifications: Domain knowledge in the financial industry and capital markets is a plus. Experience with big data technologies (e.g., Kafka, Apache Spark, NoSQL). Knowledge of BI tools such as Power BI and MicroStrategy. Exposure to Python and Scala. Exposure to the Salesforce ecosystem. About S&P Global Ratings: At S&P Global …
…and reliability across our platform. Working format: full-time, remote. Schedule: Monday to Friday (the working day is 8+1 hours). Responsibilities: Design, develop, and maintain data pipelines using Apache Airflow. Create and support data storage systems (data lakes/data warehouses) based on AWS (S3, Redshift, Glue, Athena, etc.). Integrate data from various sources, including mobile …
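By way of illustration, pipeline work of this kind often starts from a small Airflow DAG. The sketch below is a minimal example under assumed names: the extract/load callables, the daily schedule, and the S3-to-Redshift flow are placeholders, not details from the posting.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract step followed by a load
# step. All task names and targets here are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Placeholder: pull raw events from a source system into S3.
    print("extracting events for", context["ds"])


def load_to_redshift(**context):
    # Placeholder: COPY the day's partition from S3 into Redshift.
    print("loading partition", context["ds"])


with DAG(
    dag_id="events_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    load = PythonOperator(task_id="load", python_callable=load_to_redshift)
    extract >> load
```

The `extract >> load` line declares the dependency; the scheduler then runs the two tasks once per day in that order.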
…data into a data platform using Fivetran. Experience of developing BI dashboards using Power BI. Knowledge of security concepts relevant to Azure. Experience of workflow management tools such as Apache Airflow. Interested in the role? Complete the online application. We look forward to getting to know you.
…data ecosystem (e.g., Pandas, NumPy) and deep expertise in SQL for building robust data extraction, transformation, and analysis pipelines. Hands-on experience with big data processing frameworks such as Apache Spark, Databricks, or Snowflake, with a focus on scalability and performance optimization. Familiarity with graph databases (e.g., Neo4j, Memgraph) or search platforms (e.g., Elasticsearch, OpenSearch) to support complex data …
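For a flavour of the extraction-and-transformation work described above, here is a miniature Pandas/SQL sketch; the SQLite database and the payments table are assumptions made for the example.

```python
# Illustrative extract-transform-load loop with Pandas over a SQL source.
# Assumes an existing example.db with a payments(user_id, amount, created_at) table.
import sqlite3

import pandas as pd

# Extract: pull raw rows from a relational source with SQL.
conn = sqlite3.connect("example.db")
raw = pd.read_sql_query("SELECT user_id, amount, created_at FROM payments", conn)

# Transform: normalise types and aggregate per user.
raw["created_at"] = pd.to_datetime(raw["created_at"])
per_user = raw.groupby("user_id", as_index=False)["amount"].sum()

# Load: write the derived table back for downstream analysis.
per_user.to_sql("payments_per_user", conn, if_exists="replace", index=False)
conn.close()
```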
Proficiency in one or more programming languages, including Java, Python, Scala, or Golang. Experience with columnar, analytical cloud data warehouses (e.g., BigQuery, Snowflake, Redshift) and data processing frameworks like Apache Spark is essential. Experience with cloud platforms like AWS, Azure, or Google Cloud. Strong proficiency in designing, developing, and deploying microservices architecture, with a deep understanding of inter-service …
…MySQL. Exposure to Docker, Kubernetes, AWS, Helm, Terraform, Vault, Grafana, the ELK Stack, and New Relic. Relevant experience in the maintenance of data APIs and data lake architectures, including experience with Apache Iceberg, Trino/Presto, ClickHouse, Snowflake, and BigQuery. Master's degree in Computer Science or an Engineering-related field. Get to know us better: YouGov is a global online research company …
…technical direction to a growing team of developers globally. The platform is a greenfield build using standard modern technologies such as Java, Spring Boot, Kubernetes, Kafka, MongoDB, RabbitMQ, Solace, and Apache Ignite. The platform runs in hybrid mode, both on-premises and in AWS, utilising technologies such as EKS, S3, and FSx. The main purpose of this role is to …
…and BI. Advanced skills in SQL and Python, with hands-on experience in relational databases across cloud and on-prem environments. Familiarity with modern data technologies such as Apache Spark, Kafka, or Snowflake. A comprehensive understanding of the data engineering lifecycle, including Agile delivery, DevOps, Git, APIs, containers, microservices, and pipeline orchestration. Nice to have: DP …
South East London, London, United Kingdom (Hybrid/WFH options)
TEN10 SOLUTIONS LIMITED
…and data validation techniques. Experience using test automation frameworks for data pipelines and ETL workflows. Strong communication and stakeholder management skills. Nice-to-have: Hands-on experience with Databricks, Apache Spark, and Azure Deequ. Familiarity with big data tools and distributed data processing. Experience with data observability and data quality monitoring. Proficiency with CI/CD tools like …
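As a rough sketch of what test automation for a data pipeline can look like, the pytest example below validates the output of a hypothetical load_orders() step for schema, key uniqueness, and null-freedom; the function and its columns are assumptions, not part of the posting.

```python
# Data-validation test sketch for an ETL output, runnable with pytest.
import pandas as pd


def load_orders() -> pd.DataFrame:
    # Stand-in for the real pipeline output under test.
    return pd.DataFrame(
        {"order_id": [1, 2, 3], "amount": [10.0, 25.5, 7.2]}
    )


def test_orders_are_valid():
    df = load_orders()
    # Schema check: required columns are present.
    assert {"order_id", "amount"} <= set(df.columns)
    # Integrity checks: keys unique, amounts non-negative, no nulls anywhere.
    assert df["order_id"].is_unique
    assert (df["amount"] >= 0).all()
    assert not df.isnull().any().any()
```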
…methodologies. Collaborating with stakeholders to define data strategies, implement data governance policies, and ensure data security and compliance. About you: Strong technical proficiency in data engineering technologies, such as Apache Airflow, ClickHouse, ETL tools, and SQL databases. Deep understanding of data modeling, ETL processes, data integration, and data warehousing concepts. Proficiency in programming languages commonly used in data engineering …
…tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer at Amazon, you will be working in a large, extremely complex, and dynamic data environment. You should …
…primarily GCP. Experience with some or all of the services below would put you at the top of our list: Google Cloud Storage, Google Data Transfer Service, Google Dataflow (Apache Beam), Google Pub/Sub, Google Cloud Run, BigQuery or any RDBMS, Python, Debezium/Kafka, and dbt (Data Build Tool). Interview process: Interviewing is a two-way process, and we want you …
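For a sense of the Dataflow-style work in that stack, here is a minimal Apache Beam pipeline that runs locally on the DirectRunner; the input file and the per-user counting logic are illustrative assumptions.

```python
# Minimal Apache Beam sketch: count events per user from a CSV-ish text file.
# On Google Cloud the same pipeline could target the DataflowRunner.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("events.txt")
        | "ParseUsers" >> beam.Map(lambda line: (line.split(",")[0], 1))
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda user, n: f"{user},{n}")
        | "Write" >> beam.io.WriteToText("user_counts")
    )
```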
…Python, Snowflake • Strong SQL query-writing skills and an excellent understanding of SQL query performance optimization • Very good knowledge of Agile and SDLC processes • Strong experience with streaming architecture, preferably Apache Spark • Knowledge of cloud concepts (Azure), data warehouses, and services • Able to demonstrate very good analytical and problem-solving skills • Sound written and verbal communication skills and the ability to …
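Streaming experience of the kind mentioned here often means Spark Structured Streaming; below is a minimal sketch using a socket source and console sink, both of which are assumptions chosen purely for illustration.

```python
# Spark Structured Streaming sketch: count occurrences of each line arriving
# on a local socket and print the running totals to the console.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-example").getOrCreate()

lines = (
    spark.readStream.format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()
)
counts = lines.groupBy("value").count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```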
City of London, England, United Kingdom (Hybrid/WFH options)
iO Associates - UK/EU
Skills & Experience: Strong experience with Snowflake data warehousing; solid AWS cloud engineering experience; proficiency in Python for data engineering workflows; skill in building and maintaining Airflow DAGs; familiarity with Apache Iceberg for table formats and data lake optimisation. If this could be of interest, please get in touch with Alex Lang at iO Associates to apply and for more …
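To give a sense of the Snowflake side of this stack, a basic query via the Python connector might look like the sketch below; the account, credentials, and warehouse/table names are all placeholders, not details from the role.

```python
# Querying Snowflake from Python with snowflake-connector-python.
# Every connection parameter here is a placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
cur.execute("SELECT order_id, amount FROM orders LIMIT 10")
for order_id, amount in cur:
    print(order_id, amount)
cur.close()
conn.close()
```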
…Computer Science, Engineering, or a related field, or equivalent industry experience. Preferred Qualifications: Experience or interest in mentoring junior engineers. Familiarity with data-centric workflows and pipeline orchestration (e.g., Apache Airflow). Proficiency in data validation, anomaly detection, or debugging using tools like Pandas, Polars, or data.table/R. Experience working with AWS or other cloud platforms. Knowledge of …
…the development of and adherence to data governance standards. Data-Driven Culture Champion: Advocate for the strategic use of data across the organization. Skills-wise, you'll definitely need: Expertise in Apache Spark; advanced proficiency in Python and PySpark; extensive experience with Databricks; advanced SQL knowledge; proven leadership abilities in data engineering; strong experience in building and managing CI/CD …
Strong experience in Infrastructure as Code (IaC) and deploying infrastructure across environments. Managing cloud infrastructure with a DevOps approach. Handling and transforming various data types (JSON, CSV, etc.) using Apache Spark, Databricks, or Hadoop. Understanding modern data system architectures (data warehouses, data lakes, data meshes) and their use cases. Creating data pipelines on cloud platforms with error handling and …
Data Storage & Databases: SQL & NoSQL databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra. Big data ecosystems: Hadoop, Spark, Hive, and HBase. Data Integration & ETL: Data pipelining tools: Apache NiFi, Apache Kafka, and Apache Flink. ETL tools: AWS Glue, Azure Data Factory, Talend, and Apache Airflow. AI & Machine Learning: Frameworks: TensorFlow, PyTorch, scikit-learn, and Keras …
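As a concrete illustration of the data pipelining tools above, the kafka-python sketch below produces and consumes JSON messages; the broker address and the "events" topic are assumptions made for the example.

```python
# kafka-python sketch: send one JSON event, then read it back.
# Assumes a broker at localhost:9092 and a topic named "events".
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("events", {"user_id": 42, "action": "login"})
producer.flush()

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # e.g. {'user_id': 42, 'action': 'login'}
    break
```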
…and implementation experience using Python or Java (Required). Bachelor's degree or equivalent in Computer Science, Mathematics, or a finance-related field (Required). Knowledge of workflow management frameworks such as Apache Airflow (Preferred). Knowledge of cloud computing infrastructure, such as AWS (Preferred). Knowledge of BI visualisation tools such as Looker or Power BI (Preferred). Are you the right candidate? …
Substantial experience using tools for statistical modelling of large data sets. Some familiarity with data workflow management tools such as Airflow, as well as big data technologies such as Apache Spark or other caching and analytics technologies. Expertise in model training, statistics, model evaluation, deployment, and optimisation, including RAG-based architectures.
…Python and SQL (including OO and functional programming). Knowledge of and experience in cloud computing, ideally AWS. Experience working with a range of database technologies, including PostgreSQL. Experience using Apache Spark for processing large datasets. Experience with frameworks and technologies used in component orchestration, such as Airflow. Experience applying software development best practices such as version control, unit testing …
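As an indicative sketch of Spark-based processing of large datasets, the PySpark example below aggregates a partitioned Parquet dataset; the S3 paths and the event schema are assumptions for illustration.

```python
# PySpark batch sketch: read partitioned Parquet, aggregate daily event
# counts per event type, and write the result back out.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")
daily_counts = (
    events
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("day", "event_type")
    .agg(F.count("*").alias("n_events"))
)
daily_counts.write.mode("overwrite").parquet("s3://example-bucket/daily_counts/")
spark.stop()
```

The groupBy/agg pattern scales out across the cluster, which is the usual reason to reach for Spark rather than a single-machine tool at this data volume.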
…with new methodologies to enhance the user experience. Key skills: Senior Data Scientist experience; commercial experience in Generative AI and recommender systems; strong Python and SQL experience; Spark/Apache Airflow; LLM experience; MLOps experience; AWS. Additional information: This role offers a strong salary of up to £95,000 (depending on experience/skill) with hybrid working (2 days …