and harnessing diverse AWS services. Key Requirements: High level of experience in both SQL and Python programming (10+ years). Experience managing data engineering pipelines using Apache Airflow. Proficiency in CI/CD pipelines and automation. Git proficiency for version control (branching strategies and repo management). Competent in monitoring tools such as more »
Flask, Tornado or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands on more »
in Computer Science, Engineering (or another related STEM subject). 5+ years' experience in data engineering, 2+ years in a leadership role. Experience working with Apache Spark, Azure Data Factory and other data pipeline tools. Strong programming skills. Impeccable communication skills. Precise attention to detail. Pioneering attitude. If you are more »
London, England, United Kingdom Hybrid / WFH Options
Client Server
to production, providing subject matter expertise on the .Net stack and contributing to technical design discussions. You'll use a range of technology including Apache Flink with Java for large scale data processing and will be able to assess and recommend new and emerging technologies, using the best tool more »
Greater London, England, United Kingdom Hybrid / WFH Options
CommuniTech Recruitment Group
AWS ecosystems like Lambdas, Step Functions and ECS services. Experience of Dremio is a nice-to-have. Experience with data stack technologies such as Apache Iceberg & Spark. Exposure to Apache Airflow, Prefect, Dagster, DBT. Expertise in data analysis with exposure to data services (such as Glue, Lake Formation more »
London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
Work with Hadoop, Spark, and other platforms for large-scale data processing. Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark. Database Management: Handle SQL databases like Oracle, MySQL, or PostgreSQL. Data Governance: Ensure data quality, security, and compliance with best practices. Ideal Candidate more »
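The listing above names CDC, Kafka and Apache Spark for real-time streaming. As an illustration only, here is a minimal pure-Python sketch of the core operation such a pipeline performs: applying change-data-capture events as upserts and deletes. No Kafka broker is assumed; the `events` list and `apply_cdc_event` function are hypothetical stand-ins, not a real API.

```python
# Minimal sketch of CDC (change data capture) event handling, the core
# operation of a Kafka/Spark streaming pipeline. Pure Python stand-in:
# 'events' simulates messages consumed from a Kafka topic.

def apply_cdc_event(table: dict, event: dict) -> None:
    """Apply one insert/update/delete change event to an in-memory table."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        table[key] = event["row"]    # upsert semantics
    elif op == "delete":
        table.pop(key, None)         # tolerate deletes for missing keys

events = [  # simulated Kafka messages, oldest first
    {"op": "insert", "key": 1, "row": {"name": "alice", "balance": 100}},
    {"op": "update", "key": 1, "row": {"name": "alice", "balance": 250}},
    {"op": "insert", "key": 2, "row": {"name": "bob", "balance": 50}},
    {"op": "delete", "key": 2},
]

table: dict = {}
for event in events:
    apply_cdc_event(table, event)

print(table)  # only key 1 survives, with the updated row
```

A real deployment would consume from a Kafka topic and write to a database or lakehouse table, but the replay-events-in-order logic is the same.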
City of London, London, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks Good knowledge of Azure DevOps Pipelines Strong experience in Apache Spark framework Previous experience in designing and delivering data warehouse and business intelligence solutions using on-premises Microsoft stack (SSIS, SSRS, SSAS) Knowledge of more »
Product Manager (Product Owner Business Analysis Data Lake Data Mesh Datamesh SQL Architecture Big Data AWS Python Dremio Apache Iceberg Arrow DBT Tableau PowerBI Finance Trading Front Office Investment Banking Asset Manager Financial Services FX Fixed Income Equities Commodities Derivatives Hedge Fund Buy Side) required by my asset … fund, asset manager, investment management) Role: Product Manager (Product Owner Business Analysis Data Lake Data Mesh Datamesh SQL Architecture Big Data AWS Python Dremio Apache Iceberg Arrow DBT Tableau PowerBI Finance Trading Front Office Investment Banking Asset Manager Financial Services FX Fixed Income Equities Commodities Derivatives Hedge Fund more »
Greater London, England, United Kingdom Hybrid / WFH Options
Understanding Recruitment
use Java (for a very small amount of scripting work). Have public cloud experience with AWS or other cloud providers. Have an understanding of Apache products such as Kafka and Flink. Good knowledge of development using CI/CD. Bonus points if you have knowledge of: Web products, Financial markets more »
experience in data engineering. Experienced in building ETL data pipelines. Relational database experience w/ PostgreSQL. Understanding of tech within our stack: AWS/Apache Beam/Kafka. Experience with Object-Oriented Programming. A desire to work in the commodities/trading sector. Permanent/Full-Time Employment. Hybrid more »
testing, and maintenance of data pipelines and data storage systems on Google Cloud Platform (GCP). You will be working with technologies such as Apache Airflow, BigQuery, Python, and SQL to transform and load large data sets, ensuring high data quality and accessibility for business intelligence and analytics purposes. more »
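The GCP role above centres on using Python and SQL to transform and load data with quality checks. A minimal sketch of that transform-and-load step follows; sqlite3 stands in for BigQuery purely for illustration, and in a real deployment this function body would typically run inside an Airflow task. The function name and schema are hypothetical.

```python
# Sketch of a transform-and-load step with a basic data-quality filter.
# sqlite3 is a stand-in for BigQuery here; the pattern (clean, filter,
# bulk-insert, commit) is what an Airflow-orchestrated task would do.
import sqlite3

def transform_and_load(conn: sqlite3.Connection, raw_rows: list) -> int:
    """Clean raw (name, amount) rows and load them into a reporting table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    cleaned = [
        (name.strip().lower(), float(amount))
        for name, amount in raw_rows
        if amount is not None        # drop rows failing a basic quality check
    ]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

conn = sqlite3.connect(":memory:")
loaded = transform_and_load(conn, [(" Alice ", "100.5"), ("Bob", None), ("carol", 20)])
print(loaded)  # 2 rows pass the quality check
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)   # 120.5
```

Keeping the quality filter inside the load step (rather than downstream) is what makes "high data quality and accessibility" enforceable at the pipeline boundary.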
teams to support the orchestration of our ETL pipelines using Airflow and manage our tech stack including Python, Next.js, Airflow, PostgreSQL, MongoDB, Kafka and Apache Iceberg. Optimize infrastructure costs and develop strategies for efficient resource utilization. Provide critical support by monitoring services and quickly resolving production issues. Contribute to more »
City of London, England, United Kingdom Hybrid / WFH Options
Penguin Recruitment
market-leading software required to communicate and exchange technical information. Demonstrable academic qualifications appropriate to the role. Be a NABERS Assessor. Have experience with Apache HVAC (preferred). Have a genuine interest and enthusiasm in the built environment and progress toward Net Zero Carbon. Demonstrable familiarity with key industry agendas more »
Elasticsearch, BigQuery, PostgreSQL (FullCircl, Lead Data Engineer, 04.24), Kubernetes, Docker, Airflow. KEY RESPONSIBILITIES: · Designing and implementing scalable data pipelines using tools such as Apache Spark, Google PubSub etc. · Optimizing data storage and retrieval systems for maximum performance using both relational and NoSQL databases. · Continuously monitoring and improving the … Data Infrastructure projects, as well as designing and building data-intensive applications and services. · Experience with data processing and distributed computing frameworks such as Apache Spark. · Expert knowledge in one or more of the following languages: Python, Scala, Java, Kotlin. · Deep knowledge of data modelling, data access, and data more »
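The responsibilities above describe designing scalable data pipelines with frameworks such as Apache Spark. As a toy illustration of the filter/map/aggregate shape that Spark generalizes across a cluster, here is a pure-Python log-counting sketch; this is not Spark's API, and the sample records are invented.

```python
# Toy illustration of the filter -> map -> aggregate pipeline shape that
# distributed engines like Apache Spark parallelize. Pure Python stand-in.
from functools import reduce

records = ["ERROR disk full", "INFO ok", "ERROR timeout", "INFO ok"]

# Filter error lines, extract the error kind, then count by key.
errors = (line.split()[1] for line in records if line.startswith("ERROR"))
counts = reduce(lambda acc, kind: {**acc, kind: acc.get(kind, 0) + 1}, errors, {})
print(counts)  # {'disk': 1, 'timeout': 1}
```

In Spark the same three stages would be `filter`, `map`, and `reduceByKey` over a distributed dataset; the logic per record is identical.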
London, England, United Kingdom Hybrid / WFH Options
Pioneer Search
Senior Scala Developer - Apache Spark - Urgent Requirement. Contract Length: 6 Months. IR35 status: Inside. Location: London - Hybrid working. A Senior Scala Developer with experience in Apache Spark is needed for a British consultancy organisation. You will be an integral member of the team providing technical expertise to the more »
ETL/ELT tools. Experience with NoSQL-type environments, Data Lakes, Lake-Houses (Cassandra, MongoDB or Neptune). Experience with distributed storage and processing engines such as Apache Hadoop and Apache Spark. Experience with message brokering/stream processing services such as Apache Kafka, Confluent, Azure Stream Analytics. Experience in Test Driven more »
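The listing above asks for stream-processing experience with services such as Apache Kafka or Azure Stream Analytics. A core operation those engines provide is windowed aggregation; here is a minimal pure-Python sketch of a tumbling-window count over timestamped events. No broker is assumed and the function name is illustrative.

```python
# Sketch of tumbling-window aggregation, a basic stream-processing operation
# offered by engines like Kafka Streams or Azure Stream Analytics.
# Pure Python stand-in; events are (timestamp, payload) tuples.
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Count events per fixed, non-overlapping time window."""
    counts = defaultdict(int)
    for timestamp, _payload in events:
        window_start = (timestamp // window_size) * window_size
        counts[window_start] += 1
    return dict(counts)

events = [(1, "a"), (4, "b"), (5, "c"), (9, "d"), (12, "e")]
print(tumbling_window_counts(events, window_size=5))
# windows: [0,5) -> 2 events, [5,10) -> 2 events, [10,15) -> 1 event
```

Real engines add what this sketch omits: out-of-order event handling via watermarks, and incremental emission of window results as the stream is unbounded.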
the following: - Experience of working in an Agile product delivery framework - Experience with PHP 8+ and the Laravel framework - Experience with Linux, NGINX (or Apache), MySQL server. LEMP/LAMP stack. - Experience of writing unit tests with test frameworks (PHPUnit, Codeception, etc.) Although we have a dedicated QA Test more »