Data Storage & Databases: SQL and NoSQL databases such as PostgreSQL, MySQL, MongoDB, and Cassandra; big data ecosystems such as Hadoop, Spark, Hive, and HBase. Data Integration & ETL: data pipelining tools such as Apache NiFi, Apache Kafka, and Apache Flink; ETL tools such as AWS Glue, Azure Data Factory, Talend, and Apache Airflow. AI & Machine Learning: frameworks such as TensorFlow, PyTorch, and Scikit-learn.
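As a small illustration of the last item in that list, here is a minimal Scikit-learn training sketch; the dataset and model choice are illustrative assumptions, not taken from the listing:

```python
# Minimal Scikit-learn workflow: train a classifier and measure accuracy.
# The iris dataset and logistic regression are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=200)  # raise max_iter so the solver converges
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```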
Languages such as Java, TypeScript, Python, and Go; web libraries and frameworks such as React and Angular; designing, building, and maintaining CI/CD pipelines; big data technologies such as NiFi, Hadoop, and Spark; cloud and containerization technologies such as AWS, OpenShift, Kubernetes, and Docker; DevOps methodologies, including infrastructure as code and GitOps; database technologies, e.g., relational databases, Elasticsearch, and MongoDB.
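For the Elasticsearch item in that stack, a minimal query sketch using the official Python client is shown below; the host, index name, and field are placeholder assumptions, and the keyword-argument form of search assumes the 8.x client:

```python
# Minimal Elasticsearch full-text query via the official Python client (8.x).
# Host URL, index name, and document field are hypothetical placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Match query against a hypothetical "logs" index.
resp = es.search(index="logs", query={"match": {"message": "error"}}, size=5)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```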
As system engineers, support both data and application integrations using bespoke tools written in Python/Java as well as tools such as Meltano, Airflow, MuleSoft/SnapLogic, Apache NiFi, and Kafka, ensuring a robust, well-modelled, and scalable data analytics infrastructure running primarily on MySQL- and Postgres-style databases. Requirements: advanced SQL development and a deep understanding of … integration (REST/SOAP); proficiency in at least one object-oriented, procedural, or functional language (e.g., Java, PHP, Python); familiarity with EAI tools such as MuleSoft/SnapLogic or Apache NiFi; experience with infrastructure-as-code tools such as Terraform and Ansible; experience with version control (e.g., Git, SVN) and CI/CD workflows for deployment; experience with scraping.
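A bespoke REST-to-database integration step of the kind this listing describes might look like the sketch below, using requests and psycopg2; the API URL, table, and column names are hypothetical:

```python
# Sketch of a REST-to-Postgres integration step (assumed endpoint and schema).
import requests
import psycopg2

resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
orders = resp.json()  # assume a JSON list of {"id": ..., "total": ...}

conn = psycopg2.connect("dbname=analytics user=etl")
with conn, conn.cursor() as cur:
    for order in orders:
        # Upsert keeps the load idempotent across re-runs.
        cur.execute(
            """
            INSERT INTO orders (id, total) VALUES (%s, %s)
            ON CONFLICT (id) DO UPDATE SET total = EXCLUDED.total
            """,
            (order["id"], order["total"]),
        )
conn.close()
```

The upsert is a design choice worth noting: integration jobs are typically re-run after failures, so making each load idempotent avoids duplicate rows.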
Fortice (City of London, London, United Kingdom; hybrid/WFH options): integrations between the data warehouse and other systems. Create deployable data pipelines that are tested and robust, using a variety of technologies and techniques depending on what is available (NiFi, Spark). Build analytics tools that use the data pipeline to provide actionable insights into client requirements, operational efficiency, and other key business performance metrics. Complete onsite client visits.
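A minimal PySpark sketch of such a deployable, testable pipeline step follows; the input path, column names, and output location are assumptions, and the inline assertion stands in for a fuller test suite:

```python
# PySpark pipeline sketch: read raw data, derive an aggregate "insight",
# check a simple invariant, and write the result to the warehouse layer.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("warehouse-integration").getOrCreate()

raw = spark.read.option("header", True).csv("/data/raw/transactions.csv")

# Aggregate revenue per client as an example actionable metric.
insights = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .groupBy("client_id")
       .agg(F.sum("amount").alias("total_revenue"))
)

# Lightweight sanity check so the step fails fast on bad data.
assert insights.filter(F.col("total_revenue") < 0).count() == 0

insights.write.mode("overwrite").parquet("/data/marts/client_revenue")
```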
DV/SC-cleared (Developed Vetting/Security Clearance) Lead Big Data Engineer/Developer roles, working with NiFi, Cloudera, Hadoop, Kafka, and Elasticsearch.