Belfast, County Antrim, Northern Ireland, United Kingdom
Hays
timely delivery. Essential Criteria: 5+ years of experience in Technical Data Analysis. Proficiency in SQL, Python, and Spark. Experience within an investment banking or financial services environment. Exposure to Hive, Impala, and Spark ecosystem technologies (e.g. HDFS, Apache Spark, Spark-SQL, UDF, Sqoop). Experience building and optimizing Big Data pipelines, architectures, and data sets. Familiarity with Hadoop …
in Python and SQL. Demonstrable hands-on experience in AWS cloud data ingestion (both batch and streaming) and data transformation (Airflow, Glue, Lambda, Snowflake Data Loader, Fivetran, Spark, Hive, etc.). Apply agile thinking to your work, delivering in iterations that incrementally build on what went before. Excellent problem-solving and analytical skills. Good written and verbal skills … translate concepts into easily understood diagrams and visuals for both technical and non-technical people alike. AWS cloud products (Lambda functions, Redshift, S3, AmazonMQ, Kinesis, EMR, RDS (Postgres)). Apache Airflow for orchestration. dbt for data transformations. Machine Learning for product insights and recommendations. Experience with microservices using technologies like Docker for local development. Apply engineering best practices to …
4. (Mandatory) Demonstrated experience in large-scale data migration efforts.
5. (Mandatory) Demonstrated experience with database architecture, performance design methodologies, and system-tuning recommendations. Preference for familiarity with Glue, Hive, and Iceberg or similar.
6. (Mandatory) Demonstrated experience with Python, Bash, and Terraform.
7. (Mandatory) Demonstrated experience with DevSecOps solutions and tools.
8. (Mandatory) Demonstrated experience implementing CI/… with Data Quality and Data Governance concepts and experience.
11. (Desired) Demonstrated experience maintaining, supporting, and improving the ETL process through the implementation and standardization of data flows with Apache NiFi and other ETL tools.
12. (Desired) Demonstrated experience with Apache Spark …
multiple heterogeneous data sources.
• Good knowledge of warehousing and ETLs. Extensive knowledge of popular database providers such as SQL Server, PostgreSQL, Teradata, and others.
• Proficiency in technologies in the Apache Hadoop ecosystem, especially Hive, Impala, and Ranger.
• Experience working with open file and table formats such as Parquet, Avro, ORC, Iceberg, and Delta Lake.
• Extensive knowledge of automation and …
Columbia, South Carolina, United States Hybrid / WFH Options
Systemtec Inc
technologies and cloud-based technologies: AWS services, State Machines, CDK, Glue, TypeScript, CloudWatch, Lambda, CloudFormation, S3, Glacier Archival Storage, DataSync, Lake Formation, AppFlow, RDS PostgreSQL, Aurora, Athena, Amazon MSK, Apache Iceberg, Spark, Python. ONSITE: partially onsite 3 days per week (Tue, Wed, Thu) and as needed. Standard work hours: 8:30 AM - 5:00 PM. Required Qualifications of the … using Databricks, AI and Machine Learning, Amazon Bedrock, AWS SageMaker, Unified Studio, R Studio/Posit Workbench, R Shiny/Posit Connect, Posit Package Manager, AWS Data Firehose, Kafka, Hive, Hue, Oozie, Sqoop, Git/Git Actions, IntelliJ, Scala. Responsibilities of the Data Engineer (AWS): act as an internal consultant, advocate, mentor, and change agent providing expertise and technical …
for a long-term programme with a Public Sector customer of ours. Skills/Experience: hands-on experience implementing and maintaining metadata repositories and data catalogues using tools like Apache Atlas, Hive Metastore, AWS Glue, and AWS DataZone. Experience designing pipelines, data lakes, and warehouses. Experience designing physical, logical, and conceptual data models to support scalable and maintainable systems. …
following would be required: Data modelling: experience delivering physical, logical, or conceptual data models. Data design: experience developing data warehouses, lakehouses, or data lakes. Use of tools such as Apache Atlas, Hive Metastore, AWS Glue/DataZone. Data standards: experience driving data standards. If you are interested in working for a government client on long-term data projects …
We are seeking a specialist Kotlin Developer with experience working on Big Data projects in a high-performance environment. We're working with banks and other major financial institutions on projects where microseconds count. Essential functions: You will build and …
Modelling: physical, logical, conceptual models; data flow diagrams; ontologies; UML/Visio/Sparx. Data Standards: technical specs, code assurance, championing interoperability. Metadata Management: data catalogues, repositories; tools like Apache Atlas, Hive Metastore, AWS Glue/DataZone. Data Design: data lakes, warehouses, lakehouses, pipelines, meshes, marketplaces. If you're passionate about shaping data strategy and architecture in a …