wrangling, visualization, and reporting. Specialised in AWS cloud technologies for ETL, data warehouse, and data lake design. Hands-on experience with AWS services such as EMR, Glue, Redshift, Kinesis, Lambda and DynamoDB. Capable of processing large volumes of structured and unstructured data on AWS. Familiarity with AWS best practices in data
large-scale DW/BI systems for B2B SaaS companies · Experience with open-source tools like Apache Flink and AWS tools like S3, Redshift, EMR and RDS · Experience with AI/Machine Learning and Predictive Analytics · Experience in developing global products will be a big plus · Understanding of budgets
large engagements. This role requires candidates to go through SC Clearance, so you must be eligible. Experience of AWS tools (e.g. Athena, Redshift, Glue, EMR) Java, Scala, Python, Spark, SQL Experience of developing enterprise-grade ETL/ELT data pipelines. NoSQL databases: DynamoDB/Neo4j/Elastic, Google
data processing. Technologies We Use: Development languages/frameworks: Java/Scala, Apache Spark, Kafka, Vertica, JavaScript (React/Redux), MicroStrategy Amazon: EMR, Step Functions, SQS, Lambda and AWS cloud-native architectures DevOps Tools: Terraform or CloudFormation, New Relic, Jenkins, Grafana, PagerDuty, GitHub, GitHub Actions Database: MySQL … or Scala skills and UI skills Experience with designing and implementing high-volume data processing jobs is required. Working knowledge of Spark on EMR is preferred. Strong database development skills, including advanced SQL and relational and NoSQL database technologies. Experience with AWS technologies is required. Strong analytical and
to ingest, transform and load the datasets. Your Profile Key skills/knowledge/experience: Good understanding of AWS cloud data platform services: EC2, EMR, RDS, Redshift, Glue. Ability to work with object-oriented scripting languages: Python, PySpark. Knowledge of data pipelines and workflow management and their tools, such
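As an illustration only (the listing names PySpark and Glue, but this sketch uses plain Python so it runs anywhere), the ingest/transform/load stages the role describes might be chained like this; the field names and records are hypothetical:

```python
# Minimal ETL sketch in plain Python. Illustrative only: a real pipeline
# on this stack would read from S3 via PySpark on EMR or a Glue job.

def ingest(rows):
    """Ingest: yield raw records from a source (here, an in-memory list)."""
    yield from rows

def transform(records):
    """Transform: drop incomplete records and normalise the name field."""
    for rec in records:
        if rec.get("id") is None:
            continue  # reject records with no key
        yield {"id": rec["id"], "name": rec.get("name", "").strip().lower()}

def load(records):
    """Load: collect into a target store (here, a dict keyed by id)."""
    return {rec["id"]: rec for rec in records}

raw = [
    {"id": 1, "name": "  Alice "},
    {"id": None, "name": "bad row"},
    {"id": 2, "name": "Bob"},
]
warehouse = load(transform(ingest(raw)))
# warehouse: {1: {"id": 1, "name": "alice"}, 2: {"id": 2, "name": "bob"}}
```

Because each stage is a generator, records stream through one at a time, which is the same shape a distributed engine applies partition by partition.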
Chicago, Illinois, United States Hybrid / WFH Options
Request Technology - Robyn Honquest
AWS CLI and IAM etc. (required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink etc. (required) Experience working with various types of databases, such as relational and NoSQL
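The MapReduce model that the frameworks above implement can be sketched in a few lines of plain Python (purely illustrative; EMR and Hadoop run the same three phases, but distributed across a cluster):

```python
from itertools import groupby

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group pairs by key (a cluster does this over the network)."""
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield key, [v for _, v in group]

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {key: sum(values) for key, values in grouped}

counts = reduce_phase(shuffle(map_phase(["the quick fox", "the lazy dog"])))
# counts: {"dog": 1, "fox": 1, "lazy": 1, "quick": 1, "the": 2}
```

The sort-then-group shuffle here stands in for the network exchange between mapper and reducer nodes; the map and reduce functions themselves are the only parts an application developer typically writes.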
have an understanding of data modelling principles & best practices, and prior experience leading a data engineering team. Key Tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - DBT (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL
Birmingham, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
machine learning models in production environments. AWS cloud services, especially in building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes) Familiarity with MLOps/DevOps concepts and practices, including version control, CI/CD, and model monitoring. Proficiency in Python and relevant
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
machine learning models in production environments. AWS cloud services, especially in building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes) Familiarity with MLOps/DevOps concepts and practices, including version control, CI/CD, and model monitoring. Proficiency in Python and relevant
Apache Iceberg & Spark. Exposure to Apache Airflow, Prefect, Dagster, DBT. Expertise in data analysis with exposure to data services (such as Glue, Lake Formation, EMR, EventBridge, Athena, etc.) & metadata management tools (such as Amundsen, Atlas, DataHub, OpenDataDiscovery, Marquez, etc.). Experience with an RDBMS such as PostgreSQL would be a plus. Experience
Greater London, England, United Kingdom Hybrid / WFH Options
CommuniTech Recruitment Group
Apache Iceberg & Spark. Exposure to Apache Airflow, Prefect, Dagster, DBT. Expertise in data analysis with exposure to data services (such as Glue, Lake Formation, EMR, EventBridge, Athena, etc.) & metadata management tools (such as Amundsen, Atlas, DataHub, OpenDataDiscovery, Marquez, etc.). Experience with an RDBMS such as PostgreSQL would be a plus. Experience
AWS CLI and IAM etc. (required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink etc. (required) Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes
onto the cloud platforms, one of the key strategies for the division, in which you'll get exposure to technologies like AWS S3, Snowflake, EMR etc. These are great roles, working on some really big, interesting projects.
across a wide range of AWS services with the ability to demonstrate working on large engagements * Experience of AWS tools (e.g. Athena, Redshift, Glue, EMR) * Java, Scala, Python, Spark, SQL * Experience of developing enterprise-grade ETL/ELT data pipelines. * Deep understanding of data manipulation/wrangling techniques * Demonstrable … applying Data Engineering best practices (coding practices to DS, unit testing, version control, code review). * Big data ecosystems: Cloudera/Hortonworks, AWS EMR, GCP DataProc or GCP Cloud Data Fusion. * NoSQL databases: DynamoDB/Neo4j/Elastic, Google Cloud Datastore. * Snowflake Data Warehouse/Platform * Streaming
of database development experience and a background in an OO language. This role offers the chance to gain exposure to the following technologies: AWS EMR, Redshift, Kafka, Angular/Java (making changes to reflect changes in the data warehouse). By applying to this job you are sending us your
role, you'll spearhead backend and data engineering and mentor team members. Tech stack: Python, Flask, Redis, Postgres, React, Plotly, Docker, SQL, Athena & EMR Spark, ECS and Temporal. This is a 60/40 split between tech and leadership. Your background: 8+ years' coding experience, 4+ years Python
London - 3x a week Length: initial 6-month contract, inside IR35 Delivery of AWS-based data & analytics capabilities, built on foundational services such as EMR, Glue, MWAA, DynamoDB, Kinesis, Kafka and SageMaker AWS Cloud Data Platforms Architecture and AWS Cloud Security & Infrastructure Insurance or Regulated Financial Services Industries Solutioning
similar). Experience using modern build tools such as Maven, Jenkins, GitHub, etc. Experience with Amazon Web Services a strong plus - CloudFormation, EMR, S3, EC2, Athena etc. Experience with scheduling services such as Airflow, Oozie. Experience with data ETL and data modelling. Experience with building large-scale
Greater London, England, United Kingdom Hybrid / WFH Options
Humand Talent
8+ with experience in Laravel/Symfony desired. JavaScript/TypeScript proficiency with Node.js and React.js. Familiarity with AWS services (Aurora, MSK Kafka, ECS, EMR). 3+ years of database experience, ideally with MySQL. Hands-on experience with various data storage paradigms (e.g., RDBMS + document + KV stores
PHP 8+ Experience with Laravel/Symfony framework Strong grasp of JavaScript/TypeScript Familiarity with AWS services including Aurora, MSK Kafka, ECS, and EMR Solid understanding of SQL Key Responsibilities: Collaborate with a talented team to conceptualise, develop, and deploy scalable web applications Write clean, efficient, and maintainable
strong emphasis on Java and EMR. Implement and optimise data models and algorithms to extract valuable insights from large datasets. Utilise AWS services, including EMR, Glue, and Iceberg, for data storage, processing, and ingestion. Ensure data quality and integrity by implementing robust validation and monitoring processes. Collaborate with cross … engineering, with a focus on managing and processing financial data. Proficiency in Python and Java, with a strong preference for Java, and experience with EMR for data processing. Solid understanding of data modelling concepts and experience with relevant modelling languages. Hands-on experience with AWS services, including but not … limited to EMR, Glue, and S3. Prior experience with Iceberg for data storage and Glue for data ingestion is highly desirable. Strong SQL skills and experience with relational databases. Proven track record of delivering high-quality data solutions in a fast-paced environment. Excellent communication and collaboration skills, with
The role is a 2-year fixed-term contract. GTTS builds products that help Amazon run the world's largest transportation network, using cutting-edge technologies and machine learning, all running on AWS. We are looking for someone who is passionate about technology, loves solving customer problems, and delivers … Amazon. You help establish technical standards and drive Amazon's overall technical architecture, engineering practices, and methodologies. You think globally when building systems, ensuring Amazon builds high-performing, scalable systems that work well together. You are hands on, producing both detailed technical work and high-level designs. In GTTS … critical issues arise with the team's products. Key job responsibilities: working with AWS technologies such as Lambda, ECS Fargate, API Gateway, RDS, DynamoDB, EMR building customer-facing applications and APIs building data pipelines using Spark + Scala that process TB of data per day working with customers to
boundaries through multiple data centers and AWS regions. Your Profile Key skills/knowledge/experience: Experience of the AWS data platform, including the EMR cluster platform, Glue, EC2, S3, VPC, IAM, RDS, Redshift, SageMaker, Lake Formation and the Glue catalogue. Data Storage Fundamentals. Cloud-specific patterns and technologies. AWS … DevSecOps pipelines. Experience in data security areas in AWS such as IAM, CloudWatch, CloudTrail, Inspector. Hands-on experience in the architecture of data workloads in EMR clusters. Designs system architecture to integrate easily with other AWS services. Strong background in technologies like Spark, Hive and PySpark. Key Experience: Experience of