Bristol, Avon, England, United Kingdom Hybrid/Remote Options
Aspire Personnel Ltd
in AWS cloud technologies for ETL pipeline, data warehouse and data lake design/build and data movement. AWS data and analytics services (or open-source equivalents) such as EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. What you can expect Work to agile best practices and cross-functionally with multiple teams and stakeholders. You’ll be using your technical skills More ❯
Manchester, England, United Kingdom Hybrid/Remote Options
Lorien
years in a technical leadership or management role Strong technical proficiency in data modelling, data warehousing, and distributed systems Hands-on experience with cloud data services (AWS Redshift, Glue, EMR or equivalent) Solid programming skills in Python and SQL Familiarity with DevOps practices (CI/CD, Infrastructure as Code - e.g., Terraform) Excellent communication skills with both technical and non More ❯
site Must be SC cleared, or eligible to obtain SC-level security clearance Job description We are looking for a developer with expertise in Python, AWS Glue, Step Functions, EMR clusters and Redshift to join our AWS Development Team. You will serve as the functional and domain expert in the project team, ensuring client expectations are met. More ❯
similar role. Experience of deploying and managing cloud infrastructure for data solutions Hands-on experience of working with AWS services, including but not limited to EC2, S3, RDS, Lambda, Glue, EMR, VPC, IAM, Redshift, etc. Good experience setting up reliable, highly secure cloud networking. Experience setting up standard cloud governance policies through IAM roles. Extensive experience More ❯
London, England, United Kingdom Hybrid/Remote Options
Cint
move fast, stay compliant and take end-to-end responsibility for their products. Major elements of our platform include AWS (we make significant use of S3, RDS, Kinesis, EC2, EMR, ElastiCache, Elasticsearch and EKS). Elements of the platform will start to expand into GCP (Compute Engine, Cloud Storage, Google Kubernetes Engine and BigQuery). Other significant tools of More ❯
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom Hybrid/Remote Options
Reed
projects. Required Skills & Qualifications: Demonstrable experience in building data pipelines using Spark or Pandas. Experience with major cloud providers (AWS, Azure, or Google). Familiarity with big data platforms (EMR, Databricks, or Dataproc). Knowledge of data platforms such as Data Lakes, Data Warehouses, or Data Meshes. Drive for self-improvement and eagerness to learn new programming languages. Ability More ❯
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom Hybrid/Remote Options
Reed
What We’re Looking For Experience building data pipelines using Spark or Pandas. Familiarity with major cloud platforms (AWS, Azure, or GCP). Understanding of big data tools (EMR, Databricks, Dataproc). Knowledge of data architectures (Data Lakes, Warehouses, Mesh). A proactive mindset with a passion for learning new technologies. Nice-to-Have Skills Automated data quality More ❯
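As a rough illustration of the "Spark or Pandas" requirement in the listing above, the sketch below shows the Pandas flavour of a simple batch pipeline. The file names and column names are invented for the example and are not taken from the posting; the same extract-transform-load shape scales up to Spark or a managed platform such as EMR, Databricks or Dataproc.

```python
# Minimal Pandas pipeline sketch: load a raw CSV extract, clean it, and
# save a curated Parquet file. Paths and column names are illustrative only.
import pandas as pd

def build_daily_sales(raw_csv: str, curated_parquet: str) -> pd.DataFrame:
    # Extract: raw order data with an order timestamp column
    df = pd.read_csv(raw_csv, parse_dates=["order_ts"])

    # Transform: drop incomplete rows, deduplicate, aggregate revenue per day
    df = df.dropna(subset=["order_id", "amount"]).drop_duplicates("order_id")
    daily = (
        df.assign(order_date=df["order_ts"].dt.date)
          .groupby("order_date", as_index=False)["amount"].sum()
    )

    # Load: write the curated output (requires pyarrow or fastparquet)
    daily.to_parquet(curated_parquet, index=False)
    return daily

if __name__ == "__main__":
    build_daily_sales("raw_orders.csv", "daily_sales.parquet")
```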
City of London, London, United Kingdom Hybrid/Remote Options
N Consulting Global
to shape it with us. Your role will involve: • Designing and developing scalable, testable data pipelines using Python and Apache Spark • Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 • Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing • Contributing to the development of a lakehouse architecture using Apache … building ETL pipelines Has experience with or is eager to learn Apache Spark for large-scale data processing Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR) Enjoys learning the business context and working closely with stakeholders • Works well in Agile teams and values collaboration over solo heroics Nice-to-haves: It’s great (but not More ❯
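For readers less familiar with the stack this listing describes, below is a minimal sketch of a Python and Apache Spark pipeline of the kind mentioned: read raw data from S3, clean it, and write partitioned output back to a curated zone. Bucket names, paths and column names are hypothetical, and in practice the same script would typically be submitted as a Glue job or an EMR Serverless application rather than run locally.

```python
# Minimal PySpark ETL sketch: read raw JSON events from S3, clean them, and
# write partitioned Parquet to a curated zone. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def run(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: raw events landed in an S3 "raw" zone
    events = spark.read.json(input_path)

    # Transform: drop obviously bad records and derive a partition column
    cleaned = (
        events
        .dropna(subset=["event_id", "event_ts"])
        .dropDuplicates(["event_id"])
        .withColumn("event_date", F.to_date("event_ts"))
    )

    # Load: partitioned Parquet; a lakehouse table format such as Iceberg or
    # Delta could be swapped in at this step
    cleaned.write.mode("overwrite").partitionBy("event_date").parquet(output_path)

    spark.stop()

if __name__ == "__main__":
    # Hypothetical S3 locations
    run("s3://example-raw-bucket/events/", "s3://example-curated-bucket/events/")
```

Orchestration of a job like this is commonly handled with Step Functions or Lambda triggering the Glue or EMR run, which is the pattern the listing's "Glue, EMR Serverless, Lambda, and S3" phrasing points at.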
Fort Lauderdale, Florida, United States Hybrid/Remote Options
Vegatron Systems
then will eventually be sitting in Ft. Lauderdale, FL. Candidates should be senior Data Engineers with big data tools (Hadoop, Spark, Kafka) as well as AWS cloud services (EC2, EMR, RDS, Redshift) and NoSQL. Interviews are by phone and Skype to hire. Candidates in Florida with a LinkedIn profile preferred but not required. Essential Duties and Responsibilities: • Past experience … or Google Cloud • Experience with big data tools: Hadoop, Spark, Kafka, etc. • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra • Experience with AWS cloud services: EC2, EMR, RDS, Redshift • Experience with stream-processing systems: Storm, Spark Streaming, etc. • Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc. • Strong hands-on personality … IDEAL CANDIDATE: Experience More ❯
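For the stream-processing requirement above (Storm, Spark Streaming), a minimal Spark Structured Streaming sketch is shown below. The Kafka broker address and topic are placeholders rather than details from the posting, and the Kafka source additionally requires the spark-sql-kafka connector package on the classpath.

```python
# Minimal Spark Structured Streaming sketch: count events per minute from a
# Kafka topic and print running counts. Broker and topic are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                          # placeholder topic
    .load()
)

# Kafka values arrive as bytes; cast to string and window by arrival time
counts = (
    events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```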