driven solutions and shape our data strategy to support business growth. Responsibilities: Work with teams to understand objectives, data needs, and analytics goals. Develop strategies using AWS services like Amazon Redshift, Athena, EMR, and QuickSight. Design data pipelines and ETL processes for diverse data sources. Apply statistical and machine learning techniques for analysis and pattern recognition. Create KPIs …
London, England, United Kingdom Hybrid / WFH Options
AlphaSights
reliable, scalable, and well-tested solutions to automate data ingestion, transformation, and orchestration across systems. Own data operations infrastructure: Manage and optimise key data infrastructure components within AWS, including Amazon Redshift, Apache Airflow for workflow orchestration, and other analytical tools. You will be responsible for ensuring the performance, reliability, and scalability of these systems to meet the growing …
work on fast-moving, critical projects, contributing to design decisions. Proven project delivery and advanced data validation experience are advantageous. Key Skills & Experience: Proficiency with database systems such as Amazon Aurora MySQL, Microsoft SQL, Oracle, DynamoDB, etc. Expertise in designing, developing, and maintaining complex SQL queries and stored procedures. Performance tuning and optimization of data pipelines. Implementing data monitoring … solutions. Developing scripts and automation tools for data management. Knowledge of version control systems like Git. Familiarity with AWS services: DMS, Lambda, S3, Step Functions, CloudWatch, Redshift, Glue, EventBridge. Experience with CI/CD pipelines and repositories (e.g., Bitbucket). Automated testing design and implementation. Experience with large-scale data environments and warehousing concepts. Data ingestion and integration experience. …
or another Data Science based scripting language. Demonstrated experience and responsibility with data, processes, and building ETL pipelines. Experience with cloud data warehouses such as Snowflake, Azure Data Warehouse, Amazon Redshift, and Google BigQuery. Building visualizations using Power BI or Tableau. Experience in designing ETL/ELT solutions, preferably using tools like SSIS, Alteryx, AWS Glue, Databricks, IBM …
Washington, Washington DC, United States Hybrid / WFH Options
Digital Management, Inc
experience using the DataStage tool and version designated in the Task Order
• Excellent communication skills
Preferred Skills:
• Experience with cloud-based Data Warehouse platforms and solutions (e.g., Oracle FDIP, Snowflake, Amazon Redshift, Google BigQuery)
• Relevant certifications in data management or ETL tools are a plus
Experience with one or more of the following systems: Maximo, PeopleSoft FSCM, HCM …
AWS Cloud Developer Washington, DC We are seeking a Redshift Developer or Oracle Developer with AWS experience (Top Secret clearance required) to join a data warehouse and business intelligence (BI) development team supporting our federal client working to migrate an on-prem Oracle Enterprise Edition (EE) data warehouse to a Redshift instance in AWS. This is a full-time … priorities. Requirements: On-site Washington, D.C. Top Secret clearance required. Four to six years' experience planning, designing, developing, architecting, and implementing cloud database solutions with an emphasis on AWS Redshift. Strong grasp of security best practices (IAM, KMS, etc.). Strong knowledge of PL/SQL to perform ETL for both full loads and incremental loads from an OLTP …
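Incremental loads of the kind this role describes are typically driven by a high-water mark: record the latest timestamp already loaded, pull only newer rows from the OLTP source, then advance the mark. A minimal sketch, using in-memory SQLite as a stand-in for both the Oracle source and the Redshift target (table and column names are hypothetical, not from the posting):

```python
import sqlite3

# Stand-ins for the OLTP source and warehouse target; in the real
# migration these would be Oracle EE and Amazon Redshift connections.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
src.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "2024-01-01"), (2, "2024-01-02"), (3, "2024-01-03")])

tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
tgt.execute("CREATE TABLE etl_watermark (last_loaded TEXT)")
tgt.execute("INSERT INTO etl_watermark VALUES ('2024-01-01')")

def incremental_load(src, tgt):
    # Read the high-water mark, pull only newer rows, then advance it.
    (wm,) = tgt.execute("SELECT last_loaded FROM etl_watermark").fetchone()
    rows = src.execute(
        "SELECT id, updated_at FROM orders WHERE updated_at > ?", (wm,)
    ).fetchall()
    tgt.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    if rows:
        tgt.execute("UPDATE etl_watermark SET last_loaded = ?",
                    (max(r[1] for r in rows),))
    return len(rows)

loaded = incremental_load(src, tgt)  # picks up only rows 2 and 3
```

A full load is the degenerate case of the same routine with the watermark reset to the epoch.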
optimize operations and decision-making. Collaborating with Data Engineering and Analytics Tooling teams to improve data sources and develop scalable, accurate reporting processes. Experience: Analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Data visualization using Tableau, QuickSight, or similar tools. Data modeling, warehousing, and ETL pipeline development. Statistical analysis using R, SAS, or MATLAB. SQL and scripting (Python) for … data processing and modeling Preferred qualifications: Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, and handling large-scale, complex datasets in a business environment …
and recommendations, shaping our data strategy and supporting business growth. Responsibilities: Collaborate with teams to understand objectives and define analytics goals. Develop data analysis strategies using AWS services like Redshift, Athena, EMR, and QuickSight. Design data pipelines and ETL processes for data extraction, transformation, and loading. Apply statistical and machine learning techniques for predictive and segmentation analyses. Develop KPIs … in AWS. Requirements: Bachelor's or master's in Computer Science, Statistics, Mathematics, or related. Experience as a Data Scientist, especially with AWS analytics. Proficiency in AWS services like Redshift, Athena, EMR, QuickSight. Understanding of data modeling, ETL, data warehousing. Skills in statistical analysis, data mining, machine learning. Programming skills in Python, R, or Scala. Experience with SQL, NoSQL …
develop data-driven solutions and strategies to drive growth. Responsibilities: Collaborate with teams to understand objectives, data needs, and analytics goals. Develop data analysis strategies using AWS services like Redshift, Athena, EMR, and QuickSight. Design data pipelines and ETL processes for data extraction, transformation, and loading. Apply statistical and machine learning techniques for predictive analysis, clustering, segmentation, and pattern … Bachelor's or master's degree in Computer Science, Statistics, Mathematics, or related fields. Proven experience as a Data Scientist, especially with AWS analytics services. Strong proficiency in AWS tools: Redshift, Athena, EMR, QuickSight. Understanding of data modeling, ETL, and data warehousing. Skills in statistical analysis, data mining, machine learning. Programming skills in Python, R, or Scala. Experience with SQL …
GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF etc. Excellent consulting experience and ability to design and build solutions … experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Designing Databricks based solutions for Azure/AWS, Jenkins, Terraform, StackDriver or …
thrives in greenfield project environments, and enjoys working both independently and collaboratively. Key Responsibilities as a Principal Data Engineer Propose and implement data solutions using AWS services including S3, Redshift, Lambda, Step Functions, DynamoDB, AWS Glue, and Matillion. Work directly with clients to define requirements, refine solutions, and ensure successful handover to internal teams. Design and implement ETL … to a collaborative, knowledge-sharing team culture. Required Qualifications & Skills Strong experience in ETL processes and cloud data warehouse patterns. Hands-on expertise with AWS services (S3, Glue, Redshift). Proficiency with Matillion for data transformation. Experience working with various relational databases. Familiarity with data visualization tools such as QuickSight, Tableau, Looker, or QlikSense. Ability to …
London, England, United Kingdom Hybrid / WFH Options
Computappoint
desktops Requirements: Demonstrable experience within consulting/managed service environments. Strong experience in building, configuring, and optimising Data Lake environments. Experience with Landing Zones/Transit Gateways/Redshift/Firehose/CloudTrail/Workspaces. Experience within Linux-based environments. Strong understanding of AWS Data Lake solutions and AWS Redshift. AWS Certified Solutions Architect, Developer, or SysOps …
data pipelines. These pipelines ingest and transform data from diverse sources (e.g., email, CSV, ODBC/JDBC, JSON, XML, Excel, Avro, Parquet) using AWS technologies such as S3, Athena, Redshift, Glue, and programming languages like Python and Java (Docker/Spring). What You’ll Do Lead the design and development of scalable data pipelines and ETL processes Collaborate … UK for at least 5 years. What We’re Looking for Proven experience leading data engineering teams and delivering technical solutions. Strong background in cloud data platforms, especially AWS (Redshift, Athena, EC2, IAM, Lambda, CloudWatch). Proficiency in automation tools and languages (e.g., GitHub/GitLab, Python, Java). Skilled in stakeholder engagement and translating requirements into actionable insights. Ability …
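Ingesting the diverse source formats listed above usually means normalising each feed into one common schema before loading. A minimal sketch for two of those formats, CSV and JSON, using only the standard library (the field names and feeds are hypothetical, not from the posting):

```python
import csv
import io
import json

# Two source feeds carrying the same logical record under different
# field names (names here are illustrative).
csv_feed = "id,amount\n1,10.5\n2,7.0\n"
json_feed = '[{"record_id": 3, "value": 2.5}]'

def from_csv(text):
    for row in csv.DictReader(io.StringIO(text)):
        yield {"id": int(row["id"]), "amount": float(row["amount"])}

def from_json(text):
    for row in json.loads(text):
        yield {"id": int(row["record_id"]), "amount": float(row["value"])}

# Normalise both feeds into one schema before the load step
# (e.g. writing Parquet to S3 for Athena or Redshift to query).
records = sorted([*from_csv(csv_feed), *from_json(json_feed)],
                 key=lambda r: r["id"])
```

Each new source format then only needs its own small adapter; everything downstream of `records` stays unchanged.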
Penryn, England, United Kingdom Hybrid / WFH Options
Aspia Space
including geospatial data—for training our large-scale AI models. Key Responsibilities:
• Architect, design, and manage scalable data pipelines and infrastructure across on-premise and cloud environments (AWS S3, Redshift, Glue, Step Functions).
• Ingest, clean, wrangle, and preprocess large, diverse, and often messy datasets—including structured, unstructured, and geospatial data.
• Collaborate with ML and research teams to ensure … experience in data engineering, data architecture, or similar roles.
• Expert proficiency in Python, including popular data libraries (Pandas, PySpark, NumPy, etc.).
• Strong experience with AWS services—specifically S3, Redshift, Glue (Athena a plus).
• Solid understanding of applied statistics.
• Hands-on experience with large-scale datasets and distributed systems.
• Experience working across hybrid environments: on-premise HPCs and …
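The "clean, wrangle, and preprocess messy datasets" responsibility above typically starts with a pass that drops duplicates, discards records missing required fields, and coerces string values to numeric types. A minimal standard-library sketch (field names are illustrative, not from any Aspia Space dataset):

```python
# Toy "messy" input: a duplicate row and a missing measurement.
raw = [
    {"site": "A", "ndvi": "0.61"},
    {"site": "A", "ndvi": "0.61"},   # exact duplicate
    {"site": "B", "ndvi": ""},       # missing measurement
    {"site": "C", "ndvi": "0.48"},
]

def clean(rows, required=("site", "ndvi")):
    """Drop duplicates and incomplete rows; coerce numeric strings."""
    seen, out = set(), []
    for row in rows:
        key = tuple(row.get(field, "") for field in required)
        if "" in key or key in seen:   # incomplete or already seen
            continue
        seen.add(key)
        out.append({"site": row["site"], "ndvi": float(row["ndvi"])})
    return out

cleaned = clean(raw)  # two usable rows survive
```

In practice the same shape of pass would be written with Pandas (`drop_duplicates`, `dropna`, `astype`) over far larger inputs.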
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Delaney & Bourton
selection, cost management and team management Experience required: Experience in building and scaling BI and Data Architecture. Expertise in modern BI and Data DW platforms such as Snowflake, BigQuery, Redshift, Power BI, etc. Background in ETL/ELT tooling and Data Pipelines such as DBT, Fivetran, Airflow. Experienced in Cloud-based solutions (Azure, AWS or Google) …
Greater London, England, United Kingdom Hybrid / WFH Options
Ignite Digital Talent
Strong hands-on experience with Python in a data context. Proven skills in SQL. Experience with Data Warehousing (DWH), ideally with Snowflake or similar cloud data platforms (Databricks or Redshift). Experience with DBT, Kafka, Airflow, and modern ELT/ETL frameworks. Familiarity with data visualisation tools like Sisense, Looker, or Tableau. Solid understanding of data architecture, transformation workflows, and …
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
JR United Kingdom
data architecture. Work with technologies such as Python, Java, Scala, Spark, and SQL to extract, clean, transform, and integrate data. Build scalable solutions using AWS services like EMR, Glue, Redshift, Kinesis, Lambda, and DynamoDB. Process large volumes of structured and unstructured data, integrating multiple sources to create efficient data pipelines. Collaborate with engineering teams to integrate data solutions into …
London, England, United Kingdom Hybrid / WFH Options
Builder.ai
new approaches. Extensive software engineering experience with Python (no data science background required). Experience with production microservices (Docker/Kubernetes) and cloud infrastructure. Knowledge of databases like Postgres, Redshift, Neo4j is a plus. Why You Should Join This role sits at the intersection of data science and DevOps. You will support data scientists, design, deploy, and maintain microservices …
technologies. Develops and maintains scalable cloud-based data infrastructure, ensuring alignment with the organization's decentralized data management strategy. Designs and implements ETL pipelines using AWS services (e.g., S3, Redshift, Glue, Lake Formation, Lambda) to support data domain requirements and self-service analytics. Collaborates with data domain teams to design and deploy domain-specific data products, adhering to organizational …
catch anomalies early in both pipelines and the data warehouse. Continuously enhance data processes and infrastructure. Must-Have Requirements Strong SQL skills and experience with cloud-based databases like Redshift and AWS RDS. Solid Python knowledge, including packages for analytics, data transformation, APIs, and ML/AI. Proven experience building and maintaining ETLs/ELTs, ideally using dbt …
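"Catching anomalies early" in a pipeline is often implemented as a post-load data-quality gate: compare the latest load against a trailing baseline and fail loudly on large drift. A minimal sketch of one such check, row-count drift (the threshold and numbers are illustrative assumptions):

```python
from statistics import mean

def check_row_count(history, latest, tolerance=0.5):
    """Return True if the latest load's row count is within
    `tolerance` (fractional drift) of the trailing mean."""
    baseline = mean(history)
    drift = abs(latest - baseline) / baseline
    return drift <= tolerance

# Trailing daily row counts for a table, then today's load.
daily_counts = [1000, 1040, 980, 1010]
assert check_row_count(daily_counts, 1005)      # normal load passes
assert not check_row_count(daily_counts, 120)   # likely broken feed
```

Tools like dbt express the same idea declaratively as tests (uniqueness, not-null, accepted ranges) that run after each model builds; the hand-rolled version above is useful for checks dbt does not cover, such as cross-load volume drift.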