technical concepts to non-technical stakeholders. Team Player: Ability to work effectively in a collaborative team environment, as well as independently. Preferred Qualifications: Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). Familiarity with AWS and its data services (e.g., S3, Athena, AWS Glue). Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Knowledge of containerization …
Azure, or Google Cloud Platform (GCP). Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at least one other programming language …
Quincy, Washington, United States Hybrid / WFH Options
Centene
tools development; Data warehouse and data mart design and development; ETL (Extract, Transform, Load) processes; Data governance and compliance; Proficiency in SQL and Python; Knowledge of Big Data technologies (Hadoop, Spark); Cloud computing (AWS, Azure, GCP); Data modeling and architecture; Advanced analytics and predictive modeling; Knowledge of data privacy laws and regulations; Proficiency in BI tools (Tableau, Power BI); Strong …
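The ETL (Extract, Transform, Load) process these requirements refer to can be sketched in plain Python. This is a minimal illustration, not any employer's actual pipeline: the CSV layout, column names, and table name are invented for the example, and sqlite3 stands in for a warehouse engine such as Redshift or Snowflake.

```python
import csv
import io
import sqlite3

# Extract: read raw records (here from an in-memory CSV; a real
# pipeline would pull from S3, an API, or a source database).
raw = io.StringIO("order_id,amount\n1,19.50\n2,6.25\n3,\n")
rows = list(csv.DictReader(raw))

# Transform: drop incomplete records and cast fields to proper types.
clean = [
    (int(r["order_id"]), float(r["amount"]))
    for r in rows
    if r["amount"]  # skip rows with a missing amount
]

# Load: write the cleaned rows into a warehouse table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", clean)

total = db.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 25.75
```

Real pipelines add scheduling (e.g. Airflow), incremental loads, and data-quality checks on top of this extract/transform/load skeleton.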
of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and …
pipelines and ETL (Informatica) - Experience in SQL and database management systems - Knowledge of data modelling, warehousing concepts, and ETL processes - Experience with big data technologies and frameworks such as Hadoop, Hive, Spark. Programming experience in Python or Scala. - Demonstrated analytical and problem-solving skills. - Familiarity with cloud platforms (e.g., Azure, AWS) and their data-related services - Proactive and detail …
engineering, architecture, or platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design and implementation. Hands-on knowledge …
solving skills and attention to detail. Excellent communication and collaboration skills. Preferred Qualifications: Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). Knowledge of big data technologies (e.g., Hadoop, Spark). Experience in commodities (Agriculture, Natural Gas, Power). For more information about DRW's processing activities and our use of job applicants' data, please view our Privacy …
Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, Datastage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross and multi-platform experience. Team building and leading. You must be: Willing to work on client sites, potentially for extended periods. Willing to …
in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Goog…
Science, Data Science, Engineering, or a related field. Strong programming skills in languages such as Python, SQL, or Java. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka) is a plus. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database systems (e.g., MySQL, PostgreSQL, MongoDB) and data warehousing …
Snowflake. Understanding of cloud platform infrastructure and its impact on data architecture. Data Technology Skills: A solid understanding of big data technologies such as Apache Spark, and knowledge of Hadoop ecosystems. Knowledge of programming languages such as Python, R, or Java is beneficial. Exposure to ETL/ELT processes, SQL, NoSQL databases is a nice-to-have, providing a …
tools and libraries (e.g. Power BI) Background in database administration or performance tuning Familiarity with data orchestration tools, such as Apache Airflow Previous exposure to big data technologies (e.g. Hadoop, Spark) for large data processing Strong analytical skills, including a thorough understanding of how to interpret customer business requirements and translate them into technical designs and solutions. Strong communication …
DataStage, Talend and Informatica. Ingestion mechanisms like Flume & Kafka. Data modelling – dimensional & transactional modelling using RDBMS, NoSQL and Big Data technologies. Data visualization – tools like Tableau. Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks – Spark & Spark Streaming. Hands-on experience with multiple databases like PostgreSQL, Snowflake, Oracle, MS SQL Server …
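The dimensional modelling this listing asks for can be illustrated with a minimal star schema: a fact table of measurable events joined to a dimension table of descriptive attributes. This is a sketch only — the table and column names are invented for the example, and sqlite3 stands in for a real warehouse engine.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Dimension table: descriptive attributes, one row per product.
db.execute(
    "CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT)"
)
# Fact table: measurable events, keyed to the dimension.
db.execute(
    "CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, "
    "product_id INTEGER REFERENCES dim_product(product_id), qty INTEGER)"
)

db.executemany("INSERT INTO dim_product VALUES (?, ?, ?)", [
    (1, "Widget", "hardware"),
    (2, "Gadget", "hardware"),
])
db.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", [
    (10, 1, 3), (11, 1, 2), (12, 2, 5),
])

# Typical analytical query: aggregate the facts, grouped by a dimension attribute.
rows = db.execute("""
    SELECT p.name, SUM(f.qty)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # [('Gadget', 5), ('Widget', 5)]
```

The same fact/dimension split scales to warehouse engines like Snowflake or Redshift; the transactional (normalised) model the listing also mentions would instead split these attributes across many narrow tables.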
7+ years in data architecture and solution design, and a history of large-scale data solution implementation. Technical Expertise : Deep knowledge of data architecture principles, big data technologies (e.g., Hadoop, Spark), and cloud platforms like AWS, Azure, or GCP. Data Management Skills : Advanced proficiency in data modelling, SQL/NoSQL databases, ETL processes, and data integration techniques. Programming & Tools …
Statistics, Maths or similar Science or Engineering discipline Strong Python and other programming skills (Java and/or Scala desirable) Strong SQL background Some exposure to big data technologies (Hadoop, Spark, Presto, etc.) NICE TO HAVES OR EXCITED TO LEARN: Some experience designing, building and maintaining SQL databases (and/or NoSQL) Some experience with designing efficient physical data …
Familiarity with Data Mesh, Data Fabric, and product-led data strategies. Expertise in cloud platforms (AWS, Azure, GCP, Snowflake). Technical Skills Proficiency in big data tools (Apache Spark, Hadoop). Programming knowledge (Python, R, Java) is a plus. Understanding of ETL/ELT, SQL, NoSQL, and data visualisation tools. Awareness of ML/AI integration into data architectures.
as Python, Java, or C++. Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Familiarity with data processing tools and platforms (e.g., SQL, Apache Spark, Hadoop). Knowledge of cloud computing services (e.g., AWS, Google Cloud, Azure) and containerization technologies (e.g., Docker, Kubernetes) is a plus. Hugging Face Ecosystem: Demonstrated experience using Hugging Face tools …
Docker Experience with NLP and/or computer vision Exposure to cloud technologies (e.g. AWS and Azure) Exposure to Big data technologies Exposure to Apache products, e.g. Hive, Spark, Hadoop, NiFi Programming experience in other languages This is not an exhaustive list, and we are keen to hear from you even if you don't tick every box. The …
supervised, unsupervised, reinforcement learning), deep learning, and neural networks. Experience with data preprocessing, feature engineering, and model evaluation techniques. Familiarity with data management and big data technologies (e.g., Spark, Hadoop, SQL, NoSQL databases). Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services. Strong analytical and problem-solving skills. Excellent communication and collaboration skills …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
ADLIB Recruitment
Clear communicator, able to translate complex data concepts to cross-functional teams Bonus points for experience with: DevOps tools like Docker, Kubernetes, CI/CD Big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams Genomic data formats and tools Cold and hot storage management, ZFS/RAID systems, or tape storage AI/LLM tools to …