solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake). Hands-on experience with ETL frameworks and tools (e.g., Apache NiFi, Talend, Informatica, Airflow). Knowledge of big data technologies (e.g., Hadoop, Apache Spark, Kafka). Experience with cloud platforms (AWS, Azure, Google Cloud) and related services for data storage and processing. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes …
… Hands-on experience with open-source ETL and data pipeline orchestration tools such as Apache Airflow and NiFi. Experience with large-scale/Big Data technologies, such as Hadoop, Spark, Hive, Impala, PrestoDB, Kafka. Experience with workflow orchestration tools like Apache Airflow. Experience with containerisation using Docker and deployment on Kubernetes. Experience with NoSQL and graph databases. Unix …
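As a rough illustration of the Airflow orchestration work these listings describe, here is a minimal two-task DAG sketch, assuming Airflow 2.x; the DAG id, schedule, and shell commands are placeholders rather than anything from a listing.

```python
# Minimal Airflow 2.x DAG sketch; dag_id, schedule, and commands are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_daily_etl",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",      # run once per day
    catchup=False,                   # skip backfilling past runs
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    load = BashOperator(task_id="load", bash_command="echo loading")

    extract >> load                  # load runs only after extract succeeds
```

In practice the Bash stubs would be swapped for operators that trigger NiFi flows, Spark jobs, or warehouse loads.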
Informatica, Talend, dbt). Solid understanding of data warehousing concepts (e.g., Kimball, Inmon) and technologies (e.g., Snowflake, Redshift, BigQuery). Experience with big data technologies (e.g., Spark, Hadoop, Kafka). Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services. Excellent problem-solving and analytical skills. Strong communication and interpersonal …
tools like Apache NiFi, Talend, or custom scripts. Familiarity with ELT (Extract, Load, Transform) processes is a plus. Big Data Technologies: Familiarity with big data frameworks such as Apache Hadoop and Apache Spark, including experience with distributed computing and data processing. Cloud Platforms: Proficient in using cloud platforms (e.g., AWS, Google Cloud Platform, Microsoft Azure) for data storage, processing …
engineering or related roles. Technical Skills: Advanced proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL). Big Data Technologies: Extensive experience with big data technologies (e.g., Hadoop, Spark). Cloud Platforms: Deep understanding of cloud platforms (AWS, GCP, Azure) and their data services. DevOps Expertise: Strong understanding and practical experience with DevOps practices and tools (CI …
Microsoft Azure, or Google Cloud Platform (GCP) Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle Experience with big data technologies such as Hadoop, Spark, or Hive Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow Proficiency in Python and at least one other programming language …
S3, Lambda, BigQuery, or Databricks. Solid understanding of ETL processes, data modeling, and data warehousing. Familiarity with SQL and relational databases. Knowledge of big data technologies, such as Spark, Hadoop, or Kafka, is a plus. Strong problem-solving skills and the ability to work in a collaborative team environment. Excellent verbal and written communication skills. Bachelor's degree in …
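To make the Spark and ETL requirements above concrete, here is a minimal PySpark batch sketch of an extract-transform-load pass; the file paths and column names are hypothetical.

```python
# Minimal PySpark batch ETL sketch; paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

# Extract: read raw events from CSV, letting Spark infer a schema.
events = spark.read.csv("raw/events.csv", header=True, inferSchema=True)

# Transform: count events per day.
daily_counts = (
    events.groupBy("event_date")
          .agg(F.count("*").alias("event_count"))
)

# Load: write the aggregate out as Parquet for downstream consumers.
daily_counts.write.mode("overwrite").parquet("curated/daily_event_counts/")

spark.stop()
```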
and collaboration skills. Nice to have: Hands-on experience with data warehouse and lakehouse architectures (e.g., Databricks, Snowflake, or similar). Experience with big data frameworks (e.g., Apache Spark, Hadoop) and cloud platforms (e.g., AWS, Azure, or GCP).
Alpharetta, Georgia, United States Hybrid / WFH Options
Synchrony
commute to our nearest office for in-person engagement activities such as business or team meetings, training, and culture events. Essential Responsibilities: Develop big data applications for Synchrony in the Hadoop ecosystem. Participate in the agile development process, including backlog grooming, coding, code reviews, testing, and deployment, as a Product Owner. Work with SonarQube for code quality analysis and build … data engineering space. Work with team members to achieve business results in a fast-paced and quickly changing environment. Work independently to develop analytic applications leveraging technologies such as Hadoop, NoSQL, in-memory data grids, Kafka, Spark, and Ab Initio. Test current processes and identify deficiencies. Plan, create, and manage the test cases and test scripts. Identify process bottlenecks and … School Diploma/GED and a minimum of 4 to 5 years of Information Technology experience. Minimum of 2+ years of hands-on experience writing shell scripts, complex SQL queries, Hive scripts, Hadoop commands, and Git (a short sketch of this kind of work appears after the location list below). Ability to write abstracted, reusable code components. Programming experience in at least one of the following languages: Scala, Java, or Python. Analytical mindset. Willingness and aptitude …
Stamford, Connecticut, United States Hybrid / WFH Options
Synchrony
Phoenix, Arizona, United States Hybrid / WFH Options
Synchrony
Cincinnati, Ohio, United States Hybrid / WFH Options
Synchrony
Chicago, Illinois, United States Hybrid / WFH Options
Synchrony
Kansas City, Kansas, United States Hybrid / WFH Options
Synchrony
Charlotte, North Carolina, United States Hybrid / WFH Options
Synchrony
Altamonte Springs, Florida, United States Hybrid / WFH Options
Synchrony
Rapid City, South Dakota, United States Hybrid / WFH Options
Synchrony
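The Synchrony listing above combines complex SQL, Hive scripts, and Spark. A common form that combination takes is querying Hive tables through Spark SQL, as in the sketch below; it assumes a cluster configured with Hive support, and the database, table, and column names are invented.

```python
# Hypothetical Hive query via PySpark; assumes Hive support is configured.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-aggregation")
    .enableHiveSupport()   # read table metadata from the Hive metastore
    .getOrCreate()
)

# Invented database, table, and columns, purely for illustration.
top_accounts = spark.sql("""
    SELECT account_id, COUNT(*) AS txn_count
    FROM finance.transactions
    GROUP BY account_id
    ORDER BY txn_count DESC
    LIMIT 10
""")
top_accounts.show()

spark.stop()
```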
Data Site Reliability Engineer or a similar role, with a focus on data infrastructure management. Proficiency in data technologies, such as relational databases, data warehousing, big data platforms (e.g., Hadoop, Spark), data streaming (e.g., Kafka), and cloud services (e.g., AWS, GCP, Azure). Ideally some programming skills in languages like Python, Java, or Scala, with experience in automation and …
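For the Kafka streaming experience this role mentions, a minimal consumer gives a feel for the work. The sketch below uses the kafka-python client; the topic name and broker address are assumptions.

```python
# Minimal kafka-python consumer sketch; topic and broker are assumptions.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",                       # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    auto_offset_reset="earliest",        # start from the oldest retained message
    value_deserializer=lambda raw: raw.decode("utf-8"),
)

# Print each message as it arrives; a real job would transform and load instead.
for message in consumer:
    print(message.topic, message.offset, message.value)
```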
and statistical packages. Strong analytical, problem-solving, and critical thinking skills. 8. Experience with social media analytics and understanding of user behaviour. 9. Familiarity with big data technologies, such as Apache Hadoop, Apache Spark, or Apache Kafka. 10. Knowledge of AWS machine learning services, such as Amazon SageMaker and Amazon Comprehend. 11. Experience with data governance and security best practices in AWS. 12. Excellent …
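As a small taste of the AWS machine learning services named here, the following sketch calls Amazon Comprehend for sentiment analysis via boto3, assuming AWS credentials and a region are already configured; the sample text is invented.

```python
# Minimal Amazon Comprehend sentiment sketch via boto3; assumes AWS credentials
# and region are configured in the environment. The sample text is invented.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="The new checkout flow is fast and painless.",
    LanguageCode="en",
)

# detect_sentiment returns an overall label plus per-class confidence scores.
print(response["Sentiment"])        # e.g. POSITIVE
print(response["SentimentScore"])   # dict of confidence values
```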