…will be deployed. You have experience in database technologies, including writing complex queries against their (relational and non-relational) data stores (e.g. Postgres, Apache Hadoop, Elasticsearch, graph databases), and designing the database schemas to support those queries. You have a good understanding of coding best practices & design patterns and …
…such as Pandas, NumPy, and SQLAlchemy. Extensive experience with the Dash framework for building web applications. In-depth knowledge of Impala or other SQL-on-Hadoop query engines. Understanding of web development concepts (HTML, CSS, JavaScript). Proficiency in data visualization libraries (Plotly, Seaborn). Solid understanding of database design …
Qualifications/Nice to have: Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ. Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark) …
…languages like Python or KornShell. Knowledge of writing and optimizing SQL queries for large-scale, complex datasets. Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, or DataStage. Our inclusive culture empowers Amazon employees to deliver the best …
…environment and understanding of SRE principles & goals, along with prior on-call experience. Deep understanding of and experience in one or more of the following: Hadoop, Spark, Flink, Kubernetes, AWS. The ability to design, author, and release code in any language (Go, Python, Ruby, or Java). Preferred Qualifications: Fast …
Strong communication and collaboration skills. Azure certifications such as Azure Data Engineer Associate or Azure Solutions Architect Expert. Experience with big data technologies like Hadoop, Spark, or Databricks. Familiarity with machine learning and AI concepts. If you encounter any suspicious mail, advertisements, or persons who offer jobs at Wipro …
…DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience with any ETL tool like Informatica, ODI, SSIS, BODI, DataStage, etc. Our inclusive culture empowers Amazonians to deliver the best …
…HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). PREFERRED QUALIFICATIONS: Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with any ETL tool like Informatica, ODI, SSIS, BODI, DataStage, etc. Our inclusive culture empowers Amazonians to deliver the best …
…or Amazon QuickSight. Programming Languages: Familiarity with Python or R for data manipulation and analysis. Big Data Technologies: Experience with big data technologies like Hadoop or Spark. Data Governance: Understanding of data governance and data quality management. A Bit About Us: When it comes to appliances and electricals, we …
The role: We are looking for a Data Engineer to join the Data Science & Engineering team in London. Working at WGSN: Together, we create tomorrow. A career with WGSN is fast-paced, exciting, and full of opportunities to grow and …
…dimensional, relational, and document data lineage, and recommend improvements for data ownership and stewardship. Qualifications - Programming: Python, SQL, Scala, Java. Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc. Cloud: AWS (Glue, Redshift), Azure (Synapse, Data Factory …)
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Yelp USA
…recommending scalable, creative solutions. Exposure to some of the following technologies: Python, AWS Redshift, AWS Athena/Apache Presto, big data technologies (e.g. S3, Hadoop, Hive, Spark, Flink, Kafka, etc.), NoSQL systems like Cassandra; DBT is nice to have. What you'll get: Full responsibility for projects from day …
…and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with big data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau …)
…frameworks like TensorFlow, Keras, or PyTorch. Knowledge of data analysis and visualization tools (e.g., Pandas, NumPy, Matplotlib). Familiarity with big data technologies (e.g., Hadoop, Spark). Excellent problem-solving skills and attention to detail. Ability to work independently and as part of a team. Preferred Qualifications: Experience with …
London, South East England, United Kingdom Hybrid / WFH Options
JSS Search
…and the ability to work in a fast-paced, collaborative environment. Strong communication and interpersonal skills. Preferred Skills: Experience with big data technologies (e.g., Hadoop, Spark). Knowledge of machine learning and AI integration with data architectures. Certification in cloud platforms or data management.
…Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with data pipeline and workflow management tools: Azkaban, Luigi …
…warehousing, and building ETL pipelines - Experience with SQL - Experience mentoring team members on best practices PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience operating large data warehouses. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central …
…NoSQL databases. Skilled in Python, Java, or Scala for data pipeline development. Experienced with BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub. Exposure to Hadoop, Spark, Kafka. Data Engineer - GCP & Python. Location: London, UK. Type: Hybrid (3 days onsite, 2 remote). Employment: Full-time, Permanent/Fixed Term. We …
…experience with data integration tools (e.g., Azure Data Factory, Informatica, Talend) and database technologies (SQL, NoSQL). Familiarity with big data technologies (e.g., Spark, Hadoop) and real-time data processing. Deep understanding of data governance, data quality, and data security principles. Experience working with BI tools and supporting analytics …
… Knowledge of cloud platforms (e.g., Azure). Familiarity with containerization (e.g., Docker, Kubernetes) is a plus. Knowledge of big data technologies (e.g., Hadoop, Spark). Knowledge of data lifecycle management. Strong problem-solving skills and attention to detail. Ability to work in an agile development environment. Excellent …
Excellent communication skills. Familiarity with Python and its data, numerical, and machine learning libraries. It would be great if you also had: Experience with Hadoop and Jenkins. Azure and AWS certifications. Familiarity with Java. What we do for you: At Leidos, we are passionate about customer success, united as …