London, South East England, United Kingdom Hybrid / WFH Options
Intellect Group
Junior Data Scientist - FinTech 📍 Location: London, UK (Hybrid Working) 💰 Salary: £35,000 - £45,000 + Bonus 🕒 Start Date: ASAP or within 1-2 months Are you a highly analytical STEM graduate with a passion for data science and financial technology?
enhanced DV Clearance. WE NEED THE DATA ENGINEER TO HAVE… Current DV clearance MOD or Enhanced Experience with big data tools such as Hadoop, Cloudera or Elasticsearch Experience with Palantir Foundry Experience working in an Agile Scrum environment with tools such as Confluence/Jira Experience in design
Bamboo, Jenkins, GitLab CI/Pipelines Familiarity with microservices software development techniques and container orchestration (e.g., Kubernetes) Desired Skills You'll Bring: Apache Hadoop, Accumulo, or NiFi One or more of the following certifications: Cloudera Certified Professional (CCP) Data Engineer, Elastic Certified Observability Engineer, Certified Kubernetes Application Developer
data science/quantitative modeling to real-world financial use cases. Knowledge of open-source technologies and platforms commonly used for data analysis (e.g., Hadoop, Spark).
Elasticsearch, MongoDB, Node, and Docker • Hands-on experience with AWS cloud • Active TS/SCI clearance with polygraph required Preferred Qualifications • Experience with Hadoop, MapReduce, Spark, RabbitMQ, and NiFi a plus • Experience with Sequelize and Umzug • Familiarity with or ability to learn Git, Jenkins, FontAwesome, and
Experience with RDS such as MySQL, PostgreSQL or others, and with NoSQL such as Redis, MongoDB or others; * Experience with Big Data technologies such as the Hadoop ecosystem is a plus. * Excellent writing, proofreading and editing skills. Able to create documentation that can express cloud architectures using text and
State Disk (SSD) NFS/CIFS based server/storage appliance HPSE Data Domain and similar deduplication products Cloud-based storage solutions such as Hadoop and IBM BigInsights Trouble ticket management utilizing Remedy Original Posting: January 10, 2025 For U.S. Positions: While subject to change based on business needs
and algorithms. Knowledge of advanced computer science and mathematical concepts, practices, and procedures associated with large datasets. Knowledge of data science technologies such as Hadoop or MapReduce. Experience with data mining or machine learning. Expertise in cloud computing, with the ability to work with massive data sets and complex
and data visualization as applicable. The ability to use/code in a language applicable to the project or task order such as Apache Hadoop, Python, and advanced knowledge of machine learning. Responsibilities: Work with stakeholders to understand their data needs - research and provide solutions to meet future growth … Experience in building and maintaining an enterprise data model Experience in implementing data pipelines using ETL
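For readers skimming these requirements, a rough idea of what "implementing data pipelines using ETL" on a Hadoop stack can look like in practice is sketched below in PySpark. This is purely illustrative and not taken from the role itself; the paths, column names, and table name are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("example-etl")
         .enableHiveSupport()
         .getOrCreate())

# Extract: raw CSV files landed on HDFS (hypothetical path).
raw = spark.read.option("header", True).csv("hdfs:///data/raw/transactions/")

# Transform: deduplicate, type the amount column, derive a date, drop bad rows.
cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("amount").isNotNull())
)

# Load: write a partitioned Hive table for downstream analysis.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("analytics.transactions_clean"))
```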
data pipelines and REST API Ability to write robust code in Scala Deep knowledge of Spark and the Scala API required Good experience in Hadoop Big Data Hadoop-centric schedulers Big Data and Reporting System integration through API knowledge is an added advantage Understanding of data structures data … Analytical and problem-solving skills Banking domain knowledge is an added advantage Knowledge of Regulatory Reporting is a strong plus Technology / Minimum Experience: Big Data Hadoop 6 years; Spark 4 years; Scala 4 years; SQL 4 years; Kafka 2 years; Unix and shell script 2 years Skills: Mandatory Skills: Apache … Spark, Big Data Hadoop Ecosystem, Scala, SparkSQL LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age
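As context for the Spark pipeline experience the listing above asks for, here is a minimal, hypothetical sketch of a reporting-style aggregation over a Hive table. The role itself is Scala-focused; PySpark is used here only to keep the examples on this page in one language, and all table and column names are invented.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("reporting-aggregate")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical Hive table of trades held on the Hadoop cluster.
trades = spark.table("risk.trades")

# Daily notional per desk - the kind of summary a regulatory reporting
# feed might consume downstream.
daily = (
    trades.groupBy("desk", F.to_date("trade_ts").alias("trade_date"))
          .agg(F.sum("notional").alias("total_notional"),
               F.count("*").alias("trade_count"))
)

daily.write.mode("overwrite").saveAsTable("risk.daily_desk_summary")
```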
Chantilly, Virginia, United States Hybrid / WFH Options
The DarkStar Group
learn, standard libraries, etc.), Python packages that wrap Machine Learning (packages for NLP, Object Detection, etc.), Linux, AWS/C2S, Apache NiFi, Spark, PySpark, Hadoop, Kafka, Elasticsearch, Solr, Kibana, Neo4j, MariaDB, Postgres, Docker, Puppet, and many others. Work on this program takes place in Chantilly, VA, McLean, VA and … Helm) Familiarity with, or a working understanding of, big data search tools (Airflow, PySpark, Trino, OpenSearch, Elastic, etc.) Desired Skills (Optional) Docker Jenkins Hadoop/Spark Kibana Kafka NiFi Elasticsearch About The DarkStar Group Our Company The DarkStar Group is a small business that solves BIG problems. We
Paradise Island Bahamas, or the Cambridge Hyatt Resort Desired Skills: • Proficient in Java • Comfortable working in a Linux environment • Experience with open-source Apache Hadoop, Apache Accumulo, and Apache NiFi • Familiarity with context chaining and graph theory • Experience with containerization - Docker, Kubernetes • Experience with enabling tools
colleagues and teams in the UK, North America, and India. Key responsibilities: Collaborating with the architecture team to define best practices in Java and Hadoop development paradigms, including documentation and system monitoring. Challenging and helping to direct our technical roadmap and proposing the adoption of new technology or techniques. … Providing breakdowns of project deliverables and estimates. Designing and building data pipelines and Hadoop storage objects. Assisting in the resolution of production issues when required. Mentoring team members. Working with data analysts to define logical data structures. Encouraging self-learning among the team. Essential Skills & Qualifications: A confident engineer … with an authoritative knowledge of Java and Hadoop, including HDFS, Hive, and Spark. Comfortable working with large data volumes and able to demonstrate a firm understanding of logical data structures and analysis techniques. Strong skills in identifying and resolving code vulnerabilities and familiarity with utilizing Citi tools in this
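To illustrate what "designing and building data pipelines and Hadoop storage objects" with HDFS, Hive, and Spark might involve, the sketch below creates a partitioned Hive table and loads one partition with Spark SQL. It is a generic, hypothetical example (the team above works primarily in Java), with invented database and table names.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-storage-object")
         .enableHiveSupport()
         .getOrCreate())

# Define the partitioned table (the "storage object") if it does not exist yet.
spark.sql("""
    CREATE TABLE IF NOT EXISTS finance.position_snapshots (
        account_id STRING,
        instrument STRING,
        quantity   DOUBLE
    )
    PARTITIONED BY (snapshot_date DATE)
    STORED AS PARQUET
""")

# Load a single day's partition from a (hypothetical) staging table.
spark.sql("""
    INSERT OVERWRITE TABLE finance.position_snapshots
    PARTITION (snapshot_date = '2024-01-31')
    SELECT account_id, instrument, quantity
    FROM   staging.position_feed
    WHERE  feed_date = '2024-01-31'
""")
```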
Chantilly, Virginia, United States Hybrid / WFH Options
The DarkStar Group
learn, standard libraries, etc.), Python packages that wrap Machine Learning (packages for NLP, Object Detection, etc.), Linux, AWS/C2S, Apache NiFi, Spark, PySpark, Hadoop, Kafka, Elasticsearch, Solr, Kibana, Neo4j, MariaDB, Postgres, Docker, Puppet, and many others. Work on this program takes place in Chantilly, VA, McLean, VA and … tools/data Desired Skills (Optional) Knowledge of agile methodologies/experience delivering on agile teams (participates in Scrum and PI Planning) Docker, Jenkins, Hadoop/Spark, Kibana, Kafka, NiFi, Elasticsearch About The DarkStar Group Our Company The DarkStar Group is a small business that solves BIG problems. We
Herndon, Virginia, United States Hybrid / WFH Options
The DarkStar Group
learn, standard libraries, etc.), Python packages that wrap Machine Learning (packages for NLP, Object Detection, etc.), Linux, AWS/C2S, Apache NiFi, Spark, PySpark, Hadoop, Kafka, Elasticsearch, Solr, Kibana, Neo4j, MariaDB, Postgres, Docker, Puppet, and many others. Work on this program takes place in Chantilly, VA, McLean, VA and … tools/data Desired Skills (Optional) Knowledge of agile methodologies/experience delivering on agile teams (participates in Scrum and PI Planning) Docker, Jenkins, Hadoop/Spark, Kibana, Kafka, NiFi, Elasticsearch About The DarkStar Group Our Company The DarkStar Group is a small business that solves BIG problems. We
scalable ELT/ETL flows (batch and streaming) Interface between Data Analysts and Data Scientists Enrich data and load into the big data environment (Apache Hadoop on Cloudera) Profile Technical education (Computer Science HTL, Computer Science degree, Data Science, etc.) Experience in data modeling (relational databases and Apache Hadoop) Know-how of core technologies such as SQL (MS SQL Server), Apache Hadoop (Kafka, NiFi, Flink, Scala and/or Java, Python) and Linux Interest in working with high-frequency data processing in a near-real-time environment What we offer: State-of-the-Art Technologies We understand the
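As a rough illustration of the streaming side of the ELT/ETL flows described above (Kafka into a Hadoop environment), here is a minimal Spark Structured Streaming sketch. It is not this team's code; the broker address, topic, and HDFS paths are placeholders, and the job assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Run with e.g. spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark version>
spark = SparkSession.builder.appName("streaming-ingest").getOrCreate()

# Consume raw events from a (hypothetical) Kafka topic.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "payment-events")
         .load()
)

# Kafka delivers key/value as bytes; keep the payload plus Kafka's timestamp.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("kafka_ts"),
)

# Land micro-batches on HDFS as Parquet for later enrichment on the cluster.
query = (
    parsed.writeStream.format("parquet")
          .option("path", "hdfs:///data/landing/payment_events/")
          .option("checkpointLocation", "hdfs:///checkpoints/payment_events/")
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()
```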
Understanding of networking concepts and protocols (DNS, TCP/IP, DHCP, HTTPS, etc.). BASIC QUALIFICATIONS - 2+ years of experience in big data/Hadoop with excellent knowledge of Hadoop architecture, administration, and support. - Able to read Java code, with basic coding/scripting ability in … with Databases (MySQL, Oracle, NoSQL) experience. - Good understanding of distributed computing environments and excellent Linux/Unix system administrator skills. PREFERRED QUALIFICATIONS - Proficient in Hadoop MapReduce and its ecosystem (ZooKeeper, HBase, HDFS, Pig, Hive, Spark, etc.). - Good understanding of ETL principles and how to apply them within
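For the administration-and-support angle of the Hadoop role above, a small scripting example of the kind such a role might write is sketched below: it shells out to the standard `hdfs dfsadmin -report` command and flags high DFS usage. This is illustrative only, and the 80% threshold is an arbitrary example value.

```python
import re
import subprocess

def dfs_used_percent() -> float:
    """Return the cluster-wide 'DFS Used%' figure from the NameNode report."""
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
    if match is None:
        raise RuntimeError("Could not find 'DFS Used%' in the dfsadmin report")
    return float(match.group(1))

if __name__ == "__main__":
    used = dfs_used_percent()
    print(f"DFS Used: {used:.2f}%")
    if used > 80.0:  # arbitrary example threshold
        print("WARNING: HDFS utilisation above 80% - consider cleanup or expansion")
```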
State Disk (SSD) NFS/CIFS based server/storage appliance HPSE Data Domain and similar deduplication products Cloud-based storage solutions such as Hadoop and IBM BigInsights Trouble ticket management utilizing Remedy IAT Level II Certification Required Benefits: We offer a competitive benefits and compensation package and FUN
Groovy, Python, and/or shell scripting JavaScript development experience with Angular, React, ExtJS and/or Node.js Experience with distributed computing technologies including Hadoop, HBase, Cassandra, Elasticsearch and Apache Spark a plus Hands-on experience working with Elasticsearch, MongoDB, Node, Hadoop, MapReduce, Spark, Rabbit
and Application/Platform Lifecycle Management in the area of Production Data Warehouse & Reporting Solutions Continuously enhance our Production Reporting Platforms using various technologies (Oracle, Hadoop, Denodo, ...) Define, design, build and enhance business intelligence solutions Partner up with management, application owners, key business customers and team members for the delivery … Knowledge of SQL and PL/SQL in Oracle and know-how in MS SQL Database Experience with Big Data platforms/development (e.g. Hadoop, Spark, Impala, Hive) Experience in data warehousing projects (as an advantage) Good analytical, troubleshooting and problem-solving skills The ability to work independently with minimal
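As a generic illustration of the reporting-platform work described above, the sketch below runs a small BI-style extract over Hive tables with Spark SQL and publishes the result as a managed table. Real platforms of this kind also involve Oracle, Denodo, Impala, and so on; all schema, table, and column names here are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("bi-extract")
         .enableHiveSupport()
         .getOrCreate())

# Monthly revenue per region over two hypothetical Hive tables.
monthly_revenue = spark.sql("""
    SELECT c.region,
           date_format(o.order_ts, 'yyyy-MM') AS month,
           SUM(o.amount)                      AS revenue
    FROM   sales.orders    AS o
    JOIN   sales.customers AS c
      ON   c.customer_id = o.customer_id
    GROUP  BY c.region, date_format(o.order_ts, 'yyyy-MM')
""")

# Publish the aggregate as a managed table for the reporting layer to pick up.
monthly_revenue.write.mode("overwrite").saveAsTable("reporting.monthly_revenue_by_region")
```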