to technical requirements and implementation. Experience of Big Data technologies/Big Data Analytics. C++, Java, Python, shell scripting, R, MATLAB, SAS Enterprise Miner, Elasticsearch, and an understanding of the Hadoop ecosystem. Experience working with large data sets and with distributed computing tools such as MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for More ❯
Python, Java, AWS infrastructure, Linux, Kubernetes, Hadoop, CI/CD, Big Data Platform, Agile, JIRA, Confluence, GitHub, GitLab, Puppet, Ansible, Maven, virtualization, oVirt, Proxmox, VMware, Shell/Bash scripting. Due to federal contract requirements, United States citizenship and an active TS/SCI security clearance and polygraph are required for the position. Required: Must be a US Citizen. Must … or related discipline from an accredited college or university. Prior experience or familiarity with DISA's Big Data Platform or other Big Data systems (e.g., Cloudera's Distribution of Hadoop, Hortonworks Data Platform, MapR, etc.) is a plus. Experience with CI/CD pipelines (e.g., GitLab CI, Travis CI, etc.). Understanding of agile software development methodologies and use More ❯
in multiple programming languages such as Bash, Python, or Go. Must have a DoD 8140/8570 compliance certification (e.g., Security+). Preferred: Experience with big data technologies such as Hadoop, Spark, MongoDB, Elasticsearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers and Kubernetes More ❯
Python and Java) Preferred Skills: Full-Stack Web Frameworks: Django, Flask. Front-End Frameworks: React, Angular, TypeScript. Cloud: OpenStack, AWS, Azure. DevOps: Docker, Ansible, GitLab. Big Data: Spark, Elasticsearch, Hadoop, Neo4j. Geo-Related Mapping Experience. Apply today to make a real impact with cutting-edge technology and innovative solutions More ❯
Experience Required: Proven expertise in data science, machine learning, or artificial intelligence for security-related applications. Proficiency in Python, R, SQL, and experience with big data platforms (e.g., Spark, Hadoop). Ability to drive research, identifying innovative solutions for national security challenges. Experience working in classified or sensitive environments, understanding security clearance requirements. Strong analytical capabilities with proven experience More ❯
documents indexing/search, and GPU workloads. Experience in the development of algorithms leveraging R, Python, or SQL/NoSQL. Experience with distributed data/computing tools, including MapReduce, Hadoop, Hive, EMR, Spark, Gurobi, or MySQL. Experience with visualization packages, including Plotly, Seaborn, or ggplot2. Bachelor's degree More ❯
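For context on the kind of distributed-computing and visualization work listings like this describe, here is a minimal PySpark sketch. The file name and column names ("events.csv", "user_id", "bytes") are illustrative assumptions, not details taken from any listing.

```python
# Minimal PySpark sketch: aggregate a large CSV and plot the small summary with Seaborn.
# Assumes PySpark, Seaborn, and Matplotlib are installed and a hypothetical
# "events.csv" with "user_id" and "bytes" columns exists locally.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
import seaborn as sns
import matplotlib.pyplot as plt

spark = SparkSession.builder.appName("usage-summary").getOrCreate()

events = spark.read.csv("events.csv", header=True, inferSchema=True)

# Distributed aggregation: total bytes per user, keeping only the top 20 users.
top_users = (
    events.groupBy("user_id")
    .agg(F.sum("bytes").alias("total_bytes"))
    .orderBy(F.desc("total_bytes"))
    .limit(20)
    .toPandas()  # small result set, safe to collect to the driver
)

# Visualize the collected summary with Seaborn.
sns.barplot(data=top_users, x="user_id", y="total_bytes")
plt.xticks(rotation=90)
plt.tight_layout()
plt.savefig("top_users.png")

spark.stop()
```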
DBMS, ORM (Hibernate), and APIs. Active TS/SCI Clearance with Full Scope Polygraph. Preferred Skills: Microservices, REST, JSON, XML. CI/CD, Docker, Kubernetes, Jenkins. Big Data technologies (Hadoop, Kafka, Spark). CISSP, Java, AWS certifications a plus. Ready to Transform Your Career? Join Trinity Enterprise Services, where professional growth meets personal fulfillment. Apply today to become a More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Anson McCade
Knowledge of CI/CD processes and infrastructure-as-code. • Eligible for SC clearance (active clearance highly advantageous). Desirable Skills • Exposure to large data processing frameworks (e.g., Spark, Hadoop). • Experience deploying data via APIs and microservices. • AWS certifications (Solution Architect Associate, Data Analytics Speciality, etc.). • Experience in public sector programmes or government frameworks. Package & Benefits More ❯
such as Java, TypeScript, Python, and Go. Web libraries and frameworks such as React and Angular. Designing, building, and maintaining CI/CD pipelines. Big data technologies like NiFi, Hadoop, Spark. Cloud and containerization technologies such as AWS, OpenShift, Kubernetes, Docker. DevOps methodologies, including infrastructure as code and GitOps. Database technologies, e.g., relational databases, Elasticsearch, MongoDB. Why join Gemba More ❯
of streaming documents weekly. Skills and Proficiencies: • Strong understanding of data lake architectures and streaming data environments. • Familiarity with distributed storage systems (e.g., Cloudera) and compute frameworks (e.g., Spark, Hadoop). • Experience with MLOps processes and tools for model deployment, monitoring, and retraining. • Proficient in SQL/NoSQL and languages like R or Python for algorithm development. • Visualization expertise More ❯
frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, AML). 7+ years in IT platform implementation, consulting, and distributed solutions design. Experience with databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis), cloud solutions (AWS or equivalent), systems, networks, and operating systems. If you need accommodations during the application process, please visit this link. More ❯
Cheltenham, Gloucestershire, United Kingdom Hybrid / WFH Options
Gemba Advantage
as Java, TypeScript, Python, and Go. Web libraries and frameworks such as React and Angular. Designing, building, and maintaining CI/CD pipelines. Big data technologies, such as NiFi, Hadoop, Spark. Cloud and containerization technologies such as AWS, OpenShift, Kubernetes, Docker. DevOps methodologies, such as infrastructure as code and GitOps. Database technologies, e.g. relational databases, Elasticsearch, Mongo. Why join More ❯
to accommodate any changes in the schedule. Preferred Requirements: Prior experience or familiarity with DISA's Big Data Platform or other Big Data systems (e.g., Cloudera's Distribution of Hadoop, Hortonworks Data Platform, MapR, etc.) is a plus. Experience with CI/CD pipelines (e.g., GitLab CI, Travis CI, Jenkins, etc.). Understanding of agile software development methodologies and use More ❯
expertise and technical acumen to ensure successful delivery of complex data projects on time and within budget. Key Responsibilities: Project Management: Lead and manage legacy data platform migration (Teradata, Hadoop), data lake build, and data analytics projects from initiation to completion. Develop comprehensive project plans, including scope, timelines, resource allocation, and budgets. Monitor project progress, identify risks, and implement More ❯
developing and deploying web services working with open-source resources in a government computing environment. Maintaining backend GIS technologies. ICD 503. Big data technologies such as Accumulo, Spark, Hive, Hadoop, or Elasticsearch. Familiarity with: hybrid cloud/on-prem architecture, AWS, C2S, and OpenStack; concepts such as data visualization, data management, data integration, user interfaces, databases. CompTIA More ❯
to implement them through libraries. Experience with programming, ideally Python, and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g. by using Hadoop/Spark/SQL. Experience with or ability to quickly learn open-source software, including machine learning packages such as Pandas and scikit-learn, along with data visualisation technologies. Experience More ❯
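As a rough illustration of the Pandas/scikit-learn workflow this listing points at, here is a minimal sketch. The data file "transactions.csv", its feature columns, and the binary "label" column are hypothetical placeholders.

```python
# Minimal pandas + scikit-learn sketch: load tabular data, train a classifier,
# and print an evaluation report. File name and columns are illustrative only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("transactions.csv")

X = df.drop(columns=["label"])  # feature columns
y = df["label"]                 # binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```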
with some of the brightest technical minds in the industry today. BASIC QUALIFICATIONS - 10+ years of technical specialist, design and architecture experience - 10+ years of database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 10+ years of consulting, design and implementation of serverless distributed solutions experience - Australian citizen with ability to obtain security clearance. PREFERRED QUALIFICATIONS - AWS Professional level More ❯
/product management environment. Relevant experience with core Java and Spark. Experience in systems analysis and programming of Java applications. Experience using big data technologies (e.g., Java Spark, Hive, Hadoop). Ability to manage multiple/competing priorities and manage deadlines or unexpected changes in expectations or requirements. Prior financial services/trade surveillance experience is desirable. Strong analytical and More ❯
between different SQL databases. 5. (Mandatory) Demonstrated professional experience working with Apache NiFi. 6. (Mandatory) Demonstrated professional experience working with large data and high-performance compute clusters such as Hadoop or similar. 7. (Mandatory) Demonstrated experience with API development techniques. 8. (Mandatory) Demonstrated experience developing and deploying ETL processes for large data sets. 9. (Mandatory) Demonstrated experience creating operating More ❯
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Curo Resourcing Ltd
and CI/CD paradigms and systems such as: Ansible, Terraform, Jenkins, Bamboo, Concourse, etc. Observability/SRE. Big Data solutions (ecosystems) and technologies such as Apache Spark and the Hadoop ecosystem. Excellent knowledge of YAML or similar languages. The following Technical Skills & Experience would be desirable: JupyterHub awareness; RabbitMQ or other common queue technology, e.g. ActiveMQ; NiFi; Rego More ❯
Azure Functions. Strong knowledge of scripting languages (e.g., Python, Bash, PowerShell) for automation and data transformation. Proficient in working with databases, data warehouses, and data lakes (e.g., SQL, NoSQL, Hadoop, Redshift). Familiarity with APIs and web services for integrating external systems and applications into orchestration workflows. Hands-on experience with data transformation and ETL (Extract, Transform, Load) processes. More ❯
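As a rough illustration of the scripting-based ETL work these orchestration listings describe, here is a minimal Python sketch. The source file "orders.csv", its columns, and the SQLite target are illustrative assumptions; real pipelines would add validation, logging, and retries.

```python
# Minimal ETL sketch in Python: extract rows from a CSV, apply a simple
# transform, and load them into SQLite. All names are hypothetical.
import csv
import sqlite3

def extract(path):
    # Stream rows from the source file as dictionaries.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Normalize the fields we care about.
    for row in rows:
        yield {
            "order_id": row["order_id"].strip(),
            "amount_usd": round(float(row["amount"]), 2),
        }

def load(rows, db_path="warehouse.db"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, amount_usd REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO orders (order_id, amount_usd) "
        "VALUES (:order_id, :amount_usd)",
        rows,
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```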
users or large data sets with 10M+ database records. This is a very Big Data platform. Experience building REST services (orchestration layer) on CRUD data services based on the Cloudera Hadoop stack, with an emphasis on performance optimization. Understanding how to secure data in a REST architecture. Knowledge of scaling web applications, including load balancing, caching, indexing, normalization, etc. Proficiency … in Java/Spring web application development. Experience with Test Driven Development and Agile methodologies; Behavior Driven Development is a plus. Knowledge of Hadoop, Big Data, Hive, Pig, NoSQL is a plus, though most engineers with this background may have limited REST experience. Additional Information: All your information will be kept confidential according to EEO guidelines. Direct Staffing Inc More ❯