and vetting mission. 4. Demonstrated experience in microservice architecture using the Spring Framework, Spring Boot, Tomcat, AWS, and Docker container or Kubernetes solutions. 5. Demonstrated experience in big data solutions (Hadoop ecosystem, MapReduce, Pig, Hive, DataStax, etc.) in support of a screening and vetting mission. More ❯
innovation. What You'll Bring Experience with SQL (e.g. MS SQL, Oracle) and NoSQL (e.g. MongoDB, Neo4J) Strong skills in Python, ETL, API integration, and data processing Familiarity with Hadoop, Docker, and containerisation technologies Knowledge of Generative AI, Natural Language Processing, or OCR is a plus Understanding of data governance and compliance in sensitive environments Eligibility for SC clearance More ❯
Data Science, or Physics. • Technical Skills: Proficiency with core technical concepts, including data structures, storage systems, cloud infrastructure, and front-end frameworks. Also, familiarity with technologies like Oracle, PostgreSQL, Hadoop, Spark, AWS, or Azure. • Programming Proficiency: Expertise in programming languages such as Java, C++, Python, JavaScript, or similar. • User-Centered Approach: Understanding of how technical decisions directly impact user More ❯
Stevenage, Hertfordshire, England, United Kingdom Hybrid / WFH Options
MBDA
e.g. MS SQL, Oracle...) noSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J...) Data exchange and processing skills (e.g. ETL, ESB, API...) Development (e.g. Python) skills Big data technologies knowledge (e.g. Hadoop stack) Knowledge of NLP (Natural Language Processing) Knowledge of OCR (Optical Character Recognition) Knowledge of Generative AI (Artificial Intelligence) would be advantageous Experience in containerisation technologies (e.g. Docker) would More ❯
internal customer facing, complex and large scale project management experience - 5+ years of continuous integration and continuous delivery (CI/CD) experience - 5+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 5+ years of consulting, design and implementation of serverless distributed solutions experience - 3+ years of cloud based solution (AWS or equivalent), system, network and operating More ❯
practice of information retrieval, data science, machine learning and data mining - Experience with AWS technologies PREFERRED QUALIFICATIONS - Experience using Cloud Storage and Computing technologies such as AWS Redshift, S3, Hadoop, etc. - Experience with statistical analytics and programming languages such as R, Python, Ruby, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you More ❯
challenges. You are proficient in Python, with experience using PySpark and ML libraries such as scikit-learn, TensorFlow, or Keras. You are familiar with big data technologies (e.g., Hadoop, Spark), cloud platforms (AWS, GCP), and can effectively communicate technical concepts to non-technical stakeholders. Accommodation requests If you need assistance with any part of the application or recruiting More ❯
home, there's nothing we can't achieve in the cloud. BASIC QUALIFICATIONS - 7+ years of technical specialist, design and architecture experience - 3+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 7+ years of consulting, design and implementation of serverless distributed solutions experience - 3+ years of software development with object oriented language experience - 3+ years of More ❯
such as Python, Java, or Scala, and experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn Experience with large-scale distributed systems and big data technologies (e.g., Spark, Hadoop, Kafka) Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach More ❯
e.g., AWS, Azure, GCP) and container orchestration (e.g., Kubernetes). Excellent problem-solving skills and the ability to troubleshoot complex issues. Strong communication and collaboration skills. Experience in Cloudera, Hadoop, Cloudera Data Science Workbench (CDSW), Cloudera Machine Learning (CML) would be highly desirable. Work Environment: Full time on-site presence required If you don't meet every single requirement More ❯
/product management environment Relevant experience within core Java and Spark Experience in systems analysis and programming of Java applications Experience using big data technologies (e.g. Java Spark, Hive, Hadoop) Ability to manage multiple/competing priorities and manage deadlines or unexpected changes in expectations or requirements Prior financial services/trade surveillance experience is desirable Strong analytical and More ❯
heterogeneous data sources. • Good knowledge of warehousing and ETLs. Extensive knowledge of popular database providers such as SQL Server, PostgreSQL, Teradata and others. • Proficiency in technologies in the Apache Hadoop ecosystem, especially Hive, Impala and Ranger • Experience working with open file and table formats such as Parquet, AVRO, ORC, Iceberg and Delta Lake • Extensive knowledge of automation and software development More ❯
native tech stack in designing and building data & AI solutions Experience with data modeling, ETL processes, and data warehousing Knowledge of big data tools and frameworks such as Spark, Hadoop, or Kafka More ❯
in multiple programming languages such as Bash, Python, or Go Must have a DoD 8140/8570 compliance certification (e.g. Security+) Preferred Experience with big data technologies such as Hadoop, Spark, MongoDB, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers and Kubernetes More ❯
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Experis - ManpowerGroup
Using machine learning tools to select features, create and optimize classifiers Qualifications: Programming Skills - knowledge of statistical programming languages like Python, and database query languages like SQL, Hive/Hadoop, Pig is desirable. Familiarity with Scala and Java is an added advantage. Statistics - Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, etc. Proficiency More ❯
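The feature-selection and classifier-optimization task this listing describes can be sketched in scikit-learn. This is a minimal illustrative example, not part of the job specification: the dataset, model, and hyperparameter grid are all assumptions chosen for brevity.

```python
# Illustrative sketch: select features, then fit and tune a classifier.
# Dataset and model choices are hypothetical, for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keeping selection, scaling, and the classifier in one pipeline means
# feature selection is re-fitted inside every cross-validation fold,
# avoiding leakage from the held-out fold.
pipe = Pipeline([
    ("select", SelectKBest(f_classif)),
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Jointly optimise the number of features kept and the regularisation strength.
param_grid = {"select__k": [5, 10, 20], "clf__C": [0.1, 1.0, 10.0]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", round(search.score(X_test, y_test), 3))
```

Grid search over the pipeline is the usual way to tune feature selection and classifier together, since the best `k` often depends on how strongly the model is regularised.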
SQL, Oracle) and noSQL skills (e.g. MongoDB, Neo4J) Proficiency in data exchange and processing tools (ETL, ESB, APIs) Development skills in Python and familiarity with Big Data technologies (e.g. Hadoop) Knowledge of NLP and OCR; Generative AI expertise advantageous Understanding of containerisation (Docker) and, ideally, the industrial/defence sector The Package: Company bonus: Up to £2,500 (based More ❯
real-time intelligence, BI, Copilot), Azure Databricks, Purview Data Governance, and Azure Databases: SQL DB, Cosmos DB, PostgreSQL. Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop & BI solutions. Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums. Qualifications Proven technical pre-sales or technical consulting experience. OR Bachelor's Degree in More ❯
DBMS, ORM (Hibernate), and APIs. Active TS/SCI Clearance with Full Scope Polygraph. Preferred Skills: Microservices, REST, JSON, XML. CI/CD, Docker, Kubernetes, Jenkins. Big Data technologies (Hadoop, Kafka, Spark). CISSP, Java, AWS certifications a plus. Ready to Transform Your Career? Join Trinity Enterprise Services-where professional growth meets personal fulfillment. Apply today to become a More ❯
. noSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J). Data exchange and processing skills (e.g. ETL, ESB, API). Development skills (e.g. Python). Big data technologies knowledge (e.g. Hadoop stack). This position offers a lucrative benefits package, which includes but is not limited to: Salary: Circa £45,000 - £55,000 (depending on experience) Bonus scheme: Up to More ❯
Agile working practices CI/CD tooling Scripting experience (Python, Perl, Bash, etc.) ELK (Elastic stack) JavaScript Cypress Linux experience Search engine technology (e.g., Elasticsearch) Big Data Technology experience (Hadoop, Spark, Kafka, etc.) Microservice and cloud native architecture Desirable skills Able to demonstrate experience of troubleshooting and diagnosis of technical issues. Able to demonstrate excellent team-working skills. Strong More ❯
Azure or AWS) Experience in identifying, gathering and prioritizing non-functional requirements, technical pain points and environment/client constraints Experience in scaling data workloads using platforms such as Hadoop and Spark Experience with CI/CD and MLOps pipelines Experience with Infrastructure as Code Experience in collaborating with pre-sales, presenting data solutions and demonstrating technical capabilities More ❯