innovation.

What You'll Bring
- Experience with SQL (e.g. MS SQL, Oracle) and NoSQL (e.g. MongoDB, Neo4j) databases
- Strong skills in Python, ETL, API integration, and data processing
- Familiarity with Hadoop, Docker, and containerisation technologies
- Knowledge of Generative AI, Natural Language Processing, or OCR is a plus
- Understanding of data governance and compliance in sensitive environments
- Eligibility for SC clearance
senior leadership

PREFERRED QUALIFICATIONS
- Experience working as a BIE in a technology company
- Experience with AWS technologies
- Experience using cloud storage and computing technologies such as AWS Redshift, S3, Hadoop, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application …
- Previous experience as a Data Engineer (3-5 years)
- Deep expertise in designing and implementing solutions on Google Cloud
- Strong interpersonal and stakeholder management skills
- In-depth knowledge of Hadoop, Spark, and similar frameworks
- In-depth knowledge of programming languages including Java
- Expert in cloud-native technologies, IaC, and Docker tools
- Excellent project management skills
- Excellent communication skills
- Proactivity …
models in production and adjusting model thresholds to improve performance
- Experience designing, running, and analyzing complex experiments or leveraging causal inference designs
- Experience with distributed tools such as Spark, Hadoop, etc.
- A PhD or MS in a quantitative field (e.g., Statistics, Engineering, Mathematics, Economics, Quantitative Finance, Sciences, Operations Research)

Office-assigned Stripes spend at least 50% of the time …
internal customer facing, complex and large scale project management experience
- 5+ years of continuous integration and continuous delivery (CI/CD) experience
- 5+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience
- 5+ years of consulting, design and implementation of serverless distributed solutions experience
- 3+ years of cloud based solution (AWS or equivalent), system, network and operating …
NLP technologies
- Proficiency in Python and/or Scala; experience with ML libraries such as TensorFlow, PyTorch, HuggingFace, or scikit-learn
- Experience with Databricks, distributed data systems (e.g., Spark, Hadoop), and cloud platforms (AWS, GCP, or Azure)
- Ability to thrive in ambiguous environments, working closely with cross-functional teams to define and deliver impactful solutions
- Strong communication skills with …
- Automation & Configuration Management: Ansible (plus Puppet, SaltStack), Terraform, CloudFormation
- Programming Languages and Frameworks: Node.js, React/Material-UI (plus Angular), Python, JavaScript
- Big Data Processing and Analysis: e.g. Apache Hadoop (CDH), Apache Spark
- Operating Systems: Red Hat Enterprise Linux, CentOS, Debian, or Ubuntu.
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Curo Resourcing Ltd
and CI/CD paradigms and systems such as: Ansible, Terraform, Jenkins, Bamboo, Concourse, etc.
- Observability - SRE
- Big Data solutions (ecosystems) and technologies such as Apache Spark and the Hadoop ecosystem
- Excellent knowledge of YAML or similar languages

The following Technical Skills & Experience would be desirable:
- JupyterHub awareness
- RabbitMQ or other common queue technology, e.g. ActiveMQ
- NiFi
- Rego …
Good to have: Interview includes coding test.

Job Description: Scala/Spark
- Strong Big Data experience with the following skillset: Spark, Scala, Hive/HDFS/HQL
- Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.)
- Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage
- Consistently demonstrates clear and concise written …
such will have a large focus on training and building your skill set. You will have a genuine opportunity to build your knowledge around cloud and big data, particularly Hadoop and related data science technology. This is a fantastic opportunity to join one of the most innovative and exciting technology companies in London and build a career in the …
Data Scientist - skills in statistics, physics, mathematics, Computer Science, Engineering, Data Mining, Big Data (Hadoop, Hive, MapReduce)

This is an exceptional opportunity to work as a Data Scientist within a global analytics team, utilizing various big data technologies to develop complex behavioral models, analyze customer uptake of products, and foster new product innovation. Responsibilities include: Generating and reviewing large …
SQL, Oracle) and NoSQL skills (e.g. MongoDB, Neo4j)
- Proficiency in data exchange and processing tools (ETL, ESB, APIs)
- Development skills in Python and familiarity with Big Data technologies (e.g. Hadoop)
- Knowledge of NLP and OCR; Generative AI expertise advantageous
- Understanding of containerisation (Docker) and, ideally, the industrial/defence sector

The Package: Company bonus: up to £2,500 (based …
classifiers

Required Skills/Experience
The ideal candidate will have the following:
- Programming skills: knowledge of statistical programming languages like Python, and database query languages like SQL; Hive/Hadoop and Pig are desirable. Familiarity with Scala and Java is an added advantage.
- Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, etc.
- Proficiency …
their own Cloud estate. Responsibilities include:
- DevOps tooling/automation written with Bash/Python/Groovy/Jenkins/Golang
- Provisioning software/frameworks (Elasticsearch/Spark/Hadoop/PostgreSQL)
- Infrastructure management: Configuration as Code and Infrastructure as Code (Ansible, Terraform, Packer)
- Log and metric aggregation with Fluentd, Prometheus, Grafana, Alertmanager
- Public Cloud, primarily GCP, but also AWS and Azure
- NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4j)
- Data exchange and processing skills (e.g. ETL, ESB, API)
- Development skills (e.g. Python)
- Big Data technologies knowledge (e.g. Hadoop stack)

This position offers a lucrative benefits package, which includes but is not limited to: Salary: circa £45,000 - £55,000 (depending on experience); bonus scheme: up to …
and machine learning

PREFERRED QUALIFICATIONS
- Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc.
- Experience with large-scale distributed systems such as Hadoop, Spark, etc.
- Master's degree in math/statistics/engineering or other equivalent quantitative discipline, or PhD
up (Windows, Linux, SQL, Redis, WebApps, etc.); experience with containerization

Preferred skills:
- Kubernetes or other orchestration tools
- New Relic scripted monitors
- Experience with security tools (including Check Point) and best practices
- Hadoop, Kafka, Presto (or similar BI/Big Data tech)

Sphera is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. …
home, there's nothing we can't achieve in the cloud.

BASIC QUALIFICATIONS
- 7+ years of technical specialist, design and architecture experience
- 5+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience
- 7+ years of consulting, design and implementation of serverless distributed solutions experience
- 5+ years of software development with object-oriented language experience
- 3+ years of …
of influencing C-suite executives and driving organizational change
• Bachelor's degree, or 7+ years of professional or military experience
• Experience in technical design, architecture and databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis)
• Experience implementing serverless distributed solutions
• Software development experience with object-oriented languages and deep expertise in AI/ML

PREFERRED QUALIFICATIONS
• Proven ability to shape market …
more of the following areas: Software Design or Development, Content Distribution/CDN, Scripting/Automation, Database Architecture, Cloud Architecture, Cloud Migrations, IP Networking, IT Security, Big Data/Hadoop/Spark, Operations Management, Service Oriented Architecture, etc.
- Experience in a 24x7 operational services or support environment.
- Experience with AWS Cloud services and/or other Cloud offerings.

Our …