Job Description Title: DevSecOps Engineer Skills: DevOps, Spark, Hadoop, PL/SQL, Tableau Location: London - Hybrid Duration: 11 Months We are IT Recruitment Specialists partnered with a prestigious Global Consultancy who requires a DevSecOps Engineer for one of their Clients based in London (Hybrid). IR35: This role is inside IR35. Job Description: a. Overall 5-7 years of IT experience b. Strong experience with DevOps principles c. Strong experience in Spark, Tableau, Hadoop, PL/SQL d. Good experience working with the AWS platform e. Good exposure to ITIL processes, including incident, problem, and change management …
Wandsworth, Greater London, Dundonald, United Kingdom
DataBuzz
Databricks setup using Terraform experience. * Experience of MLOps and DataOps. * Experience of using container technologies, cloud platforms (ideally AWS), and distributed processing frameworks like Spark and Dask. * Experience in JavaScript application development and UI design. * Expertise in developing mobile applications. * Familiarity with the agile software development process. If you …
CANDIDATES MUST HAVE SC CLEARANCE a. Overall 5-7 years of IT experience b. Strong experience with DevOps principles c. Strong experience in Spark, Tableau, Hadoop, PL/SQL d. Good experience working with the AWS platform e. Good exposure to ITIL processes, including incident, problem, and change management …
or Rust. Experience in building and enhancing compute, storage, and data platforms, with exposure to open-source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS, etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement with or contributions to the open-source …
working closely with our product teams on existing projects and new innovations to support company growth and profitability. Our Tech Stack: Python, Scala, Kotlin, Spark, Google Pub/Sub, Elasticsearch, BigQuery, PostgreSQL, Kubernetes, Docker, Airflow. Key Responsibilities: Designing and implementing scalable data pipelines using tools such as Apache Spark … Data Infrastructure projects, as well as designing and building data-intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark. Expert knowledge in one or more of the following languages: Python, Scala, Java, Kotlin. Deep knowledge of data modelling, data access, and …
analysis. Your expertise will be instrumental in ensuring the security and efficiency of the data handling and reporting processes. Key Responsibilities: Data Processing: Utilize Apache Spark, AWS RDS, and Hadoop to process large datasets efficiently and securely. Reporting: Generate comprehensive and insightful reports using Tableau. Business Rules Management … adherence to best practices and maintaining high security standards. Requirements: Security Clearance: Must hold a current and valid Security Clearance. Technical Skills: Proficient with Apache Spark, AWS RDS, and Hadoop. Experienced in using Tableau for data visualization and reporting. Familiarity with Red Hat Decision Manager for business rules …
delivering moderate-to-complex data flows as part of a development team in collaboration with others. You’ll be confident using technologies such as Apache Kafka, Apache NiFi, SAS DI Studio, or other data integration platforms. You can implement, deliver, and translate several data models, including unstructured data … and recognised standards to build solutions using various traditional or big data languages such as SQL, PL/SQL, SAS Macro Language, Python, Scala, Apache Spark, Java, JavaScript, etc., using various tools including SAS, Hue (Hive/Impala), Kibana (Elasticsearch). Knowledge of data management on Cloud …
London, England, United Kingdom Hybrid / WFH Options
Morgan McKinley
NumPy, scikit-learn). Understanding of database technologies (ETL) and SQL proficiency for data manipulation, data mining, and querying. Knowledge of big data tools (Spark or Hadoop a plus). Power BI, dashboard design/development. Regulatory Awareness/Compliance: Uphold regulatory/compliance requirements relevant to your role …
with Git for version control and project management, alongside some knowledge of Linux/Shell. Data platform familiarity: previous experience of working with both the Apache Spark and MapReduce data processing and analytics frameworks. Reporting expertise: experience with Tableau, Power BI, and Excel, alongside notebooks for experiment documentation. What …
Basingstoke, England, United Kingdom Hybrid / WFH Options
Intec Select
cross-functionally across the business to understand the requirements of the products. Designing and implementing performance-related data ingestion pipelines from multiple sources using Apache Spark. Integrating end-to-end data pipelines, ensuring a high level of quality is maintained. Working with an Agile delivery/DevOps methodology …
Cheltenham, England, United Kingdom Hybrid / WFH Options
Yolk Recruitment Ltd
software solutions. Skills Required: In-depth experience designing & building backend applications in Python. Familiarity with Big Data and Machine Learning technologies (NumPy, PyTorch, TensorFlow, Spark). Experience developing in a highly Agile/Scrum environment. Familiarity with CI/CD, containerisation, deployment technologies & cloud platforms (Jenkins, Kubernetes, Docker, AWS). Benefits …
Birmingham, England, United Kingdom Hybrid / WFH Options
⭕️ Nimbus®
as Python, C#, .NET and/or JavaScript is highly desirable. Experience with cloud platforms (e.g., Azure) and data technologies (e.g., SQL, NoSQL, Hadoop, Spark). PLEASE NOTE: You must have either UK citizenship or permanent leave to remain in the UK. Due to the high volume of applications …
London, England, United Kingdom Hybrid / WFH Options
Harnham
DevOps experience in CI/CD. Experience using TensorFlow, Kubernetes, MLflow, Kafka, and Airflow. Experience using Python is a must (tools like AWS and Spark are beneficial). Excellent communication skills and team and colleague engagement. A keen interest in problem-solving and using scalable machine learning to solve the …
ideally in a start-up or scale-up. - Machine learning libraries and frameworks (TensorFlow, PyTorch, scikit-learn). - Python. - Big data processing tools (e.g., Spark). The role offers a salary of between £70-100K depending on experience. The successful candidate must be able to work from …
on Kubernetes with Helm/Terraform. Good to have: prior experience dealing with streaming and batch compute frameworks such as Spring Kafka, Kafka Streams, Flink, Spark Streaming, and Spark. Experience with large-scale computing platforms, such as Hadoop, Hive, Spark, and NoSQL stores. Experience with developing large-scale data pipelines …
role. Good level of experience of Data Lake/Hadoop platform implementation. Good level of hands-on experience in implementing and performance-tuning Hadoop/Spark implementations. Experience with Apache Hadoop and the Hadoop ecosystem. Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, ZooKeeper, HCatalog, Solr … Avro). Experience with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto). Experience developing software code in one or more programming languages (Java, Python, etc.). Preferred Qualifications: Masters or PhD in Computer Science, Physics, Engineering, or Maths. Hands-on experience leading large-scale global data warehousing …
SQL Server, Sybase, Snowflake) Document databases (e.g. Mongo, ArangoDB, Couchbase, Solr) Big Data (e.g. Hadoop ecosystem, Bigtable) Data streaming (e.g. Kafka, Flink, Pulsar, Beam, Spark) Cloud databases (e.g. Snowflake, CockroachDB) Other database genres (e.g. graph, columnar, time series) In return, we’ll give you… a competitive basic salary … scheme A high-spec laptop (of course!) Need more reasons? Here are a few more... Work with some of the most exciting new technologies. Spark off co-workers who’ll challenge your thinking and help you to achieve your potential. Deal openly and honestly with customers. Benefit from a …
City of London, London, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks. Good knowledge of Azure DevOps Pipelines. Strong experience in the Apache Spark framework. Previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS). Knowledge …
Manchester, North West, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks. Good knowledge of Azure DevOps Pipelines. Strong experience in the Apache Spark framework. Previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS). Knowledge …
City of London, London, United Kingdom Hybrid / WFH Options
Oliver Bernard Ltd
Experience with a JVM language: Kotlin, Java, Scala, Clojure. Knowledge of TypeScript and React is beneficial. Exposure to data pipelines using technologies such as Spark and Kafka. Experience with cloud services (ideally AWS). Hybrid working: 1-2 days per week in Central London. £110,000 depending on experience. Please …
Penrith, Cumbria, United Kingdom Hybrid / WFH Options
Computer Futures
Databricks, Azure SQL (indicative experience = 5yrs+). Build and test processes supporting data extraction, data transformation, data structures, metadata, dependency, and workload management. Knowledge of Spark architecture and modern data warehouse/data lake/lakehouse techniques. Build transformation tables using SQL. Moderate-level knowledge of Python/PySpark or equivalent …
with JavaScript or Python. Experience deploying software into the cloud and on-premise. Developing software products. Experience with EKS, Kubernetes, OpenSearch/Elasticsearch, MongoDB, Spark, or NiFi. Experience with microservices architectures. Experience with AI/ML systems. TO BE CONSIDERED… Please either apply by clicking online or emailing …