Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks such as Apache Spark is a plus, and Airflow would be a bonus. Role overview: If you're looking to work with more »
comfortable designing and constructing bespoke solutions and components from scratch to solve the hardest problems. Adept in Java, Scala, and big data technologies like Apache Kafka and Apache Spark, they bring a deep understanding of engineering best practices. This role involves scoping, sizing, and estimating … be considered. Key responsibilities of the role are summarised below: Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark. Architect cloud-based solutions capable of handling petabytes of data. Lead the automation of CI/CD pipelines for more »
Data Analytics in Azure Synapse Analytics, Azure Analysis Services. Data Ingestion and Storage including Azure Data Factory, Azure Databricks, Azure Data Lake, Kafka and Spark Streaming, Azure Event Hubs/IoT Hub, and Azure Stream Analytics. Experience with object-oriented/functional scripting languages: Python preferred more »
working in an Azure environment with a focus on Azure Data Lake, Databricks, Azure Data Factory and Power BI architectures. Working with Python, SQL, Spark and Databricks, you will build, rebuild and maintain data pipelines as well as have the opportunity to work with CI/CD pipelines, Azure more »
Manchester, North West, United Kingdom Hybrid / WFH Options
Dupen Ltd
Linux, APIs, infrastructure design – load balancing, VMs, PostgreSQL, vector databases. Machine Learning Engineer – desirable skills: Version control (Git), computer vision libraries, Big Data (Hadoop, Spark), Cloud – AWS, Google Cloud, Azure, and a knowledge of secure coding techniques – PCI-DSS, PA-DSS, ISO 27001. Note: as there are actually two roles more »
Employment Type: Permanent
Salary: £50000 - £60000/annum To £60,000 + range of benefits
RDBMS environments: Sybase ASE/IQ, Oracle or DB2. It would be great if you have: Experience in Cluster Computing and Big Data solutions: Spark, Hadoop, HDFS, XRS using public cloud. Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work more »
Manchester, North West, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks. Good knowledge of Azure DevOps Pipelines. Strong experience with the Apache Spark framework. Previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS). Knowledge more »
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Senitor Associates Limited
within Software Engineering to explore new technologies. Contribute to a team culture that prioritises diversity, equity, inclusion, and respect. Required Skills: Expertise in Java, Spark, SQL, relational databases, and NoSQL, with a focus on performance optimization. A thorough understanding of the Software Development Life Cycle and agile methodologies, including CI more »
Manchester, England, United Kingdom Hybrid / WFH Options
Roku
in applied machine learning on real use cases (brownie points for productionised reinforcement learning use cases!) Experience with ML/distributed ML frameworks like Spark MLlib, TensorFlow, etc. Experience with real-time scoring/evaluation of models under low-latency constraints. Great coding skills and strong software development experience … we use Spark, Python and Java a lot). Can work with large-scale computing frameworks, data analysis systems and modelling environments; examples include technologies like Spark, Hive, NoSQL stores, etc. Bachelor's, Master's, or PhD in Computer Science/Statistics or a related field. Ad-tech background is more »
scripting. * Minimum of 1 year of experience in investment banking or the financial sector. * Performance tuning of Oracle/MySQL/Hive SQL queries/Spark SQL statements. * Experience working with large databases – multi-terabyte (3+ terabytes). * Minimum of 5 years' experience in the Big Data space (Hive, Impala … Spark SQL, HDFS, etc.). * Any cloud experience (AWS/Azure/Google/Oracle). * Solid experience with Oracle objects (packages, procedures, functions). * A very clear understanding of Oracle architecture. * Very strong debugging skills. * Proficient in query tuning. * Detail-oriented. * Strong written and verbal communication skills. * Ability to work more »