About the job
Scala Spark Developer - London
Opening: Scala Spark Developer (Hybrid from London)
Client Introduction: The company is a multinational Swedish SaaS product-based firm. Company Strength: 100+
Job Description: The Scala Spark Developer at HCL will be responsible for leading technical teams and projects related to Apache Spark, Scala, and Python. The role … involves overseeing the design, development, and implementation of scalable and efficient solutions using these technologies.
Key Responsibilities
1. Lead technical teams in the design and implementation of solutions using Apache Spark, Scala, and Python
2. Provide technical expertise and guidance to team members in resolving complex technical issues
3. Collaborate with stakeholders to gather requirements and define project …
5. Conduct code reviews and performance optimization activities
6. Troubleshoot and debug technical issues to ensure seamless project delivery
7. Stay updated with the latest trends and advancements in Apache Spark, Scala, and Python technologies
8. Mentor team members and facilitate knowledge sharing within the team
Skill Requirements
1. Strong proficiency in Apache Spark, Scala, and …
using RDBMS, NoSQL and Big Data technologies.
Data visualization – Tools like Tableau
Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive
Data processing frameworks – Spark & Spark Streaming
Hands-on experience with multiple databases like PostgreSQL, Snowflake, Oracle, MS SQL Server, NoSQL (HBase/Cassandra, MongoDB)
Experience in cloud data ecosystems - AWS, Azure …
to refine and monitor data collection systems using Scala and Java. Apply sound engineering principles such as test-driven development and modular design.
Preferred Background
Hands-on experience with Spark and Scala in commercial environments.
Familiarity with Java and Python.
Exposure to distributed data systems and cloud storage platforms.
Experience designing data schemas and analytical databases.
Use of AI …
City of London, London, United Kingdom Hybrid / WFH Options
Fortice
between the data warehouse and other systems.
Create deployable data pipelines that are tested and robust, using a variety of techniques depending on the available technologies (NiFi, Spark).
Build analytics tools that utilise the data pipeline to provide actionable insights into client requirements, operational efficiency, and other key business performance metrics.
Complete onsite client visits and provide …
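The "deployable data pipelines that are tested" above can be sketched as a chain of small, individually testable transform steps; this is a plain-Python illustration of the pattern, not Fortice's actual stack, and the function names are hypothetical.

```python
# A pipeline as a composition of small, testable transforms.
def clean(rows):
    """Normalise raw string records: trim, lowercase, drop empties."""
    return [r.strip().lower() for r in rows if r.strip()]

def deduplicate(rows):
    """Remove duplicates; sort for a deterministic, testable output."""
    return sorted(set(rows))

def pipeline(rows):
    """Full pipeline: each stage can be unit-tested in isolation."""
    return deduplicate(clean(rows))

out = pipeline([" Spark ", "NiFi", "spark", ""])
print(out)
```

Keeping each stage a pure function is what makes the pipeline "tested and robust": every transform can be asserted against fixtures before the whole chain is deployed.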
City of London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
knowledge of Kafka, Confluent, Databricks, Unity Catalog, and cloud-native architecture.
Skilled in Data Mesh, Data Fabric, and product-led data strategy design.
Experience with big data tools (e.g., Spark), ETL/ELT, SQL/NoSQL, and data visualisation.
Confident communicator with a background in consultancy, stakeholder management, and Agile delivery.
Want to hear more? Message me anytime. Linked …
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs
Set up monitoring and ensure data health for outputs
What we are looking for
Proficiency in Python, with experience in Apache Spark and PySpark
Previous experience with data analytics software
Ability to scope new integrations and translate user requirements into technical specifications
What’s in it for you?
Base …
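The "monitoring and data health" responsibility above can be sketched in plain Python, with no Spark dependency; the field names and threshold are hypothetical, and a real deployment would run an equivalent check over PySpark DataFrames.

```python
# Minimal data-health check: flag fields whose null rate exceeds a threshold.
def check_health(rows, required_fields, max_null_rate=0.05):
    """Return a per-field report of null rates and a pass/fail flag."""
    report = {}
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows) if rows else 1.0  # empty input counts as unhealthy
        report[field] = {"null_rate": rate, "healthy": rate <= max_null_rate}
    return report

rows = [{"id": 1, "value": 10}, {"id": 2, "value": None}]
report = check_health(rows, ["id", "value"], max_null_rate=0.4)
print(report)
```

A check like this would typically run after each pipeline execution, with unhealthy fields raising alerts rather than silently propagating downstream.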
Strong track record delivering production-grade ML models
Solid grasp of MLOps best practices
Confident speaking to technical and non-technical stakeholders
🛠️ Tech you’ll be using: Python, SQL, Spark, R, MLflow, vector databases, GitHub/GitLab/Azure DevOps, Jira, Confluence
🎓 Bonus points for: MSc/PhD in ML or AI; Databricks ML Engineer (Professional) certified …
or AI, including leadership roles. Deep expertise in machine learning, NLP, and predictive modelling. Proficient in Python or R, cloud platforms (AWS, GCP, Azure), and big data tools (e.g. Spark). Strong business acumen, communication skills, and stakeholder engagement. If this role is of interest, please apply here. Please note - this role cannot offer visa sponsorship.
and applying best practices in security and compliance, this role offers both technical depth and impact.
Key Responsibilities
Design & Optimise Pipelines - Build and refine ETL/ELT workflows using Apache Airflow for orchestration.
Data Ingestion - Create reliable ingestion processes from APIs and internal systems, leveraging tools such as Kafka, Spark, or AWS-native services.
Cloud Data Platforms - Develop … DAGs and configurations.
Security & Compliance - Apply encryption, access control (IAM), and GDPR-aligned data practices.
Technical Skills & Experience
Proficient in Python and SQL for data processing.
Solid experience with Apache Airflow - writing and configuring DAGs.
Strong AWS skills (S3, Redshift, etc.).
Big data experience with Apache Spark.
Knowledge of data modelling, schema design, and partitioning.
Understanding of …
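The Airflow work described above boils down to declaring task dependencies as a DAG and letting the scheduler derive a valid execution order. This stdlib-only sketch shows that core idea with `graphlib` (no Airflow install assumed; task names are hypothetical).

```python
# A DAG's execution order is a topological sort of its dependency graph,
# which is exactly what Airflow's scheduler computes from >> relationships.
from graphlib import TopologicalSorter

# Maps each task to the set of upstream tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "notify": {"load"},
}
order = list(TopologicalSorter(dag).static_order())
print(order)  # upstream tasks always precede downstream ones
```

In a real Airflow DAG the same dependencies would be written as `extract >> transform >> load >> notify` inside a `DAG` context, and the scheduler would additionally handle retries, schedules, and parallelism.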
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
AI to move faster and smarter. You will be experienced in AI and enjoy writing code.
Responsibilities
Build and maintain scalable distributed systems using Scala and Java
Design complex Spark jobs, asynchronous APIs, and parallel processes
Use Gen AI tools to enhance development speed and quality
Collaborate in Agile teams to improve their data collection pipelines
Apply best practices … structures, algorithms, and design patterns effectively
Foster empathy and collaboration within the team and with customers
Preferred Experience
Degree in Computer Science or equivalent practical experience
Commercial experience with Spark, Scala, and Java (Python is a plus)
Strong background in distributed systems (Hadoop, Spark, AWS)
Skilled in SQL/NoSQL (PostgreSQL, Cassandra) and messaging tech (Kafka, RabbitMQ)
Experience …
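A Spark job of the kind this role designs, e.g. `rdd.map(...).reduceByKey(add)`, reduces conceptually to a map phase, a shuffle that groups by key, and a per-key reduction. This dependency-free Python sketch mirrors that shape; it is an illustration of the execution model, not the actual distributed implementation.

```python
# Word-count-style aggregation in the map -> shuffle -> reduce shape.
from collections import defaultdict
from functools import reduce
from operator import add

pairs = [("spark", 1), ("scala", 1), ("spark", 1)]  # output of the map phase

groups = defaultdict(list)
for key, value in pairs:          # "shuffle": group values by key
    groups[key].append(value)

counts = {k: reduce(add, vs) for k, vs in groups.items()}  # reduce phase
print(counts)
```

In real Spark the grouping happens across executors over the network, which is why minimising shuffled data is central to the performance-optimisation work these roles mention.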
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
you prefer
Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave.
What You'll Be Doing:
Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python
Design low-latency APIs and asynchronous processes for high-volume data.
Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product.
Apply best practices in distributed computing, TDD, and system design.
What We're Looking For:
Strong experience with Python, Spark, Scala, and Java in a commercial setting.
Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka).
Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra).
Familiarity with …
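The "asynchronous processes for high-volume data" mentioned above usually means fanning out many concurrent I/O operations rather than handling them serially. This is a minimal `asyncio` sketch of that pattern; `fetch` is a stub standing in for a real API or database call.

```python
# Fan out concurrent fetches and gather results in submission order.
import asyncio

async def fetch(record_id):
    """Stub for an I/O-bound call (HTTP request, DB query, etc.)."""
    await asyncio.sleep(0)  # yields control, as real I/O awaits would
    return {"id": record_id, "ok": True}

async def main(ids):
    # gather() runs the coroutines concurrently and preserves input order.
    return await asyncio.gather(*(fetch(i) for i in ids))

results = asyncio.run(main([1, 2, 3]))
print(results)
```

With real network calls, the concurrency means total latency approaches that of the slowest single call instead of the sum of all of them, which is the point of asynchronous APIs for high-volume data.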
providing industry solutions for Financial Services, Manufacturing, Life Sciences and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13+ billion.
Spark - Must Have
Scala - Must Have
Hive & SQL - Must Have
Recent hands-on experience with Scala coding is required.
Banking/Capital Markets Domain - Good to Have
Interview includes coding test.
Job Description: Scala/Spark
• Good Big Data resource with the below skillset:
§ Spark
§ Scala
§ Hive/HDFS/HQL
• Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.)
• Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage.
• Consistently demonstrates clear and concise written and verbal communication
• A …
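The Spark Streaming experience this posting asks for centres on micro-batching: events are bucketed into fixed time windows and aggregated per window. This stdlib-only sketch shows the tumbling-window idea in plain Python; the timestamps, event types, and window size are hypothetical.

```python
# Tumbling-window count: bucket events by fixed time window and aggregate.
from collections import Counter

events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]  # (seconds, type)
window = 5  # window length in seconds

per_window = Counter()
for ts, _ in events:
    per_window[ts // window] += 1  # integer division assigns the window index
print(dict(per_window))
```

Spark Streaming applies the same bucketing logic continuously over an unbounded stream, adding fault tolerance and distribution; the windowed aggregation itself is no more than this.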