Senior Java Engineers - Spark Belfast (Hybrid) We're looking for experienced Senior Java Engineers to join an exciting data-driven project! This role involves active development (no support work) on a batch data processing platform, working across both greenfield and existing project streams. Some elements of the work … computing Big Data & Distributed Systems Knowledge: Strong understanding of how distributed systems function, particularly in large-scale data environments. Hands-on experience with Hadoop, Apache Hive, or similar big data technologies. Apache Spark Expertise (Mandatory): Experience in batch data processing using Apache Spark. Writing and … optimizing Spark jobs for large datasets. Performance tuning for Spark-based applications. Database & Storage Experience: Comfortable working with relational databases (e.g., Oracle, PostgreSQL). Exposure to big data storage solutions. Skills: Java, Spark, Java 11, Java Programming
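The mandatory Spark experience above centres on batch aggregation over keyed records. The sketch below shows the underlying reduce-by-key pattern in plain Python (no Spark dependency); the order records and the `reduce_by_key` helper are illustrative, not part of the listing.

```python
from collections import defaultdict
from functools import reduce

# Illustrative batch records: (customer_id, order_total) pairs.
orders = [("c1", 10.0), ("c2", 5.0), ("c1", 7.5), ("c3", 2.0), ("c2", 1.0)]

def reduce_by_key(pairs, combine):
    """Group (key, value) pairs, then fold each group with `combine`,
    the same contract as Spark's RDD.reduceByKey."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce(combine, values) for key, values in groups.items()}

totals = reduce_by_key(orders, lambda a, b: a + b)
print(totals)  # {'c1': 17.5, 'c2': 6.0, 'c3': 2.0}
```

In real Spark the same combine function is applied per partition first, which is what keeps the shuffle small on large datasets.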
systems. No boring support work, just pure, hands-on development with cutting-edge tech. The Tech Stack: - Java (11+) You've got strong, commercial, hands-on experience - Spark Ideally, you've dabbled (or mastered) - Finance/Banking experience? Cool, but not a deal-breaker The Perks: - A modern Belfast office Close to great … world challenges, then this is your chance to make an impact while working with a fantastic, high-performance team. Interested? Skills: Java, Java Programming, Spark, Apache Spark, Scala
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Yelp USA
to the experimentation and development of new ad products at Yelp. Design, build, and maintain efficient data pipelines using large-scale processing tools like Apache Spark to transform ad-related data. Manage high-volume, real-time data streams using Apache Kafka and process them with frameworks like … Apache Flink. Estimate timelines for projects, feature enhancements, and bug fixes. Work with large-scale data storage solutions, including Apache Cassandra and various data lake systems. Collaborate with cross-functional teams, including engineers, product managers and data scientists, to understand business requirements and translate them into effective system … a proactive approach to identifying opportunities and recommending scalable, creative solutions. Exposure to some of the following technologies: Python, AWS Redshift, AWS Athena/Apache Presto, Big Data technologies (e.g. S3, Hadoop, Hive, Spark, Flink, Kafka etc.), NoSQL systems like Cassandra, DBT is nice to have. What you …
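The Kafka and Flink responsibilities in the Yelp listing typically mean windowed aggregation over a keyed event stream. Here is a dependency-free sketch of a tumbling-window count in plain Python; the event shape and field values are invented for illustration.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Count events per (window, key), where each event is a
    (timestamp_seconds, key) pair -- the shape of a keyed tumbling
    window in Flink's DataStream API."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size) * window_size
        counts[(window_start, key)] += 1
    return dict(counts)

# Illustrative ad-click events: (timestamp, ad_id).
events = [(0, "ad1"), (3, "ad1"), (7, "ad2"), (12, "ad1"), (14, "ad2")]
print(tumbling_window_counts(events, window_size=10))
# {(0, 'ad1'): 2, (0, 'ad2'): 1, (10, 'ad1'): 1, (10, 'ad2'): 1}
```

A production Flink job adds what this sketch omits: event-time watermarks and state that survives restarts.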
for? Experience in the design and deployment of production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, Scala, Spark, SQL. Experience performing tasks such as writing scripts, extracting data using APIs, writing SQL queries etc. Ability to work closely with other engineering teams to …
HBase, Elasticsearch). Build, operate, maintain, and support cloud infrastructure and data services. Automate and optimize data engineering pipelines. Utilize big data technologies (Databricks, Spark). Develop custom security applications, APIs, AI/ML models, and advanced analytic technologies. Experience with threat detection in Azure Sentinel, Databricks, MPP Databases …
data within multiple EDRMS and Content Management Systems. Understanding of streaming data technologies and methodologies. Experience in mainstream Cloud Data Lakehousing platforms (such as Apache Spark, Microsoft Fabric, Databricks, Snowflake) and associated industry standard/portable data formats (e.g., Delta Lake, Iceberg, Parquet, CSV, JSON, Avro, ORC, and …
Birmingham, England, United Kingdom Hybrid / WFH Options
Talent
Python, R, or SQL. • Experience with machine learning frameworks (e.g., Scikit-learn, TensorFlow, PyTorch). • Proficiency in data manipulation and analysis (e.g., Pandas, NumPy, Spark). • Knowledge of data visualization tools (e.g., Power BI, Tableau, Matplotlib). • Understanding of statistical modelling, hypothesis testing, and A/B testing. • Experience …
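The A/B testing skill listed above usually comes down to a two-proportion z-test on conversion counts. A minimal sketch in plain Python, with invented conversion numbers; the `two_proportion_z` helper is hypothetical, not a named library API.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for H0: both groups share one conversion rate
    (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative experiment: 120/1000 conversions in A, 150/1000 in B.
z = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 3))  # -1.963
```

|z| just under 1.96 here means the invented difference sits right at the edge of significance at the conventional 5% level.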
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Investigo
advanced visualisations, ML model interpretation, and KPI tracking. Deep knowledge of feature engineering, model deployment, and MLOps best practices. Experience with big data processing (Spark, Hadoop) and cloud-based data science environments. Other: Ability to integrate ML workflows into large-scale data pipelines. Strong experience in data preprocessing, feature …
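The feature-engineering and preprocessing experience asked for above commonly starts with standardising features to a shared scale before training. A minimal z-scoring sketch in plain Python (the values are invented; real pipelines would use a fitted scaler such as sklearn's StandardScaler):

```python
import math

def standardize(values):
    """Z-score a feature column: subtract the mean, divide by the
    population standard deviation."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

feature = [2.0, 4.0, 6.0, 8.0]
print([round(v, 3) for v in standardize(feature)])
# [-1.342, -0.447, 0.447, 1.342]
```

The key MLOps discipline is fitting the mean and std on training data only, then reusing them at inference time.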
Birmingham, Midlands, United Kingdom Hybrid / WFH Options
PA Consulting
to. Experience in the design and deployment of production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, Scala, Spark, SQL. Perform tasks such as writing scripts, extracting data using APIs, writing SQL queries etc. Knowledge of data cleaning, wrangling, visualization and reporting, with …
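The "writing SQL queries" task above can be sketched end to end with Python's built-in sqlite3 module; the `orders` table and its columns are illustrative only.

```python
import sqlite3

# In-memory database with an illustrative orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("c1", 10.0), ("c2", 5.0), ("c1", 7.5)],
)

# Aggregate spend per customer, largest first.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders "
    "GROUP BY customer ORDER BY SUM(total) DESC"
).fetchall()
print(rows)  # [('c1', 17.5), ('c2', 5.0)]
conn.close()
```

The parameterised `executemany` call is the habit that matters in production: it avoids SQL injection and lets the driver batch inserts.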
workflows for a professional motorsports organization. Experience using simulation tools to optimize vehicle performance. Experience with machine learning libraries. Experience with big data tools (e.g. Hadoop, Spark, SQL, and NoSQL database experience). About GM Our vision is a world with Zero Crashes, Zero Emissions and Zero Congestion and we embrace …
to join its innovative team. This role requires hands-on experience with machine learning techniques and proficiency in data manipulation libraries such as Pandas, Spark, and SQL. As a Data Scientist at PwC, you will work on cutting-edge projects, using data to drive strategic insights and business decisions. … e.g. Sklearn) and (Deep learning frameworks such as Pytorch and Tensorflow). Understanding of machine learning techniques. Experience with data manipulation libraries (e.g. Pandas, Spark, SQL). Git for version control. Cloud experience (we use Azure/GCP/AWS). Skills we'd also like to hear about …
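The Pandas proficiency required above centres on the split-apply-combine pattern. As a dependency-free sketch, here is the equivalent of a pandas `df.groupby("city")["price"].mean()` in plain Python; the sample rows and column names are invented.

```python
from collections import defaultdict

# Illustrative records, one dict per row.
rows = [
    {"city": "Belfast", "price": 100.0},
    {"city": "Leeds", "price": 80.0},
    {"city": "Belfast", "price": 120.0},
]

def group_mean(records, by, target):
    """Mean of `target` per value of `by` -- the split-apply-combine
    pattern behind a pandas groupby-mean."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[by]].append(rec[target])
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

print(group_mean(rows, "city", "price"))  # {'Belfast': 110.0, 'Leeds': 80.0}
```

Pandas vectorises this over columns rather than iterating rows, which is why it stays fast on millions of records.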