display, video, mobile, programmatic, social, native), considering viewability, interaction, and engagement metrics. Create dashboards and deliver usable insights to help steer product roadmaps. Utilize tools such as SQL, R, Hadoop, and Excel to hypothesize and perform statistical analysis, A/B tests, and experiments to measure the impact of product initiatives on revenue, technical performance, and advertiser & reader engagement. Candidates should have analysis …
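For illustration only, a minimal Python sketch of the kind of A/B test readout this role describes; the file name and the variant/converted columns are hypothetical, not from the advert.

```python
# Minimal sketch of an A/B test readout for a product experiment.
# File and column names (variant, converted) are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("experiment_results.csv")  # one row per user: variant, converted (0/1)

control = df[df["variant"] == "control"]["converted"]
treatment = df[df["variant"] == "treatment"]["converted"]

# Compare conversion rates with a chi-squared contingency test
table = pd.crosstab(df["variant"], df["converted"])
chi2, p_value, dof, expected = stats.chi2_contingency(table)

print(f"Control rate:   {control.mean():.4f}")
print(f"Treatment rate: {treatment.mean():.4f}")
print(f"p-value:        {p_value:.4f}")
```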
streaming-oriented technology. Help build the DevOps strategy for hosting and managing our SDP microservice and connector infrastructure in AWS cloud. Design and implement big data technologies around Apache Hadoop, Kafka streaming, NoSQL, Java/J2EE and distributed computing platforms. Participate in Agile development projects for enterprise-level systems component design and implementation. Apply enterprise software design for … cloud experience, including S3, EFS, MSK, ECS, and EMR. Experience with RDBMS. Experience with Jenkins CI/CD pipelines. Bachelor's degree in a technical discipline. Plus: Knowledge of Hadoop/Spark and various data formats like Parquet, CSV, etc. Additional Information Benefits/Perks: Great compensation package and bonus plan; core benefits including medical, dental, vision, and matching …
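For illustration only, a minimal PySpark Structured Streaming sketch of the Kafka-to-S3 pattern this advert alludes to (MSK, S3, Parquet); the broker addresses, topic and bucket names are hypothetical, and the job assumes the spark-sql-kafka connector is available on the classpath.

```python
# Sketch: stream events from a Kafka topic (e.g., MSK) and land Parquet files on S3.
# Broker, topic, and bucket names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-s3").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "sdp-events")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/sdp/events/")
    .option("checkpointLocation", "s3://example-bucket/sdp/checkpoints/")
    .start()
)
query.awaitTermination()
```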
to technical requirements and implementation. Experience of Big Data technologies/Big Data Analytics. C++, Java, Python, Shell Script, R, Matlab, SAS Enterprise Miner. Elasticsearch and understanding of the Hadoop ecosystem. Experience working with large data sets and with distributed computing tools like MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for …
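For illustration only, a toy PySpark sketch of the map/reduce pattern named above; the HDFS path and log format are hypothetical.

```python
# Toy MapReduce-style aggregation in PySpark: count requests per client IP.
# The input path and log layout are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mapreduce-example").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("hdfs:///data/access_logs/*.log")
counts = (
    lines.map(lambda line: line.split(" ")[0])   # map: extract client IP
         .map(lambda ip: (ip, 1))                # emit (key, 1) pairs
         .reduceByKey(lambda a, b: a + b)        # reduce: sum per key
)
for ip, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(ip, n)
```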
London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
Advert: Hadoop Engineer - 6 Months Contract - Remote Working - £300 to £350 a day. A top tier global consultancy firm is looking for an experienced Hadoop Engineer to join their team and contribute to large-scale big data projects. The position requires a professional with a strong background in developing and managing scalable data pipelines, specifically using the Hadoop ecosystem and related tools. The role will focus on designing, building and maintaining scalable data pipelines using the big data Hadoop ecosystem and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the underlying systems. The successful candidate should have … for scripting, Apache Spark, prior experience of building ETL pipelines, and data modelling. 6 Months Contract - Remote Working - £300 to £350 a day, Inside IR35. If you are an experienced Hadoop engineer looking for a new role then this is the perfect opportunity for you. If the above seems of interest to you then please apply directly to the ad …
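For illustration only, a minimal PySpark sketch of the kind of log-analysis ETL pipeline this advert describes; the schema, paths and column names are hypothetical.

```python
# Sketch: read raw infrastructure logs, aggregate errors per service per day,
# and write the result back as partitioned Parquet. All paths/columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("infra-log-etl").getOrCreate()

logs = spark.read.json("hdfs:///raw/infra_logs/")  # e.g., one JSON record per event

daily_errors = (
    logs.filter(F.col("level") == "ERROR")
        .withColumn("day", F.to_date("timestamp"))
        .groupBy("day", "service")
        .agg(F.count("*").alias("error_count"))
)

daily_errors.write.mode("overwrite").partitionBy("day").parquet("hdfs:///curated/daily_errors/")
```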
data processing and predictive analytics. Role: Develop and implement machine learning models using Spark ML for predictive analytics. Design and optimise training and inference pipelines for distributed systems (e.g., Hadoop). Process and analyse large-scale datasets to extract meaningful insights and features. Collaborate with data engineers to ensure seamless integration of ML workflows with data pipelines. Evaluate model performance … computing technologies. Experience: Proficiency in Apache Spark and Spark MLlib for machine learning tasks. Strong understanding of predictive modelling techniques (e.g., regression, classification, clustering). Experience with distributed systems like Hadoop for data storage and processing. Proficiency in Python, Scala, or Java for ML development. Familiarity with data preprocessing techniques and feature engineering. Knowledge of model evaluation metrics and techniques …
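For illustration only, a minimal Spark ML sketch of the training-and-evaluation pipeline described above; the dataset path, feature columns and label are hypothetical.

```python
# Sketch: assemble features, train a logistic regression with Spark ML,
# and evaluate on a held-out split. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("spark-ml-example").getOrCreate()
df = spark.read.parquet("hdfs:///data/training_set/")  # columns: f1, f2, f3, label

train, test = df.randomSplit([0.8, 0.2], seed=42)

assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(train)

predictions = model.transform(test)
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(predictions)
print(f"Test AUC: {auc:.3f}")
```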
Growth Revenue Management, Marketing Analytics, CLM/CRM Analytics and/or Risk Analytics. Conduct analyses in typical analytical tools ranging from SAS, SPSS, EViews, R, Python, SQL, Teradata, Hadoop, Access, Excel, etc. Communicate analyses via compelling presentations. Solve problems, disaggregate issues, develop hypotheses and develop actionable recommendations from data and analysis. Prepare and facilitate workshops. Manage stakeholders and … An ability to think analytically, decompose problem sets, develop hypotheses and recommendations from data analysis. Strong technical skills regarding data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g. Jupyter, Pandas, Scikit-Learn, Matplotlib). Ability to talk the language of statistics, finance, and economics a …
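For illustration only, a short sketch using the Python data tools listed above (pandas, scikit-learn, matplotlib); the file and column names are hypothetical.

```python
# Sketch: quick exploratory analysis of campaign performance data.
# File and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

df = pd.read_csv("campaign_performance.csv")  # columns: spend, impressions, revenue

# Fit a simple linear model of revenue against spend and impressions
model = LinearRegression().fit(df[["spend", "impressions"]], df["revenue"])
print("Coefficients:", dict(zip(["spend", "impressions"], model.coef_)))

# Visualise the relationship for a slide-ready chart
df.plot.scatter(x="spend", y="revenue")
plt.title("Revenue vs. spend")
plt.savefig("revenue_vs_spend.png")
```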
/MOD or Enhanced DV Clearance. WE NEED THE PYTHON/DATA ENGINEER TO HAVE: Current DV Security Clearance (Standard or Enhanced). Experience with big data tools such as Hadoop, Cloudera or Elasticsearch. Python/PySpark experience. Experience with Palantir Foundry is nice to have. Experience working in an Agile Scrum environment with tools such as Confluence/Jira …/DEVELOPED VETTING/DEVELOPED VETTED/DEEP VETTING/DEEP VETTED/SC CLEARED/SC CLEARANCE/SECURITY CLEARED/SECURITY CLEARANCE/NIFI/CLOUDERA/HADOOP/KAFKA/ELASTIC SEARCH/LEAD BIG DATA ENGINEER/LEAD BIG DATA DEVELOPER
Analyst, Global Network & Optimization Solutions • Conduct data analysis for customer engagements and strategic initiatives across Acceptance Solutions (pull data, interpret insights, make recommendations). • Serve as first-line support to our sellers and customers during deal cycles. • Collaborate with the …