…ETL/ELT tools. Experience with NoSQL-type environments, Data Lakes, Lakehouses (Cassandra, MongoDB or Neptune). Experience with distributed storage and processing engines such as Apache Hadoop and Apache Spark. Experience with message brokering/stream processing services such as Apache Kafka, Confluent, Azure Stream Analytics. Experience in Test Driven Development (TDD) and …
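Listings like the one above pair TDD with pipeline work. As a hedged sketch of what that looks like in practice (the function and test names are hypothetical, and pytest is assumed), the test is written before the transformation it exercises:

```python
# Hypothetical TDD sketch: the test is written first, then the
# transformation is implemented to make it pass (assumes pytest).

def clean_record(record: dict) -> dict:
    """Normalise a raw event record before it enters the pipeline."""
    return {
        "user_id": str(record["user_id"]).strip(),
        "amount": float(record.get("amount", 0.0)),
    }

def test_clean_record_normalises_types():
    raw = {"user_id": " 42 ", "amount": "19.99"}
    cleaned = clean_record(raw)
    assert cleaned == {"user_id": "42", "amount": 19.99}
```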
… innovative solutions. Data engineering skills: proficiency in designing, building, and optimizing data pipelines, as well as experience with big data processing tools like Apache Spark, Hadoop, and Dataflow. Experience in designing and operating Operational Datastore/Data Lake/Data Warehouse platforms at scale with high availability. Data integration: familiarity with data integration …
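For the pipeline design and optimization these roles describe, a minimal PySpark batch sketch might look like the following (bucket paths, column names, and the aggregation are illustrative assumptions, not taken from any listing):

```python
# Minimal PySpark batch pipeline sketch; paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/raw/orders/")

daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Partitioning by date keeps downstream reads selective.
daily_totals.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3a://example-bucket/curated/daily_totals/")
```

Partitioning the output by date is one common optimization of the kind these listings allude to, since it lets downstream readers prune files they do not need.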
… relevant experience in building DW/BI systems · Demonstrated ability in data modeling, ETL development, and data warehousing · Strong experience with big data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) · Expertise in a BI solution like Power BI · Hands-on experience in modelling databases (particularly NoSQL), working on indexes …
… AWS SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. …
… good programming practices. Design, develop, and maintain high-volume Java- or Scala-based data processing jobs using industry-standard tools and frameworks in the Hadoop ecosystem, such as Spark, Kafka, Hive, Impala, Avro, Flume, Oozie, and Sqoop. Design and maintain schemas in our analytics database. Excellent in writing efficient …
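A hedged sketch of the kind of Spark-plus-Kafka processing job such roles describe, written here in Python rather than Java or Scala for brevity (broker address, topic, schema, and paths are assumptions; running it also requires the spark-sql-kafka connector package on the classpath):

```python
# Hypothetical Spark Structured Streaming job reading from Kafka.
# Broker, topic, schema, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("value", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; cast the payload to string and parse the JSON.
events = raw.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/curated/events/")
    .option("checkpointLocation", "/data/checkpoints/events/")
    .start()
)
query.awaitTermination()
```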
… would be an advantage. Data visualization – tools like Tableau. Master data management (MDM) – concepts and expertise in tools like Informatica and Talend MDM. Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks – Spark and Spark Streaming. Hands-on experience with multiple databases like PostgreSQL …
… Technical Discipline. Technical Expertise: proficiency in SQL and experience with cloud-based data pipelines (Azure, AWS, GCP). Familiarity with big data tools like Hadoop and Spark. Data Management Skills: hands-on experience working with large data sets, data pipelines, workflow management tools, and Azure cloud services. Exposure to …
… experience. Experience with cloud computing platforms such as AWS, Azure, or GCP (Google Cloud Platform). Familiarity with big data technologies such as Apache Hadoop, Spark, or Kafka. Experience deploying machine learning models in production environments. Contributions to open-source machine learning projects or research publications in relevant conferences …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Workday
… algorithms and data structures. A proactive mindset with excellent problem-solving and communication skills. Experience with big data technologies such as Apache Kafka, Spark, Hadoop, or similar systems. Preferred Skills: demonstrated experience with scripting languages like Python, Bash, etc. Testing and troubleshooting skills with the ability to walk from …
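As an illustration of the Kafka-plus-Python-scripting combination mentioned above, a small consumer sketch using the kafka-python client might look like this (topic name and broker address are placeholder assumptions):

```python
# Illustrative Kafka consumer using the kafka-python client.
# Topic name and broker address are placeholder assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",
    bootstrap_servers=["localhost:9092"],
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # Each message carries the deserialised JSON payload in .value.
    print(message.topic, message.partition, message.offset, message.value)
```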
… Master's preferred). Excellent problem-solving and communication. Can be advantageous if you have: cloud platform experience (AWS, Azure, GCP), big data tech (Hadoop, Spark), containerization (Docker, Kubernetes), DevOps and CI/CD understanding. We regret to inform you that only shortlisted candidates will be notified/contacted.
… learn). Understanding of database technologies (ETL) and SQL proficiency for data manipulation, data mining, and querying. Knowledge of big data tools (Spark or Hadoop a plus). Power BI, dashboard design/development. Regulatory Awareness/Compliance: uphold regulatory/compliance requirements relevant to your role, escalating areas …
… to plan work to maximize the team's productivity and effectiveness. • Deep understanding of the AI development lifecycle • Proficiency in big data technologies like Hadoop, Spark, or similar frameworks. • Excellent skills in data visualization and interpretation. • Demonstrated history of successfully delivering high-quality, data-driven solutions, including deploying production …
… software engineer in a globally distributed team working with the Scala and Java programming languages (preferably both). Experience with big-data technologies Spark/Databricks and Hadoop/ADLS is a must. Experience with any one of the cloud platforms Azure (preferred), AWS, or Google. Experience building data lakes and data …
… a team environment. Preferred Qualifications: Previous experience in designing and implementing data engineering solutions in a cloud environment. Knowledge of big data technologies and frameworks like Hadoop, Spark, and Kafka. Come join our team and be part of an organisation that values innovation and empowers its employees to make a meaningful impact …
Maidstone, Kent, United Kingdom Hybrid / WFH Options
Worley
… of technology to automate data pipelines and build analytical warehouses · Deep understanding of cloud-based data platforms (Azure SQL DB, Azure Synapse, ADLS, AWS, Hadoop, Spark, Snowflake, NoSQL, etc.) · Proficient scripting in programming languages such as Java, Python, Scala · Expert in SQL. Machine Learning: Good basic understanding of …
… data solutions (AWS, Azure, or GCP), engineering languages including Python, SQL, and Java, and pipeline management tools, e.g., Apache Airflow. Familiarity with big data technologies such as Hadoop or Spark. If this opportunity is of interest, or you know anyone who would be interested in this role, please send your CV and …
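For the Apache Airflow pipeline management mentioned above, a minimal DAG sketch might look like the following (the DAG id, task names, and schedule are illustrative assumptions):

```python
# Illustrative Airflow DAG; id, tasks, and schedule are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")

def load():
    print("writing to the warehouse")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    # The extract task must finish before the load task starts.
    extract_task >> load_task
```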
… know one or more of the following tools: Informatica PowerCenter, SAS Data Integration Studio, Microsoft SSIS, Ab Initio, etc. • Ideally, you have experience in the Hadoop ecosystem (Spark, Kafka, HDFS, Hive, HBase, …), Docker and orchestration platforms (Kubernetes, OpenShift, AKS, GKE, …), and NoSQL databases (MongoDB, Cassandra, Neo4j) • Any experience with cloud …
… through improved data handling and analysis. Responsibilities: Build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop, and other MapReduce tools); Develop and productionize containerized algorithms for deployment in hybrid cloud environments (GCP, Azure); Connect and blend data from various …
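For the predictive-modelling-on-Spark responsibility above, a hedged sketch using Spark ML might look like this (the toy data, feature names, and model choice are illustrative assumptions, not the employer's method):

```python
# Hedged sketch of a predictive model trained with Spark ML.
# The toy data, feature names, and model choice are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-model").getOrCreate()

train = spark.createDataFrame(
    [(34.0, 2.0, 0.0), (120.0, 14.0, 1.0), (15.0, 1.0, 0.0), (95.0, 9.0, 1.0)],
    ["monthly_spend", "support_tickets", "label"],
)

# Assemble raw columns into the feature vector Spark ML expects.
assembler = VectorAssembler(
    inputCols=["monthly_spend", "support_tickets"], outputCol="features"
)
model = Pipeline(stages=[assembler, LogisticRegression(maxIter=10)]).fit(train)

model.transform(train).select("features", "prediction").show()
```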
… of Python and SQL. Exposure to developing in a cloud platform such as AWS, GCP, or Azure. Knowledge of big data technologies, e.g., Trino, Hadoop, or PySpark. Ability to build trusted and credible relationships with your peers, stakeholders, and customers. Analytical thinker and natural problem solver. If this sounds …