Hadoop: must have. Communication: must have. Banking/Capital Markets domain: good to have. Note: the candidate should know Scala or Python (core) as a coding language; a PySpark-only profile will not help here. Scala/Spark: a strong Big Data resource with the following skill set: Spark, Scala, Hive/HDFS/HQL, Linux …
Knowledge of Data Warehouse/Data Lake architectures and technologies. Strong working knowledge of a language for data analysis and scripting, such as Python, PySpark, R, Java, or Scala. Experience with any of the following would be desirable but not essential: Microsoft's Fabric data platform, experience with ADF …
to provide feedback and validate the technical implementation for custom applications. Experience working with a scripting language like Python, SQL, Spark, PySpark, or similar. Must be able to work on-site in Herndon, VA, with the ability to work in Springfield, VA as needed. Preferred Qualifications …
apply! A degree in Mathematics, Engineering, Statistics, Computer Science, Physics, or a related field; an advanced degree is highly preferred. Proficient in Python and PySpark; experience with SQL or similar querying languages. Solid foundation in machine learning principles, including model evaluation, optimization, and deployment best practices. Self-motivated, collaborative …
Analyst: Must hold experience with data analytic techniques, ideally applied to real-life objects. Must hold a year's professional experience using Python/PySpark/Pandas. Experience with processing data and working with databases/data lakes (SQL). Strong understanding of data manipulation, analysis, and processing. Ability to …
using data engineering, statistical, and ML/AI approaches to uncover data patterns and build models. We use the Microsoft tech stack, including Azure Databricks (PySpark, Python), and we are expanding our data science capabilities. To be successful in the role, you will need to have extensive experience in data …
you to have: AWS/Cloud; Linux (CentOS 7, Red Hat 8); Flask, Django, MariaDB, MongoDB; Jira, Confluence; NiFi, Angular, Terraform; Kubernetes, Docker, JSON, Ansible, Pig, PySpark. About BigBear.ai: BigBear.ai is a leading provider of AI-powered decision intelligence solutions for national security, supply chain management, and digital identity. Customers and …
data into a unified and reliable asset. Projects natural confidence in communication and has strong stakeholder management skills. Has strong proficiency with pandas, NumPy, PySpark, or similar for data analysis and cleaning in Python. Has some working knowledge of tools such as Beautiful Soup, Selenium, and/or Scrapy. Maintains …
Glasgow, Renfrewshire, United Kingdom Hybrid / WFH Options
Cisco Systems, Inc
field. Experienced with cloud-based data processing platforms such as AWS and/or Databricks. You have solid software development skills with Python/PySpark, Terraform, Git, CI/CD, and Docker. Comfortable with relational and NoSQL databases/datastores such as Elasticsearch. Familiar with the threat landscape and threat …
aspect of working with data in ADLS is the transformation and modeling process. Companies can leverage Azure Data Factory or Databricks, using languages like PySpark and Scala, to create efficient data processing and transformation workflows. These workflows are designed to handle both batch and streaming data seamlessly. Furthermore, organizations …
huge datasets. Has a solid understanding of blockchain ecosystem elements like DeFi, exchanges, wallets, smart contracts, mixers, and privacy services. Bonus experience: Databricks and PySpark; analysing blockchain data; building and maintaining data pipelines; deploying machine learning models; use of graph analytics and graph neural networks; following funds on chain …
You have good experience with master data management. You are familiar with data quality tools like Azure Purview (Collibra, Informatica, Soda). Python, Spark, PySpark, Spark SQL. Other security protocols like CLS (Column-Level Security) and Object-Level Security. You are fluent in Dutch and/or French, and English. …
Leicestershire, England, United Kingdom Hybrid / WFH Options
iO Associates - UK/EU
machine learning. Experience with deep learning or generative AI is a plus but not essential. Proficiency in (Spark)SQL and Python. Experience with PySpark is beneficial but not required. Experience designing and implementing robust testing frameworks. Strong analytical skills with keen attention to detail. Excellent communication skills …
Crewe, Cheshire, United Kingdom Hybrid / WFH Options
Manchester Digital
machine learning. Experience with deep learning or generative AI is a plus but not essential. Proficiency in (Spark)SQL and Python. Experience with PySpark is beneficial but not required. Experience designing and implementing robust testing frameworks. Strong analytical skills with keen attention to detail. Excellent communication skills …
Leicester, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
IO Associates
machine learning. Experience with deep learning or generative AI is a plus but not essential. Proficiency in (Spark)SQL and Python. Experience with PySpark is beneficial but not required. Experience designing and implementing robust testing frameworks. Strong analytical skills with keen attention to detail. Excellent communication skills …
Coventry, Midlands, United Kingdom Hybrid / WFH Options
IO Associates
machine learning. Experience with deep learning or generative AI is a plus but not essential. Proficiency in (Spark)SQL and Python. Experience with PySpark is beneficial but not required. Experience designing and implementing robust testing frameworks. Strong analytical skills with keen attention to detail. Excellent communication skills …
Loughborough, Midlands, United Kingdom Hybrid / WFH Options
IO Associates
machine learning. Experience with deep learning or generative AI is a plus but not essential. Proficiency in (Spark)SQL and Python. Experience with PySpark is beneficial but not required. Experience designing and implementing robust testing frameworks. Strong analytical skills with keen attention to detail. Excellent communication skills …
Job Description. Responsibilities: Data Engineers will be data mining, writing code, and creating tables/environments in S3. Qualifications: engineer required, with SQL, Python, PySpark, S3, AWS, and environment creation. Preferable for both: retail business knowledge, technical documentation …
personal development. What's in it for you? Up to £90k and a bonus scheme. Skills and Experience: experience using modern technologies such as Python, PySpark, and Databricks; experience using advanced SQL; experience with cloud computing, preferably Azure; experience working on loyalty scheme projects. If you would like to …
technical knowledge, educate stakeholders on our research, data science, and ML practice, and deliver actionable insights and recommendations. Develop code to analyze data (SQL, PySpark, Scala, etc.) and build statistical and machine learning models and algorithms (Python, R, Scala, etc.). Collaborate with business and operational stakeholders and product …
data mining, databases, and data visualization. Ability to create data-driven solutions, data models, and visualizations. Preferred: experience with Databricks; experience with Python and PySpark; ETL experience; experience with SQL. The work is located in Arlington, VA and requires a TS/SCI …
and deliver data solutions to solve important problems. Prior experience with Palantir AIP, Foundry, and Gotham would be advantageous. Previous development experience in PySpark, TypeScript, and front-end frameworks like React would also be advantageous. As a Consultant Engineer your responsibilities will include: assist in the design, development …
data technologies. Experience in designing, managing, and overseeing task assignment for technical teams; mentoring data engineers. Strong exposure to SQL, Azure Data Factory, Databricks, and PySpark is a must-have. Experience in Medallion silver-layer modelling. Experience in an Agile project environment. Insurance experience (policy and claims). Understanding of DevOps, continuous …