problem-solving, and critical thinking skills. 8. Experience with social media analytics and understanding of user behaviour. 9. Familiarity with big data technologies, such as Apache Hadoop, Apache Spark, or Apache Kafka. 10. Knowledge of AWS machine learning services, such as Amazon SageMaker and Amazon Comprehend. 11. Experience with data governance and security
Skills Programming: Proficiency in Python, Java, Scala, or similar languages. Big Data Technologies: Hands-on experience with big data tools (e.g., Databricks, Apache Spark, Hadoop). Cloud Platforms: Familiarity with AWS, Azure, GCP, or other cloud ecosystems for data engineering tasks. Expertise in relational databases (e.g., Postgres, SQL Server
tools, and statistical packages. Strong analytical and problem-solving skills. Experience with social media analytics and user behavior. Familiarity with big data technologies like Hadoop, Spark, and Kafka. Knowledge of AWS machine learning services like SageMaker and Comprehend. Understanding of data governance and security in AWS. Excellent communication and teamwork
and data visualization tools. Strong analytical and problem-solving skills. Knowledge of social media analytics and user behavior. Familiarity with big data technologies like Hadoop, Spark, and Kafka. Knowledge of AWS machine learning services like SageMaker and Comprehend. Understanding of data governance and security in AWS. Excellent communication skills and
Docker, Kubernetes) and DevOps pipelines. · Exposure to security operations center (SOC) tools and SIEM platforms. · Experience working with big data platforms such as Spark, Hadoop, or the Elastic Stack.
Experience with feature stores (e.g., Feast, Tecton). Knowledge of distributed training (e.g., Horovod, distributed PyTorch). Familiarity with big data tools (e.g., Spark, Hadoop, Beam). Understanding of NLP, computer vision, or time series analysis techniques. Knowledge of experiment tracking tools (e.g., MLflow, Weights & Biases). Experience with
learning, mobile, etc.) Experience in Computer Science, Engineering, Mathematics, or a related field and expertise in technology disciplines. Exposure to big data frameworks (Spark, Hadoop, etc.) used for scalable distributed processing. Ability to collaborate effectively with Data Scientists to translate analytical insights into technical solutions. Preferred Qualifications, Capabilities, And
frameworks like TensorFlow, Keras, or PyTorch. Knowledge of data analysis and visualization tools (e.g., Pandas, NumPy, Matplotlib). Familiarity with big data technologies (e.g., Hadoop, Spark). Excellent problem-solving skills and attention to detail. Ability to work independently and as part of a team. Preferred Qualifications: Experience with
in at least one programming language commonly used in data engineering (e.g., Python, Scala, Java). Strong experience with big data technologies (e.g., Spark, Hadoop, Flink) and distributed data processing frameworks. Proven experience with cloud data platforms and services (e.g., Azure Data Factory, Azure Databricks, AWS Glue, Google Cloud
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Widen the Net Limited
data pipelines, ensure data quality, and support business decision-making with high-quality datasets. -Work across the technology stack: SQL, Python, ETL, BigQuery, Spark, Hadoop, Git, Apache Airflow, Data Architecture, Data Warehousing -Design and develop scalable ETL pipelines to automate data processes and optimize delivery -Implement and manage data
on software engineering concepts and applied experience. Experience in dealing with large amounts of data; Data Engineering skills are desired. Proven experience in Spark, Hadoop, Databricks, and Snowflake. Hands-on practical experience delivering system design, application development, testing, and operational stability. Advanced in one or more programming language(s)
Understanding of agile methodologies, including CI/CD, application resiliency, and security. Additional Qualifications, Capabilities, and Skills: Experience with big data technologies such as Hadoop, Spark, or Kafka. Familiarity with tools like GitHub Copilot or Codeium. Knowledge or practical experience with cloud technologies. Understanding of orchestration technologies like Prefect
and Skills: Experience as a full stack developer, including proficiency in front-end technologies such as React. Proficiency in big data technologies such as Hadoop, Spark, or Kafka for handling large-scale data processing. Experience using tools like GitHub Copilot or Codeium. In-depth knowledge of the financial services
Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience with large-scale distributed systems such as Hadoop, Spark, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace
oriented design skills, including OOA/OOD; Experience with multi-tier architectures and service-oriented architecture; Exposure to and understanding of RDBMS, NoSQL, and Hadoop is desirable; Knowledge of the software development lifecycle and agile practices, including TDD/BDD; Strategic thinking, collaboration, and consensus-building skills. Please note
Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. - PhD in math/statistics/engineering or other equivalent quantitative discipline - Experience with conducting research in a corporate setting - Experience in
ZACHMAN, FEAF) Cloud Experience: AWS or GCP preferred, particularly around migrations and cloud architecture. Good technical knowledge and understanding of big data frameworks like Hadoop, Cloudera, etc. Deep technical knowledge of database development, design, and migration. Experience of deployment in cloud using Terraform or CloudFormation. Automation or scripting experience … and cloud data solutions. Working with a variety of enterprise-level organisations to understand and analyse existing on-prem environments such as Oracle, Teradata, and Hadoop, and be able to design and plan migrations to AWS or GCP. Deep understanding of high- and low-level designs and architecture solutions
you will design, develop, and maintain Scala applications for Big Data purposes. In this role, you will be responsible for migrating an on-premises Hadoop system onto the AWS cloud platform. Designing and implementing ETL pipelines, and using a combination of Big Data technologies and a modern cloud stack, will … be your day-to-day. Requirements: Strong Scala Programming Experience (Data) Hadoop Experience AWS Experience This role is an urgent requirement; please do not hesitate to apply or you could miss this opportunity! Get in touch by contacting me at j.shaw-bollands@tenthrevolution.com or on 0191 338 6641! Keywords … Big Data, Hadoop, Scala, Spark, AWS, Migration, Data Engineer, Consultancy, Banking, Finance