Solihull, West Midlands (County), United Kingdom
Curve Group Holdings Ltd
…detail and accuracy in data analysis. Effective communication skills to convey complex findings to non-technical stakeholders. Experience with big data technologies such as Hadoop, Spark and Cloud Object Storage. Could this be you? We believe a positive attitude and a passion to make things happen matter most. …
Expert in using Terraform, Ansible, or other tools to automate Infrastructure-as-Code that is testable and maintainable. Expert in services such as Kafka, Spark, Airflow, Presto, Influx/Cassandra/Dynamo, microservices, and other technologies used to build data pipelines. Experience in developing software projects using Agile/…
…platforms, preferably in GCP, and experience with container orchestration technologies such as Kubernetes. Strong background in distributed computing and familiarity with technologies like Hadoop, Spark, Kafka, and distributed cache systems (Hazelcast, Redis). Experience with database management and proficiency in SQL and NoSQL databases. Knowledge of monitoring and logging …
…s Office. There will be a particular emphasis in this role on developing within a Microsoft SQL Server development environment and/or an Apache Spark big data processing environment, creating algorithms and pipelines to ingest and transform data into information systems and solutions capable of answering clinical …
…in building high-speed, real-time and batch solutions. 3+ years of experience in Java. Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc. Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Experience with cloud technologies and migrations. Experience …
…high-speed, real-time and batch solutions. [Required] 3+ years of experience in Java. [Preferred] Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc. [Preferred] Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. [Preferred] Experience with cloud technologies and …
Edinburgh, City of Edinburgh, United Kingdom Hybrid / WFH Options
Change Digital
…AWS Redshift, and Python. Experience with ETL processes, data integration, and data warehousing. Strong SQL skills. Experience with big data technologies such as Hadoop, Spark, and Kafka. Familiarity with cloud platforms (AWS, Azure, Google Cloud). Working knowledge of data visualisation tools (Power BI, Tableau, Qlik Sense). Additional skills: client-facing …
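Listings like this one lean on warehouse-style SQL and aggregation skills. A minimal illustration, sketched with `sqlite3` from the standard library as a stand-in for a real warehouse (the `sales` table and its columns are invented for this example):

```python
import sqlite3

# In-memory database standing in for a warehouse; schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 75.0)],
)

# A typical warehouse aggregation: totals per dimension, ordered for reporting.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
conn.close()
```

The same `GROUP BY` shape carries over to Redshift or Snowflake; only the connection layer changes.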
…Flask, Tornado or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands…
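The ETL and data-quality work described here follows one pattern regardless of tooling: extract raw records, transform them to enforce quality rules, load the result. A hedged sketch using only the standard library (the feed, field names, and rules are invented for illustration, not taken from any listing):

```python
import csv
import io

# Illustrative raw extract: a CSV feed with a missing and a malformed value.
RAW_FEED = """user_id,country,spend
1,UK,10.50
2,,3.20
3,FR,not_a_number
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(records: list[dict]) -> list[dict]:
    """Transform: enforce data quality - drop rows with a missing
    country or unparseable spend, and cast fields to proper types."""
    clean = []
    for rec in records:
        if not rec["country"]:
            continue
        try:
            spend = float(rec["spend"])
        except ValueError:
            continue
        clean.append({"user_id": int(rec["user_id"]),
                      "country": rec["country"],
                      "spend": spend})
    return clean

def load(records: list[dict]) -> dict[str, float]:
    """Load: aggregate into the target structure (here, spend by country)."""
    totals: dict[str, float] = {}
    for rec in records:
        totals[rec["country"]] = totals.get(rec["country"], 0.0) + rec["spend"]
    return totals

pipeline_output = load(transform(extract(RAW_FEED)))
```

Tools like Luigi or Airflow schedule and retry steps like these; Spark parallelises the transform across a cluster.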
…or hedge fund industry. Technical Skills: Proficiency in Python and SQL. Experience with relational and NoSQL databases. Knowledge of big data frameworks (e.g. Hadoop, Spark, Kafka). Understanding of financial markets and trading systems. Strong analytical, problem-solving, and communication skills. Familiarity with DevOps tools and practices. This is …
…and Data Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages and tools including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement machine learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI… Azure Cloud environments, Azure Data Lake, Azure Data Factory, microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps …
Google Cloud Professional Cloud Architect or Professional Cloud Developer certification. Very desirable to have hands-on experience with ETL tools, Hadoop-based technologies (e.g. Spark), and batch/streaming data pipelines (e.g. Beam, Flink, etc.). Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies …
…for business improvements. Lead a small team of data scientists on neural networks, LLMs (CNN & RNN), ML, & NLP. NLP/AI/ML/Spark/Python/Data Scientist/Machine Learning Engineer/OCR/Deep Learning. Requirements: Bachelor's degree or equivalent experience in a quantitative field …
…SQL Server, Sybase, Snowflake). Document databases (e.g. Mongo, ArangoDB, Couchbase, Solr). Big data (e.g. Hadoop ecosystem, Bigtable). Data streaming (e.g. Kafka, Flink, Pulsar, Beam, Spark). Cloud databases (e.g. Snowflake, CockroachDB). Other database genres (e.g. graph, columnar, time series). In return, we'll give you: a competitive basic salary … scheme. A high-spec laptop (of course). Need more reasons? Here are a few more: work with some of the most exciting new technologies. Spark off co-workers who'll challenge your thinking and help you to achieve your potential. Deal openly and honestly with customers. Benefit from a …
…Data Scientists and Service Engineering teams. Experience with design, development and operations that leverages deep knowledge in the use of services like Amazon Kinesis, Apache Kafka, Apache Spark, Amazon SageMaker, Amazon EMR, NoSQL technologies and other third parties. Develop and define key business questions and build … a related field. Experience of data platform implementation, including 3 years of hands-on experience in implementation and performance tuning of Kinesis/Kafka/Spark/Storm deployments. Experience with analytic solutions applied to the marketing or risk needs of enterprises. Basic understanding of machine learning fundamentals. Ability to … take machine learning models and implement them as part of a data pipeline. IT platform implementation experience. Experience with one or more relevant tools (Flink, Spark, Sqoop, Flume, Kafka, Amazon Kinesis). Experience developing software code in one or more programming languages (Java, JavaScript, Python, etc.). Current hands-on implementation experience …
…tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra. MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases. Programming languages such as Spark or Python. Amazon Web Services, Microsoft Azure or Google Cloud, and distributed processing technologies such as Hadoop. Benefits: base salary …
Skills & Experience: At least 10 years' experience working with JavaScript or Python/Java. Previous experience deploying software into the cloud. EKS, Docker, Kubernetes. Apache Spark or NiFi. Microservice architecture experience. Experience with AI/ML systems. …
…designing and building robust, scalable, distributed data systems and pipelines, using open source and public cloud technologies. Strong experience with data orchestration tools, e.g. Apache Airflow, Dagster. Experience with big data storage and processing technologies, e.g. dbt, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL …). Knowledge of event-driven architectures and streaming technologies, e.g. Apache Kafka, Kafka Streams, Apache Flink. Experience with public cloud environments, e.g. AWS, GCP, Azure, Terraform. Strong knowledge of software engineering practices, e.g. testing, CI/CD (Jenkins, GitHub Actions), agile development, git/version control, containers, etc. …
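The orchestration tools named here (Airflow, Dagster) share one core idea: dependency-ordered execution of tasks over a DAG. A minimal sketch of that idea using only the standard library's `graphlib` (the task names are invented for illustration; real orchestrators attach schedules, retries and operators to each node):

```python
from graphlib import TopologicalSorter

# A tiny DAG of pipeline tasks: each key runs only after its dependencies.
dag = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

# Topological order = a valid execution order respecting every dependency.
execution_order = list(TopologicalSorter(dag).static_order())
```

Because this DAG is a single chain, only one order is valid: extract, validate, transform, load_warehouse, refresh_dashboard. With branching DAGs, `TopologicalSorter` also exposes which tasks are ready to run in parallel.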
…complex data warehouses and/or data lakes. Familiarity with cloud-based analytics platforms such as AWS, Azure, Snowflake, Google Cloud Platform (BigQuery), Spark, and Splunk. Proficiency in SQL and experience using one or more of the following languages: R, Python, Scala, and Julia, including relevant frameworks/…
…end ownership. Python or similar (Ruby or Node) or another functional language. JavaScript and associated frameworks, preferably Vue, or similar. Cloud technologies. SQL (advantageous). Spark (advantageous). Docker/Kubernetes (advantageous). MongoDB, SQL, Postgres & Snowflake (advantageous). Developing online, cloud-based SaaS products. Leading and building scalable architectures and distributed systems …
…quality of data. Key Requirements: Strong experience designing data pipelines/warehouses using AWS and Snowflake. Exposure to big data technologies such as Kafka, Spark, or Hadoop. Solid experience with Snowflake, including performance optimisation and cost management. Strong experience with SQL and data modelling. Excellent understanding of AWS architecture …
…Modelling. Experience with at least one or more of these programming languages: Python, Scala/Java. Experience with distributed data and computing tools, mainly Apache Spark & Kafka. Understanding of critical-path approaches and how to iterate to build value, engaging with stakeholders actively at all stages. Able to deal …
…stream big data coming in from all types of sources. THE ROLE: As a Tech Lead you will be expected to be hands-on with Spark/Python and Kafka. You will be part of the design team enhancing their AWS platform, introducing technologies like Kubernetes, Docker and Jenkins to ensure …
Swansea, Wales, United Kingdom Hybrid / WFH Options
Inspire People
…processing, and analytics. Programming Skills: Proficiency in Python, SQL, and other relevant programming languages. Big Data Technologies: Experience with big data technologies such as Apache Spark. Data Warehousing: Strong knowledge of data warehousing concepts and solutions. Problem-Solving: Excellent problem-solving skills with a detail-oriented approach. Leadership: Proven …
…of databases. Snowflake is widely used, as are Docker and Kubernetes for containerisation. ETL and ELT tech are also used every day, primarily Airflow, Spark, Hive and a lot more. You'll need to come from a strong academic background with some commercial experience in a data-heavy software …