s Office. There will be a particular emphasis in this role on developing within a Microsoft SQL Server development environment and/or an Apache Spark big data processing environment - creating algorithms and pipelines to ingest and transform data into information systems and solutions capable of answering clinical …
Flask, Tornado or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies, such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands …
or hedge fund industry. Technical Skills: Proficiency in Python and SQL. Experience with relational and NoSQL databases. Knowledge of big data frameworks (e.g., Hadoop, Spark, Kafka). Understanding of financial markets and trading systems. Strong analytical, problem-solving, and communication skills. Familiarity with DevOps tools and practices. This is …
and Data Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages including Python, Spark, Databricks, PySpark, SQL, and ML Algorithms. Implement Machine Learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI … Azure Cloud environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML Algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with ML/Ops …
for business improvements. Lead a small team of data scientists on Neural Networks, LLMs (CNN & RNN), ML, & NLP. NLP/AI/ML/Spark/Python/Data Scientist/Machine Learning Engineer/OCR/Deep Learning. Requirements: Bachelor's degree or equivalent experience in a quantitative field …
SQL Server, Sybase, Snowflake) Document databases (e.g. Mongo, ArangoDB, Couchbase, Solr) Big Data (e.g. Hadoop ecosystem, Bigtable) Data streaming (e.g. Kafka, Flink, Pulsar, Beam, Spark) Cloud databases (e.g. Snowflake, CockroachDB) Other database genres (e.g. Graph, Columnar, time series) In return, we'll give you A competitive basic salary … scheme A high spec laptop (of course) Need more reasons? Here are a few more: Work with some of the most exciting new technologies. Spark off co-workers who'll challenge your thinking and help you to achieve your potential. Deal openly and honestly with customers. Benefit from a …
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra. MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases. Programming languages such as Spark or Python. Amazon Web Services, Microsoft Azure or Google Cloud and distributed processing technologies such as Hadoop. Benefits: Base Salary …
quality of data. Key Requirements: Strong experience designing data pipelines/warehouses using AWS and Snowflake. Exposure to big data technologies such as Kafka, Spark, or Hadoop. Solid experience with Snowflake, including performance optimisation and cost management. Strong experience with SQL and Data modelling. Excellent understanding of AWS architecture …
stream big data coming in from all types of sources. THE ROLE: As a Tech Lead you will be expected to be hands-on with Spark/Python, Kafka. You will be part of the design team to enhance their AWS platform, introducing technologies like Kubernetes, Docker, Jenkins to ensure …
of databases. Snowflake is widely used, as are Docker and Kubernetes for containerisation. ETL and ELT tech are also used every day, primarily Airflow, Spark, Hive and a lot more. You'll need to come from a strong academic background with some commercial experience in a data-heavy software …
equity financing to mid-market and late-stage companies. Liquidity Group is backed by leading global financial institutions including Japan's largest bank, MUFG, Spark Capital, and Apollo Asset Management. Reporting to the Chief Marketing Officer, the Head of Communications will be responsible for developing and executing comprehensive communication …
environment. Experience in log management tools to troubleshoot issues as well as identify useful analytics data. Preferred: Experience in Microsoft Azure services and Databricks Spark, Redshift, Hadoop MapReduce or other Big Data frameworks. Code management tools (Git, sbt, Maven). PySpark, Scala or other functional programming languages. Analytics tools …
equity financing to mid-market and late-stage companies. Liquidity Group is backed by leading global financial institutions including Japan's largest bank, MUFG, Spark Capital, and Apollo Asset Management. About the role: We're on the lookout for accomplished credit professionals to assume the role of Director within …
with JavaScript or Python. Experience deploying software into the cloud and on-premise. Developing software products. Experience with EKS, Kubernetes, OpenSearch/Elasticsearch, MongoDB, Spark or NiFi. Experience with microservices architectures. Experience with AI/ML systems. TO BE CONSIDERED…. Please either apply by clicking online or emailing …
or Rust. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source …
major advantage Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate and the like. Hands-on with infrastructure-as-code tools and automation, such as Terraform, Ansible, or Helm. The role: Tech Lead responsible …
design. Cloud data products such as: Data Factory, Event Hubs, Data Lake, Synapse, Azure SQL Server. Experience developing Databricks and coding with PySpark and Spark SQL. Proficient in ETL coding standards. Data encryption techniques and standards. Knowledge of relevant legislation such as: Data Protection Act, EU Procurement Directives, Freedom …
stack is not essential, but it would make things easier. The main technologies we use include: Elixir, Erlang, Python, Terraform, Ansible, Packer, EMR/Spark, Apache Spark, Apache Druid, BigQuery and Redis. Familiarity with cloud technologies. Ideally AWS and technologies such as EC2, ECS, EMR, AWS … Lambda, DynamoDB, S3, Kinesis, SQS, SES, CloudWatch.... Knowledge or experience with big data technologies such as Spark, Hadoop, Redshift, Snowflake, Kafka, Flink, Druid, ClickHouse... is highly desirable. Benefits: Flexible work arrangements to support a healthy work-life balance, 25 vacation days (excluding bank holidays) and 5 days of carers …
ML libraries (TensorFlow, PyTorch, scikit-learn, transformers, XGBoost, ResNet), geospatial libraries (shapely, geopandas, rasterio), CV libraries (scikit-image, OpenCV, YOLO, Detectron2). AWS, Postgres, Apache Airflow, Apache Kafka, Apache Spark. Mandatory requirements: You have at least 5 years of experience in the DS role, deploying models … You have expertise on applications focusing on geospatial data and mobility analytics (highly desirable). You have proven experience with big data technologies, specifically Spark and Kafka. You have experience working with state-of-the-art ML pipeline technologies (such as MLflow, SageMaker...) or building an ML pipeline by yourself …
development (ideally AWS). Knowledge and ideally hands-on experience with data streaming, event-based architectures and Kafka. Strong communication and interpersonal skills. Experience with Apache Spark or Apache Flink would be ideal, but not essential. Please note, this role is unable to provide sponsorship. If this role …
business value in collaboration with an Account Executive and a Senior Solutions Architect. Gain excitement from clients about Databricks through hands-on evaluation and Spark programming, integrating with the wider cloud ecosystem and 3rd-party applications. Contribute to building the Databricks technical through engagement at workshops, seminars, and meet … data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Our Commitment to Diversity and Inclusion: At …
Data mining, Data warehousing, ETL. Experience in handling large volumes of data on SQL, NoSQL and Big Data databases. Experience in the Hadoop ecosystem: Hadoop, Spark, Hive, and/or Scala. Experience in programming languages: PHP, Python, C++/Java. Experience in Web development in the Laravel MVC Framework. Comfortable working …
Comfort with rapid prototyping and disciplined software development processes. Experience with Python, ML libraries (e.g. spaCy, NumPy, SciPy, Transformers, etc.), data tools and technologies (Spark, Hadoop, Hive, Redshift, SQL), and toolkits for ML and deep learning (Spark ML, TensorFlow, Keras). Demonstrated ability to work on multi-disciplinary teams with diverse …
ADF, SSIS) Data governance (Purview, Unity Catalog) Databricks Delta Lake Storage Azure DevOps DESIRED SKILLS Advanced Analytics Data Technologies: Databricks, Delta Lake, Synapse, Spark SQL, PySpark, Azure Data Explorer, Logic Apps, Key Vault. Semi-structured data processing. Integration Runtime. Coding experience: Python, C#, Java for data analysis purposes …
or similar, with 6+ years of professional experience. A good understanding of modern lakehouse architectures and corresponding technologies, such as Dremio, Snowflake, Iceberg, (Py)Spark/Glue/EMR, dbt and Airflow/Dagster. Experience with Cloud providers. Familiarity with AWS S3, ECS and EC2/Fargate would be …