KVM, Kubernetes. Experience with tools like Ansible, Terraform, Docker, Kafka, Nexus. Experience with observability platforms: InfluxDB, Prometheus, ELK, Jaeger, Grafana, Nagios, Zabbix. Familiarity with Big Data tools: Hadoop, HDFS, Spark, HBase. Ability to write code in Go, Python, Bash, or Perl for automation. Work Experience: 6-8 years of proven experience in previous roles or one of the following …
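As a minimal illustration of the scripting-for-automation skill this listing names, here is a hedged Python sketch of a service health check; the endpoint URL and exit-code convention are assumptions for the example, not details from the posting.

```python
#!/usr/bin/env python3
"""Tiny automation example: poll a health endpoint and fail loudly if it is down."""
import sys
import urllib.request

# Hypothetical endpoint; a real check would come from service discovery or config.
HEALTH_URL = "http://localhost:8080/healthz"


def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, connection refused, and socket timeouts
        return False


if __name__ == "__main__":
    if not is_healthy(HEALTH_URL):
        print(f"UNHEALTHY: {HEALTH_URL}", file=sys.stderr)
        sys.exit(1)  # non-zero exit lets cron/Nagios-style wrappers alert on failure
    print("ok")
```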
Join to apply for the Data Engineer role at Nucleus Global, an Inizio Company. Inizio is the world’s leading healthcare and communications group, providing marketing and medical communications services to healthcare clients. We have 5 main divisions within the group: Medical, Advisory, Engage, Evoke and Biotech. Our Medical Division focuses on communicating evidence on …
It has come to our notice that Fractal Analytics’ name and logo are being misused by certain unscrupulous persons masquerading as Fractal’s authorized representatives, who approach job seekers and ask them to part with sensitive personal information and/or money in …
Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, MySQL, Cassandra). Familiarity with cloud data platforms such as AWS, Google Cloud, or Azure. Experience with data processing frameworks (e.g., Apache Spark, Apache Kafka) and ETL tools …
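To make the "data processing frameworks and ETL tools" requirement concrete, here is a minimal PySpark extract-transform-load sketch; the file paths and the column names (`order_id`, `amount`, `order_date`) are assumed for illustration, not taken from the listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical input/output paths, standing in for a real pipeline's sources.
RAW_PATH = "raw/orders.csv"
CURATED_PATH = "curated/orders"

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV with a header row, letting Spark infer column types.
orders = spark.read.csv(RAW_PATH, header=True, inferSchema=True)

# Transform: drop malformed rows and derive a typed, validated amount column.
curated = (
    orders
    .dropna(subset=["order_id", "amount"])
    .withColumn("amount_gbp", F.col("amount").cast("double"))
    .filter(F.col("amount_gbp") > 0)
)

# Load: write the curated table as Parquet, partitioned by order date.
curated.write.mode("overwrite").partitionBy("order_date").parquet(CURATED_PATH)

spark.stop()
```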
technologies – Azure, AWS, GCP, Snowflake, Databricks. Must have: hands-on experience on at least 2 hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory skills [at least 2 hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Designing Databricks-based solutions for …
platform teams at scale, ideally in consumer-facing or marketplace environments. Strong knowledge of distributed systems and modern data ecosystems, with hands-on experience using technologies such as Databricks, Apache Spark, Apache Kafka, and DBT. Proven success in building and managing data platforms supporting both batch and real-time processing architectures. Deep understanding of data warehousing, ETL …
London, England, United Kingdom Hybrid / WFH Options
Autodesk
architecture, and processing skills with varied unstructured data representations; processing unstructured data, such as 3D geometric data; large-scale, data-intensive systems in production; distributed computing frameworks, such as Spark, Dask, Ray Data, etc.; cloud platforms such as AWS, Azure, or GCP; Docker; documenting code, architectures, and experiments; Linux systems and bash terminals. Preferred Qualifications: databases and/or data warehousing technologies, such as Apache Hive, Iceberg, etc.; data transformation via SQL and DBT; orchestration platforms such as Apache Airflow, Argo Workflows, etc.; data catalogs and metadata management tools; vector databases; relational and object databases; Kubernetes; computational geometry, such as mesh or boundary representation data processing; analyzing data and communicating results using tools such as Pandas, Matplotlib …
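Of the distributed frameworks this listing names (Spark, Dask, Ray Data), here is a minimal Dask sketch of partition-parallel analysis over geometry metadata; the dataset path and columns (`category`, `triangle_count`) are hypothetical placeholders.

```python
import dask.dataframe as dd

# Hypothetical Parquet dataset of per-asset mesh metrics; in a real deployment
# this would typically point at object storage (e.g. an s3:// URI).
ddf = dd.read_parquet("data/mesh_metrics.parquet")

# Lazy, partition-parallel aggregation: mean triangle count per asset category.
summary = ddf.groupby("category")["triangle_count"].mean()

# Nothing has executed yet; .compute() materializes the task graph.
print(summary.compute())
```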
experience leading data or platform teams in a production environment. Proven success with modern data infrastructure: distributed systems, batch and streaming pipelines. Hands-on knowledge of tools such as Apache Spark, Kafka, Databricks, DBT or similar. Familiarity with data warehousing, ETL/ELT processes, and analytics engineering. Programming proficiency in Python, Scala or Java. Experience operating in a …
modelling: experience integrating structural data (e.g., CryoEM or HDX-MS) with computational models in automated ways. Experience with data curation and processing from heterogeneous sources; familiarity with tools like Apache Spark or Hadoop. Proficiency with cloud platforms (AWS, GCP, Azure). Familiarity with major machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Open-source contributions or publications …
the development of and adherence to data governance standards. Data-Driven Culture Champion: advocate for the strategic use of data across the organization. Skills-wise, you'll definitely need: expertise in Apache Spark; advanced proficiency in Python and PySpark; extensive experience with Databricks; advanced SQL knowledge; proven leadership abilities in data engineering; strong experience in building and managing CI/…
Experience applying QM techniques to synthesis prediction, including using QM toolkits (e.g., PSI4, Orca, Gaussian). Experience with data curation and processing from heterogeneous sources; familiarity with tools like Apache Spark or Hadoop. Proficiency with cloud platforms (AWS, GCP, Azure). Familiarity with major machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Open-source contributions or publications …
Data Analytics, Snowflake, Talend, Databricks, Salesforce, HubSpot, SaaS, Data Lakes, APIs, AdTech, GDPR, CCPA, B2B, Sales Consultant, Account Manager, DataOps, CI/CD, DevOps, GenAI, RAG, AWS, Azure, Apache Spark, Kafka – UK Wide – £80,000-£95,000 + OTE. Our large Microsoft Partner client requires a Senior Data Analytics Sales Consultant to help spearhead their data sales …
systems). Experience with AWS services such as Lambda, SNS, S3, EKS, API Gateway. Knowledge of data warehouse design, ETL/ELT processes, and big data technologies (e.g., Snowflake, Spark). Understanding of data governance and compliance frameworks (e.g., GDPR, HIPAA). Strong communication and stakeholder management skills. Analytical mindset with attention to detail. Leadership and mentoring abilities in … with interface/API data modeling. Knowledge of CI/CD tools like GitHub Actions or similar. AWS certifications such as AWS Certified Data Engineer. Knowledge of Snowflake, SQL, Apache Airflow, and DBT. Familiarity with Atlan for data cataloging and metadata management. Understanding of Iceberg tables. Who we are: We're a global business empowering local teams with exciting …
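Since this listing pairs Apache Airflow with ETL/ELT design, here is a minimal sketch of a two-task daily Airflow DAG; the DAG id, task bodies, and schedule are illustrative assumptions (the `schedule` keyword is the Airflow 2.4+ spelling).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    # Placeholder: pull yesterday's records from the source system.
    print("extracting...")


def load() -> None:
    # Placeholder: copy staged files into the warehouse (e.g. Snowflake).
    print("loading...")


# A two-task daily pipeline; task ordering is declared with >>.
with DAG(
    dag_id="daily_elt_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```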
of large-scale distributed data processing. Experience with developing extract-transform-load (ETL) pipelines. Experience with distributed messaging systems like Kafka and RabbitMQ. Experience with distributed computing frameworks like Apache Spark and Flink. Bonus points: experience working with AWS or Google Cloud Platform (GCP); experience in building a data warehouse and data lake; knowledge of advertising platforms. About …
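For the distributed-messaging requirement above, here is a hedged sketch of a Kafka consumer loop using the kafka-python client; the topic name, broker address, and consumer group are placeholders, not details from the listing.

```python
from kafka import KafkaConsumer  # kafka-python client

# Hypothetical topic and broker address for illustration.
consumer = KafkaConsumer(
    "ad-events",
    bootstrap_servers="localhost:9092",
    group_id="etl-loader",
    auto_offset_reset="earliest",          # start from the oldest retained message
    value_deserializer=lambda raw: raw.decode("utf-8"),
)

# In a real pipeline each message would be parsed, validated, and staged
# for a downstream batch load; here we just print it.
for message in consumer:
    print(message.topic, message.offset, message.value)
```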
Experience applying QM techniques to synthesis prediction, including using QM toolkits (e.g., PSI4, Orca, Gaussian). Experience with data curation and processing from heterogeneous sources; familiarity with tools like ApacheSpark or Hadoop. Proficiency with cloud platforms (AWS, GCP, Azure). Familiarity with major machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Open-source contributions or publications More ❯
Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet …
Experience in AWS cloud services, particularly Lambda, SNS, S3, EKS, and API Gateway. Knowledge of data warehouse design, ETL/ELT processes, and big data technologies (e.g., Snowflake, Spark). Familiarity with data governance and compliance frameworks (e.g., GDPR, HIPAA). Strong communication and stakeholder management skills. Analytical mindset with attention to detail. Ability to lead and mentor … developing and implementing enterprise data models. Experience with interface/API data modelling. Experience with CI/CD via GitHub Actions (or similar). Knowledge of Snowflake/SQL. Knowledge of Apache Airflow. Knowledge of DBT. Familiarity with Atlan for data catalog and metadata management. Understanding of Iceberg tables. Who we are: We’re a business with a global reach that …
management and data governance open-source platform that we will teach you. Read more on Bloomberg. Other technologies in use in our space: RESTful services, Maven/Gradle, Apache Spark, Big Data, HTML5, AngularJS/ReactJS, IntelliJ, GitLab, Jira. Cloud Technologies: you'll be involved in building the next generation of finance systems onto the cloud platforms …
London, England, United Kingdom Hybrid / WFH Options
FSP
including engagement with exec-level sponsors). Knowledge and experience of the following would be advantageous: knowledge of enterprise architecture frameworks; good knowledge of Azure DevOps Pipelines; strong experience in the Apache Spark framework; previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS); knowledge of any other enterprise product …
City of London, London, United Kingdom Hybrid / WFH Options
CONQUER IT
data pre-processing, feature engineering, and model evaluation; understanding of software engineering principles (version control, CI/CD, containerization); familiarity with distributed computing and big data tools (Spark, Hadoop); ability to optimize models for performance and scalability; experience with Azure AI Search …
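To illustrate the pre-processing and model-evaluation skills this listing asks for, here is a minimal scikit-learn sketch; the dataset and estimator are placeholder choices, not from the posting.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Bundling pre-processing and the estimator in one pipeline keeps scaling
# inside each CV fold, avoiding leakage from held-out data into the
# training statistics.
X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validated accuracy as a simple evaluation metric.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```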