etc.) Have experience productionising machine learning models Are an expert in one of predictive modelling, classification, regression, optimisation or recommendation systems Have experience with Spark Have knowledge of DevOps technologies such as Docker and Terraform, and of MLOps practices and platforms such as MLflow Have experience with agile delivery
Products tools (e.g. BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, etc.) to build solutions for our customers Experience in Spark (Scala/Python/Java) and Kafka. Experience in MDM, Metadata Management, Data Quality and Data Lineage tools. E2E Data Engineering and Lifecycle (including … Cloud Data Fusion, Dataprep, etc.) - Construction Industry Sector Preferred Google Cloud Platform Time-Series Data Building Control and Monitoring Systems Java, Scala, Python, Spark, SQL Experience of developing enterprise-grade ETL/ELT data pipelines. Deep understanding of data manipulation/wrangling techniques Demonstrable knowledge of applying Data … DB/Neo4j/Elastic, Google Cloud Datastore. Snowflake Data Warehouse/Platform Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming. Experience of working with CI/CD technologies: Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible, etc. Experience and knowledge of application containerisation: Docker, Kubernetes
Skills & Experience At least 10 years' experience working with JavaScript, Python or Java Previous experience deploying software into the cloud EKS, Docker, Kubernetes Apache Spark or NiFi Microservice architecture experience Experience with AI/ML systems
as R, Python, Azure, Machine Learning (ML), and Databricks. Essential criteria and experience Proficiency in one or more analytical tools, e.g. R, Python, Tableau, Apache Spark, etc. Proficiency in Azure Machine Learning and Azure Databricks. Pro-activity and a self-starting attitude. Excellent analytical and problem-solving ability.
City of London, London, United Kingdom Hybrid / WFH Options
Oliver Bernard Ltd
Experience with a JVM language: Kotlin, Java, Scala or Clojure Knowledge of TypeScript and React is beneficial Exposure to data pipelines using technologies such as Spark and Kafka Experience with cloud services (ideally AWS) Hybrid working: 1-2 days per week in Central London. £110,000 depending on experience. Please …
design. Cloud data products such as: Data Factory, Event Hubs, Data Lake, Synapse, Azure SQL Server Experience developing in Databricks and coding with PySpark and Spark SQL. Proficient in ETL coding standards Data encryption techniques and standards Knowledge of relevant legislation such as: Data Protection Act, EU Procurement Directives, Freedom …
Familiarity with data warehousing concepts and technologies, including star and snowflake schema designs. Big Data Technologies: Understanding of big data platforms such as Hadoop and Spark, and tools like Hive, Pig and HBase. Data Integration: Ability to integrate data from disparate sources using middleware or integration tools, as well as …
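The star schema mentioned above is the core data-warehousing pattern: a central fact table holds measures and foreign keys, and denormalised dimension tables hold descriptive attributes. A minimal sketch using only the Python standard library's sqlite3 module — all table and column names here are hypothetical, chosen purely for illustration:

```python
import sqlite3

# Hypothetical star schema: one fact table (sales) referencing two
# dimension tables (product, date) by surrogate key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, iso_date TEXT, year INTEGER);
    CREATE TABLE fact_sales  (product_key INTEGER, date_key INTEGER, amount REAL);

    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
    INSERT INTO dim_date    VALUES (20240101, '2024-01-01', 2024);
    INSERT INTO fact_sales  VALUES (1, 20240101, 10.0), (2, 20240101, 20.0);
""")

# A typical analytical query joins the fact table out to its dimensions
# and aggregates the measure by dimension attributes.
rows = conn.execute("""
    SELECT p.category, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    JOIN dim_date    d ON d.date_key    = f.date_key
    GROUP BY p.category, d.year
""").fetchall()
print(rows)  # [('Hardware', 2024, 30.0)]
```

A snowflake schema takes the same shape one step further by normalising the dimension tables themselves (e.g. splitting `category` out of `dim_product` into its own table).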
data quality issues and enhancing model performance. Creative Solutions: A history of innovative problem-solving and independent solution development. Big Data Technologies: Knowledge of Spark or PySpark for handling large-scale data processing. ML Expertise: A thorough understanding of machine learning techniques and best practices in model development. Desirable …
with JavaScript or Python Experience deploying software into the cloud and on-premises. Developing software products. Experience with EKS, Kubernetes, OpenSearch/Elasticsearch, MongoDB, Spark or NiFi. Experience with microservices architectures. Experience with AI/ML systems TO BE CONSIDERED… Please either apply online or by emailing …
learning frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, etc. Experience with cloud platforms (e.g. AWS, Azure, GCP) and distributed computing frameworks (e.g. Spark) is a plus. Familiarity with robotics principles and technologies is highly desirable. Publications in leading AI, ML or robotics conferences. Strong proficiency in programming …
developing real-time analytics solutions, preferably with experience in time-series databases. Proficiency in technologies relevant to real-time analytics, such as KX, Kafka, Spark, Flink and real-time visualisation tools. Demonstrated ability to lead and mentor software engineering teams. Excellent problem-solving skills and the ability to work …
Southampton, Hampshire, South East, United Kingdom
Ordnance Survey Limited
development. Track record in problem resolution and selection of technical solutions. Holds a relevant development certification (or equivalent experience). Experience working with Databricks/Apache Spark. Experience working with infrastructure-as-code. Desirable: Experience working with geospatial data. Experience using Terraform. The rewards: We want you to love what …
designing and building robust, scalable, distributed data systems and pipelines, using open-source and public cloud technologies. Strong experience with data orchestration tools, e.g. Apache Airflow, Dagster. Experience with big data storage and processing technologies, e.g. dbt, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL … . Knowledge of event-driven architectures and streaming technologies, e.g. Apache Kafka, Kafka Streams, Apache Flink. Experience with public cloud environments, e.g. AWS, GCP, Azure, Terraform. Strong knowledge of software engineering practices, e.g. testing, CI/CD (Jenkins, GitHub Actions), agile development, git/version control, containers, etc.
Warwick, Warwickshire, West Midlands, United Kingdom
Tata Technologies Europe Ltd
data analysis techniques Ability to produce clear graphical representations and data visualisations. Knowledge of data analysis tools, for example R, Tableau Public, SAS, Apache Spark, Excel, RapidMiner Knowledge of data modelling, data cleansing and data enrichment techniques An understanding of data protection issues Experience of working with …
for business improvements Lead a small team of data scientists working on neural networks and LLMs (CNN & RNN), ML and NLP NLP/AI/ML/Spark/Python/Data Scientist/Machine Learning Engineer/OCR/Deep Learning Requirements Bachelor's degree or equivalent experience in a quantitative field …
products like Data Factory, Event Hubs, Data Lake, Synapse, Azure SQL Server. Detailed knowledge of developing in Databricks and experience coding with PySpark and Spark SQL. ETL coding standards: ensuring that code is standardised, self-documenting and can be reliably tested Knowledge of best-practice data encryption techniques and …
end ownership Python or similar (Ruby or Node) or another functional language JavaScript and associated frameworks, preferably Vue or similar Cloud technologies SQL (advantageous) Spark (advantageous) Docker/Kubernetes (advantageous) MongoDB, SQL, Postgres & Snowflake (advantageous) Developing online, cloud-based SaaS products. Leading and building scalable architectures and distributed systems
stream big data coming in from all types of sources. THE ROLE: As a Tech Lead you will be expected to be hands-on with Spark/Python and Kafka. You will be part of the design team enhancing their AWS platform, introducing technologies like Kubernetes, Docker and Jenkins to ensure …
of databases. Snowflake is widely used, as are Docker and Kubernetes for containerisation. ETL and ELT technologies are also used every day, primarily Airflow, Spark, Hive and more. You'll need to come from a strong academic background with some commercial experience in a data-heavy software …
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Redshift, DynamoDB, Glue and SageMaker Infrastructure-as-code tools and approaches (we use the AWS CDK with CloudFormation) Data processing frameworks such as pandas, Spark and PySpark Machine learning concepts like model training, model registry, model deployment and monitoring Development and CI/CD tools (we use GitHub, CodePipeline …
Saffron Walden, Essex, South East, United Kingdom Hybrid / WFH Options
EMBL-EBI
least 6 years' professional software development experience using Java and the Spring framework. Experience using workflow management systems and distributed data processing software (e.g. Hadoop, Spark). At least 6 years' professional experience with large-scale databases (e.g. Oracle, MongoDB). Excellent knowledge of the UNIX command line. Experience with …
a fast-changing environment with rapid release cycles. Knowledge of Java, Cucumber, DynamoDB, Redis and Redshift Cloud: AWS (S3, EC2, Lambda, AWS Glue/Spark, IAM, CloudWatch, MSK, Managed Airflow, Athena, Kinesis) Experience of writing and taking responsibility for technical documentation. Knowledge and experience of working with Python and …
data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.