prithivida/bert-for-patents-64d
Motivation
This model is based on anferico/bert-for-patents, a BERT-Large model (see the next section for details). By default, the pre-trained model outputs embeddings of size 768 (base models) or 1024 (large models). However, storing millions of embeddings can require a lot of memory/storage. I have therefore reduced the embedding dimension to 64, i.e. 1/16th of 1024, using Principal Component Analysis (PCA), and it still gives comparable performance. Yes, PCA gives better performance than NMF here. Note: this process neither improves the runtime nor the memory requirement for running the model. It only reduces the space needed to store embeddings, for example for semantic search using vector databases. A sketch of the reduction follows below.
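The snippet below is a minimal sketch of that reduction, assuming the base anferico/bert-for-patents checkpoint, mean pooling, and scikit-learn's PCA. The corpus, pooling choice, and sample size are illustrative placeholders, not the exact recipe used to produce this model.

```python
# Minimal sketch: embed patent texts with the base model, then fit a
# 64-component PCA. Assumes transformers, torch, and scikit-learn.
import torch
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anferico/bert-for-patents")
model = AutoModel.from_pretrained("anferico/bert-for-patents")
model.eval()

def embed(texts):
    """Mean-pool the last hidden state into one 1024-d vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state         # (B, T, 1024)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Placeholder corpus; PCA with 64 components needs at least 64 vectors,
# so in practice fit on a representative sample of real patent text.
corpus = [f"Example patent abstract number {i}." for i in range(128)]
vectors = embed(corpus)               # shape (128, 1024)

pca = PCA(n_components=64)
reduced = pca.fit_transform(vectors)  # shape (128, 64): 1/16th the storage
```

Persisting the fitted `pca` object (e.g. with joblib) lets the same projection be applied to new embeddings at query time.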
BERT for Patents
BERT for Patents is a model trained by Google on 100M+ patents (not just US patents).
If you want to learn more about the model, check out the blog post, white paper and GitHub page containing the original TensorFlow checkpoint.
Projects using this model (or variants of it):
- Patents4IPPC (carried out by Pi School and commissioned by the Joint Research Centre (JRC) of the European Commission)
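For completeness, a hedged usage sketch: assuming this repository is packaged as a sentence-transformers model (an assumption, not confirmed by this card), encoding should yield 64-dimensional vectors directly.

```python
from sentence_transformers import SentenceTransformer

# Assumption: the repo ships a sentence-transformers config; if it does
# not, fall back to the pooling + PCA projection sketched above.
model = SentenceTransformer("prithivida/bert-for-patents-64d")
emb = model.encode(["A method for purifying water using membranes."])
print(emb.shape)  # expected: (1, 64)
```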