BloombergGPT: How We Built a 50 Billion Parameter Financial Language Model - YouTube


Published Jun 13 '23. Last edited Jul 09 '24

Tutorial   #llm #bloomberggpt  

This talk by David Rosenberg, Head of ML Strategy, Office of the CTO at Bloomberg, covers #BloombergGPT, an experimental project by Bloomberg to create a ChatGPT-like large language model (#LLM) that serves both general-purpose and domain-specific uses.

BloombergGPT is a 50-billion-parameter LLM trained on 570 billion tokens of language data; roughly half of the data is public and the other half is Bloomberg's private data.

Areas where BloombergGPT performed better than its peers include:

  • NER (named entity recognition) + NED (named entity disambiguation) tasks, such as matching a company mention to its stock ticker
  • Text-to-BQL, where BQL is Bloomberg's in-house query language. For example, given the input "Get me the last price and market cap for Apple", the expected output is get(px_last, cur_mkt_cap) for (['AAPL US Equity']). With no prior knowledge of BQL, and given few-shot learning from just three example input/output pairs, BloombergGPT can subsequently produce the correct output for new inputs; see the sketch after this list.
  • a natural-language interface for the Bloomberg Terminal, for example, bringing up a chart given the instruction "market cap of AAPL vs. MSFT"
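To make the few-shot setup concrete, here is a minimal sketch in Python of how such a Text-to-BQL prompt might be assembled. Only the Apple pair comes from the talk; the other two example pairs, the prompt layout, and the `complete` stub are illustrative assumptions, not Bloomberg's actual format.

```python
# Minimal sketch of few-shot prompting for Text-to-BQL translation.
# The prompt layout and two of the three example pairs are assumptions;
# the talk does not describe the exact format Bloomberg used.

FEW_SHOT_EXAMPLES = [
    # This pair is the example given in the talk.
    ("Get me the last price and market cap for Apple",
     "get(px_last, cur_mkt_cap) for (['AAPL US Equity'])"),
    # The next two pairs are hypothetical, written in the same style.
    ("Get me the last price for Microsoft",
     "get(px_last) for (['MSFT US Equity'])"),
    ("Get me the market cap for IBM",
     "get(cur_mkt_cap) for (['IBM US Equity'])"),
]

def build_prompt(query: str) -> str:
    """Assemble a few-shot prompt: three input/output pairs, then the new query."""
    parts = [f"Input: {text}\nOutput: {bql}" for text, bql in FEW_SHOT_EXAMPLES]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM completion endpoint."""
    raise NotImplementedError("replace with your model's completion API")

if __name__ == "__main__":
    # Print the prompt the model would receive for a new query.
    print(build_prompt("Get me the last price and market cap for Tesla"))
```

The idea is that the three in-context pairs teach the model the mapping from natural language to BQL at inference time, with no fine-tuning on BQL at all.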

 
