Thoughts on LLMs


After skipping the Crypto and NFT discussions and mostly refraining from offering an opinion, I did end up running down a rabbit hole on the current AI trends.

I’m getting close to two decades of coding experience and recently spent about three days’ worth of working time (entrepreneur days) building a company-internal chatbot that can create and manage GitLab issues, handle CI/CD pipeline failures, draft emails, and remember preferences. I ended up with a superficial understanding of what is possible and an intuition for what it is useful for and what it is not.

I was amazed by how easy it was to build these things on top of the OpenAI APIs.

In the end, I can summarize for myself: AI assistants are extremely appealing because you can provide tools and they just know when to use them. They can take over mundane tasks that nobody enjoys much, like summarizing conversations into a ticket and assigning the right person and milestone. And they do it much cheaper and faster than humans could.
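
For the curious, this is roughly what “providing tools” means in practice with the OpenAI chat completions API. A minimal sketch: the `create_gitlab_issue` tool and its parameters are made up for illustration, and in the real chatbot the arguments would be forwarded to the GitLab API.

```python
# Minimal sketch of tool calling with the OpenAI Python SDK.
# The create_gitlab_issue tool is a hypothetical example; the model only
# sees its JSON schema and decides on its own when (and with what
# arguments) to call it.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "create_gitlab_issue",
        "description": "Create a GitLab issue and assign a person and milestone.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "description": {"type": "string"},
                "assignee": {"type": "string"},
                "milestone": {"type": "string"},
            },
            "required": ["title", "description"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Summarize this thread into a ticket and assign it to Alice "
                   "for the next release: ... (conversation pasted here)",
    }],
    tools=tools,
)

# If the model decided a tool is needed, it returns the call instead of text.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(call.function.name, args)  # here you would call the real GitLab API
```

The surprising part is that you never write the routing logic yourself; the schema and a one-line description are enough for the model to pick the tool and fill in the arguments.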

They are also able to source the right information at the right time, whereas a human, in the interest of time, will not evaluate seemingly unrelated content. An AI model cannot avoid processing the related context fed to it via embeddings.
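
That “fed to it via embeddings” part is plain retrieval: everything gets embedded, and whatever scores as similar to the query is put into the prompt, whether a human skimming the backlog would have bothered to read it or not. A minimal sketch, assuming OpenAI’s text-embedding-3-small model and a toy in-memory document list instead of a real vector store:

```python
# Minimal sketch of embedding-based retrieval. The documents below are
# placeholders; a real setup would keep the vectors in a vector store.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Deployment pipeline failed on the staging branch last Tuesday.",
    "Team agreed to move the release milestone to the end of the quarter.",
    "Office plants need watering twice a week.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def retrieve(query, k=2):
    q = embed([query])[0]
    # Cosine similarity between the query and every document.
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# The top-k chunks are prepended to the prompt, so the model cannot avoid
# seeing them, even ones a human would have dismissed as unrelated.
print(retrieve("Why did the release slip?"))
```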

# Note on AI Startups

Many startups out there are, at a very high level, “exposing” AI models to users in interesting ways. Their main task is UX and marketing, with little on the AI-model-innovation side. But the plethora of potential use cases makes this interesting nevertheless… sometimes.

Huge value is created when one model does everything, rather than having a separate model or AI product for each task.