Figure out how to live in the worst case.
Or play Rambo in the woods, and max out your privilege. 

Your thoughts?


The big AI revolution

Started by monsta666, Feb 20, 2024, 01:50 PM


K-Dog

#30
Quote from: RE on May 14, 2025, 06:05 PM
Quote from: K-Dog on May 14, 2025, 02:36 PM
It seems to me a guy could make serious bank right now doing that.  Businesses do not know how to do it themselves.


Well then, make a biz card and drop in on some local biz you could write an AI app for to improve their business; explain how it will improve profits and bill them @ $200/hr.  Call it K9-AI.  You can quit stocking shelves.

RE

Yeahhhhhhh,



This is an image; the real thing runs in another window in my browser.

I just accomplished this within the last hour.  It is the result of so much effort I won't get into it.  It mixes Python, PHP, and JavaScript code on a local web page that talks to a local server running a large language model.  I can switch the running model in and out; it is interesting to watch the personality of the responses change.  Nothing uses the web (but I could not have built this without the web).
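The post doesn't say which local LLM server is running behind the web page, so this is only a sketch of the idea, assuming something like Ollama's default HTTP API on localhost.  The point is that swapping the running model in and out is nothing more than changing a name in the request:

```python
import requests

# Assumption: a local Ollama-style server listening on its default port.
LOCAL_LLM_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the locally running model and return its reply text."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    resp = requests.post(LOCAL_LLM_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]

# Switching models is just a different model name; watch the personality change.
print(ask_local_llm("Describe the Doomstead forum in one sentence.", model="llama3"))
print(ask_local_llm("Describe the Doomstead forum in one sentence.", model="mistral"))
```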

The leading edge of tech is referred to by those actually involved in the work as the bleeding edge.  I said I won't get into it, but it is like Columbus landing on Hispaniola for me.  The concept (RAG*) is simple.  The language model is treated as a bad student who has a cheat sheet.  In this case the entire Doomstead codebase has been placed into a vector database.  The question is first sent to this database, where semantic matches are made and returned.  That data becomes the cheat sheet, which is then presented to the LLM along with the original question, and like a cheating student the LLM, which knows nothing of the Doomstead codebase, can answer the question.
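In code the cheat-sheet trick is only a few lines.  A minimal sketch, where the vector database and the LLM call are stand-ins for whatever is actually wired up (a sketch of the vector store itself appears further down):

```python
def answer_with_cheat_sheet(question: str, vector_db, llm, top_k: int = 5) -> str:
    """RAG in miniature: retrieve semantic matches, then let the LLM 'cheat'."""
    # 1. The question goes to the vector database first; the best matches come back.
    chunks = vector_db.search(question, top_k=top_k)   # hypothetical search() method
    cheat_sheet = "\n\n".join(chunks)
    # 2. The matches plus the original question are handed to the LLM.
    prompt = (
        "Use only the context below to answer the question.\n\n"
        f"Context:\n{cheat_sheet}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)
```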

Easy as pie.

Not so easy to actually do.  The number of software pieces that have to dance together to make this all work is huge.  Gigabytes of library code are used, and the web page has about 1000 lines of code, most of it fancy-assed.  Getting all the software to dance together for the first time is a major accomplishment.  And the vector database is written essentially from scratch using the fancy-assed code.  The buttons at the top do not do anything yet but are ready to wire up.  The send button sends, and the toolbar text reports connection to the LLM.  Essential operation achieved after days of wheel spinning.
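The from-scratch vector database isn't shown in the post, but the core of one is small: embed every chunk of the codebase once, then rank chunks by cosine similarity against the embedded question.  A sketch, assuming the sentence-transformers library for embeddings (the embedding model actually used is not stated):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

class TinyVectorDB:
    """Bare-bones vector store: embeddings in a numpy array, cosine search."""

    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
        self.embedder = SentenceTransformer(model_name)
        self.texts: list[str] = []
        self.vectors = None  # (n_chunks, dim) array of unit-length embeddings

    def add(self, chunks: list[str]) -> None:
        """Embed new chunks and append them to the store."""
        vecs = self.embedder.encode(chunks, normalize_embeddings=True)
        self.texts.extend(chunks)
        self.vectors = vecs if self.vectors is None else np.vstack([self.vectors, vecs])

    def search(self, query: str, top_k: int = 5) -> list[str]:
        """Return the top_k chunks most similar to the query."""
        q = self.embedder.encode([query], normalize_embeddings=True)[0]
        scores = self.vectors @ q               # cosine similarity (vectors are normalized)
        best = np.argsort(scores)[::-1][:top_k]
        return [self.texts[i] for i in best]
```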



* RAG -- Retrieval-Augmented Generation


A demo I ran across (and ran) uses a text file 2000 lines long that is nothing but random facts about cats.  A simple chat LLM uses this text to be an expert on cats.  It may be fun to make an expert on doom and have people ask the Doomstead questions.  We be an oracle!
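Tying the sketches above together, the cat-facts demo (or a Doomstead oracle) is just: read the text file, index its lines, and route every question through the cheat-sheet function.  The file name and question here are made up for illustration:

```python
# Hypothetical input file: ~2000 lines of random cat facts, one fact per line.
with open("cat_facts.txt", encoding="utf-8") as f:
    facts = [line.strip() for line in f if line.strip()]

db = TinyVectorDB()
db.add(facts)   # one fact per chunk keeps retrieval simple

question = "How many hours a day do cats sleep?"
print(answer_with_cheat_sheet(question, db, lambda p: ask_local_llm(p, model="llama3")))
```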

RE

It seems cool, but how will you use it to get going on the new K9-AI Consulting Biz?

RE

K-Dog

Quote from: RE on May 18, 2025, 06:02 PM
It seems cool, but how will you use it to get going on the new K9-AI Consulting Biz?

RE

It is cool and I am using it.  To be a consultant on this I will have to ride the horse for a while.  Learn how to integrate raw text, PDF docs, a database, images, all the options.  On top of that, the simple formula of sending a question to the vector database and then presenting the search results along with the raw question to the LLM needs refinement to be useful.  Commands like 'format the results into a list in .md format' have to be injected along with the question.  Prompt engineering is a thing.
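One way to inject those formatting commands is to wrap the user's question in a prompt template before it reaches the LLM.  The template below is only an illustration of the idea, not the actual prompt in use:

```python
# Hypothetical formatting commands that get injected alongside the question.
STYLE_INSTRUCTIONS = {
    "md_list": "Format the results as a bulleted list in Markdown (.md).",
    "table":   "Format the results as a Markdown table.",
    "plain":   "Answer in plain prose, two or three sentences.",
}

def build_prompt(question: str, cheat_sheet: str, style: str = "md_list") -> str:
    """Prompt engineering in miniature: context + question + an injected command."""
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{cheat_sheet}\n\n"
        f"Question: {question}\n"
        f"{STYLE_INSTRUCTIONS[style]}\nAnswer:"
    )
```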

Wiring up the buttons at the top is going to be useful.