Tuesday 01/07/2025
Took a short walk in the morning, mostly to pick up a prescription at CVS. I thought really hard about the writers’ group cue, “celebrations,” and came up completely dry, so I just sat through the meeting.
After a nap I practiced some music. I’m really feeling good about my voice and delivery. Drove out at 3 to do a couple of errands. That was about it.
Except that yesterday and today, I was doing a bunch of emails on our in-house AI mailing list, developing the idea for a ResBot. We have a website we call ResWeb (resident web) that has tons of info but isn’t that easy to navigate. We read about a group at SJSU who had built an AI to help students find their way through the university’s website by asking natural-language questions about courses, hours, and university events. Could we do something like that? Here’s the summary I put together:
Channing House has an in-house website we refer to as ResWeb. (Its URL is intranet.channinghouse.org; it is quite distinct from the public-facing site channinghouse.org.) ResWeb is rich in info, and we frequently refer residents to “check ResWeb” for something or other.
It would be nice to have a chat-type interface to ResWeb so residents could query it with questions like —
* what room for Mrs. Galenson
* which resident was a provost at Stanford
* what’s the number of the activity director in the lee center
* what’s for lunch thursday
* is there a video of that talk about housing in palo alto
* who runs the bridge games and what day do they play
— all of which are answered on ResWeb, at some level or other.
It appears that what would work for this is LLM access, augmented with the scraped contents of ResWeb (RAG, or “retrieval-augmented generation”). The LLM could be accessed via the OpenAI or Anthropic APIs, or probably more economically by running a local model such as Llama 3.
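In outline, the plumbing is something like the sketch below: embed the scraped pages once, retrieve the chunks closest to a resident’s question, and hand those to the LLM. Everything here (the sample chunks, the model names, the ask_resbot function) is my own placeholder, just to show the retrieve-then-generate shape, not a design anyone has signed off on.

```python
# A sketch of the retrieve-then-generate loop, not a real ResBot.
# Assumes ResWeb pages have already been scraped into text chunks;
# the chunks, model names, and ask_resbot are all placeholders.
from sentence_transformers import SentenceTransformer, util
from openai import OpenAI

# Stand-ins for scraped ResWeb content, one chunk per page section.
chunks = [
    "Dining: Thursday lunch is grilled salmon, served 11:30 to 1:00.",
    "Activities: bridge games meet Tuesday afternoons in the lounge.",
    "Directory: activity director, Lee Center, ext. 1234.",
]

# Embed every chunk once, up front.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vectors = embedder.encode(chunks, convert_to_tensor=True)

def ask_resbot(question: str, top_k: int = 2) -> str:
    # Retrieve: find the chunks most similar to the question.
    q_vector = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vector, chunk_vectors, top_k=top_k)[0]
    context = "\n".join(chunks[hit["corpus_id"]] for hit in hits)
    # Generate: let the LLM answer from the retrieved context only.
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only this ResWeb content:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(ask_resbot("what's for lunch thursday"))
```

Swapping the OpenAI call for a local Llama 3 would change only the generate step; the retrieval half stays the same.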
So that’s the general idea. All the tools, concepts, and open-source components appear to be available to make it work, but there are many organizational hoops to jump through to make it happen. Although claude.ai showed me what it said was the Python code to do it (and it was quite readable), I will never try to implement it myself, because if I did I would be stuck maintaining it. But I would like to see it happen. Especially if we could add voice input, it would help a lot of residents.
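The voice part may not be far-fetched either. A sketch of a spoken front end, assuming the hypothetical ask_resbot above and the off-the-shelf speech_recognition library (Google’s free recognizer is just one choice):

```python
# Sketch: spoken question in, ResBot answer out.
# Uses the speech_recognition library (which needs PyAudio for the
# microphone); ask_resbot is the hypothetical function sketched above.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    print("Ask ResBot a question...")
    recognizer.adjust_for_ambient_noise(source)  # tune out room noise
    audio = recognizer.listen(source)

question = recognizer.recognize_google(audio)  # speech to text
print("You asked:", question)
print(ask_resbot(question))
```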