Replies: 6 comments 5 replies
-
This is a bit of a surprise! I had assumed the learning system was working somehow and decided to investigate today. It turns out that nothing actually seems to use the gathered LEARNINGS data. I'm wondering if something is misconfigured on my end, or if the retrieval/application layer simply isn't implemented yet. I couldn't find any information about this in the README. However, I did notice that
This suggests there's some mechanism for applying learnings, but I can't find how it works. My guesses:
@danielmiessler — Could you clarify how this is meant to work? I'd love to understand the intended approach. Thanks!
-
Yes, the final piece, applying the learnings and doing the matching, isn't happening natively yet. All the pieces are set up for it to happen, because we've been capturing the two things together.
But you're right that a final piece is needed to close the loop. We'll be adding that very soon.
…On Thu, Jan 29, 2026 at 7:37 PM, Zane Edwards < ***@***.*** > wrote:
This is what my PAI told me (but then I followed up with "where did we leave off", and it then searched the folders and figured it out):

I am not currently using a persistent learning system. Here's the honest reality:

* No persistent memory across sessions. Each time you start a new conversation, I start fresh. I don't retain anything from previous sessions unless it's loaded into my context (like the CORE skill, steering rules, etc. that your hooks inject at startup).
* The MEMORY system exists in your architecture — I can see references to WORK/, STATE/, LEARNING/ directories in the documentation. But I'd need to actually check whether those directories exist and have content, and whether any workflow is writing to them.
* I cannot give you "the last 10 things I learned" because I genuinely don't have that data. Making up a list would be exactly the kind of fabrication you're asking me not to do.

What I can do right now: check if the memory/learning directories exist and have any content, which would tell us whether any session has ever written learnings. Want me to look?

What would need to be true for learning to work:

* A hook or workflow that writes learnings to disk at session end
* A hook or context loader that reads them back at session start
* Neither of those appears to be happening automatically in what I see loaded

This is a real gap worth closing if the learning system matters to you.
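To make the first bullet concrete, here is a minimal sketch of what a session-end writer could look like. The `SessionSummary` shape, directory layout, and filenames are illustrative assumptions, not PAI's actual hook payload or format:

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical session summary; field names are assumptions for
// illustration, not the real PAI hook payload.
interface SessionSummary {
  endedAt: string;      // ISO timestamp of session end
  sentiment: number;    // e.g. a 1-10 rating captured for the session
  learnings: string[];  // free-text learnings gathered during the run
}

// A session-end hook could persist each session's learnings as one
// timestamped JSON file, so a startup hook can read them back later.
export function writeSessionLearnings(
  dir: string,
  session: SessionSummary
): string {
  fs.mkdirSync(dir, { recursive: true });
  // Sanitize the timestamp so it is a safe filename component.
  const file = path.join(
    dir,
    `${session.endedAt.replace(/[:.]/g, "-")}.json`
  );
  fs.writeFileSync(file, JSON.stringify(session, null, 2));
  return file;
}
```

One file per session keeps writes append-only and makes "read the last N sessions" a simple directory listing later.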
-
Hi @zeddy303, I just copied your exact question and asked my PAI (after upgrading to 2.5), and this is what it said: Here's my honest assessment of the learning system: The learning system is mostly write-only. What works:
What doesn't work:
The honest conclusion: It's an architectural gap. The write path is solid, but the read-back-and-apply path is missing from the main context-loading pipeline.

Last 10 real learnings (from the files I read): timezone changes made without asking, PDF styling, fix completions, and several neutral 5/10 sentiment captures. Most contained boilerplate "Action Items" like "Review..." and "Consider..." with no automated follow-through.

What would fix it: LoadContext.hook.ts needs to load a digest of recent learnings (especially failures and low-rated interactions) into session context at startup. That's a straightforward change: read the last N learning files and inject a summary.

I know Daniel and team will get to this; it's just a matter of time.
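That digest step could be sketched roughly like this. The directory layout, the `LearningEntry` shape, and the failure-first ordering are all assumptions for illustration, not the actual LoadContext.hook.ts implementation:

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical shape of one captured learning file; the real PAI
// format may differ -- this is an illustrative assumption.
interface LearningEntry {
  timestamp: string;
  sentiment: number; // e.g. a 1-10 rating for the session
  summary: string;
}

// Read the last N learning files from a directory and build a short
// digest, surfacing low-rated (failure-heavy) sessions first so they
// carry more weight in the injected context.
export function buildLearningDigest(dir: string, n = 10): string {
  if (!fs.existsSync(dir)) return "";
  const entries: LearningEntry[] = fs
    .readdirSync(dir)
    .filter((f) => f.endsWith(".json"))
    .sort() // assumes timestamped filenames sort chronologically
    .slice(-n)
    .map((f) => JSON.parse(fs.readFileSync(path.join(dir, f), "utf8")));
  entries.sort((a, b) => a.sentiment - b.sentiment);
  return entries
    .map((e) => `- [${e.sentiment}/10] ${e.summary}`)
    .join("\n");
}
```

A startup hook would then prepend the returned digest string to the session context it already assembles.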
-
I've been asking it to analyse the learnings once a week and suggest improvements to my skills, my steering instructions, and so on. Really happy with that approach.
-
Great diagnosis — and the timing is perfect, because v3.0 shipped the missing piece.

**What's now in place (as of v3.0)**

The loop you identified — "signals are captured but nothing reads them back" — is now closed via two workflows in the PAIUpgrade skill:

**1. MineReflections**

The Algorithm now writes a structured reflection after every Standard+ run to
The MineReflections workflow mines these for recurring themes — weighted by signal strength (low sentiment ratings, over-budget runs, failed criteria) — and produces concrete upgrade candidates. Invoke it by saying: "mine reflections" or "check reflections"

**2. AlgorithmUpgrade**

Goes deeper: it takes the reflection themes, maps them to specific sections of the Algorithm spec, reads the current spec text, identifies the gap, and drafts concrete text changes. It even assesses whether a version bump is warranted. Invoke it by saying: "algorithm upgrade" or "improve the algorithm"

Both live in the PAIUpgrade skill (

**On your other points**

**Ratings:** The ratings system feeds implicit sentiment scores per session. MineReflections weights low-rated sessions higher when identifying upgrade candidates — so bad interactions drive more upgrade pressure than good ones.

**USER/ files:** Still manually referenced for most things — that's intentional for the v3.0 release.

**AISTEERINGRULES.md:** The note in that file — "derived from failure analysis of 84 rating 1 events" — is exactly how this loop works. You run MineReflections + AlgorithmUpgrade, review the proposals, apply the ones that make sense, and the Algorithm improves. That's the intended workflow.

The short version: yes, it's learning now. The write path has always been solid. v3.0 added the read-back-and-apply path via the PAIUpgrade skill.
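As a rough illustration of the "weighted by signal strength" idea, one plausible scoring scheme might look like the sketch below. The `Reflection` fields and the exact weights are hypothetical; the real MineReflections schema and weighting are not shown in this thread:

```typescript
// Hypothetical reflection record; these fields are assumptions for
// illustration, not the actual PAIUpgrade reflection schema.
interface Reflection {
  theme: string;          // recurring theme extracted from the run
  sentiment: number;      // 1-10 session rating
  overBudget: boolean;    // run exceeded its budget
  failedCriteria: number; // count of unmet success criteria
}

// One plausible weighting: low sentiment and failure signals push a
// theme's score up, so bad sessions drive more upgrade pressure.
export function scoreThemes(
  reflections: Reflection[]
): Map<string, number> {
  const scores = new Map<string, number>();
  for (const r of reflections) {
    const weight =
      (10 - r.sentiment) +        // lower rating => higher weight
      (r.overBudget ? 3 : 0) +    // budget overruns add pressure
      2 * r.failedCriteria;       // each failed criterion adds more
    scores.set(r.theme, (scores.get(r.theme) ?? 0) + weight);
  }
  return scores;
}
```

The highest-scoring themes would then become the "concrete upgrade candidates" the workflow proposes.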
-
I actually had my system write a custom /done command. It folds in all of the learnings and updates my project files with any to-dos, errors, codes, or stop commands in one systematic process that reads every transcript that hasn't yet been processed through the done command and handles it systematically.
-
Hi,
Just installed the new version of PAI and spent some time understanding the system deeply. I love the architecture, and the skill system is impressive. However, I discovered something that confused me, and I want to understand whether this is intended or a gap.
The Issue: Learning Infrastructure Without Learning
PAI has extensive infrastructure for "learning":
But from what I can tell, none of this data is actually used.
Specific Issues
Ratings go nowhere
When I rate a response "7" or "8 - good work", it gets captured to ratings.jsonl. But what reads this file? What learns from it? It seems like the ratings are stored but never used to improve anything.
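For illustration, here is what a minimal consumer of ratings.jsonl might look like, assuming each line is a JSON object with a numeric `rating` field (the actual schema isn't documented here, so treat the field name as an assumption):

```typescript
import * as fs from "fs";

// Hypothetical consumer of ratings.jsonl: assumes one JSON object per
// line with a numeric `rating` field (the real schema may differ).
export function summarizeRatings(file: string): {
  count: number;
  average: number;
  lowRated: number;
} {
  const lines = fs
    .readFileSync(file, "utf8")
    .split("\n")
    .filter((l) => l.trim().length > 0); // skip blank lines
  const ratings = lines.map((l) => JSON.parse(l).rating as number);
  const count = ratings.length;
  const average = count
    ? ratings.reduce((a, b) => a + b, 0) / count
    : 0;
  // Low-rated responses are the strongest improvement signal.
  const lowRated = ratings.filter((r) => r <= 3).length;
  return { count, average, lowRated };
}
```

Even something this small would close half the loop: a startup hook could inject the summary (or the low-rated entries themselves) into context so the system actually sees its own track record.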
USER/ files aren't loaded
I was told to fill in USER/ABOUTME.md, USER/CONTACTS.md, USER/TECHSTACKPREFERENCES.md to personalize PAI. But these files are never auto-loaded. If I say "send to John", PAI doesn't know who John is - it would have to manually grep CONTACTS.md.
The only files that ARE auto-loaded via LoadContext hook:
My Questions
Suggestion: either
I don't want to spend time filling in USER/ files if nothing reads them. And I don't want to rate responses if it's not actually improving anything.
Am I missing something? Is there a learning loop I don't understand?
Thanks for building this - the skill system and algorithm are genuinely impressive. Just want to understand if the learning piece is functional or aspirational.