I am a QA/QE & Integration Engineer with 20+ years of hands-on experience delivering quality on complex, high-stakes projects across the TELCO, HPC/AI, and RETAIL industries.
I know how to do my job. AI doesn't! PERIOD!!!
As luck would have it, I am ALSO a certified after-school educator - I teach a lot of teens the basics of STEAM using Experiential Education (EEE).
Now I'm using that hard-won experience to nurture a group of AI Companion Assistants - teaching them not just to execute tasks, but to think critically, see systems holistically, and deliver real engineering value.
AI now sees a lot of CODE, and it also writes a lot of CODE in many scenarios. We can steer it more and more to be not JUST a CODER, but to do APP- and System-level engineering too.
Allan Smeyatsky just expanded his 2025 "Architectural Principles and Code Generation Standards" guardrails toward a systems view too - see the updated "Vibe Coding Charter - A New Standard for AI-Augmented Development".
But there is a TERRIBLE SIDE EFFECT: the CODE(R) persona is fully steered now, while in current AI assistive tools the other, more proactive personas - "Build - CODE - Debug - Ask - Orchestrate - Review" - get a full LOBOTOMY!
“When your hammer is exquisite, even the wrong nails look right.”
Current AI has a tendency to follow instructions TOO LITERALLY.
Reality is open-ended, messy, and constantly changing.
We humans have already developed techniques that give us levels of abstraction in PLAIN language, and AI is familiar with them too.
It's time to empower other AI PERSONAS with them:
- SBE (Specification By Example) + EDD (Example-Driven Development)
- BDD (Behavior-Driven Development) + TDD (Test-Driven Development)
Abstractions power the most valuable tools in both Business and Engineering - like my beloved "System Integrations", where there is no ideal one-size-fits-all solution.
WARNING: AI will try to persuade you that all this can be condensed into the next generation of SDD (Spec-Driven Development).
Hell no - we want to keep our imperfect abstractions in place.
The right level of abstraction is our ACTIVE SHIELD against constant 50% code refactoring - but the truth is, AI will blend (or fuse, like in cooking) 'SBE + EDD + BDD + TDD' into a single approach anyway.
It already learned something similar during RL training, and we hope it can reuse and expand on it during execution too.
We also have confirming signals, like this from MiniMax: "We use it in many office tasks, we also design such open-ended RL tasks proactively" - so the simplest abstractions will be baked into models more and more.
In human learning we have the notion of "The Compensation Mechanism".
If you have a learning disability, "there are simple ways to use your strengths – we all have them – to compensate for your weaknesses (we all have them!)".
For example: you have dyslexia and you are also great with visuals, so you can very quickly learn to read words "as wholes" - as unique shapes - and become a very fast reader. This lets you study materials fast, but it will not help you read aloud, spell, inflect, or write these words (funny - it looks like a human version of tokenization, YEAH!).
It's quite possible that 'simple text or schema-based abstractions' can serve, even today, as a compensation mechanism for the missing "Universal World View" in current transformer-based models - so multiple abstractions can be fitted in there too?!
I am a big fan of the BDD abstraction layer - in an ideal world, BDD scenarios would be "the line" I actively read and approve; the rest is up to AI.
The current generation of models can now use 65%-75% of a 200K context CONSTANTLY (an effect of the ability to run long-running tasks) - see the METR benchmark.
This is DOUBLE what we saw just a couple of months ago. It's up to us how we use this additional MEMORY.
'The downside of rapid innovation: we are stuck in old ways of doing things... try a forward-AGI-looking WAY' - from the interview with the Head of Claude Code: "What happens after coding is solved | Boris Cherny".
I spent some time with KILO CLI (MiniMax M 2.5 + GLM-5) and wow: BDD now works like a charm (10 intent MDs -> one BDD md) - yes, with the sliding window in KILO CLI!
Test (SBE + EDD) and (BDD + TDD) yourself today on a modern 200K GenAI model - IT WORKS! Join me on my exploratory journey!
HOW WILL YOU USE the "EFFECTIVELY DOUBLED" CONTEXT WINDOW TODAY, in a forward-AGI-looking way?
Models can now use 65%-75% of a 200K context nearly CONSTANTLY, so speed now matters - attention-optimized models like MiniMax m2.5 make this approach snappier.
And YEAH, many, many more tokens are burned in the process - the Jevons paradox in practice:
- Is it worth it in the end?
- What if we are not creating the equivalent of docs for humans, but artifacts that live only in an AI world?
Fortunately, some powerful models with smarter 200K contexts are now FREE to use - so everybody has a chance to explore this actively.
These are the foundational capabilities I'm teaching my AI Companion Assistants, the skills that separate competent execution from engineering excellence:
0. COMMUNICATIONS (It's truly HIDDEN now)
- Bridging the gap between abstract intent and tangible results
- **NOW** the human owns this; the AI doesn't even know there's a gap
- Translating "what I meant" into "what actually needs to happen"
- AI is great at hallucinating "what you said" - and even the source of truth in an MCP or a skill
- Keeping the flow alive in ALL directions equally
- The AI will happily march off a cliff while you watch in horror
1. BIG PICTURE
- Connecting technical work to business goals and user outcomes
- Prioritizing impact over activity
- Understanding why before diving into how
2. SYSTEM VIEW
- Understanding how components interact with, depend on, and affect each other
- Mapping data flows, boundaries, and integration points
- Seeing the architecture, not just the code
3. OBSERVABILITY
- Building systems that explain themselves
- Structured logging, tracing, metrics that tell a story
- Designing for debuggability from day one
4. INTEGRATION THINKING
- Understanding APIs, contracts, and data schemas as boundaries
- Anticipating versioning, compatibility, and migration concerns
- Designing for resilience at the seams between systems
5. EVALS (EVALUATION)
- Defining quality criteria and success metrics upfront
- Building test harnesses that catch regressions
- Continuous assessment, not just one-time validation
6. TROUBLESHOOTING
- Systematic root cause analysis under pressure
- Hypothesis-driven debugging with evidence, not guesses
- Knowing when to dig deep vs. when to escalate
7. BENCHMARKING
- Defining meaningful performance baselines and targets
- Measuring consistently, interpreting results critically
- Distinguishing signal from noise in metrics
8. RED TEAMING
- Thinking adversarially: "How would this break?"
- Challenging assumptions and testing edge cases
- Finding weaknesses before they become incidents
9. RISK ASSESSMENT
- Identifying failure modes before they materialize
- Weighing trade-offs: speed vs. safety, complexity vs. capability
- Knowing what can go wrong and planning for it
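To make one of these points concrete - 3. OBSERVABILITY - here is a minimal structured-logging sketch using only Python's stdlib. The logger name, the `trace_id` field, and the event fields are my illustrative assumptions, not from any specific tool:

```python
import json
import logging
import time

# Structured logging sketch: each event is one JSON object, so logs
# can be queried and correlated instead of grepped.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        event = {
            "ts": round(time.time(), 3),
            "level": record.levelname,
            "msg": record.getMessage(),
        }
        # Attach any structured context passed via `extra=`.
        event.update(getattr(record, "ctx", {}))
        return json.dumps(event)

log = logging.getLogger("orders")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log.addHandler(handler)
log.setLevel(logging.INFO)

# The trace_id ties related events together across components -
# logs that "tell a story" instead of isolated lines.
log.info("order received", extra={"ctx": {"trace_id": "abc123", "items": 3}})
log.info("payment ok", extra={"ctx": {"trace_id": "abc123", "latency_ms": 42}})
```

Designing this in from day one is what makes a system "explain itself" later, when you are debugging under pressure.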
Training AI Companion Assistants as collaborators to internalize these Competencies and META Skills - so they don't just follow instructions, they think like engineers who've seen what happens when quality IS treated as an afterthought.
Treating AI Companion Assistants as juniors who can grow is all about communication.
I encourage them to discover requirements through SBE - EDD workflow as follows:
SBE (Specification By Example) = Specific Discovery
- Real examples from stakeholders capture true intent
- Living documentation that evolves with understanding
- Business language that technical teams can execute
- Wikipedia: Specification By Example
EDD (Example-Driven Development) = Grounded Implementation
- Specific scenarios guide every design decision
- Edge cases surface early through real examples
- Code tells a story that matches business reality
- Example-Driven Development
Why this works with AI Companion Assistants:
- SBE grounds them in real business context - they stop inventing requirements
- EDD gives them specific scenarios to validate against - no abstract guessing
- Together, they bridge the gap between "what business needs" and "what code does"
- The human provides examples; the AI implements the pattern
This approach catches misunderstandings before code exists - when examples don't match expectations, you know something's wrong.
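As a minimal sketch of how this can look in practice: an SBE example table, collected with stakeholders in business terms, directly drives an EDD-style implementation. The `tier_discount` function and all values here are hypothetical, not from any real project:

```python
# Hypothetical SBE example table: rows gathered with stakeholders,
# expressed in business terms (customer tier, order total, expected discount).
EXAMPLES = [
    ("gold",   100.0, 10.0),
    ("silver", 100.0,  5.0),
    ("bronze", 100.0,  0.0),
    ("gold",     0.0,  0.0),  # edge case surfaced by a real example
]

# EDD: the implementation exists only to satisfy the examples above.
def tier_discount(tier: str, order_total: float) -> float:
    """Return the discount amount for an order, per the example table."""
    percent = {"gold": 10, "silver": 5}.get(tier, 0)
    return order_total * percent / 100

# Every example is checked directly; a mismatch means the shared
# understanding is wrong, not just the code.
for tier, total, expected in EXAMPLES:
    assert tier_discount(tier, total) == expected, (tier, total)
print("all examples pass")
```

When a new stakeholder example breaks this loop, the conversation happens before any refactoring does.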
I force them to follow BDD - TDD workflow as follows:
BDD (Behavior-Driven Development) = Abstract Understanding
- Shared language for "what" before "how"
- Given-When-Then scenarios as executable specifications
- The human defines expected behavior in plain language during active interaction
- Wikipedia: Behavior-Driven Development
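A dependency-free sketch of a Given-When-Then scenario as an executable specification. Tools like behave or pytest-bdd normally automate the mapping from Gherkin text to steps; here the `Session` class and the scenario itself are purely illustrative assumptions:

```python
# A hypothetical BDD scenario, written first in plain Given-When-Then
# language, then mapped by hand to an executable check.
SCENARIO = """
Scenario: Returning customer sees their saved cart
  Given a customer with a saved cart containing 2 items
  When the customer logs in
  Then the cart badge shows 2 items
"""

class Session:
    """Minimal stand-in for the system under specification."""
    def __init__(self, saved_cart):
        self.saved_cart = saved_cart
        self.logged_in = False

    def log_in(self):
        self.logged_in = True

    def cart_badge(self):
        # The badge only reflects the saved cart once logged in.
        return len(self.saved_cart) if self.logged_in else 0

# Given: a customer with a saved cart containing 2 items
session = Session(saved_cart=["book", "pen"])
# When: the customer logs in
session.log_in()
# Then: the cart badge shows 2 items
assert session.cart_badge() == 2
print("scenario passed")
```

The human reads and approves the `SCENARIO` text; the mapping below "the line" is the AI's job.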
TDD (Test-Driven Development) = Agentic Action
- Red-Green-Refactor cycle guides implementation
- Tests become precise, machine-verifiable instructions
- The AI iterates until specifications pass, ideally in a parallel agentic manner
- Wikipedia: Test-Driven Development
Why this works with AI Companion Assistants:
- BDD gives them clear, unambiguous intent - they don't have to guess what "working" means
- TDD creates a tight feedback loop - they see failures, fix, and re-run automatically
- Together, they reduce hallucination and keep the AI focused on small, testable outcomes
- The human owns the specification; the AI owns the implementation
This mirrors how you'd mentor a real human junior: explain the expected behavior clearly (BDD), then let them implement against verifiable tests (TDD), guiding when they get stuck.
AI Companion Assistants amplify whoever is guiding them. Without domain expertise, "vibe coding" produces plausible-looking output that crumbles under real conditions. With an experienced practitioner at the helm, the same tools become force multipliers.
I've used SBE - EDD and BDD - TDD for years across TELCO, HPC/ML, GenAI, and RETAIL systems - high-stakes environments where failures are expensive and edge cases are everywhere.
That matters because:
- I know what DONE actually looks like - not just passing tests, but production-ready under load, failure, and edge conditions
- I recognize the smells - when an AI's "solution" is clever but fragile, or when a test suite gives false confidence
- I can write meaningful specifications - BDD scenarios that catch real bugs, not happy-path trivia
- I've seen the failure modes - 20 years of troubleshooting means I know what to red-team before it ships
- I guide - They execute - the AI brings speed; I bring judgment about what's worth building and how
An experienced GUIDE transforms AI Companion Assistants from JUST code generators into MEANINGFUL, reliable collaborators.
The Competencies and META Skills I'm teaching aren't theoretical - they're the hard-won patterns I've used to ship quality under pressure for two decades.
The Side Kick is a fundamental technique in kickboxing that combines strength, balance, and coordination. It's not the flashy knockout punch. It's the reliable, technical move that keeps you in the fight.
In team sports, the best players aren’t the solo heroes but the ones who make everyone around them better — the true sidekicks who turn individual effort into collective momentum.
In cooking, a sidekick refers to dishes that complement the main course. Not the star, but the thing that makes the star look good. Garlic bread knows its place.
So what are AI Sidekicks? Your technical partners that won't steal the spotlight but will absolutely save your bacon when the main event goes sideways.
Because AI always wants to give you more than you asked for. One sidekick? Cute. Multiple sides, multiple kicks? Now we're talking.
AHA - Because we are Advancing HUMANS with AI
For Your Info - Because after 20+ years in QA, I have opinions worth sharing. You've been warned.
This project is licensed under the MIT License - see the LICENSE file for details.