Advanced Usage
Deeper control over shortcuts, answer strategy, model integration, voice tuning, and cloud capabilities
Who should read this
This page is best for users who are already stable on the defaults and want to improve efficiency, reduce conflicts, or fine-tune behavior.
Defaults are already the recommended starting point
Our default strategy is designed for real interview workflows out of the box and is already very close to best practice. For most users, one full rehearsal on defaults plus 1-2 careful tweaks is more reliable than deep customization on day one.
Treat these as progressive changes. Do not turn on too many advanced options at once right before a real interview.
Run one full round on defaults first
Keep the default hotkeys first
The default hotkeys already cover the highest-frequency actions. Only remap when you have a real conflict with your IDE, browser, or meeting tool.
Keep General for non-coding
Most non-coding interviews do not need LeetCode or ACM mode. General is usually the safest starting point.
Let voice start stable first
Start with the default voice workflow and current recommended route, then decide whether hotwords, silence tuning, or auto-trigger rhythm really need adjustment.
Improve materials before parameters
If you want answers to sound more like you, improving the cloud resume and knowledge base usually matters more than touching lots of advanced switches.
Recommended order for advanced tuning
Start with the options that affect speed the most
Shortcut layout, answer constraints, and voice filtering usually affect your interview rhythm first.
Then move to model and API settings
These directly affect tone, latency, and answer style, so validate them separately.
Enable cloud and planning features last
Sync and review features are useful, but they are not the first things you want to change before a real session.
Advanced capability overview
Custom mapping
If the default keys conflict with your IDE, browser, or meeting tool, adjust them carefully and only where it really matters.
Adjustment order
Keep the highest-frequency keys stable first, then tune scrolling, opacity, zoom, and secondary controls.
Platform reminder
The default mental model still matters: Windows users think in Ctrl, macOS users think in Cmd, even after minor customization.
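To make that mental model concrete, here is a small illustrative sketch (not the app's actual keymap format; the action names and override shape are assumptions) of keeping defaults as the fallback and letting only explicitly remapped keys diverge:

```python
# Illustrative sketch, not the product's real keymap format: the platform's
# native modifier stays the anchor, and only explicit overrides diverge.
DEFAULT_MODIFIER = {"win32": "Ctrl", "darwin": "Cmd"}

def resolve_shortcut(action, overrides, platform):
    """Return the shortcut for an action, falling back to the default layout."""
    mod = DEFAULT_MODIFIER.get(platform, "Ctrl")
    defaults = {
        "live_qa": f"{mod}+1",        # live transcription Q&A
        "screenshot_qa": f"{mod}+2",  # screenshot Q&A
    }
    # Only actions you explicitly remapped diverge from the defaults.
    return overrides.get(action, defaults[action])

print(resolve_shortcut("live_qa", {}, "darwin"))                    # Cmd+1
print(resolve_shortcut("live_qa", {"live_qa": "Cmd+9"}, "darwin"))  # Cmd+9
```

The design point is the fallback: a remap that only touches one conflicting key leaves every other high-frequency shortcut exactly where muscle memory expects it.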
Answer preferences
Examples include "explain the approach before code", "include complexity", or "keep it concise and speakable".
Scenario-driven prompts
Translation, English interviews, behavioral rounds, system design, and coding interviews each benefit from different instructions for tone, structure, and format.
Writing guidance
Keep rules short, clear, and non-conflicting. Dense rule stacks usually create worse outputs, not better ones.
Custom API
Supports OpenAI-compatible configuration including Display Name, Base URL, API Key, Model Name, and extra Header / Body fields.
Before live use
Test stability, first-token latency, and answer style before relying on a custom model during a real interview.
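To see how those configuration fields fit together, here is a hedged sketch of building an OpenAI-compatible request from them. The `/chat/completions` path and message format follow the public OpenAI chat API shape; the function, the example URL, and the extra header/body values are illustrative assumptions, not the product's internals:

```python
# Sketch only: maps the Custom API fields (Base URL, API Key, Model Name,
# extra Header / Body fields) onto an OpenAI-compatible request.
import json

def build_request(base_url, api_key, model_name, extra_headers=None, extra_body=None):
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    headers.update(extra_headers or {})  # extra Header fields override/extend
    body = {
        "model": model_name,
        "messages": [{"role": "user", "content": "ping"}],
        "stream": True,  # streaming is what makes first-token latency measurable
    }
    body.update(extra_body or {})        # extra Body fields (e.g. temperature)
    url = base_url.rstrip("/") + "/chat/completions"
    return url, headers, json.dumps(body)

url, headers, payload = build_request(
    "https://api.example.com/v1", "sk-...", "my-model",
    extra_headers={"X-Org": "team-a"}, extra_body={"temperature": 0.3},
)
```

Sending a tiny streamed "ping" like this before a real session is a cheap way to check that the Base URL, key, and model name are all valid and to eyeball first-token latency.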
Smart route and filters
Tune Smart route, hotwords, message filters, recent message count, whether to include the last Q&A pair, and silence thresholds.
Auto vs manual rhythm
The "Interviewer only" setting works better when you want Q&A auto-triggered from interviewer questions; long or multi-speaker rounds usually benefit from manual control.
Tuning advice
Change one important parameter at a time so you can tell what actually helped or hurt.
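The silence-threshold trade-off above can be pictured with a toy sketch. The function name, the speaker label, and the numbers are illustrative assumptions, not the product's real trigger logic:

```python
# Toy model of the auto-trigger rhythm: fire a Q&A only when the interviewer
# has been silent longer than the threshold. Names/values are illustrative.
def should_auto_trigger(last_speech_end, now, silence_threshold_s, speaker):
    # "Interviewer only" style filtering: ignore non-interviewer segments.
    if speaker != "interviewer":
        return False
    return (now - last_speech_end) >= silence_threshold_s

# A higher threshold waits longer (fewer false triggers, slower answers);
# a lower one answers faster but may cut a still-unfinished question off.
assert should_auto_trigger(10.0, 11.5, 1.2, "interviewer") is True   # long pause
assert should_auto_trigger(10.0, 10.8, 1.2, "interviewer") is False  # brief pause
assert should_auto_trigger(10.0, 15.0, 1.2, "candidate") is False    # wrong speaker
```

This is also why changing one parameter at a time matters: raising the threshold and tightening the message filter in the same rehearsal makes it impossible to tell which change fixed (or caused) a mis-trigger.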
Cloud resume and planning
These include cloud session sync, interview review scoring, interview plans, and cloud resume plus knowledge-base coordination.
Usage advice
The recommended order is: complete the cloud resume first, sync it into the knowledge base, then enable review and planning features.
Example answer-preference templates
Useful for non-native English speakers who want answers that are easier to say out loud.
Use concise spoken English.
Start with the conclusion, then give 1-2 supporting points.
Keep technical terms in English.
Avoid overly formal written language.
Useful when you want the product to turn your own ideas into more natural interview-ready English.
Keep technical terms and product names in English.
Give a natural spoken answer instead of literal translation.
Keep the answer short enough to say in 20-40 seconds.
Useful for introductions, conflict stories, project highlights, and "why did you leave" questions.
Answer in STAR order but keep it short.
Lead with the result first.
Include one concrete metric or business outcome when possible.
Useful when you want a more predictable output contract during technical rounds.
For system design, give conclusion, core components, trade-offs, and risk control.
For coding questions, explain approach and complexity first.
Use LeetCode-style output for function-only questions.
Use ACM/OJ-style output with main/stdin/stdout when required.
Smoother combos for voice, screenshots, and notes
Live voice Q&A + knowledge base notes
Project follow-ups, behavioral rounds, and English self-introductions work especially well when the knowledge base panel stays open as a stealth-friendly reference layer.
Live voice Q&A + screenshot Q&A
Use Cmd/Ctrl + 1 for live transcription Q&A and Cmd/Ctrl + 2 for screenshot Q&A when the round mixes spoken questions with shared code, diagrams, or whiteboards.
Cloud resume first, Q&A docs second
If you want answers to sound more like you, complete the cloud resume first, then add role-specific Q&A docs, system-design notes, and project reviews.
Context Memory Range
You do not need to switch this manually right now
The context memory range is managed automatically by the system. It prioritizes the most relevant recent Q&A, current screenshot or voice context, and knowledge-base retrieval results, so most users do not need a manual memory-range toggle.
Default behavior
The system prefers the most relevant recent context instead of blindly stuffing in older history. That usually keeps answers more stable and less likely to drift.
When you will feel it working
You will notice it most during follow-up questions, project deep-dives, multi-turn system design discussions, and repeated debugging on the same coding task.
What you should do instead
If answers start pulling in the wrong earlier context, use Start Over, clear screenshots, or refocus the current prompt instead of looking for a memory-range switch.
How to think about it:
- It is best understood as the system automatically deciding how much current context to keep, not as a high-frequency parameter you need to manage.
- The higher-leverage user actions are still material quality, clear prompting, and resetting cleanly when you move to a new question or a new round.
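As a mental model only, the "most relevant recent context first" idea can be sketched as a small relevance-plus-recency scorer. Everything here, including the scoring rule and data shape, is an assumption for illustration; it is not the actual selection algorithm:

```python
# Illustrative mental model of relevance-plus-recency context selection.
# The real system's scoring and retrieval are not exposed; this is a sketch.
def select_context(history, current_question_terms, max_items=3):
    """Score past Q&A pairs by term overlap plus a mild recency bonus."""
    scored = []
    for age, qa in enumerate(reversed(history)):  # age 0 = most recent pair
        overlap = len(current_question_terms & set(qa["question"].split()))
        score = overlap + max(0, 1 - age)          # small bonus for recency
        scored.append((score, qa))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [qa for score, qa in scored[:max_items] if score > 0]

history = [
    {"question": "design a rate limiter", "answer": "token bucket"},
    {"question": "tell me about your last project", "answer": "..."},
    {"question": "how does the rate limiter scale", "answer": "shard by key"},
]
kept = select_context(history, {"rate", "limiter", "scale"})
# The unrelated "last project" pair scores zero and is dropped, which is why
# follow-ups on the same topic feel coherent while stale topics fall away.
```

This also explains the Start Over advice: when you switch to a genuinely new question, resetting is cleaner than hoping low-relevance old context scores its way out.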
The 4 advanced items most worth trying first
Shortcut refinement
Only remap what genuinely conflicts. Your highest-value keys should still feel automatic under pressure.
Answer constraints
If answers feel too long, too generic, or not enough like your speaking style, this is often the first place to improve.
Voice filtering and hotwords
Especially useful for technical interviews with domain terms, acronyms, or mixed-language phrasing.
Model and API
One of the strongest levers on tone and latency. Validate carefully before trusting it in a real round.
Usage notes
- Advanced settings are not "more settings = more professional". They work best as small adjustments on top of a stable default setup.
- Change one category at a time and run one realistic rehearsal after each change.
- The default platform mapping is still Windows = Ctrl, macOS = Cmd.
- Do not switch custom models or APIs minutes before the interview.
- Validate voice-related settings on the target platform or /tools, especially Smart route, silence threshold, hotwords, and message filter.
- Only enable compatibility helpers or plugin-like behavior when you actually need them, then verify both input fields and shortcuts.