This article covers real-world, scenario-based JavaScript questions for 2025. It is drafted with the interview theme in mind to give you maximum support in your preparation. Go through these JavaScript scenario-based questions to the end, as every scenario has its own importance and learning potential.
To check out other scenario-based questions: Click Here.
Disclaimer:
These solutions are based on my experience and best effort. Actual results may vary depending on your setup. Code may need some tweaking.
1) Your SPA shows inconsistent UI after rapid clicks. How would you reason about microtasks vs macrotasks to fix state flicker?
- I first map what schedules what: Promises/`queueMicrotask` go to microtasks; timers and DOM events go to macrotasks.
- If state updates are split across both, microtasks flush first and can starve rendering until empty, causing odd “late” paints.
- I batch state changes into the same microtask, or defer heavy work to a macrotask to let the browser render.
- I remove accidental microtask chains (e.g., nested Promise thenables) that reorder effects.
- For UI reads/writes, I gate them to the same tick or use `requestAnimationFrame` for paint-aligned work.
- I prove the fix by logging queue order and checking paint timing in DevTools Performance (see the batching sketch after this list).
- Goal: one predictable flush per user action, not multiple interleaved queues.
- This avoids “double render” illusions under bursty input.
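A minimal sketch of the batching idea above, assuming a plain DOM app: all state changes scheduled during one task are collected and committed in a single microtask, so a burst of clicks produces one flush instead of several interleaved ones. `applyChange` is a hypothetical commit function, not part of the original.

```js
// Batch state changes into one microtask flush per task.
let flushScheduled = false;
const pending = [];

function scheduleUpdate(change) {
  pending.push(change);
  if (!flushScheduled) {
    flushScheduled = true;
    queueMicrotask(() => {
      flushScheduled = false;
      const batch = pending.splice(0);   // take everything queued during this task
      batch.forEach(applyChange);        // one commit, then the browser can render
    });
  }
}

// Heavy, non-urgent work goes to a macrotask so rendering isn't starved.
function deferHeavyWork(fn) {
  setTimeout(fn, 0);
}

function applyChange(change) { /* hypothetical: write state / update the DOM */ }
```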
2) A Node.js service spikes CPU during peak traffic though I/O is minimal. How do you check for event-loop blocking?
- I confirm event-loop health with metrics (loop lag, `clinic`, flamegraphs) and look for long JavaScript frames.
- If heavy synchronous work runs on each request, it blocks the loop and queues callbacks, spiking latency.
- I offload CPU chunks to workers/child processes or stream chunked work instead of big arrays/maps.
- I replace sync crypto/compression with async or native bindings if needed.
- I avoid `JSON.stringify` on large objects in hot paths; pre-serialize or stream.
- I check for `process.nextTick` abuse, which can starve other phases.
- I re-test under load with `--inspect`/profiler to ensure short tasks dominate (a loop-lag probe sketch follows this list).
- The end state: the loop stays responsive; long tasks are isolated.
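A minimal sketch of the loop-lag check, using Node's built-in `monitorEventLoopDelay`; the threshold and intervals are assumptions to tune for your service.

```js
// Node.js: watch event-loop delay; sustained high percentiles mean something is blocking the loop.
import { monitorEventLoopDelay } from "node:perf_hooks";

const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
  const p99ms = histogram.percentile(99) / 1e6; // the histogram reports nanoseconds
  if (p99ms > 100) {
    console.warn(`event-loop p99 delay ${p99ms.toFixed(1)}ms - suspect blocking sync work`);
  }
  histogram.reset();
}, 5000);
```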
3) A junior added optional chaining everywhere and performance dipped. When do you avoid/allow it?
- Use it to guard optional paths at boundaries (API, config), not in tight inner loops.
- Overuse can hide real nullability bugs in core domain objects.
- Prefer local invariants: validate once, then access directly in hot code.
- Combine with nullish coalescing for fallbacks only where needed.
- Measure; engines optimize common patterns, but unnecessary guards in loops still add overhead.
- Keep it in readability-critical code; drop it where data is guaranteed.
- Educate with examples comparing guarded vs validated code paths (see the sketch after this list).
- Code review should ask: “Is this value truly optional here?”
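A minimal sketch contrasting the two styles: optional chaining at the boundary where data really is optional, direct access in the hot loop once the shape is guaranteed. `normalizeUser` is a hypothetical boundary helper.

```js
// Boundary: the raw API payload may be missing fields, so guard here once.
function normalizeUser(raw) {
  return {
    id: raw?.id ?? null,
    name: raw?.profile?.name ?? "Unknown",
  };
}

// Hot path: shapes are guaranteed by the boundary, so no per-iteration guards.
function totalNameLength(users) {
  let total = 0;
  for (const user of users) {
    total += user.name.length; // safe: normalizeUser enforced the shape
  }
  return total;
}
```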
4) After migrating to ESM, some `setTimeout` vs Promise ordering changed and tests broke. What’s your mental model?
- Promises/microtasks always flush before the next macrotask, regardless of modules.
- So `Promise.resolve().then(...)` runs before `setTimeout(..., 0)` (see the sketch after this list).
- Test suites relying on timer order are brittle; prefer explicit `await` on Promises your code produces.
- If you need “after paint/IO”, use a macrotask (`setTimeout`/`setImmediate`) intentionally.
- If you need “right after the current task”, use microtasks via a resolved Promise or `queueMicrotask`.
- Align tests with actual scheduling guarantees, not incidental ordering.
- Document the chosen scheduling for contributors.
- Keep module migration separate from timing-sensitive refactors.
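A minimal sketch of the ordering guarantee above; it holds the same way in classic scripts and in ES modules.

```js
setTimeout(() => console.log("3: timer macrotask"), 0);
Promise.resolve().then(() => console.log("2: promise microtask"));
console.log("1: synchronous");
// Always logs 1, 2, 3: microtasks drain before the next macrotask.
// Tests should await the promise they depend on rather than assume a 0ms timer runs "late enough".
```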
5) Users report memory growth after long sessions. What JS-side leaks do you hunt first?
- Event listeners kept on detached DOM nodes; I verify with heap profiles and “Retainers”.
- Global caches/maps without eviction; I add size caps and time-to-live.
- Observers (Intersection/Mutation/Resize) not disconnected on teardown.
- Closures capturing large objects; I break references or move to weak references if suitable.
- Timers/Intervals never cleared on route change.
- Third-party widgets keeping hidden references; sandbox or lazy-mount them.
- I reproduce via scripted interactions, then compare heap snapshots.
- “Fix one class of leak” per patch to keep changes safe.
6) Your bundle didn’t shrink after enabling tree-shaking. What’s the likely cause and practical fix?
- CommonJS or transpiled modules (Babel turning ESM into CJS) block static analysis; keep `modules: false` in the Babel config.
- Side effects prevent pruning; mark `"sideEffects": false` or list the files with effects (see the sketch after this list).
- Library not ESM or mis-flagged; prefer ESM builds or switch packages.
- Re-check minifier settings; dead-code elimination might be disabled.
- Verify imports are named per-function (`import { x }`) instead of namespace imports.
- Confirm the tool (webpack/Rollup) version supports module-level DCE.
- Measure diff with analyzer to see what’s retained and why.
- Educate team about “no top-level side effects” in shared utils.
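A minimal sketch of the two usual fixes, assuming a Babel + webpack/Rollup toolchain: keep Babel from converting ESM to CJS, and declare side effects explicitly in `package.json`. The file paths are illustrative.

```js
// babel.config.js - leave module syntax alone so the bundler can statically analyze imports.
module.exports = {
  presets: [["@babel/preset-env", { modules: false }]],
};

// In package.json, tell the bundler which files genuinely have side effects, e.g.:
//   "sideEffects": ["./src/polyfills.js", "*.css"]
// or, if the package is entirely pure:
//   "sideEffects": false
```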
7) A product owner asks “Why is strict mode useful in 2025 modules?” How do you answer simply?
- ES modules run in strict mode by default, which surfaces silent errors early.
- It forbids sloppy constructs so engines can optimize better.
- It protects from accidental globals and duplicate params.
- In legacy scripts, opting in reduces weird edge behavior across browsers.
- With linters and TS you still benefit at runtime in non-typed areas.
- For third-party scripts, strict mode catches footguns fast in QA.
- It’s not a silver bullet; tooling still matters.
- But default-strict in modules is a net win for safety.
8) Your click handler fires twice in React/Vue under strict/dev builds. How do you explain and de-flake?
- Some frameworks intentionally double-invoke in dev to surface side effects.
- If your handler isn’t idempotent, you’ll see duplicates.
- Make handlers pure or guard with flags that reset per tick.
- Avoid doing persistent mutations in render paths.
- Move side effects to lifecycle hooks designed for them.
- In tests, simulate the production behavior or disable dev-only double calls.
- The aim is predictability, not clever fixes.
- Document this in the team’s testing guide. (General practice; framework docs discuss dev double-invoke.)
9) A teammate uses `process.nextTick` everywhere. Latency climbs. What trade-off do you explain?
- `nextTick` callbacks run before other microtasks and can starve the loop if chained.
- Overuse delays timers, I/O callbacks, and fairness across requests.
- Prefer Promises for microtasks and `setImmediate` for “after I/O” work (see the sketch after this list).
- Keep batches bounded; break large queues into chunks.
- If it’s a back-pressure situation, use streams or queues.
- Measure with loop-lag metrics before and after change.
- Update a small policy: “no unbounded nextTick in hot paths.”
- This balances responsiveness with throughput.
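A minimal sketch of the bounded-batch idea, assuming a Node service: each batch yields with `setImmediate` so timers and I/O callbacks get a turn, instead of an unbroken `nextTick` chain monopolizing the loop. The batch size is a tunable assumption.

```js
// Drain a large queue cooperatively without starving other event-loop phases.
function drainQueue(items, handle, batchSize = 100) {
  return new Promise((resolve) => {
    let i = 0;
    (function runBatch() {
      const end = Math.min(i + batchSize, items.length);
      for (; i < end; i++) handle(items[i]);
      if (i < items.length) {
        setImmediate(runBatch); // yield: I/O and timers run between batches
      } else {
        resolve();
      }
    })();
  });
}
```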
10) Product asks for “instant search” while typing; perf drops on low-end phones. How do you manage it without code golf?
- Debounce keystrokes to one pipeline per frame, or every 150–250ms (see the sketch after this list).
- Move heavy filtering off the main thread via Worker.
- Avoid full re-renders; diff and update only the list slice in view.
- Pre-index data (e.g., basic token maps) at idle time.
- Avoid JSON parse/stringify loops in the hot path.
- Cache last query result for prefix typing.
- Measure input latency and frame time; ship only if both are stable.
- Keep UX fallbacks for very slow devices (manual submit). (General performance practice.)
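A minimal sketch of the debounce-plus-cache part, assuming `runSearch` does the heavy lookup (ideally in a Worker) and `render` batches the DOM update; the 200ms delay and the `.text` field on results are assumptions.

```js
function createSearchInput(runSearch, render, delayMs = 200) {
  let timer = null;
  let requestId = 0;
  let lastQuery = "";
  let lastResults = [];

  return function onInput(query) {
    clearTimeout(timer);
    timer = setTimeout(async () => {
      // Prefix typing: show a cheap local filter of cached results right away.
      if (lastQuery && query.startsWith(lastQuery)) {
        render(lastResults.filter((item) => item.text.includes(query)));
      }
      const id = ++requestId;
      const results = await runSearch(query); // heavy filtering, off the main thread if possible
      if (id !== requestId) return;           // a newer query superseded this one; drop stale results
      lastQuery = query;
      lastResults = results;
      render(results);
    }, delayMs);
  };
}
```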
11) After a refactor, hot paths got slower. You suspect V8 deopts. What’s your “coach” guidance?
- Keep object shapes stable: create properties in the same order and avoid adding them later (see the sketch after this list).
- Avoid polymorphic call sites in hot loops when possible.
- Don’t mix types in arrays used in performance-critical code.
- Hoist allocations out of loops; reuse small structs.
- Use perf tooling/flamegraphs to catch hidden megamorphic sites.
- Micro-optimize only where profiled, not across the codebase.
- Document “shape hygiene” in contributing.md.
- Re-profile to verify ICs are hot again.
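A minimal sketch of the “shape hygiene” point: objects built through one factory keep a single hidden shape, while late property additions and mixed shapes can make hot call sites polymorphic.

```js
// Stable shape: same properties, same order, every time.
function makePoint(x, y) {
  return { x, y };
}

// Hot loop stays monomorphic when every element shares the makePoint shape.
function sumX(points) {
  let total = 0;
  for (const p of points) total += p.x;
  return total;
}

// Anti-pattern: mutating shape after creation and mixing shapes in one array.
const a = makePoint(1, 2);
const b = { x: 3, y: 4 };
b.z = 5;              // shape transition after creation
const mixed = [a, b]; // call sites iterating `mixed` now see multiple shapes
```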
12) Your team debates “Vanilla JS + small libs” vs “framework.” How do you decide with business impact?
- Look at team skills and hiring pool; delivery speed matters more than theoretical bytes.
- If the app is long-lived and complex, frameworks bring conventions that reduce decision fatigue.
- For small widgets or constrained pages, lightweight + web platform APIs win.
- Consider ecosystem: routing, i18n, testing, SSR, a11y—do you need them now?
- Measure perf budget; many frameworks can meet <100KB gz goals with code-splitting.
- Factor maintainability cost, not just initial LOC.
- Choose one path; write guardrails and upgrade plan.
- Revisit once a year as requirements shift. (Community practice; industry surveys back longevity.)
13) You suspect a leak from listeners after route changes. What’s your cleanup checklist?
- Confirm with heap snapshots across navigations.
- Inspect heap snapshots for “Detached” DOM nodes still retained by listeners.
- Verify framework unmount hooks remove subscriptions/observers.
- Centralize event bus and ensure unsubscribe on teardown.
- For custom add/remove patterns, wrap with helpers that auto-track.
- Use `AbortController` to cancel DOM/event listeners on scope exit (see the sketch after this list).
- Add E2E tests that navigate repeatedly and watch the memory ceiling.
- Ship a dashboard alarm on heap size after N minutes.
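A minimal sketch of the `AbortController` cleanup pattern, assuming a hypothetical `mountWidget` that owns everything it registers and returns one teardown function.

```js
function mountWidget(root) {
  const controller = new AbortController();
  const { signal } = controller;

  // Listeners registered with { signal } are removed by a single abort().
  window.addEventListener("resize", onResize, { signal });
  root.addEventListener("click", onClick, { signal });

  const observer = new IntersectionObserver(onVisible);
  observer.observe(root);
  const timer = setInterval(poll, 5000);

  return function unmount() {
    controller.abort();    // drops both event listeners
    observer.disconnect(); // observers and timers still need explicit cleanup
    clearInterval(timer);
  };

  function onResize() {}
  function onClick() {}
  function onVisible() {}
  function poll() {}
}
```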
14) Your team moved to ESM but tree-shaking still misses dead code. What else breaks DCE?
- Re-export barrels that re-execute side-effectful modules.
- Top-level work (polyfills, logging) inside shared utils.
- Mixing CJS and ESM in a chain forces fallback.
- Babel/TS transpiling modules to CJS.
- Dynamic `require()` or `import()` patterns that hide static usage.
- Misconfigured package `exports`/`sideEffects` fields.
- Vendor libs without proper ESM builds.
- Use bundle analyzer to pinpoint kept modules.
15) A PM asks why their bug “goes away” when adding `setTimeout(..., 0)`. How do you explain safely?
- `setTimeout` defers work to a later macrotask, letting microtasks and rendering catch up.
- It can mask ordering bugs caused by microtask chains.
- We prefer fixing the root cause: co-locate state updates in one tick.
- For paint-aligned updates, use `requestAnimationFrame`, not blind timeouts.
- Timeout timing varies across browsers and tabs; it’s not robust.
- Temporary patches risk race conditions elsewhere.
- We keep a doc: “When to defer, and why.”
- Then we add tests around the corrected scheduling.
16) Backend says “just parse the whole 10MB JSON and filter.” You’re targeting cheap Androids. Your answer?
- Don’t block the main thread on huge parse + filter; use a Worker or stream if possible.
- Ask backend for server-side filtering/pagination to shrink payload.
- If you must parse on the client, chunk the work and yield to the event loop (see the sketch after this list).
- Cache last page and diff incremental results.
- Consider compressed payloads with proper cache headers.
- Measure parse time and memory on a reference low-end device.
- Agree on an SLO for input latency before shipping.
- Keep a circuit-breaker for oversized responses. (General perf guidance.)
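A minimal sketch of the chunk-and-yield fallback for the case where the payload really must be filtered on the client; the chunk size is an assumption to tune on a reference device, and a Worker is still the better home for this work.

```js
// Filter a large parsed array without freezing the main thread.
async function filterInChunks(items, predicate, chunkSize = 5000) {
  const out = [];
  for (let start = 0; start < items.length; start += chunkSize) {
    const end = Math.min(start + chunkSize, items.length);
    for (let i = start; i < end; i++) {
      if (predicate(items[i])) out.push(items[i]);
    }
    // Yield between chunks so input handling and rendering can run.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return out;
}
```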
17) Security review flags accidental globals in legacy scripts. What’s your modernization plan?
- Wrap legacy code into modules to get strict-mode semantics by default.
- Lint for implied globals and ban `with`/`eval` patterns.
- Introduce build steps gradually; don’t rewrite everything at once.
- Add smoke tests to catch behavior changes early.
- Document allowed globals and freeze critical ones.
- Train devs on safer patterns for inter-script communication.
- Track error rates post-migration.
- Use CSP reports to spot eval-like usage.
18) Design wants animations everywhere; FPS tanks on scroll. How do you keep it smooth?
- Animate transform/opacity, avoid layout-triggering properties.
- Batch DOM reads/writes and use `requestAnimationFrame`.
- Prefer CSS transitions for simple effects; JS only for complex timelines.
- Reduce simultaneous animations; stagger them.
- Use `will-change` sparingly to promote layers at the right time.
- Test on mid-range Android with the Performance panel.
- Defer non-critical animations until idle.
- Treat 60fps as a feature with acceptance criteria. (General platform best practices.)
19) A library adds 100KB but only one function is used. You import by name; still no size win. Why?
- The library might publish only CJS, so the bundler can’t shake it.
- The package may mark files as side-effectful, blocking DCE.
- Your import path could hit a barrel that re-exports too much.
- Some bundlers need ESM + production mode for full DCE.
- Try a micro-lib alternative or write the tiny function in-house.
- Confirm with analyzer which sub-modules remain.
- File an issue or switch to an ESM-friendly fork.
- Document dependency acceptance criteria in the repo.
20) You’re asked to “optimize everything.” What’s your JS-specific prioritization checklist?
- Start with user-visible metrics: input latency, TTI, FPS, memory ceiling.
- Profile first; only optimize hot paths confirmed by traces.
- Fix correctness (ordering, leaks) before micro-ops.
- Right-size dependencies; prefer ESM, no side effects, code-split.
- Keep event loop responsive; move CPU to workers.
- Bake in budgets and CI checks (bundle size, loop lag).
- Educate team with short docs and recurring perf reviews.
- Re-measure after every change and capture regressions early.
21) Your SSR site feels fast on first paint but interactions lag after hydration. What would you check and fix?
- I check hydration waterfalls: too many islands mounting at once will block input responsiveness.
- I defer non-interactive widgets and hydrate only above-the-fold interactive parts first.
- I batch state wiring to a single tick per island instead of scattered microtasks.
- I lazy-load heavy event handlers and third-party widgets after first user input.
- I guard effects to run only on the client and only once per mount.
- I verify with Performance panel that Long Tasks drop below 50ms.
- I set a budget for hydration time on mid-range Android and track it in CI.
- I document a hydration order so future components don’t pile in blindly.
22) Your SPA uses infinite scroll; memory keeps rising over a long session. How do you control it?
- I window the list: keep only visible items plus a small buffer in memory.
- I release images and object URLs when items scroll far out of view.
- I clear timers, observers, and listeners tied to removed rows.
- I cap caches and evict least-recently-used data when thresholds hit.
- I move heavy parsing to a Worker and transfer ownership when possible.
- I add a “clear history” control for user-initiated reset.
- I watch heap snapshots after 10+ scroll cycles to confirm stability.
- I add a guardrail alert if the heap exceeds a defined ceiling.
23) Payments form sometimes double-charges when users tap quickly. What’s your JS-side mitigation?
- I make the submit path idempotent in UI: disable the button once the request fires.
- I gate with a per-intent token so repeated taps reuse the same in-flight promise (see the sketch after this list).
- I display “processing…” feedback to reduce user retries.
- I handle late responses by ignoring results after the first success.
- I align backend idempotency keys, but keep UI protections regardless.
- I write an automated test that simulates rapid taps and network jitter.
- I log duplicate attempts to catch regressions early.
- I verify this works on slow devices where taps are more frantic.
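A minimal sketch of the UI-side idempotency above: repeated taps for the same payment intent join the one in-flight request instead of issuing a second charge. `chargePayment`, the endpoint, the button id, and the idempotency header are assumptions about the surrounding app.

```js
const inFlight = new Map();

async function submitPayment(intentId, payload) {
  if (inFlight.has(intentId)) return inFlight.get(intentId); // a second tap reuses the same promise

  const button = document.querySelector("#pay-button");
  button.disabled = true; // visual idempotency while the request runs

  const request = chargePayment(intentId, payload)
    .finally(() => {
      inFlight.delete(intentId);
      button.disabled = false;
    });

  inFlight.set(intentId, request);
  return request;
}

function chargePayment(intentId, payload) {
  // Hypothetical API call carrying an idempotency key the backend also honors.
  return fetch("/api/charge", {
    method: "POST",
    headers: { "Content-Type": "application/json", "Idempotency-Key": intentId },
    body: JSON.stringify(payload),
  });
}
```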
24) Search suggestions feel laggy on low-end phones. Team wants “more caching.” What’s your response?
- I add keystroke debouncing and minimum character thresholds to cut noise.
- I cache only stable suggestions and set short TTLs to keep relevance fresh.
- I precompute cheap client-side indexes during idle time, not in keystrokes.
- I batch DOM updates so we paint once per input, not per item.
- I move expensive filtering to a Worker to keep the main thread free.
- I test with real device traces and target consistent <100ms input delay.
- I prefer correctness and smoothness over aggressive caching that goes stale.
- I expose a manual “search” option for ultra-slow devices.
25) A/B experiment adds analytics in each click handler; app stutters. What trade-offs do you apply?
- I buffer events in memory and flush on a timer or on idle, not per click.
- I avoid heavy object cloning; keep payloads minimal and typed.
- I sample high-volume events instead of logging every single one.
- I move serialization/compression off the main thread when possible.
- I guard analytics behind feature flags so we can turn it down quickly.
- I set SLAs: if input latency regresses, the experiment auto-disables.
- I monitor before/after traces to prove no frame drops.
- I document a lightweight client analytics contract for future tests.
26) A third-party widget blocks the main thread during load. What’s the safe integration pattern?
- I lazy-load it after first user interaction or when it scrolls into view.
- I sandbox in an iframe if it does heavy synchronous work.
- I isolate styles and events so it can’t leak listeners into our app.
- I enforce timeouts and fallbacks if its script fails or loads slowly.
- I track its size and CPU cost in our performance budgets.
- I prefer a “stub + async init” approach rather than blocking the page.
- I keep a kill-switch flag for emergency rollbacks.
- I review periodic updates because vendor scripts change over time.
27) Your app relies on `localStorage` for cart state. Users report lost items across tabs. How do you stabilize?
- I switch to a more reliable store like IndexedDB for larger, transactional data.
- I synchronize changes with `storage` events or a BroadcastChannel for multi-tab consistency (see the sketch after this list).
- I write to storage in one place, not scattered across components.
- I debounce writes to reduce thrashing and corruption risks.
- I add versioning and migration to avoid breaking older data.
- I validate data shape on load and repair or reset if corrupted.
- I test with private mode and quota-limited environments.
- I keep minimal state in memory and rehydrate predictably.
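A minimal sketch of the multi-tab sync idea: one module owns the write path and broadcasts changes so other tabs converge. The key name, channel name, and `render` hook are assumptions; swapping the storage layer for IndexedDB keeps the same shape.

```js
const CART_KEY = "cart:v1";
const channel = new BroadcastChannel("cart-sync");
let cart = loadCart();

function loadCart() {
  try {
    return JSON.parse(localStorage.getItem(CART_KEY)) ?? { items: [] };
  } catch {
    return { items: [] }; // corrupted data: repair by resetting rather than crashing
  }
}

export function updateCart(mutate) {
  mutate(cart);
  localStorage.setItem(CART_KEY, JSON.stringify(cart)); // single write path
  channel.postMessage(cart);                            // tell the other tabs
}

channel.onmessage = (event) => {
  cart = event.data; // adopt the other tab's state
  render(cart);
};

function render(state) { /* hypothetical UI refresh */ }
```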
28) Users navigate away mid-request; your app still mutates state with late responses. What’s your fix?
- I use an abort signal to cancel fetches on route change or unmount.
- I ignore responses tied to obsolete views by checking a “still-mounted” token.
- I centralize request lifecycles so cleanup is guaranteed.
- I avoid setting state from unmounted components in any framework.
- I decouple user intent from network completion; UI can proceed while work cancels.
- I add tests that navigate rapidly during slow network to prove safety.
- I keep telemetry for aborted vs completed requests.
- I prefer precise cancellation over timeouts where supported.
29) A teammate added Promise chains that starve rendering. How do you keep UI responsive?
- I avoid unbounded microtask loops; I yield to the macrotask queue periodically.
- I move CPU pieces into Workers and stream results back.
- I align UI mutations to `requestAnimationFrame` so paints aren’t blocked.
- I split long work into chunks with cooperative scheduling.
- I ensure event handlers finish quickly and defer heavy parts.
- I watch Long Tasks and reduce any >50ms frames.
- I verify input delay and “first interaction OK” on target devices.
- I document “no infinite microtask chains” in our code guidelines.
30) Your Node service shows high memory after peak traffic but doesn’t crash. What is your leak triage?
- I confirm it’s retained memory, not just fragmentation, using heap snapshots.
- I inspect global caches and add TTL or size caps.
- I watch for listeners or closures retaining request objects.
- I check for long-lived timers/Intervals never cleared.
- I stream large responses instead of buffering them fully.
- I move CPU heavy steps to Workers to shorten object lifetimes.
- I enable alarms on heap growth rate, not just absolute size.
- I fix one leak at a time to keep rollouts safe.
31) Product wants offline support for a ticketing app. How do you approach it without over-engineering?
- I cache shell assets with a cautious Service Worker strategy.
- I store draft tickets in IndexedDB and sync when online.
- I design conflict resolution early: newest wins or server rules decide.
- I mark offline UI clearly and avoid false promises.
- I limit cache size and set up eviction policies.
- I build a small sync queue with retries and back-off.
- I test airplane mode, flaky networks, and device restarts.
- I add telemetry to see offline usage before investing more.
32) An image-heavy page janks while scrolling. What are your JS-side wins?
- I lazy-load images with good placeholders to stabilize layout.
- I avoid forced reflows by batching style reads and writes.
- I throttle scroll handlers and prefer passive listeners.
- I prefetch images for the next viewport slice during idle.
- I use IntersectionObserver instead of polling in scroll handlers (see the sketch after this list).
- I downscale oversized images server-side; don’t rely on CSS alone.
- I measure with dropped-frame metrics on mid-range devices.
- I document a media budget for future pages.
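A minimal sketch of the observer-based lazy loading above, assuming images carry a `data-src` attribute for their real source; the root margin is a tunable guess.

```js
// Lazy-load images as they approach the viewport, without scroll polling.
const io = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // swap placeholder for the real source
    io.unobserve(img);         // one-shot: stop observing once loaded
  }
}, { rootMargin: "200px" });   // start loading slightly before the image enters view

document.querySelectorAll("img[data-src]").forEach((img) => io.observe(img));

// If a scroll handler is still needed, mark it passive so it can't block scrolling.
window.addEventListener("scroll", () => { /* lightweight bookkeeping only */ }, { passive: true });
```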
33) A dashboard polls every 2 seconds and hammers the backend. What’s your smarter refresh strategy?
- I switch to server-pushed updates when feasible, else exponential back-off.
- I pause polling when the tab is hidden; resume on focus.
- I coalesce multiple widgets into one request and fan out locally.
- I let users manually refresh for non-critical widgets.
- I cache responses briefly to absorb rapid re-renders.
- I add jitter to polling intervals to avoid synchronized herd effects.
- I track QPS and fall back gracefully under load.
- I publish refresh rules so new widgets don’t copy bad defaults.
34) Feature flags added everywhere; users hit contradictory states. How do you make flags sustainable?
- I centralize flag reads and expose simple booleans per feature.
- I snapshot flags at page load to avoid mid-session shifts unless needed.
- I define dependencies and mutual exclusions up front.
- I set a default strategy for unknown flags (safe off).
- I log flag combinations used in production for audits.
- I add kill-switches for risky experiments.
- I clean up retired flags with an agreed timeline.
- I test with realistic flag matrices before rollout.
35) A graph rendering library freezes on big datasets. What are your JS options without replacing it?
- I downsample or aggregate data before rendering; users rarely need every point.
- I paginate or window the visualization and render on demand.
- I precompute scales and layouts off the main thread.
- I defer labels and tooltips until interaction.
- I turn off non-essential effects like gradients or shadows.
- I cache computed paths if the view doesn’t change.
- I set performance budgets for max nodes/edges.
- I file a vendor issue with repros and our constraints.
36) Multi-tab users overwrite each other’s edits. How do you coordinate safely in JS?
- I use BroadcastChannel to announce edit sessions to other tabs.
- I show a “read-only/locked by you elsewhere” banner to avoid surprises.
- I merge non-conflicting fields and highlight conflicts for user choice.
- I timestamp edits and prefer last-writer-wins only if business is OK.
- I save drafts locally per tab with clear recovery flows.
- I expire stale locks to avoid deadlocks.
- I record cross-tab collisions to measure frequency.
- I educate users with lightweight messaging when it happens.
37) A new dev added global error handlers that swallow exceptions. What’s your correction?
- I log errors with enough context, then show a user-friendly message.
- I avoid “catch and continue” unless we can truly recover.
- I surface critical paths to an alerting system, not just console.
- I keep source maps and link stack traces back to readable code.
- I distinguish operational errors from programmer bugs.
- I write chaos tests that simulate failures and verify UX.
- I guard third-party boundaries with timeouts and retries.
- I document when to fail fast vs degrade gracefully.
38) Your Node API sometimes times out under burst traffic. Where do you look first in JS?
- I check event-loop lag and blocky synchronous work.
- I confirm DB/HTTP clients are using pooling, not new sockets per call.
- I stream large payloads and avoid full buffering.
- I apply circuit breakers and bulkheads per dependency.
- I queue or shed load gracefully with back-pressure.
- I use Worker Threads for CPU-heavy routes.
- I tune timeouts and retries to avoid thundering herds.
- I re-load-test to validate p95/p99 improvements.
39) Your CSP blocks inline scripts; legacy pages break. How do you modernize safely in JS land?
- I move inline logic into hashed or external scripts with nonces.
- I eliminate `eval` and dynamic code where possible.
- I adopt `strict-dynamic` only if we fully understand the risks.
- I preload critical scripts to reduce perceived regressions.
- I stage rollout per page and monitor error reports.
- I document allowed script sources and review changes.
- I keep a temporary report-only mode to gather findings.
- I treat CSP as a living control, not a one-time setup.
40) Data formats changed from the backend; UI crashes occasionally. How do you make JS more resilient?
- I validate inputs at boundaries and fail softly with placeholders.
- I keep defensive defaults for optional fields.
- I log schema mismatches to spot trends.
- I version API contracts and handle v-to-v transitions.
- I avoid deep property access without checks in non-hot paths.
- I add contract tests in CI against mock servers.
- I surface partial data rather than blank screens.
- I align on a deprecation window with backend teams.
41) A date-heavy report shows different totals per timezone. What’s your JS-level discipline?
- I keep storage in UTC and convert at the edges for display.
- I treat date ranges carefully: inclusive/exclusive boundaries are explicit.
- I avoid “midnight local” assumptions; use calendar-aware math.
- I test around DST transitions and leap days.
- I avoid string parsing quirks; prefer reliable date utilities.
- I show users the timezone used for totals.
- I snapshot filters with zone info for reproducibility.
- I write tests for tricky boundaries like end-of-month.
42) A marketing popup slows first input. What do you do without killing it outright?
- I lazy-load the logic and assets after initial interaction or idle.
- I render a lightweight stub and hydrate details later.
- I cap the work per frame and avoid layout thrashing.
- I limit animation to transform/opacity and keep it brief.
- I sample its display rate to reduce frequency.
- I run an A/B test to measure conversion vs latency.
- I add a budget so future popups meet the same bar.
- I keep a hard switch to disable it if it regresses metrics.
43) Users upload big files; the UI looks frozen. What’s the JS approach to keep it humane?
- I stream uploads in chunks with visual progress, not a single opaque call.
- I move checksum/compression to Workers to avoid blocking.
- I allow resume/retry and chunk deduplication where supported.
- I cap concurrency so we don’t saturate the device.
- I provide clear states: queued, uploading, verifying, complete.
- I handle background/foreground tab transitions gracefully.
- I log failures by step to debug real-world issues.
- I keep UI responsive with small, frequent updates.
44) A new accessibility audit found keyboard traps in modals. What’s your JS-specific fix?
- I ensure focus is trapped within the modal while it’s open (see the sketch after this list).
- I restore focus to the trigger on close for continuity.
- I disable background scroll and interactions cleanly.
- I provide clear, programmatic close actions for Escape and buttons.
- I avoid dynamic focus jumps that confuse screen readers.
- I test with keyboard only and common reader setups.
- I document a modal utility so teams reuse the safe pattern.
- I add a linter rule to flag anti-patterns.
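A minimal sketch of the focus-trap behavior described above; the focusable-element selector is a common approximation, and `openModal` returns a close function that restores focus to the trigger.

```js
const FOCUSABLE =
  'a[href], button:not([disabled]), input, select, textarea, [tabindex]:not([tabindex="-1"])';

function openModal(modal) {
  const previouslyFocused = document.activeElement;
  const focusables = modal.querySelectorAll(FOCUSABLE);
  const first = focusables[0];
  const last = focusables[focusables.length - 1];
  first?.focus();

  function onKeydown(event) {
    if (event.key === "Escape") return close();
    if (event.key !== "Tab") return;
    // Wrap focus at the edges so keyboard users never tab behind the modal.
    if (event.shiftKey && document.activeElement === first) {
      event.preventDefault();
      last.focus();
    } else if (!event.shiftKey && document.activeElement === last) {
      event.preventDefault();
      first.focus();
    }
  }

  modal.addEventListener("keydown", onKeydown);

  function close() {
    modal.removeEventListener("keydown", onKeydown);
    previouslyFocused?.focus(); // continuity for keyboard and screen-reader users
  }
  return close;
}
```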
45) Your bundle analyzer shows a huge “utils” file used everywhere. How do you reduce the cost?
- I split utilities into granular modules with pure, side-effect-free functions.
- I publish ESM so bundlers can tree-shake effectively.
- I remove legacy polyfills and feature-detect at runtime.
- I inline trivial helpers in hot paths if they block DCE.
- I set size budgets and fail CI if utils regresses.
- I document approved imports and forbid the “barrel” path.
- I build a migration map for teams to adopt smaller imports.
- I re-measure after each step to prove impact.
46) A chat widget sometimes loses messages during reconnection. JS-side reliability plan?
- I queue outbound messages locally until the connection is confirmed.
- I de-duplicate by client IDs so retries don’t double-send.
- I request missed history on reconnect based on last acked marker.
- I back-off reconnects to avoid hammering the server.
- I show a clear “reconnecting” state and prevent user confusion.
- I persist a short message buffer in storage for crash recovery.
- I test with network link conditioner for edge cases.
- I log gaps to troubleshoot real-world losses.
47) Your Node server renders templates; occasional spikes point to GC pressure. What do you change?
- I reuse template instances or caches instead of recreating large objects each request.
- I stream responses and avoid building giant strings in memory.
- I precompile templates at startup where possible.
- I reduce temporary allocations in hot loops.
- I tune concurrency to match CPU and memory limits.
- I profile allocations to identify churny hotspots.
- I stage changes and compare GC pause times.
- I set alarms on allocation rate to catch regressions.
48) A teammate added synchronous zlib and crypto calls in request handlers. Latency jumped. Your move?
- I replace sync calls with async equivalents or move to Workers.
- I cap concurrency for CPU-heavy paths to protect the event loop.
- I pre-compute or cache where inputs repeat.
- I offload non-critical work to background jobs.
- I add tracing around these steps to prove impact.
- I agree on a rule: “no blocking crypto/compression in hot endpoints.”
- I load-test before and after to quantify improvement.
- I keep a fallback plan if async versions behave differently.
49) An analytics SDK keeps the app awake on background tabs. How do you tame it in JS?
- I pause non-critical timers when the tab is hidden.
- I throttle network calls using the Page Visibility API (see the sketch after this list).
- I buffer events and flush on focus or at longer intervals.
- I gate heavy work behind user interaction where acceptable.
- I warn the vendor and track their updates.
- I keep a hard toggle to disable the SDK if needed.
- I measure battery impact on laptops and mobile.
- I document background behavior expectations for all SDKs.
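A minimal sketch of the buffer-and-flush approach, assuming a hypothetical `/analytics` endpoint; the flush intervals are guesses to tune against battery and data-loss budgets.

```js
const buffer = [];
let timer = null;

function track(event) {
  buffer.push({ ...event, ts: Date.now() });
}

function flush() {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0);
  // sendBeacon survives tab close and doesn't keep the page busy.
  navigator.sendBeacon("/analytics", JSON.stringify(batch));
}

function schedule(intervalMs) {
  clearInterval(timer);
  timer = setInterval(flush, intervalMs);
}

document.addEventListener("visibilitychange", () => {
  if (document.hidden) {
    flush();          // flush once, then slow way down in the background
    schedule(60_000);
  } else {
    schedule(10_000); // foreground: more frequent, still batched
  }
});

schedule(10_000);
```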
50) Users report “Save” works, but the toast appears late or not at all. Where do you fix feedback?
- I optimistically reflect the save in UI while the network call proceeds.
- I show a determinate or indeterminate progress indicator.
- I handle slow paths with a gentle “still saving…” status.
- I rollback UI if the save fails and explain next steps.
- I avoid stacking toasts; show one concise status.
- I log outcomes for SLOs on save confirmation times.
- I test with slow 3G and packet loss profiles.
- I keep copy simple and reassuring.
51) A form validates on every keystroke and lags. What’s your JS-friendly validation strategy?
- I validate on blur or submit, not on each keystroke for heavy rules.
- I debounce expensive checks and run cheap ones inline.
- I batch DOM updates for error messages.
- I move complex checks, like regex over big text, off the main thread.
- I keep validation messages short and near the fields.
- I avoid blocking the user while secondary checks run.
- I track error rates to spot unclear rules.
- I let users submit with warnings when strictness hurts usability.
52) A canvas-based game drops frames sporadically. What are your JS remediation steps?
- I do all game-loop work inside `requestAnimationFrame`, not timers (see the sketch after this list).
- I split AI/physics into smaller chunks or Workers.
- I pool objects to reduce GC spikes during play.
- I pre-load assets and avoid decoding mid-frame.
- I limit draw calls and batch where possible.
- I measure frame budget and keep logic under ~8–10ms.
- I test on mid-range hardware; high-end hides issues.
- I record and analyze frame timelines to target fixes.
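A minimal sketch of the loop structure: rendering is tied to `requestAnimationFrame`, simulation runs in fixed steps, and a small object pool avoids per-frame allocations that cause GC spikes. `update` and `draw` are hypothetical.

```js
const STEP_MS = 1000 / 60;
let last = performance.now();
let acc = 0;

function frame(now) {
  acc += now - last;
  last = now;
  while (acc >= STEP_MS) { // run the simulation in fixed steps
    update(STEP_MS);
    acc -= STEP_MS;
  }
  draw();                  // render once per animation frame
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

// Object pool: reuse particle objects instead of allocating new ones each frame.
const pool = [];
function acquireParticle() {
  return pool.pop() ?? { x: 0, y: 0, vx: 0, vy: 0, alive: false };
}
function releaseParticle(p) {
  p.alive = false;
  pool.push(p);
}

function update(dt) { /* hypothetical physics/AI step */ }
function draw() { /* hypothetical canvas rendering */ }
```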
53) A complex table re-renders entirely on small edits. How do you minimize JS-driven churn?
- I memoize row renderers and update only changed cells.
- I virtualize long lists so only visible rows exist.
- I keep state local to rows where possible.
- I avoid recreating column definitions every render.
- I batch updates and commit once per interaction.
- I profile which props trigger re-renders and stabilize them.
- I document patterns for future tables to follow.
- I add tests around editing performance thresholds.
54) After migrating to modules, some old scripts rely on global variables. How do you bridge safely?
- I wrap legacy code into small adapters that expose explicit APIs.
- I avoid polluting `window`; instead, I import where needed.
- I load legacy bundles only on pages that still need them.
- I schedule a deprecation plan to remove globals entirely.
- I write smoke tests to prevent accidental re-globalization.
- I document the adapter contract for teams still migrating.
- I track error rates during the transition.
- I refuse new features that depend on old globals.
55) A “remember me” feature stores tokens in the wrong place. What’s your JS guidance?
- I avoid putting sensitive tokens in localStorage due to XSS risk.
- I prefer http-only, secure cookies managed by the server.
- I minimize what the client stores—only non-sensitive session hints.
- I handle token refresh with short lifetimes and rotation.
- I show clear sign-out that wipes client traces.
- I test across private mode and cleared storage.
- I add telemetry to detect token misuse patterns.
- I educate the team on storage threat models.
56) A dashboard includes dozens of small charts; initial render is sluggish. What’s your JS approach?
- I defer non-critical charts until they are visible.
- I precompute datasets server-side when possible.
- I share scales and legends to avoid repeated work.
- I render simplified placeholders first, then enhance on idle.
- I reuse canvases and contexts to limit setup cost.
- I cap the number of concurrent chart initializations.
- I test data volume thresholds and enforce limits.
- I document chart performance dos and don’ts.
57) Your service worker sometimes serves stale JS after a deploy. How do you keep updates safe?
- I version assets with content hashes and update the manifest atomically.
- I show a subtle “Refresh to update” prompt when a new SW is ready.
- I avoid `skipWaiting` unless I’m sure it’s safe for in-flight pages (see the sketch after this list).
- I clear old caches in the activate step with care.
- I verify integrity before activating new bundles.
- I add telemetry for update adoption rates.
- I test multi-tab update behavior explicitly.
- I document an emergency revert plan.
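A minimal sketch of the update flow: detect a new service worker waiting, prompt the user, and only then hand over control. `/sw.js`, the `SKIP_WAITING` message type, and `showUpdateToast` are assumptions about the app; the worker itself must listen for that message before calling `skipWaiting()`.

```js
async function registerServiceWorker() {
  const registration = await navigator.serviceWorker.register("/sw.js");

  registration.addEventListener("updatefound", () => {
    const worker = registration.installing;
    worker.addEventListener("statechange", () => {
      // "installed" with an existing controller means a new version is waiting
      // while the old one still serves the page.
      if (worker.state === "installed" && navigator.serviceWorker.controller) {
        showUpdateToast(() => {
          worker.postMessage({ type: "SKIP_WAITING" }); // the SW decides whether to act on this
        });
      }
    });
  });

  // Reload once the new worker takes control, so the page and its assets match.
  navigator.serviceWorker.addEventListener("controllerchange", () => {
    window.location.reload();
  });
}

function showUpdateToast(onAccept) { /* hypothetical UI: "Refresh to update" */ }
```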
58) A CSV export freezes the page on large datasets. How do you deliver it smoothly in JS?
- I stream or chunk the export generation off the main thread.
- I pre-filter on the server to shrink the payload early.
- I provide a background export with a notification when ready.
- I show progress and allow cancellation to respect the user.
- I compress or paginate when full export isn’t necessary.
- I enforce size limits with clear messaging.
- I log export durations and failures to guide improvements.
- I keep the UI usable throughout the process.
59) Your micro-frontend shell loads several apps; sometimes one failure breaks the page. JS-side isolation plan?
- I load each micro-app with error boundaries and clear fallbacks.
- I sandbox risky ones in iframes to isolate crashes and CSS.
- I time-box initialization so laggards don’t block others.
- I standardize contracts: mount, unmount, and health pings.
- I keep shared dependencies version-pinned or fully isolated.
- I degrade gracefully if a region can be hidden.
- I monitor per-app errors and load times.
- I rehearse partial-failure drills before big launches.
60) Leadership asks, “What does good JS engineering look like here?” Your concise, scenario-based manifesto?
- We prioritize user-visible performance: smooth input, quick paint, stable memory.
- We design for failure: retries, timeouts, cancellations, and friendly fallbacks.
- We keep the main thread light: Workers for CPU, streaming for data.
- We ship lean bundles: ESM, tree-shaking, strict budgets, and analyzers.
- We commit to accessibility and internationalization from day one.
- We measure everything and roll out safely with flags and kill-switches.
- We document patterns so teams don’t reinvent risky wheels.
- We learn from incidents and turn fixes into shared guardrails.