This article covers practical, real-world C# scenario-based questions for 2025. It is drafted with the interview theme in mind to give you maximum support for your interview. Go through these C# scenario-based questions to the end, as every scenario has its own importance and learning potential.
Disclaimer:
These solutions are based on my experience and best effort. Actual results may vary depending on your setup, and the code may need some tweaking.
1) Your API is slow under peak load. How would you reason about async/await vs. synchronous calls in C# to improve throughput?
- I’d profile first to confirm if we’re I/O-bound or CPU-bound before changing anything.
- If it’s I/O-bound, I’d push endpoints and data calls to async to free threads and reduce queueing.
- I’d remove `.Result`/`.Wait()` usage to avoid deadlocks and thread-pool starvation.
- For CPU-heavy paths, I’d consider background workers or batching instead of sprinkling async everywhere.
- I’d ensure the data access library is truly async end-to-end; fake async just adds overhead.
- I’d add bounded concurrency with cancellation to protect downstream services.
- I’d validate gains with load tests and watch thread-pool growth and latency percentiles.
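The bullets above can be sketched in code. A minimal example, assuming .NET 5+ and an illustrative internal endpoint (the URL and the limit of 8 are placeholders, not recommendations):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class OrderFetcher
{
    private static readonly HttpClient Client = new HttpClient();
    // Bounded concurrency: at most 8 in-flight calls to the downstream service.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(8);

    // Bad: blocks a thread-pool thread and risks deadlocks under load.
    // public static string GetOrder(int id) => Client.GetStringAsync($"/orders/{id}").Result;

    // Better: async end-to-end, with cancellation flowing all the way down.
    public static async Task<string> GetOrderAsync(int id, CancellationToken ct)
    {
        await Gate.WaitAsync(ct);
        try
        {
            return await Client.GetStringAsync($"https://example.internal/orders/{id}", ct);
        }
        finally
        {
            Gate.Release();
        }
    }
}
```

The semaphore caps pressure on the downstream service, and the token lets hopeless requests be abandoned instead of queueing forever.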
2) You keep hitting timeouts with HttpClient. What are the safe patterns to avoid socket exhaustion and unstable calls?
- I’d use a single long-lived HttpClient or a factory-based client to reuse handlers safely.
- I’d set reasonable timeouts per call and avoid infinite retries that amplify traffic.
- I’d add circuit-breakers and jittered retries to tame spikes and transient faults.
- I’d pool or limit connections per host to stay within service quotas.
- I’d stream large responses to reduce memory pressure instead of buffering all at once.
- I’d log request ids and timing to pinpoint slow hops and failing dependencies.
- I’d prove stability with chaos tests before shipping to prod.
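A rough sketch of the shared-client plus capped, jittered retry pattern (timeout and attempt counts are illustrative; in production, `IHttpClientFactory` or a resilience library such as Polly would usually own this logic):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ResilientHttp
{
    // One long-lived client reuses sockets; a new client per call exhausts ports under load.
    private static readonly HttpClient Client = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(10) // per-attempt cap, never infinite
    };

    private static readonly Random Jitter = new Random();

    public static async Task<string> GetWithRetryAsync(string url, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await Client.GetStringAsync(url);
            }
            catch (HttpRequestException) when (attempt < maxAttempts)
            {
                // Capped exponential backoff with jitter so retries don't amplify a spike.
                var delayMs = (int)(Math.Pow(2, attempt) * 100) + Jitter.Next(0, 100);
                await Task.Delay(delayMs);
            }
        }
    }
}
```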
3) Memory keeps climbing during batch jobs. How do you approach diagnosing GC pressure in .NET?
- I’d capture a memory dump at high-watermark and inspect large object heap and roots.
- I’d look for accidental caching, large arrays, or big strings retained via static references.
- I’d check event handlers and timers that keep objects alive unexpectedly.
- I’d prefer pooled buffers (e.g., ArrayPool) and streaming to avoid huge allocations.
- I’d review serializer settings that create temporary graphs, switching to spans where safe.
- I’d test Server GC vs. Workstation GC depending on workload characteristics.
- I’d set performance counters and compare Gen2 collections before/after fixes.
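As one concrete allocation fix from the list above, pooled buffers via `ArrayPool<T>` avoid allocating a fresh large array per operation (the 64 KB size is illustrative):

```csharp
using System.Buffers;
using System.IO;

public static class StreamCopier
{
    // Renting a buffer instead of newing one per call keeps large arrays
    // from churning the large object heap during batch jobs.
    public static void Copy(Stream source, Stream destination)
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(64 * 1024);
        try
        {
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                destination.Write(buffer, 0, read);
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```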
4) A service occasionally deadlocks after a refactor. What C# patterns help you reason about deadlocks quickly?
- I’d search for sync-over-async (e.g., .Result) and fix it to true async flow.
- I’d ensure locks are acquired in a consistent global order to avoid circular waits.
- I’d minimize lock scope and avoid doing I/O while holding locks.
- I’d try `SemaphoreSlim` for async coordination and keep awaits outside critical sections.
- I’d validate with trace logs showing thread ids and lock ownership timeline.
- I’d isolate reproduction with a stress test that amplifies the race.
- I’d add timeouts and fail-fast behaviors to avoid permanent stalls.
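A minimal sketch of `SemaphoreSlim` as an async-friendly mutex (the cached-value refresh is a hypothetical scenario; `LoadAsync` stands in for real I/O):

```csharp
using System.Threading;
using System.Threading.Tasks;

public class RefreshCoordinator
{
    // 'lock { await ... }' won't compile, and sync-over-async inside a lock deadlocks.
    // SemaphoreSlim(1, 1) gives mutual exclusion that plays well with await.
    private readonly SemaphoreSlim _mutex = new SemaphoreSlim(1, 1);
    private string _cached = "";

    public async Task<string> RefreshAsync(CancellationToken ct)
    {
        await _mutex.WaitAsync(ct);
        try
        {
            // Single-flight refresh: the critical section is deliberately kept small.
            _cached = await LoadAsync(ct);
            return _cached;
        }
        finally
        {
            _mutex.Release();
        }
    }

    private Task<string> LoadAsync(CancellationToken ct) => Task.FromResult("fresh"); // stand-in for real I/O
}
```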
5) Your team debates `record` vs `class` for DTOs. How do you decide?
- If immutability, value-based equality, and with-style updates help, records fit nicely.
- For domain entities with identity and lifecycle, classes usually map better.
- I’d consider serialization behavior and versioning needs before choosing.
- I’d check memory footprint if millions of instances are expected.
- I’d keep API DTOs stable; changing equality semantics later is risky.
- I’d standardize per layer to avoid cognitive overhead.
- I’d document the choice and tests that guard future regressions.
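A small illustration of the trade-off (`OrderDto` and `Order` are hypothetical types):

```csharp
// DTO: immutable, value-based equality, non-destructive updates via 'with'.
public record OrderDto(int Id, string Customer, decimal Total);

// Domain entity: identity and lifecycle matter, so a mutable class fits better.
public class Order
{
    public int Id { get; private set; }
    public decimal Total { get; private set; }
    public void ApplyDiscount(decimal amount) => Total -= amount;
}

public static class Demo
{
    public static void Main()
    {
        var a = new OrderDto(1, "Acme", 100m);
        var b = a with { Total = 90m };              // copy with one change
        System.Console.WriteLine(a == b);            // False: record equality compares members
        System.Console.WriteLine(a == (a with { })); // True: identical member values
    }
}
```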
6) You must reduce p99 latency without throwing more hardware. What C#/.NET tweaks are practical?
- I’d remove sync blocking on I/O and use async end-to-end where it truly helps.
- I’d pre-warm caches and thread pools before traffic spikes.
- I’d cut allocations in hot paths using `Span<T>`, pooling, and streaming.
- I’d batch round-trips to databases or queues wherever safe.
- I’d add cancellation to abandon hopeless work early.
- I’d log p95/p99 per dependency to target real culprits.
- I’d confirm wins with canary releases and rollback plans.
7) A critical job suffers from jittery throughput. Would you use parallel loops, tasks, or channels?
- If work items are independent and CPU-bound, `Parallel.ForEach` with limits is simple.
- For mixed I/O and CPU, task-based pipelines with bounded concurrency are safer.
- If you need backpressure and ordering, channels give you structured flow control.
- I’d avoid unbounded Task creation; it floods the scheduler.
- I’d size concurrency from perf tests, not guesswork.
- I’d include cancellation tokens so we can drain gracefully.
- I’d monitor queue length and latency across pipeline stages.
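A sketch of the channel-based option with backpressure (capacity of 100 and the work simulated by `Task.Delay` are illustrative):

```csharp
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public static class Pipeline
{
    public static async Task RunAsync(CancellationToken ct)
    {
        // Bounded capacity gives backpressure: the writer waits when consumers fall behind.
        var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(100)
        {
            FullMode = BoundedChannelFullMode.Wait
        });

        var producer = Task.Run(async () =>
        {
            for (int i = 0; i < 1000; i++)
                await channel.Writer.WriteAsync(i, ct);
            channel.Writer.Complete(); // lets the consumer drain and finish gracefully
        });

        var consumer = Task.Run(async () =>
        {
            await foreach (var item in channel.Reader.ReadAllAsync(ct))
                await Task.Delay(1, ct); // stand-in for real work
        });

        await Task.WhenAll(producer, consumer);
    }
}
```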
8) Your EF Core queries look clean, but the DB is crying. How do you tune without rewriting the app?
- I’d profile generated SQL and watch for N+1 queries to collapse into joins.
- I’d shape queries with `Select` to bring only needed fields.
- I’d leverage compiled queries for hot paths to cut translation cost.
- I’d evaluate tracking vs. no-tracking to reduce change-tracking overhead.
- I’d pre-compute filters and indexes with the DBA, not just code-side tweaks.
- I’d cache reference data responsibly with expirations.
- I’d lock in improvements with query performance tests.
9) A background service sometimes falls behind. How do you keep up without losing reliability?
- I’d confirm the load pattern and define an acceptable lag SLO.
- I’d scale out consumers and partition work to reduce head-of-line blocking.
- I’d batch operations for throughput while keeping item-level error handling.
- I’d add dead-letter queues and retries with exponential backoff.
- I’d checkpoint progress to resume cleanly after crashes.
- I’d expose health and lag metrics to trigger autoscaling.
- I’d apply a load-shedding strategy when downstream services are degraded.
10) Your team added caching and bugs appeared. What’s your safe caching checklist?
- Cache only pure, deterministic results or include context keys to avoid cross-talk.
- Pick the right TTL and invalidation trigger; stale is often worse than slow.
- Validate serialization and versioning of cached values.
- Use distributed cache for multi-instance services to avoid split-brain.
- Guard cache misses so a spike doesn’t stampede the backend.
- Add cache metrics: hit rate, evictions, size, and latency.
- Keep a feature flag to disable caching quickly if needed.
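One way to guard against the miss-stampede point above is caching `Lazy<Task<T>>` values, so concurrent misses for the same key share one in-flight call (an in-memory sketch; a distributed cache needs its own single-flight mechanism):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class StampedeSafeCache<TKey, TValue> where TKey : notnull
{
    // Only the first miss runs the factory; later misses for the same key
    // await the same task instead of hammering the backend.
    private readonly ConcurrentDictionary<TKey, Lazy<Task<TValue>>> _entries = new();

    public Task<TValue> GetOrAddAsync(TKey key, Func<TKey, Task<TValue>> factory)
    {
        var lazy = _entries.GetOrAdd(key, k => new Lazy<Task<TValue>>(() => factory(k)));
        return lazy.Value;
    }

    public void Invalidate(TKey key) => _entries.TryRemove(key, out _);
}
```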
11) A customer complains about wrong timestamps. How do you pick between `DateTime` and `DateTimeOffset`?
- For absolute points in time across zones, I’d store `DateTimeOffset`.
- For purely local schedules (like store hours), local times plus zone info can work.
- I’d normalize persistence to UTC and convert at the edges.
- I’d avoid ambiguous times around DST transitions by using offsets explicitly.
- I’d ensure API contracts specify timezone semantics clearly.
- I’d add tests for DST boundaries and leap-second assumptions.
- I’d fix logs to include offsets for reliable correlation.
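A small sketch of the "UTC in storage, convert at the edges" rule (the sample offset is illustrative):

```csharp
using System;

public static class TimeHandling
{
    // Persist absolute instants normalized to UTC; apply zones only at the edges.
    public static DateTimeOffset ToStorage(DateTimeOffset userLocal) => userLocal.ToUniversalTime();

    public static DateTimeOffset ToDisplay(DateTimeOffset utc, TimeZoneInfo zone) =>
        TimeZoneInfo.ConvertTime(utc, zone);

    public static void Main()
    {
        var stamped = new DateTimeOffset(2025, 3, 30, 2, 30, 0, TimeSpan.FromHours(5.5));
        var stored = ToStorage(stamped);
        Console.WriteLine(stored);            // unambiguous instant with +00:00 offset
        Console.WriteLine(stamped == stored); // True: equality compares the instant, not the offset
    }
}
```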
12) You’re reviewing a PR that uses `lock` heavily. When are lock-free approaches or concurrent collections a better call?
- If contention is high and work is small, try concurrent collections to reduce blocking.
- If correctness depends on atomic multi-step operations, simple lock may be clearer.
- Lock-free often boosts throughput but can complicate correctness and debugging.
- I’d measure hot spots before changing; premature lock-free is risky.
- I’d keep state local and immutable where possible to dodge locks entirely.
- I’d document invariants guarded by locks to avoid accidental breakage.
- I’d add stress tests to catch regressions.
13) Your logging bill exploded. How do you keep observability strong while cutting cost?
- I’d set log levels intentionally and stop info-spamming hot paths.
- I’d use structured logs and sample high-volume events.
- I’d push noisy diagnostics behind feature flags.
- I’d route debug logs to cheaper storage with short retention.
- I’d enrich only what’s useful for triage: ids, durations, key inputs.
- I’d add tracing to replace walls of logs for request flow.
- I’d review retention and export rules with the platform team.
14) Security review flags secrets in config files. What’s your remediation plan?
- I’d rotate exposed credentials immediately, then remove them from history.
- I’d move secrets to a managed vault and fetch via managed identity if possible.
- I’d enforce secret scanning in CI to block future leaks.
- I’d switch to short-lived tokens instead of static keys.
- I’d restrict blast radius via least privilege per service.
- I’d audit logs to confirm no misuse during exposure window.
- I’d write a runbook so this fix becomes standard practice.
15) A team wants to switch to minimal APIs. What do you check before approving?
- I’d check if simpler hosting and lower ceremony benefit our use case.
- I’d look at routing complexity, filters, and middleware we rely on today.
- I’d confirm testability, validation patterns, and versioning approach.
- I’d ensure we keep consistent conventions across services.
- I’d benchmark startup and request overhead to validate the win.
- I’d plan migration in slices, not a big-bang rewrite.
- I’d document guardrails so handler sprawl doesn’t happen.
16) A partner requests gRPC for performance. When would you still keep REST?
- If cross-browser support and easy tooling matter, REST stays practical.
- For low-latency, strongly typed streaming, gRPC shines service-to-service.
- I’d evaluate gateway needs for customers who can’t use HTTP/2.
- I’d measure real payload sizes and serialization cost before deciding.
- I’d consider contract governance and backward compatibility.
- I’d keep REST for public APIs and gRPC internally if it balances needs.
- I’d pilot one critical path to validate the benefit.
17) Your team added `CancellationToken` but cancellations don’t actually stop work. What’s missing?
- I’d ensure downstream calls pass the token; stopping at the top isn’t enough.
- I’d check loops and long operations poll for cancellation periodically.
- I’d halt retries when cancellation is requested to avoid waste.
- I’d make sure finally blocks don’t re-queue cancelled work.
- I’d add logs when cancellation happens to confirm coverage.
- I’d avoid swallowing `OperationCanceledException` silently.
- I’d test with forced cancellations under load, not just unit tests.
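The first two points can be sketched like this (`DoStepAsync` is a hypothetical downstream call):

```csharp
using System.Threading;
using System.Threading.Tasks;

public static class CancellableWorker
{
    public static async Task ProcessAsync(CancellationToken ct)
    {
        for (int i = 0; i < 10_000; i++)
        {
            // Poll inside long loops; a token accepted only at the top never stops anything.
            ct.ThrowIfCancellationRequested();
            await DoStepAsync(i, ct); // pass the token all the way down
        }
    }

    private static Task DoStepAsync(int i, CancellationToken ct) => Task.Delay(10, ct);
}
```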
18) Stakeholders ask for “real-time” updates. How do you set expectations and choose tech?
- I’d clarify what “real-time” means in seconds and which events qualify.
- For server-push to browsers, SignalR or WebSockets fit; for polling, negotiate intervals.
- I’d balance frequency with backend capacity and user value.
- I’d add backoff and offline handling on clients to avoid meltdowns.
- I’d protect updates with auth and rate limits.
- I’d pilot with a subset of users and measure perceived latency.
- I’d publish an SLO and monitor it publicly within the team.
19) Serialization breaks after a model rename. How do you minimize versioning pain?
- I’d introduce additive changes first and keep contracts backward-compatible.
- I’d map old names to new with custom converters or attributes.
- I’d avoid breaking required fields; use defaults for new ones.
- I’d version endpoints or messages and phase out old clients slowly.
- I’d add contract tests to freeze the wire shape intentionally.
- I’d document the deprecation timeline and telemetry to track stragglers.
- I’d provide migration samples to partners early.
20) An internal library uses reflection heavily and hurts performance. What would you try?
- I’d cache reflection results instead of repeating lookups.
- I’d switch to source generators or compiled expression trees where possible.
- I’d reduce dynamic shape and prefer explicit contracts for hot paths.
- I’d measure wins with profiling, not just assumptions.
- I’d isolate reflection to startup or build steps.
- I’d keep a fallback path for rare edge cases.
- I’d educate contributors to avoid reflection in loops.
21) A junior dev used exceptions for control flow. When are exceptions appropriate?
- For truly exceptional, unexpected conditions, not common branches.
- I’d return results or discriminated outcomes for expected failures.
- I’d keep exception hierarchies clear and meaningful.
- I’d avoid catching broad exceptions that hide real faults.
- I’d add context to thrown exceptions for supportability.
- I’d measure exception rates; spikes usually mean logic bugs.
- I’d treat exceptions like sirens, not normal traffic.
22) A microservice leaks memory after weeks. What C# patterns are usual suspects?
- Event handlers not unsubscribed keep objects alive.
- `IDisposable` not called on timers, streams, or HttpResponse objects.
- Caches without limits grow until OOM occurs.
- Async lambdas capturing large closures unintentionally.
- Static references holding onto big graphs.
- Large object heap fragmentation from big buffers.
- I’d fix root causes and add memory budgets plus alerts.
23) Your team wants “faster startup.” Where do you look first?
- I’d measure cold start vs. warm start; they have different fixes.
- I’d trim DI graphs and lazy-init heavy services.
- I’d pre-compile or cache configurations and compiled queries.
- I’d delay non-critical background jobs until after readiness.
- I’d reduce reflection and big file reads at startup.
- I’d test single-file publish and trimming where feasible.
- I’d prove results with startup time dashboards.
24) You’re asked to add retries everywhere. What risks do you highlight?
- Blind retries can amplify outages and DDOS your own dependencies.
- Some operations aren’t idempotent and will duplicate side effects.
- Retry storms hide real root causes and waste compute.
- I’d use capped, jittered backoff and circuit-breakers.
- I’d mark idempotency with keys where possible.
- I’d log retry context to spot unhealthy patterns.
- I’d apply retries selectively per error type.
25) A new feature needs “exactly once” processing. How do you approach it realistically?
- I’d aim for “at least once” with idempotency to get practical reliability.
- I’d use dedupe keys or projections to ignore repeats.
- I’d checkpoint progress to recover after failures.
- I’d design operations to be commutative where possible.
- I’d keep poison messages away with dead-letter queues.
- I’d test with fault injection across service boundaries.
- I’d publish guarantees we actually meet, not aspirational ones.
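A minimal illustration of the dedupe-key idea; the in-memory set is only for demonstration, since production would use a durable store (e.g., a unique index on the message id):

```csharp
using System;
using System.Collections.Concurrent;

public class IdempotentConsumer
{
    // Remembers processed message ids so redelivery becomes a no-op.
    private readonly ConcurrentDictionary<Guid, bool> _seen = new();

    public bool TryProcess(Guid messageId, Action handler)
    {
        if (!_seen.TryAdd(messageId, true))
            return false; // duplicate delivery: "at least once" becomes effectively once

        handler();
        return true;
    }
}
```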
26) An API returns out-of-order data after parallelization. How do you fix correctness?
- I’d check if ordering matters to the business outcome first.
- If yes, I’d track original indices and sort at the end.
- I’d constrain concurrency for segments needing order.
- I’d add unit tests asserting order contracts explicitly.
- I’d document which endpoints preserve ordering and which don’t.
- I’d keep a clear boundary between unordered work and ordered output.
- I’d measure the cost of ordering to justify it.
27) A customer hit a localization bug. How do you make your C# app culture-aware safely?
- I’d use invariant culture for data formats and culture-specific for display.
- I’d set explicit culture for threads in hosted services.
- I’d avoid implicit `ToString()` in logs and data.
- I’d externalize resources and date/number formats.
- I’d add tests with multiple cultures including RTL and complex scripts.
- I’d document what’s stored vs. presented to users.
- I’d ensure user preferences override server defaults.
28) A spike shows big GC pauses under load. What trade-offs can tame it?
- Fewer and smaller allocations reduce pause frequency.
- I’d consider Server GC for throughput; Workstation for interactive feel.
- I’d avoid large object heap churn with pooling.
- I’d stream data rather than build giant in-memory lists.
- I’d trim logging allocations in hot paths.
- I’d set realistic latency budgets for background work.
- I’d confirm improvements with GC event traces.
29) You need to expose a new public API version. How do you avoid breaking existing clients?
- I’d keep the old version running with clear deprecation windows.
- I’d add new fields additively and avoid removing or renaming critical ones.
- I’d publish change logs and examples early to consumers.
- I’d monitor adoption by version to plan retirement.
- I’d offer a compatibility shim if the change is minor.
- I’d add contract tests to validate both versions.
- I’d align rate limits separately per version to avoid contention.
30) Your domain rules are sprawling across controllers. How do you pull them back into shape?
- I’d introduce an application layer so controllers stay thin.
- I’d consolidate rules into services or domain objects, not scattered helpers.
- I’d validate inputs with a single, consistent pattern.
- I’d write use-case tests to cover scenarios end-to-end.
- I’d remove cross-cutting concerns to middleware or filters.
- I’d document a reference architecture and enforce in PRs.
- I’d refactor gradually to keep release risk low.
31) A component using `Span<T>` gave a subtle bug. What care is needed with spans?
- Spans are stack-bound; don’t capture them in async or store beyond their lifetime.
- I’d avoid slicing into invalid ranges and always validate bounds.
- I’d prefer spans in tight loops where allocation reduction is measurable.
- I’d keep clear ownership of underlying buffers.
- I’d add property-based tests for tricky parsing logic.
- I’d profile to ensure complexity is worth the micro-optimizations.
- I’d fallback to safer constructs if the team is uncomfortable.
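A small, safe-usage sketch: the span stays inside one synchronous method and never outlives its buffer (`SumCsv` is a hypothetical parser):

```csharp
using System;

public static class SpanParsing
{
    // Spans cannot be captured in async methods or stored beyond the buffer's lifetime;
    // keeping them inside a synchronous scope like this avoids those traps.
    public static int SumCsv(ReadOnlySpan<char> line)
    {
        int total = 0;
        while (!line.IsEmpty)
        {
            int comma = line.IndexOf(',');
            ReadOnlySpan<char> field = comma < 0 ? line : line.Slice(0, comma);
            total += int.Parse(field); // parses the slice with no substring allocations
            line = comma < 0 ? ReadOnlySpan<char>.Empty : line.Slice(comma + 1);
        }
        return total;
    }
}
```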
32) A senior suggests using `struct` to avoid GC. When is that smart vs. risky?
- Small, short-lived value types can be a win in hot paths.
- Large structs copied around can hurt performance and clarity.
- I’d keep them immutable and under a few fields where possible.
- I’d measure copy costs and boxing risks when used in collections.
- I’d avoid mixing value semantics where reference semantics are expected.
- I’d document usage guidelines and review carefully in PRs.
- I’d only adopt after profiling shows benefit.
33) Your auth flow mixes concerns across layers. How do you straighten it out?
- I’d keep authN/authZ in middleware/filters, not business logic.
- I’d pass only identity claims needed by the domain layer.
- I’d add policy-based authorization to avoid magic strings.
- I’d standardize error shapes for forbidden vs. unauthorized.
- I’d log audit events with minimal PII.
- I’d create integration tests for key roles and resources.
- I’d document threat models tied to critical endpoints.
34) Production shows rare “ghost” duplicate orders. What’s your defense?
- I’d add idempotency keys at the API boundary.
- I’d ensure database uniqueness per business key.
- I’d wrap external calls with idempotent design where possible.
- I’d detect retries by correlating request ids.
- I’d reconcile with a compensating workflow for edge cases.
- I’d replay test with chaos to validate the fix.
- I’d improve client guidance on retries and timeouts.
35) A library upgrade improved speed but broke behavior. How do you mitigate?
- I’d pin versions and upgrade in a controlled branch.
- I’d run contract and performance tests before merging.
- I’d check release notes for breaking changes and flags.
- I’d feature-flag the new lib in production canaries.
- I’d keep a rollback plan with cached artifacts.
- I’d document migration steps for future upgrades.
- I’d share findings with other teams to prevent repeat pain.
36) Your team wants to adopt background queue processing. What risks do you raise?
- Message loss if acking is wrong or the queue is misconfigured.
- Poison messages clogging the pipe without a dead-letter plan.
- Exactly-once illusions leading to duplicates.
- Out-of-order handling surprises business logic.
- Operational overhead: scaling, monitoring, and retries.
- Schema changes across producers/consumers need governance.
- I’d pilot with a low-risk workflow first.
37) A service must process large files. What C# practices keep it stable?
- I’d stream reads and writes to avoid huge memory spikes.
- I’d validate file size and type early to reject bad input.
- I’d use temp storage with quotas and cleanup policies.
- I’d add partial progress and resumability to survive failures.
- I’d throttle concurrency to avoid I/O contention.
- I’d offload heavy CPU work to dedicated workers.
- I’d track throughput and error patterns per file type.
38) After enabling compression, CPU usage jumped. How do you balance it?
- I’d compress only content types that benefit materially.
- I’d tune compression levels to match latency goals.
- I’d skip tiny payloads where header overhead dominates.
- I’d try hardware or platform offloads if available.
- I’d cache compressed variants for popular responses.
- I’d measure end-to-end gains including client side.
- I’d roll back if the cost outweighs perceived benefits.
39) Your API is “chatty” with the DB. How do you reduce round-trips?
- I’d batch reads/writes where correctness allows.
- I’d push some logic into set-based operations.
- I’d cache stable reference data with expirations.
- I’d prefetch related entities to avoid N+1 surprises.
- I’d denormalize cautiously for hot read paths.
- I’d validate improvements with query and lock metrics.
- I’d keep transactions short and purposeful.
40) A feature requires pluggable business rules. How would you structure it?
- I’d define clear interfaces and a registry for rule discovery.
- I’d prefer DI-based composition over reflection magic.
- I’d keep rules stateless and idempotent when possible.
- I’d isolate each rule’s config and validation.
- I’d add targeted tests per rule and end-to-end scenarios.
- I’d document rule ordering and conflict resolution.
- I’d monitor rule outcomes to detect regressions.
41) A batch job occasionally “disappears.” What resiliency steps do you take?
- I’d add durable, idempotent checkpoints and status reporting.
- I’d put heartbeats and alerts on missed schedules.
- I’d capture inputs/outputs for replay and audit.
- I’d wrap externals with retries and circuit-breakers.
- I’d add kill switches and safe restarts.
- I’d prove recovery with game-day drills.
- I’d document an operator runbook with clear steps.
42) The team uses too many singletons. What risks and alternatives do you discuss?
- Global singletons hide dependencies and complicate tests.
- They can create hidden shared state and race conditions.
- I’d convert to scoped services where lifetimes demand it.
- I’d inject interfaces to make seams testable.
- I’d keep immutable configuration and thread-safe caches.
- I’d refactor hotspots first to show value.
- I’d add design reviews to prevent future creep.
43) A front-end team wants server push for notifications. What do you check?
- Auth and multi-tenant isolation on channels.
- Message fan-out strategy under spikes.
- Backpressure when clients disconnect frequently.
- Offline behavior and missed message recovery.
- Observability: delivery metrics and user ids.
- Cost per persistent connection at scale.
- Rollout plan with a fallback (polling/long-poll).
44) You suspect boxing allocations in LINQ. How do you handle it sanely?
- I’d profile to prove boxing exists and matters.
- I’d use typed generics and avoid `object` in hot paths.
- I’d switch to value-friendly operators or manual loops if justified.
- I’d avoid closures capturing structs by value unnecessarily.
- I’d benchmark alternatives to avoid micro-premature optimization.
- I’d document hotspots so new code doesn’t regress.
- I’d add performance tests to guard future changes.
45) A vendor SDK spawns threads and conflicts with your scheduler. What’s your plan?
- I’d isolate the SDK behind an adapter and limit concurrent calls.
- I’d coordinate lifecycle and disposal to avoid leaks.
- I’d request config options for thread usage or callbacks.
- I’d measure impact under load with synthetic tests.
- I’d set timeouts and fallbacks to protect our service.
- I’d escalate with the vendor and keep a pinned version.
- I’d document the integration contract and known caveats.
46) You must ship a feature fast, but tech debt risk is high. How do you balance it?
- I’d define a small, safe slice that delivers user value.
- I’d keep seams and interfaces so we can refactor later.
- I’d add monitoring and limits to contain blast radius.
- I’d record explicit debts in the backlog with owners and dates.
- I’d write key tests for behavior we can’t afford to lose.
- I’d schedule a follow-up refactor window, not “someday.”
- I’d communicate trade-offs openly with stakeholders.
47) A production outage was caused by a missed null check. How do you prevent repeats?
- I’d enable nullable reference types and fix warnings meaningfully.
- I’d add guard clauses at boundaries and validate inputs centrally.
- I’d write property-based tests for tricky shapes.
- I’d improve logs to show which field was null and why.
- I’d use analyzers to block risky patterns in CI.
- I’d run a blameless postmortem and share learnings.
- I’d add synthetic checks that exercise fragile paths.
48) Two services disagree on decimal precision. How do you protect financial accuracy?
- I’d standardize on decimal with explicit scale in contracts.
- I’d validate rounding modes and carry them end-to-end.
- I’d avoid binary floating types for money.
- I’d add schema and contract tests to lock behavior.
- I’d provide examples for edge cases like half-cents.
- I’d reconcile old data with a scripted migration.
- I’d monitor deltas and alert on drift.
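A quick demonstration of why binary floating types are unsafe for money and why the rounding mode must be explicit:

```csharp
using System;

public static class Money
{
    public static void Main()
    {
        double d = 0.1 + 0.2;
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(d == 0.3);  // False: binary floating point drifts
        Console.WriteLine(m == 0.3m); // True: decimal keeps base-10 precision

        // Make the rounding mode explicit in contracts instead of relying on defaults.
        Console.WriteLine(Math.Round(2.5m, 0, MidpointRounding.AwayFromZero)); // 3
        Console.WriteLine(Math.Round(2.5m, 0, MidpointRounding.ToEven));       // 2
    }
}
```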
49) A feature needs client-side and server-side validation. How do you avoid drift?
- I’d define rules once and reuse them on both sides when possible.
- I’d keep server as the source of truth and return clear error details.
- I’d align error codes and messages for UX consistency.
- I’d add contract tests to ensure parity.
- I’d version rules with feature flags for safe rollout.
- I’d log validation failures to catch surprises early.
- I’d document edge cases for support teams.
50) An incident revealed weak health checks. What’s a robust health model?
- I’d separate liveness (process up) from readiness (can serve traffic).
- I’d include dependency probes with timeouts and budgets.
- I’d fail fast when critical dependencies are down.
- I’d avoid expensive checks that become an attack surface.
- I’d expose health history to track flapping behaviors.
- I’d integrate with orchestrator for graceful draining.
- I’d test failure modes regularly, not just once.
51) Your API is hammered by sudden traffic spikes. How do you handle rate limiting safely in C#?
- I’d implement middleware with sliding window or token bucket algorithms.
- I’d tune limits per client type: internal, partner, or anonymous.
- I’d provide clear error responses (`429 Too Many Requests`) with retry hints.
- I’d log rejected requests to detect abuse vs. real growth.
- I’d add distributed rate limiting if running in multiple nodes.
- I’d protect downstream services with separate quotas.
- I’d test under simulated spikes before going live.
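A bare-bones token bucket to show the idea (capacity and refill rate are illustrative; on .NET 7+ the built-in `System.Threading.RateLimiting` abstractions would usually be preferable to rolling your own):

```csharp
using System;

public class TokenBucket
{
    private readonly int _capacity;
    private readonly double _refillPerSecond;
    private double _tokens;
    private DateTime _lastRefill = DateTime.UtcNow;
    private readonly object _sync = new object();

    public TokenBucket(int capacity, double refillPerSecond)
    {
        _capacity = capacity;
        _refillPerSecond = refillPerSecond;
        _tokens = capacity;
    }

    // Returns false when the caller should respond with 429 and a Retry-After hint.
    public bool TryAcquire()
    {
        lock (_sync)
        {
            var now = DateTime.UtcNow;
            // Refill proportionally to elapsed time, capped at bucket capacity.
            _tokens = Math.Min(_capacity, _tokens + (now - _lastRefill).TotalSeconds * _refillPerSecond);
            _lastRefill = now;
            if (_tokens < 1) return false;
            _tokens -= 1;
            return true;
        }
    }
}
```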
52) You need to send millions of messages daily. Would you use async streams or channels?
- Async streams shine when you consume sequentially with backpressure.
- Channels give more control for fan-in/fan-out and bounded queues.
- I’d check memory use: channels prevent unbounded buffering.
- I’d add cancellation tokens for graceful shutdowns.
- I’d monitor queue depth to tune producers vs. consumers.
- I’d benchmark throughput before deciding the abstraction.
- I’d ensure faulted messages don’t silently vanish.
53) A service must run in containers with tiny memory budgets. What .NET features help?
- I’d enable Server GC vs. Workstation GC depending on load shape.
- I’d use trimming and ReadyToRun publishing to cut footprint.
- I’d stream data instead of buffering large graphs.
- I’d prefer structs carefully for small hot-path objects.
- I’d configure thread pool min/max based on container cores.
- I’d log to stdout/stderr only, not giant file buffers.
- I’d test OOM resilience with constrained container runs.
54) A team is debating between domain events and integration events. How do you compare them?
- Domain events live inside the bounded context; they describe state changes.
- Integration events cross service boundaries and need versioning.
- Domain events can be in-process or persisted with outbox.
- Integration events require contracts, retries, and observability.
- I’d use both but not mix responsibilities.
- I’d document mapping rules clearly between them.
- I’d add tests to ensure integration events are published reliably.
55) Your app requires large JSON payloads. How do you optimize serialization in C#?
- I’d switch to `System.Text.Json` for performance and spans.
- I’d stream deserialization for very large payloads instead of loading fully.
- I’d pre-define converters for tricky types to cut reflection.
- I’d avoid deep recursion and flatten models when possible.
- I’d compress payloads if network is the bottleneck.
- I’d benchmark serialization vs. deserialization separately.
- I’d monitor CPU vs. memory trade-offs after changes.
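A sketch of streaming deserialization with `System.Text.Json` on .NET 6+ (`Item` is a hypothetical model; the payload is assumed to be a large JSON array):

```csharp
using System;
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;

public record Item(int Id, string Name);

public static class JsonStreaming
{
    // DeserializeAsyncEnumerable reads array elements one at a time,
    // so a multi-gigabyte payload never has to fit in memory at once.
    public static async Task CountItemsAsync(Stream payload)
    {
        int count = 0;
        await foreach (Item? item in JsonSerializer.DeserializeAsyncEnumerable<Item>(payload))
        {
            if (item is not null) count++;
        }
        Console.WriteLine($"Processed {count} items");
    }
}
```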
56) A feature needs real-time dashboards. What streaming design works best in C#?
- For server push, I’d use SignalR with WebSockets for live updates.
- I’d batch updates per second instead of per event to reduce noise.
- I’d add backpressure when client is slow to consume.
- I’d serialize lightweight DTOs, not full domain objects.
- I’d ensure authorization per stream channel.
- I’d add metrics like dropped updates to track stability.
- I’d test dashboards under hundreds of concurrent viewers.
57) Your team wants to adopt Ahead-of-Time (AOT) publishing in .NET. What trade-offs do you highlight?
- Startup and memory can improve significantly with AOT.
- Reflection-heavy code may fail unless source generators are used.
- Binary size can shrink, but sometimes grows with aggressive optimizations.
- Debugging stack traces may be harder with trimming.
- CI/CD builds may take longer with extra compile steps.
- I’d pilot AOT with one microservice before full adoption.
- I’d document unsupported patterns for devs.
58) A queue consumer must avoid overloading the database. How do you apply backpressure?
- I’d cap concurrent tasks based on DB connection pool size.
- I’d use channels with bounded capacity to slow producers.
- I’d drop or delay low-priority work when lag exceeds thresholds.
- I’d monitor commit latency and tune consumer speed dynamically.
- I’d use bulk insert/update patterns where safe.
- I’d apply retry with jitter for transient DB slowdowns.
- I’d expose backlog metrics for scaling decisions.
59) A team mixes synchronous APIs and async APIs. What pitfalls do you point out?
- Blocking async calls with `.Result` or `.Wait()` risks deadlocks.
- Mixing sync and async chains can cause thread starvation.
- Async adds overhead if the path is always CPU-bound.
- Sync APIs may limit scalability under load.
- I’d standardize async at boundaries to keep consistency.
- I’d test latency and throughput for both paths.
- I’d educate devs with patterns to avoid hidden traps.
60) You must design a global service with multiple regions. How do you handle time, culture, and data safely in C#?
- Store all timestamps in UTC, convert at edges for users.
- Use `DateTimeOffset` for absolute time with offsets intact.
- Handle culture explicitly for formatting and parsing.
- Use resource files and localization libraries for UI text.
- Normalize numeric and decimal formats across services.
- Add tests for DST boundaries, leap years, and RTL languages.
- Document standards so teams in all regions follow one playbook.