
When PowerShell receives data from a web request, the real value comes from understanding what the response contains. Most web services return JSON, and PowerShell automatically converts this JSON into objects you can work with directly. This removes the need for manual parsing and allows you to focus on extracting useful information.

Learning how to explore these objects helps you debug faster, avoid mistakes, and confidently build automation that depends on external data.

What Happens to JSON in PowerShell

When you run a web request using Invoke-RestMethod, the JSON response becomes a structured object. You can:

  • Discover which fields are available
  • Read values using simple property names
  • Navigate nested data easily

Think of the response as a structured document instead of plain text.

Example: Store and Explore a Response

$response = Invoke-RestMethod -Uri "https://api.example.com/resource"

# See all available properties
$response | Get-Member

# Access a few common fields (example names)
$response.id
$response.status
$response.details.name

View the Response as Formatted JSON

Sometimes it is easier to understand the structure when you see the data as formatted JSON again. PowerShell can convert the object back into readable JSON.

$response | ConvertTo-Json -Depth 10

This is especially useful when the data contains nested objects.

Preview Only the First Lines

Large responses can be overwhelming. You can limit the output to only the first few lines for a quick preview.

($response | ConvertTo-Json -Depth 10) -split "`n" |
    Select-Object -First 20

Key Takeaway

PowerShell automatically transforms JSON into usable objects. By exploring properties, viewing formatted JSON, and limiting output for quick previews, you can understand any response quickly and safely reuse the data in your scripts.


Have you ever run a command in PowerShell and wondered if it really worked or silently failed? Exit codes give you a simple way to know what happened. They are small numbers returned by a program when it finishes, and they tell you whether the task succeeded or not.

✔️ Exit Code 0 — Success
An exit code of 0 means everything worked as expected. The command or script completed without errors. This is the standard way most programs say, “All good.”

❌ Exit Code 1 — Error
An exit code of 1 usually means something went wrong. It does not always tell you exactly what failed, but it signals that the command did not complete successfully. Different tools may use this code for different kinds of errors.

How to check the exit code in PowerShell
After running an external command, you can read the last exit code with:

$LASTEXITCODE
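
For example, a quick success check after an external command (ping here is only an illustration):

ping example.com

if ($LASTEXITCODE -eq 0) {
    Write-Output "Command succeeded"
} else {
    Write-Output "Command failed with exit code $LASTEXITCODE"
}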

How to set your own exit code
In a script, you can control the result:

exit 0   # success
exit 1   # error

Understanding exit codes helps you automate tasks, detect problems early, and build more reliable scripts. Even beginners can use this small feature to make smarter decisions in their workflows.


Introduction

Good spacing makes a page easier to read and more pleasant to scan. But adding space before every heading can create unwanted gaps — especially when headings follow each other or appear at the top of a section. In this guide, you’ll learn a simple CSS technique to add space before headings only when it actually improves readability.

The Idea

We want to add top spacing to headings when:

  • The heading is not the first element in its container
  • The element before it is not another heading

This keeps related headings visually grouped while still separating them from normal content like text, images, or lists.

The Solution

Use the adjacent sibling selector (+) together with :not() to target only the headings that need spacing:

.app :not(h1, h2, h3) + h1,
.app :not(h1, h2, h3) + h2,
.app :not(h1, h2, h3) + h3 {
  margin-top: 20px;
}

How it works:

  • :not(h1, h2, h3) selects any element that is not a heading.
  • + h1, + h2, + h3 selects a heading that comes directly after that element.
  • The margin is applied only in this situation.

This means:

  • A heading after text gets spacing
  • A heading after another heading stays close
  • The first heading in a container gets no extra space
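
A small HTML sketch makes the rule easy to verify (the .app class matches the selector above):

<div class="app">
  <h1>Title</h1>            <!-- first element: no extra space -->
  <h2>Subtitle</h2>         <!-- directly after a heading: stays close -->
  <p>Some paragraph text.</p>
  <h2>Next section</h2>     <!-- after a paragraph: gets top spacing -->
</div>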

Optional: Different Spacing per Heading

You can fine-tune the spacing for each heading level:

.app :not(h1, h2, h3) + h1 { margin-top: 32px; }
.app :not(h1, h2, h3) + h2 { margin-top: 24px; }
.app :not(h1, h2, h3) + h3 { margin-top: 16px; }

This gives you more control over visual hierarchy.

Why This Is Useful

  • Improves readability and visual structure
  • Avoids unnecessary empty space
  • Keeps your layout clean and consistent
  • Works in all modern browsers

A small CSS rule like this can make a big difference in how professional and readable your pages feel.


Calling web services is common in automation, monitoring, and integration tasks. Many APIs expect extra information in the request, such as authentication tokens, data formats, or custom settings. This information is sent through headers. Once you understand how headers work in PowerShell, you can safely connect to most modern services and build reliable scripts with confidence.

Why Headers Matter

Headers describe how the server should handle your request. They can:

  • Identify who you are (authentication)
  • Tell the server what data format you send or expect
  • Enable special features or versions of an API

Without the correct headers, a request may fail or return unexpected data.

How PowerShell Handles Headers

PowerShell uses a simple key-value structure called a hashtable. Each key is the header name, and the value is the header content. This hashtable is passed to the request using the -Headers parameter.

Example: Add One Header

$headers = @{
    "Authorization" = "Bearer YOUR_TOKEN"
}

Invoke-RestMethod -Uri "https://api.example.com/data" -Headers $headers

Example: Add Multiple Headers

$headers = @{
    "Authorization" = "Bearer YOUR_TOKEN"
    "Content-Type"  = "application/json"
    "Accept"        = "application/json"
}

Invoke-RestMethod -Uri "https://api.example.com/data" -Method Get -Headers $headers

Example: Send Data with Headers (POST)

$headers = @{
    "Content-Type" = "application/json"
}

$body = @{
    name = "Sample"
    value = 123
} | ConvertTo-Json

Invoke-RestMethod -Uri "https://api.example.com/items" -Method Post -Headers $headers -Body $body

Key Takeaway

Create a hashtable for headers and attach it using -Headers. This approach works for most APIs and keeps your scripts clean, readable, and easy to maintain.


For advanced users, the Windows Registry can be modified to disable biometric features:

  1. Press Windows + R, type regedit, and press Enter to open the Registry Editor.

  2. Navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Biometrics.

  3. If the Biometrics key doesn't exist, create it:

    • Right-click on Microsoft, select New > Key, and name it Biometrics.

  4. Within the Biometrics key, create a new DWORD (32-bit) value named Enabled.

  5. Set the value of Enabled to 0.

Setting this value to 0 disables all biometric features, including facial recognition. To re-enable, change the value back to 1.
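
If you prefer the command line, the same policy value can be set with reg.exe from an elevated prompt (a sketch of the equivalent change):

reg add "HKLM\SOFTWARE\Policies\Microsoft\Biometrics" /v Enabled /t REG_DWORD /d 0 /f

Running the same command with /d 1 re-enables biometrics.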


Need to quickly see the current date and time in your Windows terminal? This simple how-to guide shows you exactly which commands to use in both Command Prompt and PowerShell. It is useful for beginners, scripting, logging, and everyday tasks.

Step 1: Open the Terminal

  • Press Windows + R, type cmd, and press Enter for Command Prompt.
  • Or search for PowerShell in the Start menu.

Step 2: Show the Date and Time (Command Prompt)

Print only the date:

date /t

Print only the time:

time /t

Print both together:

echo %date% %time%

This is helpful when you want a quick timestamp in a script or log file.

Step 3: Show the Date and Time (PowerShell)

Display the current date and time:

Get-Date

Format the output:

Get-Date -Format "yyyy-MM-dd HH:mm:ss"

This creates a clean, readable timestamp like 2026-01-19 14:45:30.

💡 Tip

You can redirect these commands into a file to create simple logs.
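
A minimal sketch (log.txt is an arbitrary file name). In Command Prompt:

echo %date% %time% >> log.txt

And in PowerShell:

Get-Date -Format "yyyy-MM-dd HH:mm:ss" | Add-Content log.txt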

Learning these small commands improves productivity and makes working in the Windows terminal easier and more efficient for daily tasks.


Sometimes data is created only to be logged and never used again. Creating intermediate lists or arrays in those cases increases memory usage and adds unnecessary complexity.

A summary string can often be built directly from the source data:

var summary =
    string.Join(", ",
        source.Select(x => x.Name));

logger.LogInformation("Items=[{Items}]", summary);

If the source might be null, a safe fallback avoids runtime errors:

var safeSource = source ?? Enumerable.Empty<Item>();

var summary =
    string.Join(", ",
        safeSource.Select(x => x.Name));

logger.LogInformation("Items=[{Items}]", summary);

This approach keeps the code lean while still producing clear, useful log output.


The choice between arrays and lists communicates intent and affects performance and safety.

When data is only needed for display or logging, an array represents a fixed snapshot and avoids accidental changes:

string[] names =
    source.Select(x => x.Name).ToArray();

logger.LogInformation(
    "Names=[{Names}]",
    string.Join(", ", names));

Lists are useful when the collection must be modified or extended later:

var names =
    source.Select(x => x.Name).ToList();

names.Add("NewItem");

Choosing the right type makes the code easier to understand and prevents unintended side effects.


When collections are logged directly, many logging systems only display the type name instead of the actual values. This makes it hard to understand what data was processed.

Converting the collection into a readable string solves this problem:

string[] values = { "One", "Two", "Three" };

var text = string.Join(", ", values);

logger.LogInformation("Values=[{Values}]", text);

To give additional context, the number of elements can also be logged:

logger.LogInformation(
    "Count={Count}, Values=[{Values}]",
    values.Length,
    string.Join(", ", values));

Readable output improves troubleshooting and reduces the need to reproduce issues locally.


Before using a custom dimension, it helps to know what keys actually exist in your data.

View Raw Custom Dimensions

Start by inspecting a few records:

traces
| take 5
| project customDimensions

This shows the full dynamic object so you can see available keys and example values.

List All Keys Found in the Data

traces
| mv-expand key = bag_keys(customDimensions)
| summarize by tostring(key)

This expands the keys of each record and returns the distinct key names across your dataset.

Correct Way to Access a Key

tostring(customDimensions["UserId"])

Avoid relying on bare dot access. It returns a dynamic value, so comparisons and sorting can behave unexpectedly unless you convert it first:

customDimensions.UserId

Tips

  • Key names are case-sensitive.
  • Some records may not contain the same keys.
  • Always test with a small sample before building complex queries.

These discovery steps prevent mistakes and make your queries more reliable from the start.


Complex objects and dictionaries often contain far more data than a log entry really needs. Logging them directly can flood the log with noise or produce unreadable output.

A better approach is to extract only the most relevant values and create a compact summary:

var summary = dataMap.Select(x => new
{
    Key = x.Key,
    Count = x.Value.Count,
    Status = x.Value.Status
});

logger.LogInformation("DataSummary={Summary}", summary);

This keeps the log focused on what matters: identifiers, counts, and simple status values. The result is easier to scan, easier to search, and more useful during debugging or monitoring.


When data is loaded from a list or a database, there is always a chance that nothing is found. If the code still tries to access properties on a missing object, the log statement itself can crash the application.

A simple null check makes the behavior explicit and keeps the log stable:

var item = items.FirstOrDefault(x => x.Id == id);

if (item == null)
{
    logger.LogWarning("No item found for Id={Id}", id);
}
else
{
    logger.LogInformation(
        "Id={Id}, Type={Type}",
        item.Id,
        item.Type);
}

This version clearly separates the “not found” case from the normal case and produces meaningful log messages for both situations.

When a more compact style is preferred, null operators can be used instead:

logger.LogInformation(
    "Id={Id}, Type={Type}",
    item?.Id ?? "<unknown>",
    item?.Type ?? "<unknown>");

Both approaches prevent runtime errors and ensure that logging remains reliable even when data is incomplete.


Once a custom dimension is extracted, you can filter and analyze it like any normal column.

Filter by Text Value

requests
| where tostring(customDimensions["Region"]) == "EU"

This keeps only rows where the Region custom dimension matches the value.

Filter by Numeric Value

If the value represents a number, convert it first:

requests
| extend DurationMs = todouble(customDimensions["DurationMs"])
| where DurationMs > 1000

Reuse Extracted Values

Using extend lets you reuse the value multiple times:

traces
| extend UserId = tostring(customDimensions["UserId"])
| where UserId != ""
| summarize Count = count() by UserId

Tips

  • Use extend when the value appears more than once in your query.
  • Always convert to the correct type before filtering.
  • Avoid comparing raw dynamic values directly.

These patterns help you build fast, readable queries that work reliably across dashboards and alerts.


Good logging is one of the most underrated tools in software development.
When done right, logs explain what your application is doing — even when things go wrong.

Logging is not just about writing messages to a file or console.
It’s about choosing what to log and how to log it safely and clearly.

Common pitfalls include:

  • Accessing properties on objects that may be null

  • Logging complex data structures without readable output

  • Producing logs that are too verbose or too vague

  • Creating unnecessary data just for logging purposes

This collection focuses on practical, everyday logging patterns:

  • Writing null-safe log statements

  • Turning collections into human-readable output

  • Logging only the information that matters

  • Choosing simple and efficient data structures for log data

Each example is intentionally small and generic, so the ideas can be reused in any .NET project.

Value
These patterns help you create logs that are stable, readable, and genuinely useful — especially when debugging production issues.


Selecting a custom dimension means extracting a value from the dynamic field and showing it as a normal column.

Basic Example

If your logs contain a custom dimension called UserId, use this query:

traces
| project
    timestamp,
    message,
    UserId = tostring(customDimensions["UserId"])

What this does:

  • Reads the value using square brackets.
  • Converts it to a string.
  • Creates a new column named UserId.

You can select multiple custom dimensions in the same query:

requests
| project
    timestamp,
    name,
    Region  = tostring(customDimensions["Region"]),
    OrderId = tostring(customDimensions["OrderId"])

Tips

  • Always use tostring() unless you know the value is numeric or boolean.
  • Rename the extracted value to keep your results readable.
  • Use project to control exactly what columns appear in the output.

This pattern is ideal for building reports or exporting data because it turns hidden metadata into visible columns that anyone can understand.


Testing HttpClient setup is a task many teams underestimate until something breaks in production. Modern .NET applications rely heavily on HttpClientFactory to add features such as retries, logging, authentication, or caching. These behaviors are implemented through message handlers that form a pipeline around every outgoing request.

If one handler is missing or misordered, the entire behavior changes—sometimes silently. A retry handler that never runs or a logging handler that is skipped can lead to confusing and costly issues. That’s why verifying the correct handlers are attached during application startup is essential.

However, developers quickly discover that it is not straightforward to test this. The built-in HttpClient does not expose its handler chain publicly, and typical unit-testing approaches cannot reveal what the factory actually constructs.

This Snipp explains the entire picture:
  • the problem developers face when trying to validate HttpClient pipelines
  • the cause, which is rooted in .NET’s internal design
  • the resolution, with a practical reflection-based method to inspect handlers exactly as the runtime creates them

Following these Snipps, you will be able to reliably confirm that your handlers—such as retry and logging—are attached and working as intended.


If you've ever thought about trying Linux but felt overwhelmed by technical jargon or complex setup, Linux Mint might be exactly what you’re looking for. Designed for everyday users, it offers a clean and familiar interface, quick setup, and a focus on stability and ease of use.

Linux Mint is based on Ubuntu, one of the most widely used Linux distributions. This means you benefit from a huge software ecosystem, long-term support, and a strong community — without needing to dive deep into command-line tools unless you want to.

Why Linux Mint?

  • Easy to learn: The desktop layout feels familiar to users coming from Windows or macOS.
  • Reliable and secure: Regular updates help keep your system fast and safe.
  • Works on older hardware: A great option for breathing new life into older devices.
  • Comes ready to use: Essential software like a web browser, media player, and office suite are included from the start.

What makes it unique?

Linux Mint offers different desktop environments (such as Cinnamon, MATE, or Xfce), each balancing speed and visual appearance differently. You can choose what fits your hardware and personal preference best. The Software Manager also makes finding and installing applications as easy as in any modern app store.

Linux Mint is a practical starting point for anyone who wants a stable, user-friendly Linux experience — whether you're learning something new, switching operating systems entirely, or simply exploring alternatives.


We all like to believe we’d speak up against injustice — until the moment comes. The video presents a powerful real-life experiment: A professor unexpectedly orders a student, Alexis, to leave the lecture hall. No explanation. No misconduct. And no objections from her classmates.

Only after the door closes does the professor reveal the purpose of his shocking act. He asks the class why no one defended their peer. The uncomfortable truth: people stay silent when they aren’t personally affected.

The message hits hard — laws and justice aren’t self-sustaining. They rely on individuals willing to stand up for what’s right. If we ignore injustice simply because it doesn’t target us, we risk facing it ourselves with no one left to defend us.

This short demonstration challenges us to reflect on our own behavior:

  • Would we have spoken up?
  • What holds us back — fear, indifference, or convenience?
  • How can we develop the courage to act before it’s too late?

Justice needs voices. Silence only protects the unjust.

Video: One of The Greatest Lectures in The World. - GROWTH™ - YouTube


Many talk about inflation — but the data tells a very different story. Switzerland, once again, offers one of the clearest signals of what is really happening in the global economy.

What’s happening in Switzerland

  • Several consecutive months of declining consumer prices
  • Weak economic growth and a gradual rise in unemployment
  • Interest rate cuts failing to stimulate the economy

Why this matters
Switzerland is a global safe haven. In times of uncertainty, capital flows into the country, pushing up the Swiss franc and weighing on economic activity. This pattern often appears earlier in Switzerland than elsewhere, making it a reliable early indicator of broader global weakness.

Key insight
Central banks publicly warn about inflation, but in reality they are responding to economic slowdown. Rate cuts are not a sign of strength — they are a symptom of underlying weakness. Markets and consumers already see this: inflation expectations remain low, while concerns about jobs and income are rising.

Bottom line
The real risk is not inflation, but prolonged economic stagnation. To understand where the global economy is heading, it’s better to focus on data — and Switzerland provides one of the clearest views.

Link: A Massive Warning Just Came Out of Switzerland… Here’s What It Means – Eurodollar University - YouTube


To test HttpClient handlers effectively, you need to inspect the internal handler chain that .NET builds at runtime. Since this chain is stored in a private field, reflection is the only reliable method to access it. The approach is safe, does not modify production code, and gives you full visibility into the pipeline.

The process begins by resolving your service from the DI container. In a test, that resolution step might look like this (ConfigureProductionServices is a hypothetical stand-in for your real registration code):
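
var services = new ServiceCollection();

// Register everything exactly as production startup does,
// including AddHttpClient and the message handlers under test.
ConfigureProductionServices(services);

using var provider = services.BuildServiceProvider();
var serviceInstance = provider.GetRequiredService<MyClient>();

If your service stores the HttpClient in a protected field, you can access it using reflection: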

var field = typeof(MyClient)
    .GetField("_httpClient", BindingFlags.Instance | BindingFlags.NonPublic);

var httpClient = (HttpClient)field.GetValue(serviceInstance);

Next, retrieve the private _handler field from HttpMessageInvoker:

var handlerField = typeof(HttpMessageInvoker)
    .GetField("_handler", BindingFlags.Instance | BindingFlags.NonPublic);

var current = handlerField.GetValue(httpClient);

Finally, walk through the entire handler chain:

var handlers = new List<DelegatingHandler>();

while (current is DelegatingHandler delegating)
{
    handlers.Add(delegating);
    current = delegating.InnerHandler;
}

With this list, you can assert the presence of your custom handlers:

Assert.Contains(handlers, h => h is HttpRetryHandler);
Assert.Contains(handlers, h => h is HttpLogHandler);

This gives your test real confidence that the HttpClient pipeline is constructed correctly—exactly as it will run in production.


Issue

Libraries often expose many raw exceptions, depending on how internal HTTP or retry logic is implemented. This forces library consumers to guess which exceptions to catch and creates unstable behavior.

Cause

Exception strategy is not treated as part of the library’s public contract. Internal exceptions leak out, and any change in handlers or retry logic changes what callers experience.

Resolution

Define a clear exception boundary:

  1. Internally
    Catch relevant exceptions (HttpRequestException, timeout exceptions, retry exceptions).

  2. Log them
    Use the unified logging method.

  3. Expose only a custom exception
    Throw a single exception type, such as ServiceClientException, at the public boundary.

Code Example

catch (Exception ex)
{
    LogServiceException(ex);
    throw new ServiceClientException("Service request failed.", ex);
}
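
The custom type itself can stay small. A minimal sketch of the ServiceClientException used above:

public class ServiceClientException : Exception
{
    public ServiceClientException(string message, Exception innerException)
        : base(message, innerException)
    {
    }
}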

This approach creates a predictable public API, hides implementation details, and ensures your library remains stable even as the internal HTTP pipeline evolves.


The main reason regular tests cannot inspect HttpClient handlers is simple: the pipeline is private. The HttpClient instance created by IHttpClientFactory stores its entire message-handler chain inside a non-public field named _handler on its base class HttpMessageInvoker.

This means:

  • there is no public property to read the handler list
  • DI registration only confirms setup, not actual construction
  • mocks cannot expose the real pipeline
  • even typed clients hide the underlying handler chain

So while Visual Studio’s debugger can show the handler sequence, your code cannot. This is why common testing approaches fail: they operate at the service level, not the internal pipeline level.

A service class typically stores a protected or private HttpClient instance:

protected readonly HttpClient _httpClient;

Even if your test resolves this service, the handler pipeline remains invisible.

To validate the runtime configuration—exactly as it will behave in production—you must inspect the pipeline directly. Since .NET does not expose it, the only practical method is to use reflection. The next Snipp explains how to implement this in a clean and repeatable way.


Issue

HTTP calls fail for many reasons: timeouts, throttling, network issues, or retry exhaustion. Logging only one exception type results in missing or inconsistent diagnostic information.

Cause

Most implementations log only HttpRequestException, ignoring other relevant exceptions like retry errors or cancellation events. Over time, this makes troubleshooting difficult and logs incomplete.

Resolution

Use a single unified logging method that handles all relevant exception types. Apply specific messages for each category while keeping the logic in one place.

private void LogServiceException(Exception ex)
{
    switch (ex)
    {
        case HttpRequestException httpEx:
            LogHttpRequestException(httpEx);
            break;

        case RetryException retryEx:
            _logger.LogError("Retry exhausted. Last status: {Status}. Exception: {Ex}",
                retryEx.StatusCode, retryEx);
            break;

        case TaskCanceledException:
            _logger.LogError("Request timed out. Exception: {Ex}", ex);
            break;

        case OperationCanceledException:
            _logger.LogError("Operation was cancelled. Exception: {Ex}", ex);
            break;

        default:
            _logger.LogError("Unexpected error occurred. Exception: {Ex}", ex);
            break;
    }
}

private void LogHttpRequestException(HttpRequestException ex)
{
    if (ex.StatusCode == HttpStatusCode.NotFound)
        _logger.LogError("Resource not found. Exception: {Ex}", ex);
    else if (ex.StatusCode == HttpStatusCode.TooManyRequests)
        _logger.LogError("Request throttled. Exception: {Ex}", ex);
    else
        _logger.LogError("HTTP request failed ({Status}). Exception: {Ex}",
            ex.StatusCode, ex);
}
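
RetryException is not a built-in .NET type; it stands for whatever exception your retry handler throws when retries are exhausted. A minimal sketch consistent with the switch above:

public class RetryException : Exception
{
    public HttpStatusCode? StatusCode { get; }

    public RetryException(string message, HttpStatusCode? statusCode, Exception inner)
        : base(message, inner)
    {
        StatusCode = statusCode;
    }
}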

Centralizing logic ensures consistent, clear, and maintainable logging across all error paths.


When configuring HttpClient using AddHttpClient(), developers often attach important features using message handlers. These handlers form a step-by-step pipeline that processes outgoing requests. Examples include retry logic, request logging, or authentication.

The problem appears when you want to test that the correct handlers are attached. It is common to write integration tests that resolve your service from the DI container, call methods, and inspect behavior. But this does not confirm whether the handler chain is correct.

A handler can silently fail to attach due to a typo, incorrect registration, or a missing service. You may have code like this:

services.AddHttpClient("ClientService")
    .AddHttpMessageHandler<HttpRetryHandler>()
    .AddHttpMessageHandler<HttpLogHandler>();

But you cannot verify from your test that the constructed pipeline includes these handlers. Even worse, Visual Studio can display the handler chain in the debugger, but this ability is not accessible through public APIs.

Without a direct way to look inside the pipeline, teams cannot automatically verify one of the most important parts of their application’s networking stack. The next Snipp explains why this limitation exists.


The easiest and safest fix is to validate configuration values before Azure services are registered. This prevents accidental fallback authentication and gives clear feedback if something is missing.

Here’s a clean version of the solution:

public static IServiceCollection AddAzureResourceGraphClient(
    this IServiceCollection services,
    IConfiguration config)
{
    var connectionString = config["Authentication:AzureServiceAuthConnectionString"];

    if (string.IsNullOrWhiteSpace(connectionString))
        throw new InvalidOperationException(
            "Missing 'Authentication:AzureServiceAuthConnectionString' configuration."
        );

    services.AddSingleton(_ => new AzureServiceTokenProvider(connectionString));
    return services;
}

This small addition gives you:

✔ Clear error messages
✔ Consistent behavior between environments
✔ No more unexpected Azure calls during tests
✔ Easier debugging for teammates

For larger apps, you can also use strongly typed configuration + validation (IOptions<T>), which helps keep settings organized and ensures nothing slips through the cracks.
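
A sketch of that approach, assuming .NET 6 or later (for ValidateOnStart) and a hypothetical AuthenticationOptions class:

public sealed class AuthenticationOptions
{
    public string? AzureServiceAuthConnectionString { get; set; }
}

services.AddOptions<AuthenticationOptions>()
    .Bind(config.GetSection("Authentication"))
    .Validate(
        o => !string.IsNullOrWhiteSpace(o.AzureServiceAuthConnectionString),
        "Missing 'Authentication:AzureServiceAuthConnectionString'.")
    .ValidateOnStart();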

With this guard in place, your integration tests stay clean, predictable, and Azure-free unless you want them to involve Azure.


Most Azure SDK components rely on configuration values to know how to authenticate. For example:

new AzureServiceTokenProvider(
    config["Authentication:AzureServiceAuthConnectionString"]
);

If this key is missing, the Azure SDK does not stop. Instead, it thinks:
“I’ll figure this out myself!”

And then it tries fallback authentication options, such as:

  • Developer login credentials
  • Azure CLI authentication
  • Managed Identity lookups
  • Subscription scans

These attempts fail instantly inside a local test environment, leading to confusing “AccessDenied” messages.
The surprising part?
Your project may work fine during normal execution—but your API project or test project may simply be missing the same setting.

This tiny configuration mismatch means:

  • Unit tests succeed
  • API runs fine locally
  • Integration tests fail dramatically

Once you understand this, the solution becomes much clearer.


Creating reliable HTTP client services is a challenge for many .NET developers. Network timeouts, throttling, retries, and unexpected exceptions often lead to inconsistent logging, unclear error messages, and unstable public APIs. This Snipp gives an overview of how to design a clean, predictable, and well-structured error-handling strategy for your HTTP-based services.

Readers will learn why custom exceptions matter, how to log different failure types correctly, and how to build a stable exception boundary that hides internal details from users of a library. Each child Snipp focuses on one topic and includes practical examples. Together, they offer a clear blueprint for building services that are easier to debug, test, and maintain.

The overall goal is simple: Create a .NET service that logs clearly, behaves consistently, and protects callers from internal complexity.


Running integration tests in ASP.NET Core feels simple—until your tests start calling Azure without permission. This usually happens when you use WebApplicationFactory<T> to spin up a real application host. The test doesn’t run only your code; it runs your entire application startup pipeline.
That includes:

  • Configuration loading
  • Dependency injection setup
  • Background services
  • Azure clients and authentication providers

If your app registers Azure services during startup, they will also start up during your tests. And if the environment lacks proper credentials (which test environments usually do), Azure returns errors like:

  • AccessDenied
  • Forbidden

This can be confusing because unit tests work fine. But integration tests behave differently because they load real startup logic.
The issue isn’t Azure being difficult—it's your tests running more than you expect.

Understanding this is the first step to diagnosing configuration problems before Azure becomes part of your test run unintentionally.
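
One practical countermeasure is to override configuration inside the test factory, so the startup pipeline receives the settings it expects. A sketch, assuming a Program entry point and a placeholder connection string value:

public class ApiFactory : WebApplicationFactory<Program>
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureAppConfiguration((context, cfg) =>
        {
            // Supply the values the startup pipeline expects so Azure
            // clients never fall back to default authentication.
            cfg.AddInMemoryCollection(new Dictionary<string, string?>
            {
                ["Authentication:AzureServiceAuthConnectionString"] = "RunAs=App"
            });
        });
    }
}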


Have you ever run an ASP.NET Core integration test and suddenly been greeted by an unexpected Azure “Access Denied” error? Even though your application runs perfectly fine everywhere else? This is a common but often confusing situation in multi-project .NET solutions. The short version: your tests might be accidentally triggering Azure authentication without you realizing it.

This Parent Snipp introduces the full problem and provides a quick overview of the three child Snipps that break down the issue step by step:

  • Snipp 1 – The Issue:
    Integration tests using WebApplicationFactory<T> don’t just test your code—they spin up your entire application. That means all Azure clients and authentication logic also start running. If your test environment lacks proper credentials, Azure responds with errors that seem unrelated to your actual test.

  • Snipp 2 – The Cause:
    The root cause is often a missing configuration value, such as an Azure authentication connection string. When this value is missing, Azure SDK components fall back to default authentication behavior. This fallback usually fails during tests, leading to confusing error messages that hide the real problem.

  • Snipp 3 – The Resolution:
    The recommended fix is to add safe configuration validation during service registration. By checking that required settings exist before creating Azure clients, you prevent fallback authentication and surface clear, friendly error messages. This leads to predictable tests and easier debugging.

Together, these Snipps give you a practical roadmap for diagnosing and fixing Azure authentication problems in ASP.NET Core integration tests. If you’re building APIs, background workers, or shared libraries, these tips will help you keep your testing environment clean and Azure-free—unless you want it to talk to Azure.


When you run an ASP.NET Core API from the command line, it will not use the port defined in launchSettings.json. This often surprises developers, but it is normal behavior.
The reason is simple: launchSettings.json is only used by Visual Studio or other IDEs during debugging.
To make your app listen on a specific port when running with dotnet run or dotnet MyApi.dll, you must configure the port using runtime options such as command-line arguments, environment variables, or appsettings.json.

Key Points

  • launchSettings.json does not apply when starting the app from the console.
  • Use dotnet run --urls "http://localhost:5050" to force a port.
  • Or set an environment variable:
    ASPNETCORE_URLS=http://localhost:5050
  • For a permanent app-level setting, use appsettings.json to define Kestrel endpoints (see the sketch below).
  • Use http://0.0.0.0:5050 if running inside Docker or WSL.
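
A minimal appsettings.json sketch for that Kestrel endpoint:

{
  "Kestrel": {
    "Endpoints": {
      "Http": {
        "Url": "http://localhost:5050"
      }
    }
  }
}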