When PowerShell receives data from a web request, the real value comes from understanding what the response contains. Most web services return JSON, and PowerShell automatically converts this JSON into objects you can work with directly. This removes the need for manual parsing and allows you to focus on extracting useful information.
Learning how to explore these objects helps you debug faster, avoid mistakes, and confidently build automation that depends on external data.
When you run a web request using Invoke-RestMethod, the JSON response becomes a structured object. Think of it as a structured document instead of plain text: you can explore its properties, convert it back to readable JSON, and preview just the first part of large responses.
$response = Invoke-RestMethod -Uri "https://api.example.com/resource"
# See all available properties
$response | Get-Member
# Access a few common fields (example names)
$response.id
$response.status
$response.details.name
Sometimes it is easier to understand the structure when you see the data as formatted JSON again. PowerShell can convert the object back into readable JSON.
$response | ConvertTo-Json -Depth 10
This is especially useful when the data contains nested objects.
Large responses can be overwhelming. You can limit the output to only the first few lines for a quick preview.
# Use -Stream so the JSON is emitted line by line, letting -First limit the preview
$response | ConvertTo-Json -Depth 10 |
    Out-String -Stream |
    Select-Object -First 20
PowerShell automatically transforms JSON into usable objects. By exploring properties, viewing formatted JSON, and limiting output for quick previews, you can understand any response quickly and safely reuse the data in your scripts.
Have you ever run a command in PowerShell and wondered if it really worked or silently failed? Exit codes give you a simple way to know what happened. They are small numbers returned by a program when it finishes, and they tell you whether the task succeeded or not.
✔️ Exit Code 0 — Success
An exit code of 0 means everything worked as expected. The command or script completed without errors. This is the standard way most programs say, “All good.”
❌ Exit Code 1 — Error
An exit code of 1 usually means something went wrong. It does not always tell you exactly what failed, but it signals that the command did not complete successfully. Different tools may use this code for different kinds of errors.
How to check the exit code in PowerShell
After running an external command, you can read the last exit code with:
$LASTEXITCODE
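For example, a minimal sketch that runs an external tool and reacts to its exit code (ping is used here purely as an illustration):

# Run an external command and suppress its output
ping 127.0.0.1 -n 1 | Out-Null

if ($LASTEXITCODE -eq 0) {
    Write-Output "Command succeeded"
}
else {
    Write-Output "Command failed with exit code $LASTEXITCODE"
}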
How to set your own exit code
In a script, you can control the result:
exit 0 # success
exit 1 # error
Understanding exit codes helps you automate tasks, detect problems early, and build more reliable scripts. Even beginners can use this small feature to make smarter decisions in their workflows.
Good spacing makes a page easier to read and more pleasant to scan. But adding space before every heading can create unwanted gaps — especially when headings follow each other or appear at the top of a section. In this guide, you’ll learn a simple CSS technique to add space before headings only when it actually improves readability.
We want to add top spacing to a heading only when it directly follows non-heading content. This keeps related headings visually grouped while still separating them from normal content like text, images, or lists.
Use the adjacent sibling selector (+) together with :not() to target only the headings that need spacing:
.app :not(h1, h2, h3) + h1,
.app :not(h1, h2, h3) + h2,
.app :not(h1, h2, h3) + h3 {
margin-top: 20px;
}
:not(h1, h2, h3) selects any element that is not a heading. + h1, + h2, + h3 selects a heading that comes directly after that element. In other words, a heading only receives top spacing when it follows non-heading content, so consecutive headings stay tightly grouped.
You can fine-tune the spacing for each heading level:
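For example, with purely illustrative pixel values:

.app :not(h1, h2, h3) + h1 { margin-top: 32px; }
.app :not(h1, h2, h3) + h2 { margin-top: 24px; }
.app :not(h1, h2, h3) + h3 { margin-top: 16px; }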
This gives you more control over visual hierarchy.
A small CSS rule like this can make a big difference in how professional and readable your pages feel.
Calling web services is common in automation, monitoring, and integration tasks. Many APIs expect extra information in the request, such as authentication tokens, data formats, or custom settings. This information is sent through headers. Once you understand how headers work in PowerShell, you can safely connect to most modern services and build reliable scripts with confidence.
Headers describe how the server should handle your request. They can authenticate the caller, declare the format of the data being sent or expected, and pass service-specific settings. Without the correct headers, a request may fail or return unexpected data.
PowerShell uses a simple key-value structure called a hashtable. Each key is the header name, and the value is the header content. This hashtable is passed to the request using the -Headers parameter.
$headers = @{
"Authorization" = "Bearer YOUR_TOKEN"
}
Invoke-RestMethod -Uri "https://api.example.com/data" -Headers $headers
A GET request that also negotiates the data format looks like this:
$headers = @{
"Authorization" = "Bearer YOUR_TOKEN"
"Content-Type" = "application/json"
"Accept" = "application/json"
}
Invoke-RestMethod -Uri "https://api.example.com/data" -Method Get -Headers $headers
For a POST request, the same pattern carries both the headers and a JSON body:
$headers = @{
"Content-Type" = "application/json"
}
$body = @{
name = "Sample"
value = 123
} | ConvertTo-Json
Invoke-RestMethod -Uri "https://api.example.com/items" -Method Post -Headers $headers -Body $body
Create a hashtable for headers and attach it using -Headers. This approach works for most APIs and keeps your scripts clean, readable, and easy to maintain.
For advanced users, the Windows Registry can be modified to disable biometric features:
Press Windows + R, type regedit, and press Enter to open the Registry Editor.
Navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Biometrics.
If the Biometrics key doesn't exist, create it:
Right-click on Microsoft, select New > Key, and name it Biometrics.
Within the Biometrics key, create a new DWORD (32-bit) value named Enabled.
Set the value of Enabled to 0.
Setting this value to 0 disables all biometric features, including facial recognition. To re-enable them, change the value back to 1.
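If you prefer scripting, the same change can be made from an elevated PowerShell session. This is a minimal sketch of the steps above, using the path and value name from this guide:

# Create the Biometrics key if it does not exist
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Biometrics" -Force | Out-Null

# Create the Enabled DWORD value and set it to 0 (use 1 to re-enable)
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Biometrics" `
    -Name "Enabled" -PropertyType DWord -Value 0 -Force | Out-Null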
Need to quickly see the current date and time in your Windows terminal? This simple how-to guide shows you exactly which commands to use in both Command Prompt and PowerShell. It is useful for beginners, scripting, logging, and everyday tasks.
Step 1: Open the Terminal
Press Windows + R, type cmd, and press Enter for Command Prompt (or type powershell to open PowerShell instead).
Step 2: Show the Date and Time (Command Prompt)
Print only the date:
date /t
Print only the time:
time /t
Print both together:
echo %date% %time%
This is helpful when you want a quick timestamp in a script or log file.
Step 3: Show the Date and Time (PowerShell)
Display the current date and time:
Get-Date
Format the output:
Get-Date -Format "yyyy-MM-dd HH:mm:ss"
This creates a clean, readable timestamp like 2026-01-19 14:45:30.
💡 Tip
You can redirect these commands into a file to create simple logs.
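For example, in PowerShell you could append a formatted timestamp to a file (activity.log is just an example name):

Get-Date -Format "yyyy-MM-dd HH:mm:ss" | Out-File -FilePath "activity.log" -Append

In Command Prompt, echo %date% %time% >> activity.log achieves the same.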
Learning these small commands improves productivity and makes working in the Windows terminal easier and more efficient for daily tasks.
Sometimes data is created only to be logged and never used again. Creating intermediate lists or arrays in those cases increases memory usage and adds unnecessary complexity.
A summary string can often be built directly from the source data:
var summary =
string.Join(", ",
source.Select(x => x.Name));
logger.LogInformation("Items=[{Items}]", summary);
If the source might be null, a safe fallback avoids runtime errors:
var safeSource = source ?? Enumerable.Empty<Item>();
var summary =
string.Join(", ",
safeSource.Select(x => x.Name));
logger.LogInformation("Items=[{Items}]", summary);
This approach keeps the code lean while still producing clear, useful log output.
The choice between arrays and lists communicates intent and affects performance and safety.
When data is only needed for display or logging, an array represents a fixed snapshot and avoids accidental changes:
string[] names =
source.Select(x => x.Name).ToArray();
logger.LogInformation(
"Names=[{Names}]",
string.Join(", ", names));
Lists are useful when the collection must be modified or extended later:
var names =
source.Select(x => x.Name).ToList();
names.Add("NewItem");
Choosing the right type makes the code easier to understand and prevents unintended side effects.
When collections are logged directly, many logging systems only display the type name instead of the actual values. This makes it hard to understand what data was processed.
Converting the collection into a readable string solves this problem:
string[] values = { "One", "Two", "Three" };
var text = string.Join(", ", values);
logger.LogInformation("Values=[{Values}]", text);
To give additional context, the number of elements can also be logged:
logger.LogInformation(
"Count={Count}, Values=[{Values}]",
values.Length,
string.Join(", ", values));
Readable output improves troubleshooting and reduces the need to reproduce issues locally.
Before using a custom dimension, it helps to know what keys actually exist in your data.
View Raw Custom Dimensions
Start by inspecting a few records:
traces
| take 5
| project customDimensions
This shows the full dynamic object so you can see available keys and example values.
List All Keys Found in the Data
traces
| mv-expand key = bag_keys(customDimensions)
| summarize by tostring(key)
This returns the distinct keys found across your dataset.
Correct Way to Access a Key
tostring(customDimensions["UserId"])
Avoid relying on this pattern on its own — without tostring() the value stays dynamic, so comparisons and exports can behave unexpectedly:
customDimensions.UserId
These discovery steps prevent mistakes and make your queries more reliable from the start.
Complex objects and dictionaries often contain far more data than a log entry really needs. Logging them directly can flood the log with noise or produce unreadable output.
A better approach is to extract only the most relevant values and create a compact summary:
var summary = dataMap.Select(x => new
{
    Key = x.Key,
    Count = x.Value.Count,
    Status = x.Value.Status
});

// Join the values into one readable string so the log shows data instead of a type name
logger.LogInformation(
    "DataSummary=[{Summary}]",
    string.Join("; ", summary.Select(s => $"{s.Key}: Count={s.Count}, Status={s.Status}")));
This keeps the log focused on what matters: identifiers, counts, and simple status values. The result is easier to scan, easier to search, and more useful during debugging or monitoring.
When data is loaded from a list or a database, there is always a chance that nothing is found. If the code still tries to access properties on a missing object, the log statement itself can crash the application.
A simple null check makes the behavior explicit and keeps the log stable:
var item = items.FirstOrDefault(x => x.Id == id);
if (item == null)
{
logger.LogWarning("No item found for Id={Id}", id);
}
else
{
logger.LogInformation(
"Id={Id}, Type={Type}",
item.Id,
item.Type);
}
This version clearly separates the “not found” case from the normal case and produces meaningful log messages for both situations.
When a more compact style is preferred, null operators can be used instead:
logger.LogInformation(
"Id={Id}, Type={Type}",
item?.Id ?? "<unknown>",
item?.Type ?? "<unknown>");
Both approaches prevent runtime errors and ensure that logging remains reliable even when data is incomplete.
Once a custom dimension is extracted, you can filter and analyze it like any normal column.
Filter by Text Value
requests
| where tostring(customDimensions["Region"]) == "EU"
This keeps only rows where the Region custom dimension matches the value.
Filter by Numeric Value
If the value represents a number, convert it first:
requests
| extend DurationMs = todouble(customDimensions["DurationMs"])
| where DurationMs > 1000
Reuse Extracted Values
Using extend lets you reuse the value multiple times:
traces
| extend UserId = tostring(customDimensions["UserId"])
| where UserId != ""
| summarize Count = count() by UserId
Tips
Use extend when the value appears more than once in your query. These patterns help you build fast, readable queries that work reliably across dashboards and alerts.
Good logging is one of the most underrated tools in software development.
When done right, logs explain what your application is doing — even when things go wrong.
Logging is not just about writing messages to a file or console.
It’s about choosing what to log and how to log it safely and clearly.
Common pitfalls include:
Accessing properties on objects that may be null
Logging complex data structures without readable output
Producing logs that are too verbose or too vague
Creating unnecessary data just for logging purposes
This collection focuses on practical, everyday logging patterns:
Writing null-safe log statements
Turning collections into human-readable output
Logging only the information that matters
Choosing simple and efficient data structures for log data
Each example is intentionally small and generic, so the ideas can be reused in any .NET project.
Value
These patterns help you create logs that are stable, readable, and genuinely useful — especially when debugging production issues.
Selecting a custom dimension means extracting a value from the dynamic field and showing it as a normal column.
Basic Example
If your logs contain a custom dimension called UserId, use this query:
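A sketch of such a query (traces is used as the source table here; adjust it to your data):

traces
| extend UserId = tostring(customDimensions["UserId"])
| project timestamp, message, UserId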
What this does: extend pulls the UserId value out of customDimensions and converts it to a string, and project then shows it as a normal column next to the standard fields.
You can select multiple custom dimensions in the same query:
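For example, extending with a second key (Region, as used elsewhere in these Snipps):

traces
| extend
    UserId = tostring(customDimensions["UserId"]),
    Region = tostring(customDimensions["Region"])
| project timestamp, UserId, Region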
Tips
• Use tostring() unless you know the value is numeric or boolean.
• Use project to control exactly what columns appear in the output.
This pattern is ideal for building reports or exporting data because it turns hidden metadata into visible columns that anyone can understand.
Testing HttpClient setup is a task many teams underestimate until something breaks in production. Modern .NET applications rely heavily on HttpClientFactory to add features such as retries, logging, authentication, or caching. These behaviors are implemented through message handlers that form a pipeline around every outgoing request.
If one handler is missing or misordered, the entire behavior changes—sometimes silently. A retry handler that never runs or a logging handler that is skipped can lead to confusing and costly issues. That’s why verifying the correct handlers are attached during application startup is essential.
However, developers quickly discover that it is not straightforward to test this. The built-in HttpClient does not expose its handler chain publicly, and typical unit-testing approaches cannot reveal what the factory actually constructs.
This Snipp explains the entire picture:
• the problem developers face when trying to validate HttpClient pipelines
• the cause, which is rooted in .NET’s internal design
• the resolution, with a practical reflection-based method to inspect handlers exactly as the runtime creates them
Following these Snipps, you will be able to reliably confirm that your handlers—such as retry and logging—are attached and working as intended.
If you've ever thought about trying Linux but felt overwhelmed by technical jargon or complex setup, Linux Mint might be exactly what you’re looking for. Designed for everyday users, it offers a clean and familiar interface, quick setup, and a focus on stability and ease of use.
Linux Mint is based on Ubuntu, one of the most widely used Linux distributions. This means you benefit from a huge software ecosystem, long-term support, and a strong community — without needing to dive deep into command-line tools unless you want to.
Linux Mint offers different desktop environments (such as Cinnamon, MATE, or Xfce), each balancing speed and visual appearance differently. You can choose what fits your hardware and personal preference best. The Software Manager also makes finding and installing applications as easy as in any modern app store.
Linux Mint is a practical starting point for anyone who wants a stable, user-friendly Linux experience — whether you're learning something new, switching operating systems entirely, or simply exploring alternatives.
We all like to believe we’d speak up against injustice — until the moment comes. The video presents a powerful real-life experiment: A professor unexpectedly orders a student, Alexis, to leave the lecture hall. No explanation. No misconduct. And no objections from her classmates.
Only after the door closes does the professor reveal the purpose of his shocking act. He asks the class why no one defended their peer. The uncomfortable truth: people stay silent when they aren’t personally affected.
The message hits hard — laws and justice aren’t self-sustaining. They rely on individuals willing to stand up for what’s right. If we ignore injustice simply because it doesn’t target us, we risk facing it ourselves with no one left to defend us.
This short demonstration challenges us to reflect on our own behavior:
Justice needs voices. Silence only protects the unjust.
Video: One of The Greatest Lectures in The World. - GROWTH™ - YouTube
Many talk about inflation — but the data tells a very different story. Switzerland, once again, offers one of the clearest signals of what is really happening in the global economy.
What’s happening in Switzerland
Why this matters
Switzerland is a global safe haven. In times of uncertainty, capital flows into the country, pushing up the Swiss franc and weighing on economic activity. This pattern often appears earlier in Switzerland than elsewhere, making it a reliable early indicator of broader global weakness.
Key insight
Central banks publicly warn about inflation, but in reality they are responding to economic slowdown. Rate cuts are not a sign of strength — they are a symptom of underlying weakness. Markets and consumers already see this: inflation expectations remain low, while concerns about jobs and income are rising.
Bottom line
The real risk is not inflation, but prolonged economic stagnation. To understand where the global economy is heading, it’s better to focus on data — and Switzerland provides one of the clearest views.
To test HttpClient handlers effectively, you need to inspect the internal handler chain that .NET builds at runtime. Since this chain is stored in a private field, reflection is the only reliable method to access it. The approach is safe, does not modify production code, and gives you full visibility into the pipeline.
The process begins by resolving your service from the DI container. If your service stores the HttpClient in a protected field, you can access it using reflection:
var field = typeof(MyClient)
.GetField("_httpClient", BindingFlags.Instance | BindingFlags.NonPublic);
var httpClient = (HttpClient)field.GetValue(serviceInstance);
Next, retrieve the private _handler field from HttpMessageInvoker:
var handlerField = typeof(HttpMessageInvoker)
.GetField("_handler", BindingFlags.Instance | BindingFlags.NonPublic);
var current = handlerField.GetValue(httpClient);
Finally, walk through the entire handler chain:
var handlers = new List<DelegatingHandler>();
while (current is DelegatingHandler delegating)
{
handlers.Add(delegating);
current = delegating.InnerHandler;
}
With this list, you can assert the presence of your custom handlers:
Assert.Contains(handlers, h => h is HttpRetryHandler);
Assert.Contains(handlers, h => h is HttpLogHandler);
This gives your test real confidence that the HttpClient pipeline is constructed correctly—exactly as it will run in production.
Issue
Libraries often expose many raw exceptions, depending on how internal HTTP or retry logic is implemented. This forces library consumers to guess which exceptions to catch and creates unstable behavior.
Cause
Exception strategy is not treated as part of the library’s public contract. Internal exceptions leak out, and any change in handlers or retry logic changes what callers experience.
Resolution
Define a clear exception boundary:
Internally
Catch relevant exceptions (HttpRequestException, timeout exceptions, retry exceptions).
Log them
Use the unified logging method.
Expose only a custom exception
Throw a single exception type, such as ServiceClientException, at the public boundary.
Code Example
try
{
    // ... perform the HTTP call here ...
}
catch (Exception ex)
{
    LogServiceException(ex);
    throw new ServiceClientException("Service request failed.", ex);
}
This approach creates a predictable public API, hides implementation details, and ensures your library remains stable even as the internal HTTP pipeline evolves.
The main reason regular tests cannot inspect HttpClient handlers is simple: the pipeline is private. The HttpClient instance created by IHttpClientFactory stores its entire message-handler chain inside a non-public field named _handler on its base class HttpMessageInvoker.
This means the handler chain cannot be reached through any public property, method, or interface on HttpClient.
So while Visual Studio’s debugger can show the handler sequence, your code cannot. This is why common testing approaches fail: they operate at the service level, not the internal pipeline level.
A service class typically stores a protected or private HttpClient instance:
protected readonly HttpClient _httpClient;
Even if your test resolves this service, the handler pipeline remains invisible.
To validate the runtime configuration—exactly as it will behave in production—you must inspect the pipeline directly. Since .NET does not expose it, the only practical method is to use reflection. The next Snipp explains how to implement this in a clean and repeatable way.
Issue
HTTP calls fail for many reasons: timeouts, throttling, network issues, or retry exhaustion. Logging only one exception type results in missing or inconsistent diagnostic information.
Cause
Most implementations log only HttpRequestException, ignoring other relevant exceptions like retry errors or cancellation events. Over time, this makes troubleshooting difficult and logs incomplete.
Resolution
Use a single unified logging method that handles all relevant exception types. Apply specific messages for each category while keeping the logic in one place.
private void LogServiceException(Exception ex)
{
switch (ex)
{
case HttpRequestException httpEx:
LogHttpRequestException(httpEx);
break;
case RetryException retryEx:
_logger.LogError("Retry exhausted. Last status: {Status}. Exception: {Ex}",
retryEx.StatusCode, retryEx);
break;
case TaskCanceledException:
_logger.LogError("Request timed out. Exception: {Ex}", ex);
break;
case OperationCanceledException:
_logger.LogError("Operation was cancelled. Exception: {Ex}", ex);
break;
default:
_logger.LogError("Unexpected error occurred. Exception: {Ex}", ex);
break;
}
}
private void LogHttpRequestException(HttpRequestException ex)
{
if (ex.StatusCode == HttpStatusCode.NotFound)
_logger.LogError("Resource not found. Exception: {Ex}", ex);
else if (ex.StatusCode == HttpStatusCode.TooManyRequests)
_logger.LogError("Request throttled. Exception: {Ex}", ex);
else
_logger.LogError("HTTP request failed ({Status}). Exception: {Ex}",
ex.StatusCode, ex);
}
Centralizing logic ensures consistent, clear, and maintainable logging across all error paths.
When configuring HttpClient using AddHttpClient(), developers often attach important features using message handlers. These handlers form a step-by-step pipeline that processes outgoing requests. Examples include retry logic, request logging, or authentication.
The problem appears when you want to test that the correct handlers are attached. It is common to write integration tests that resolve your service from the DI container, call methods, and inspect behavior. But this does not confirm whether the handler chain is correct.
A handler can silently fail to attach due to a typo, incorrect registration, or a missing service. You may have code like this:
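For example, the registration might look roughly like this (MyClient, HttpRetryHandler, and HttpLogHandler stand in for your own types):

// Handlers must also be registered so the factory can resolve them
services.AddTransient<HttpRetryHandler>();
services.AddTransient<HttpLogHandler>();

services.AddHttpClient<MyClient>()
    .AddHttpMessageHandler<HttpRetryHandler>()
    .AddHttpMessageHandler<HttpLogHandler>();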
But you cannot verify from your test that the constructed pipeline includes these handlers. Even worse, Visual Studio can display the handler chain in the debugger, but this ability is not accessible through public APIs.
Without a direct way to look inside the pipeline, teams cannot automatically verify one of the most important parts of their application’s networking stack. The next Snipp explains why this limitation exists.
The easiest and safest fix is to validate configuration values before Azure services are registered. This prevents accidental fallback authentication and gives clear feedback if something is missing.
Here’s a clean version of the solution:
public static IServiceCollection AddAzureResourceGraphClient(
this IServiceCollection services,
IConfiguration config)
{
var connectionString = config["Authentication:AzureServiceAuthConnectionString"];
if (string.IsNullOrWhiteSpace(connectionString))
throw new InvalidOperationException(
"Missing 'Authentication:AzureServiceAuthConnectionString' configuration."
);
services.AddSingleton(_ => new AzureServiceTokenProvider(connectionString));
return services;
}
This small addition gives you:
✔ Clear error messages
✔ Consistent behavior between environments
✔ No more unexpected Azure calls during tests
✔ Easier debugging for teammates
For larger apps, you can also use strongly typed configuration + validation (IOptions<T>), which helps keep settings organized and ensures nothing slips through the cracks.
With this guard in place, your integration tests stay clean, predictable, and Azure-free unless you want them to involve Azure.
Most Azure SDK components rely on configuration values to know how to authenticate. For example:
new AzureServiceTokenProvider(
config["Authentication:AzureServiceAuthConnectionString"]
);
If this key is missing, the Azure SDK does not stop. Instead, it thinks:
“I’ll figure this out myself!”
And then it tries fallback authentication options, such as managed identity, Visual Studio sign-in, or the Azure CLI.
These attempts fail instantly inside a local test environment, leading to confusing “AccessDenied” messages.
The surprising part?
Your project may work fine during normal execution—but your API project or test project may simply be missing the same setting.
This tiny configuration mismatch means the same code authenticates correctly in one project but silently falls back to failing defaults in another.
Once you understand this, the solution becomes much clearer.
Creating reliable HTTP client services is a challenge for many .NET developers. Network timeouts, throttling, retries, and unexpected exceptions often lead to inconsistent logging, unclear error messages, and unstable public APIs. This Snipp gives an overview of how to design a clean, predictable, and well-structured error-handling strategy for your HTTP-based services.
Readers will learn why custom exceptions matter, how to log different failure types correctly, and how to build a stable exception boundary that hides internal details from users of a library. Each child Snipp focuses on one topic and includes practical examples. Together, they offer a clear blueprint for building services that are easier to debug, test, and maintain.
The overall goal is simple: Create a .NET service that logs clearly, behaves consistently, and protects callers from internal complexity.
Running integration tests in ASP.NET Core feels simple—until your tests start calling Azure without permission. This usually happens when you use WebApplicationFactory<T> to spin up a real application host. The test doesn’t run only your code; it runs your entire application startup pipeline.
That includes dependency injection registrations, configuration loading, and every client your startup code creates.
If your app registers Azure services during startup, they will also start up during your tests. And if the environment lacks proper credentials (which test environments usually do), Azure returns errors that look like an unexpected "Access Denied."
This can be confusing because unit tests work fine. But integration tests behave differently because they load real startup logic.
The issue isn’t Azure being difficult—it's your tests running more than you expect.
Understanding this is the first step to diagnosing configuration problems before Azure becomes part of your test run unintentionally.
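For reference, a minimal integration-test fixture that spins up the full startup pipeline might look like this (Program and the class names are placeholders):

public class ApiTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public ApiTests(WebApplicationFactory<Program> factory)
    {
        // CreateClient() boots the real application host, including Azure registrations
        _client = factory.CreateClient();
    }
}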
Have you ever run an ASP.NET Core integration test and suddenly been greeted by an unexpected Azure “Access Denied” error? Even though your application runs perfectly fine everywhere else? This is a common but often confusing situation in multi-project .NET solutions. The short version: your tests might be accidentally triggering Azure authentication without you realizing it.
This Parent Snipp introduces the full problem and provides a quick overview of the three child Snipps that break down the issue step by step:
Snipp 1 – The Issue:
Integration tests using WebApplicationFactory<T> don’t just test your code—they spin up your entire application. That means all Azure clients and authentication logic also start running. If your test environment lacks proper credentials, Azure responds with errors that seem unrelated to your actual test.
Snipp 2 – The Cause:
The root cause is often a missing configuration value, such as an Azure authentication connection string. When this value is missing, Azure SDK components fall back to default authentication behavior. This fallback usually fails during tests, leading to confusing error messages that hide the real problem.
Snipp 3 – The Resolution:
The recommended fix is to add safe configuration validation during service registration. By checking that required settings exist before creating Azure clients, you prevent fallback authentication and surface clear, friendly error messages. This leads to predictable tests and easier debugging.
Together, these Snipps give you a practical roadmap for diagnosing and fixing Azure authentication problems in ASP.NET Core integration tests. If you’re building APIs, background workers, or shared libraries, these tips will help you keep your testing environment clean and Azure-free—unless you want it to talk to Azure.
When you run a published ASP.NET Core API from the command line with dotnet MyApi.dll, it will not use the port defined in launchSettings.json. This often surprises developers, but it is normal behavior.
The reason is simple: launchSettings.json is a development-time file. It is read by Visual Studio, other IDEs, and the dotnet run launch profile, but it is never deployed with your application.
To make your app listen on a specific port outside of those launch profiles, you must configure the port using runtime options such as command-line arguments, environment variables, or appsettings.json.
Key Points
• launchSettings.json does not apply when the published app is started from the console.
• Use dotnet run --urls "http://localhost:5050" to force a port.
• Set the environment variable ASPNETCORE_URLS=http://localhost:5050.
• Use appsettings.json to define Kestrel endpoints.
• Bind to http://0.0.0.0:5050 if running inside Docker or WSL.
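For the appsettings.json option, a minimal sketch of a Kestrel endpoint (the port simply mirrors the examples above):

{
  "Kestrel": {
    "Endpoints": {
      "Http": {
        "Url": "http://localhost:5050"
      }
    }
  }
}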