If you’ve been an early adopter of .NET 7, you may have come across the following warning in your travels when compiling an existing .NET 6 project :

Warning CS8981: The type name only contains lower-cased ascii characters. Such names may become reserved for the language.
          

While the warning is somewhat descriptive, it doesn’t really go into depth about why it’s showing. Especially when in .NET 6 the code compiled just fine!

Let’s look at a dead simple example of why this warning has been added.

Let’s imagine you are writing C# code many years ago in .NET 2.0, and you have code that looks like so :

class async { }
          

At this time, async/await did not exist in .NET and you would have been fine to create a class named using either of those keywords, even if they didn’t necessarily conform to your traditional PascalCase naming standards.

Along rolls C# 5, and with it the async/await keywords. Now we have issues because your own class has a naming clash. Ugh!

Remembering that Microsoft is on a roll with yearly releases of .NET/C#, it stands to reason that they *may* add additional reserved keywords in the future of C#, but until that time, Microsoft obviously can’t reserve *every* possible combination of keywords they might need in the future.

With all of that in mind, Microsoft have instead called out the fact that all reserved keywords are completely lowercase. If you use naming conventions that avoid naming everything in lowercase, then you can be sure there will be no clash in the future. Remember that C# itself is case sensitive!
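For example, a single upper-case character in the name is enough to make the warning disappear :

class async { }   // Warning CS8981
class Async { }   // No warning, the name contains an upper-case character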

Again, this is only a warning, and you can choose to ignore it, but do so at your own peril.

It’s been a while since we’ve talked about the preview releases coming from the .NET Team. Mostly that’s been because many of the preview updates contain small incremental releases that aren’t quite anything to write home about yet. For example, if you’re not interested in or using Blazor at the moment, then three quarters of the updates aren’t for you.

But that changes with the .NET 7 Preview 6 release, with two new features that are going to be all out game changers. Those are :

  • Output caching middleware
  • Rate limiting middleware

I’ll chat about both of these, but first, how can you get your hands on them yourself?

Downloading .NET 7 Preview 6

The first thing you should do is download the preview SDK from here.

Next is a little tricky. If you are using Visual Studio Code, then you *should* be able to get things running immediately. However, for Visual Studio, you will need the preview version available here.

Again, I want to reiterate that you need the *preview* version of Visual Studio 2022. You cannot use any previous version of Visual Studio (e.g. 2019), nor can you use the release version.

Output Caching Middleware

When this gets a general release, we’ll dive into it more thoroughly, but for now, I actually just wanted to talk about one feature that the more I thought about, the more I realized “I can’t believe this hasn’t been solved in the past”.

Think about a scenario whereby an extremely popular API endpoint gets called multiple times per second. That endpoint calls a database stored procedure which itself takes several seconds to complete (That’s why we’re caching it after all!).

Let’s say we invalidate that cache at some point. Typically every request will now hit the endpoint and execute the stored procedure all at the same time. We can call this a “cache stampede”, whereby the check is simply “No cache? Go to the database”.

.NET 7 however has a solution for this. Instead of every request calling the database until the first result is returned, it will instead allow the very first request to call the database, and all subsequent requests will instead wait for the cache to be populated. It’s something so simple, yet also a problem I’ve seen in production a lot.

Even if you remove the cache invalidation, I’ve often seen deployments go belly up because the cache isn’t “pre-primed” before go-live. This solution from the .NET Team also solves this!

So what complex code do you need to set this up? Well.. Just a single line of course :

app.MapGet("/myendpoint", () => DoWork()).CacheOutput();
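For a little more context, here’s roughly how the full wiring looks in a minimal API project as of this preview (DoWork is a placeholder for your own logic, and the registration calls may well change before general release) :

var builder = WebApplication.CreateBuilder(args);

// Register the output caching services
builder.Services.AddOutputCache();

var app = builder.Build();

// Add the output caching middleware to the pipeline
app.UseOutputCache();

// Cache this endpoint's output using the default policy
app.MapGet("/myendpoint", () => DoWork()).CacheOutput();

app.Run();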
          

Rate Limiting Middleware

Similar to the Output Caching Middleware, a Rate Limiting Middleware is well overdue as a first class citizen in the .NET ecosystem.

I think we all have the concept of rate limiting in general, but what took my interest in this was that the .NET Team have implemented various types of rate limits and packaged them all into the same middleware. That is, you now have many *strategies* with which you can rate limit your application, rather than a simple “X requests in X timespan”.

Already announced are :

Concurrent Request Limit

Does what it says on the tin. If your concurrent limit is 5 requests, then there can only be 5 requests being processed at one time (And the 6th will be rejected). As soon as one of those 5 completes, a new request can be made.

Token Bucket Limit

Imagine a bucket that can hold 100 tokens. And every minute, 10 tokens are added back into the bucket. When a request comes in, a token is removed from the bucket, and so on. This type of strategy is very common when you need some sort of burstability because you can completely drain the bucket all at once if you wish, and then wait for the tokens to slowly fill up again.
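To make the bucket analogy concrete, here’s a rough sketch using the System.Threading.RateLimiting primitives that sit underneath the middleware (the exact type and method names have moved around between previews, so treat this as illustrative only) :

using System.Threading.RateLimiting;

// A bucket holding up to 100 tokens, with 10 tokens added back every minute
var limiter = new TokenBucketRateLimiter(new TokenBucketRateLimiterOptions
{
    TokenLimit = 100,
    TokensPerPeriod = 10,
    ReplenishmentPeriod = TimeSpan.FromMinutes(1),
    AutoReplenishment = true,
    QueueLimit = 0,
    QueueProcessingOrder = QueueProcessingOrder.OldestFirst
});

// Each request takes one token. When the bucket is empty, the lease is rejected.
using RateLimitLease lease = await limiter.AcquireAsync(permitCount: 1);
if (lease.IsAcquired)
{
    // Process the request
}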

Fixed Window Limit

Put simply, every hour you can make 100 requests. Every hour, this limit resets back to 100. Nice and simple! This is extremely useful if you have “daily” limits, for example, where the limit is tied to a particular time, e.g. at midnight the rate limit resets.

Sliding Window Limit

Similar to the fixed window limit, you can make 100 requests in any 1 hour window, but that 1 hour window moves. More or less, the strategy becomes “In the past hour, you may have no more than 100 requests”.

Over the past couple of weeks, I’ve been covering how to use Playwright to create E2E tests using C# .NET. Thus far, I’ve covered how to write tests in a purely C# .NET testing way, that is, without BDD or any particular Cucumber-like syntax. It more or less means that the person writing the tests needs to know a fair bit of C# before they can write tests without getting themselves into trouble. It works, but it’s not great.

Next in the Playwright series, I was going to cover how you can use Specflow with Playwright to make it dead simple for your QA team to write E2E tests, without any knowledge of C# at all. But half way through writing, I realized that many of the topics covered weren’t about Playwright at all; it was more about my personal methodology on how I structure tests within Specflow to better enable non technical people to get involved.

Now I’m just going to say off the bat that I *know* people will hate the way I structure my tests. It actually runs counter to Specflow’s own documentation on how to build out a test suite. But reality often does not match the perfect world that the docs portray. And so this writeup is purely for if you are a developer getting your manual testers into the groove of writing automated tests.

What Is Specflow?

Specflow is a testing framework that transforms your tests to use “Behavior Driven Development” (BDD) type language to build out your test suite. In simple terms, it takes the language of

Given When Then
          

And turns them into automated tests.
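To give you a feel for it, a complete scenario in a Specflow “feature file” might look something like this (the step wording is just an invention for illustration) :

Feature: Navigation
  Scenario: Visiting the homepage
    Given I am on URL 'https://dotnetcoretutorials.com'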

That’s the holistic view. The more C# .NET centric view is that it’s a Visual Studio addon that maps BDD type language to individual methods. For example, if I have a BDD line that says this :

Given I am on URL 'https://dotnetcoretutorials.com'
          

It will be “mapped” to a C# method such as

[Given("Given I am on '(.*)'")] public void GivenIAmOnUrl(string url) {     _driver.NavigateToUrl(url); }
          

The beauty of this type of development is that should another test require a similar type of navigation to a URL, you simply write the same BDD line and it will map to the same method automatically under the hood.

In addition, Specflow comes with other pieces of the testing framework such as a lightweight DI, a test runner, assertions library etc.

The thing about Specflow however, is that it in and of itself does not automate the browser at all. It actually passes that off to things like Selenium or Playwright, and you can even use your favorite assertion library like NUnit, MSTest or XUnit. Specflow should be seen as a lightweight testing framework that enforces BDD-like test writing, but what goes on under the hood is still largely up to you.

I don’t want to dig too deep into getting Specflow up and running because the documentation is actually fairly good and it’s more or less just installing a Visual Studio Extension. So download it, give it a crack, and then come back here when you are ready to continue.

The Page Object Model

Specflow (And more specifically Selenium) typically suggests a “Page Object Model” pattern to encapsulate all logic for a given page. This promotes reusability and means that all logic, selectors, and page behaviour is located in a single place. If across tests you are visiting the same page and clicking the same button, it does make sense to encapsulate that logic somewhere.

Let’s imagine that I’m trying to write a test that goes to Google and types in a search value, then clicks the search button. The “Page Object Model” would look something like this :

public class GooglePageObject
{
    private const string GoogleUrl = "https://google.com";

    //The Selenium web driver to automate the browser
    private readonly IWebDriver _webDriver;

    public GooglePageObject(IWebDriver webDriver)
    {
        _webDriver = webDriver;
    }

    //Finding elements by XPath
    private IWebElement SearchBoxElement => _webDriver.FindElement(By.XPath("//input[@title='Search']"));
    private IWebElement SearchButtonElement => _webDriver.FindElement(By.XPath("(//input[@value='Google Search'])[2]"));

    public void EnterSearchValue(string text)
    {
        //Clear text box
        SearchBoxElement.Clear();
        //Enter text
        SearchBoxElement.SendKeys(text);
    }

    public void PressSearchButton()
    {
        SearchButtonElement.Click();
    }
}
          

This encapsulates all logic for how we interact with the Google Search page in this one class. To follow on, the Specflow steps might look like so in C# code :

[Given("the search value is (.*)")] public void GivenTheSearchValueIs(string text) {     _googlePageObject.EnterSearchValue(text); }  [When("the search button is clicked")] public void WhenTheSearchButtonIsClicked() {     _googlePageObject.PressSearchButton(); }
          

Simple enough! And our Specflow steps would obviously read :

Given the search value is Abc
When the search button is clicked
[...]

Now this obviously works, but here’s the problem I have with it.

The entire test has more or less been written by a developer (Or an automated QA specialist). There is no way that the encapsulation of this page could be written in C# by someone who doesn’t themselves know C# a decent amount. Additionally, while not pictured, there is an entire dependency injection flow to actually inject the page object model into our tests. Can you imagine explaining dependency injection to someone who has never written a line of code in their lives?
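For the curious, the dependency injection I’m referring to is Specflow’s built-in container, which constructor-injects shared instances into your step classes. A minimal sketch of what that looks like :

[Binding]
public class GoogleSearchSteps
{
    private readonly GooglePageObject _googlePageObject;

    // Specflow's built-in DI resolves and injects this automatically
    public GoogleSearchSteps(GooglePageObject googlePageObject)
    {
        _googlePageObject = googlePageObject;
    }
}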

Furthermore, let’s say that on this google search page, we wish to click the “I’m feeling lucky” button in a future test.

The addition of this button being able to be used in tests requires someone to write the C# code to support it. Of course, once it’s done, it’s done and can be re-used across tests but again.. I find it isn’t so much testers writing their own automated tests as much as developers doing it for them, and a QA slapping some BDD on the top at the very end.

Creating Re-usable BDD Steps

And this is where we get maybe a little bit controversial. The way I structure tests does away with the Page Object Model pattern; in fact, it almost does away with some parts of BDD altogether. If I was to write these tests, here’s how I would create the steps :

[Given("I type '(.*)' into a textbox with xpath '(.*)'")] public void WhenITypeIntoATextboxWithXpath(string text, string xpath) {     _webDriver.FindElement(By.Xpath(xpath)).SendKeys(text); }  [When("I click the button with xpath '(.*)'")] public void WhenIClickTheButtonWithXpath(string xpath) {     _webDriver.FindElement(By.Xpath(xpath)).Click(); }
          

And the BDD would look like :

Given I type 'Abc' into a textbox with xpath '//input[@title="Search"]'
When I click the button with xpath '(//input[@value="Google Search"])[2]'
[...]
          

I can even create more simplified steps based off just element IDs:

[Given("I type '(.*)' into a textbox with Id '(.*)'")] public void WhenITypeIntoATextboxWithId(string text, string id) {     _webDriver.FindElement(By.Id(id)).SendKeys(text); }
          

By creating steps like this, I actually have to write very minimal C# code. I’ve even created steps that are “When an element ‘div’ has a property ‘class’ of value ‘myClass’”. Now instead of having to front load a tonne of C# training to my manual QA, I instead teach them about XPath. I give a nice 1 hour lesson on using Chrome Dev Tools to find elements, show them how to test whether their XPath will work correctly, and away we go.
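That “element has a property” step can be written in exactly the same generic style. A rough sketch from memory (the wording and method shape here are mine, not gospel) :

[Then("an element '(.*)' has a property '(.*)' of value '(.*)'")]
public void ThenAnElementHasAPropertyOfValue(string tagName, string propertyName, string expectedValue)
{
    // GetAttribute covers both HTML attributes and many DOM properties
    var actualValue = _webDriver.FindElement(By.TagName(tagName)).GetAttribute(propertyName);
    Assert.AreEqual(expectedValue, actualValue);
}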

Typically I can spend a day creating a bunch of re-usable steps, and then testers only ever have to worry about writing BDD style Given/When/Then that uses existing selectors.

The Good, The Bad, The Ugly

When it comes to looking at the good of this test writing strategy, it’s easy to see that enabling testers to focus on writing BDD and actually coming up with the test scenarios allows immature or non technical QA teams to start writing tests from Day 1.

It does come at a cost however. BDD is designed so that the same test scenarios a tester would run manually can essentially be cloned to be automated. However a human would never write a test that specifically calls out XPath in the test steps, nor would they realistically be able to execute a test manually that had XPath littered throughout the test description.

For me, getting testers into the mode of writing automated tests without the overhead of learning C# far outweighs any loss of readability, and that’s why I continue to create test suites that follow this pattern.

This is a post in a series about the automated E2E testing framework Playwright. While you can start anywhere, it’s always best to start right at the beginning!

Part 1 – Intro
Part 2 – Trace Viewer


A massive selling point in using an automated test runner such as Cypress.io is that it can record videos and take screenshots of all of your test runs right out of the box. It comes with a pretty nifty viewer too that allows you to group tests by failures, and then view the actual video of the test being executed.

If we compare that to Selenium, I mean.. Selenium simply does not have that feature. It’s not to say that it can’t be done, you just don’t get it out of the box. In most cases, I’ve had automation testers simply take a screenshot on test failure and that’s it. Often the final screenshot of a failed step is enough to debug what went wrong, but not always. Additionally, there is no inbuilt tool for “viewing” these screenshots, and while MS Paint is enough to open a simple image file, it can get confusing managing a bunch of screenshots in your downloads folder!

Playwright is somewhere in the middle of these. While it doesn’t record actual videos, it can take screenshots of each step along the way, capturing before and after shots, and it provides a great viewer to pinpoint exactly what went wrong. While it works out of the box, there is a tiny bit of configuration required.

I’m going to use our example from our previous post which uses MSTest, and add to it a little. However, the steps are largely the same if you are using another testing framework or no framework at all.

The full “traceable” MSTest code looks like so :

[TestClass]
public class MyUnitTests : PageTest
{
    [TestInitialize]
    public async Task TestSetup()
    {
        await Context.Tracing.StartAsync(new TracingStartOptions
        {
            Title = TestContext.TestName, //Note this is for MSTest only.
            Screenshots = true,
            Snapshots = true,
            Sources = true
        });
    }

    [TestCleanup]
    public async Task TestCleanup()
    {
        await Context.Tracing.StopAsync(new TracingStopOptions
        {
            Path = TestContext.TestName + ".zip"
        });
    }

    [TestMethod]
    public async Task WhenDotNetCoreTutorialsSearchedOnGoogle_FirstResultIsDomainDotNetCoreTutorialsDotCom()
    {
        //Our Test Code here. Removed for brevity.
    }
}
          

Quite simply :

  • Our TestInitialize method kicks off the tracing for us. The “TestContext” object is an MSTest specific class that can tell us which test is under execution; you can swap this out for a similar class in your test framework or just put any old string in there.
  • Our TestCleanup essentially ends the trace, storing the results in a .zip file.

And that’s it!

In our bin folder, there will now be a zip file for each of our tests. Should one fail, we can go in here and retrieve the zip. Unlike Cypress, there isn’t an all encompassing viewer where we can group tests, their results and videos. This is because Playwright for .NET is relying a bit on both MSTest and Visual Studio to be test runners, and so there is a bit of a break in tooling when you then want to view traces, but it’s not that much leg work.

Let’s say our test broke, and we have retrieved the zip. What do we do with it? While you can download a trace viewer locally, I prefer to use Playwright’s hosted version right here https://trace.playwright.dev/

We simply drag and drop our zip file and tada!
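And if you’d rather not upload anything, the Playwright CLI can open traces locally too. With the .NET package it’s invoked via the PowerShell script generated in your build output (the exact path will vary with your target framework) :

pwsh bin/Debug/net6.0/playwright.ps1 show-trace MyTestName.zip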

I know that’s a lot to take in, so let’s walk through it bit by bit!

Along the top of the page, we have the timeline. This tells us over time how our test ran and the screenshots for each time period. The color coding tells us when actions changed/occurred so we can immediately jump to a certain point in our test.

To the left of the screen, we have our executed steps, we can click on these to immediately jump in the timeline.

In the centre of the page we have our screenshot. But importantly we can switch tabs for a “Before” and “After” view. This is insanely handy when inputting text or filling out other page inputs. Imagine that the test is trying to fill out a very large form, and on the very first step it doesn’t fill in a textbox correctly. The test may not fail until the form is submitted and validation occurs, but in the screenshot of the failure, we may not even be able to see the textbox itself. So this gives us a step by step view of every action occurring as it happens.

To the right of the screen you’ve got a bunch of debug information including Playwright’s debug output, the console output of the browser, the network output of the browser (Similar to Chrome/Firefox dev tools), but importantly, you’ve also got a snapshot of your own code and which step is running. For instance, here I am looking at the step to fill a textbox.

This is *insanely* helpful. If a test fails, we can essentially know exactly where in our own code it was up to and what it was trying to do, without thinking “Well, it was trying to type text into a textbox, let me try and find where that happens in my code”.

And that’s the Playwright Test Trace Viewer. Is it as good as Cypress’ offering? Probably not quite yet. I would love to see some ability to capture a single zip for an entire test run, not per test case (And if I’ve missed that, please let me know!), but for debugging a single test failure, I think the trace viewer is crazy powerful and yet another reason to give Playwright a try if you’re currently dabbling with Selenium.

This is a post in a series about the automated E2E testing framework Playwright. While you can start anywhere, it’s always best to start right at the beginning!

Part 1 – Intro

Part 2 – Trace Viewer


These days, end to end browser testing is a pretty standard practice amongst mature development teams. Whether that’s with Selenium, WebDriver.IO or Cypress, realistically as long as you are getting the tests up and running, you’re doing well.

Over the past couple of years, Cypress has become a defacto end to end testing framework. I don’t think in the last 5 years I’ve been at a company that hasn’t at least given it a try and built out some proof of concepts. And look, I like Cypress, but after some time I started getting irritated with a few caveats (Many of which are listed by Cypress themselves here).

Notably :

  • The “same origin” URL limitation (Essentially you must be on the same root domain for the entire test) is infuriating when many web applications run some form of SSO/OAuth, even if using something like Auth0 or Azure AD B2C. So you’re almost dead in the water immediately.
  • Cypress does not handle multiple tabs
  • Cypress cannot run multiple browsers at the same time (So testing some form of two way communication between two browsers is impossible)
  • The “Promise” model and chaining of steps in a test seemed ludicrously unwieldy. And when trying to get more junior team members to write tests, things quickly entered into a “Pyramid of doom“.

As I’ll talk about later in another post, the biggest thing was that we wanted a simple model for users writing tests in Gherkin type BDD language. We just weren’t getting that with Cypress and while I’m sure people will tell me all the great things Cypress can do, I went out looking for an alternative.

I came across Playwright, a cross platform, cross browser automation testing tool that did exactly what it says on the tin with no extras. Given my list of issues above with Cypress, I did have to laugh that this is a very prominent quote on their homepage :

Multiple everything. Test scenarios that span multiple tabs, multiple origins and multiple users. Create scenarios with different contexts for different users and run them against your server, all in one test.

They definitely know which audience they are playing up to.

Playwright has support for tests written in NodeJS, Java, Python and of course, C# .NET. So let’s take a look at the latter and how much work it takes to get up and running.

For an example app, let’s assume that we are going to write a test that has the following test scenario :

Given I am on https://www.google.com
When I type dotnetcoretutorials.com into the search box
And I press the button with the text "Google Search"
Then the first result is domain dotnetcoretutorials.com
          

Obviously this is a terrible example of a test as the result might not always be the same! But I wanted to just show a little bit of a simple test to get things going.

Let’s get cracking on a C# test to execute this!

Now the thing with Playwright is, it’s actually just a C# library. There isn’t some magical tooling that you have to download or extensions to Visual Studio that you need to get everything working nicely. You can write everything as if you were writing a simple C# unit test.

For this example, let’s just create a simple MSTest project in Visual Studio. You can of course create a test project with NUnit, XUnit or any other testing framework you want and it’s all going to work much the same.

Next, let’s add the Playwright nuget package with the following command in our Package Manager Console. Because we are using MSTest, let’s add the MSTest specific Nuget package as this has a few helpers that speed things up in the future (Realistically, you don’t actually need this and can install Microsoft.Playwright if you wish)

Install-Package Microsoft.Playwright.MSTest
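One gotcha to be aware of : after adding the package and building once, you still need to download the actual browser binaries that Playwright drives. With the .NET package this is done via the PowerShell script generated in your build output (the exact path will vary with your target framework) :

pwsh bin/Debug/net6.0/playwright.ps1 install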
          

Now here’s my test. I’m going to dump it all here and then walk through a little bit on how it works.

[TestClass]
public class MyUnitTests : PageTest
{
    [TestMethod]
    public async Task WhenDotNetCoreTutorialsSearchedOnGoogle_FirstResultIsDomainDotNetCoreTutorialsDotCom()
    {
        //Given I am on https://www.google.com
        await Page.GotoAsync("https://www.google.com");

        //When I type dotnetcoretutorials.com into the search box
        await Page.FillAsync("[title='Search']", "dotnetcoretutorials.com");

        //And I press the button with the text "Google Search"
        await Page.ClickAsync("[value='Google Search'] >> nth=1");

        //Then the first result is domain dotnetcoretutorials.com
        var firstResult = await Page.Locator("//cite >> nth=0").InnerTextAsync();
        Assert.AreEqual("https://dotnetcoretutorials.com", firstResult);
    }
}
          

Here’s some things you may notice!

First, our unit test class inherits from “PageTest” like so :

public class MyUnitTests : PageTest
          

Why? Well because the Playwright.MSTest package contains code to set up and tear down browser objects for us (And it also handles concurrent tests very nicely). If we didn’t use this package, either because we are using a different test framework or we want more control, the set up code would look something like :

IPage Page;

[TestInitialize]
public async Task TestInitialize()
{
    var playwright = await Playwright.CreateAsync();
    var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false
    });
    Page = await browser.NewPageAsync();
}
          

So it’s not the end of the world, but it’s nice that the framework can handle it for us!

Next, what you’ll notice is that there are no timeouts *and* all methods are async. By timeouts, what I mean is the bane of every Selenium developer’s existence: “waiting” for things to show up on screen, especially in javascript heavy web apps.

For example, take these two calls one after the other :

//Given I am on https://www.google.com
await Page.GotoAsync("https://www.google.com");

//When I type dotnetcoretutorials.com into the search box
await Page.FillAsync("[title='Search']", "dotnetcoretutorials.com");
          

In other frameworks we might have to :

  • Add some sort of arbitrary delay after the GoTo call to wait for the page to properly load
  • Write some code to check if a particular element is on screen before continuing (Like a WaitUntil type call)
  • Write some custom code for our Fill method that will poll or retry until we can find that element and type

Instead, Playwright handles that all under the hood for you and assumes that when you want to fill a textbox, eventually it’s going to show, and so it will wait till it does. The fact that everything is async also means it’s non-blocking, which is great if you are using Playwright locally since it’s not gonna freeze everything on your screen for seconds at a time!
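If you do want some control over how long Playwright waits before giving up, the defaults are tunable. A small sketch :

// Applies to all subsequent actions on this page (milliseconds)
Page.SetDefaultTimeout(10_000);

// Or override the timeout for a single action
await Page.FillAsync("[title='Search']", "dotnetcoretutorials.com", new PageFillOptions { Timeout = 5_000 });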

The rest of the test should be pretty self explanatory; we are using some typical selectors to fill out the google search and find the top result, and our Assert comes from our test framework (MSTest in this case). Playwright does come packaged with its own assertion framework, but you don’t have to use it if you don’t want to!

And.. That’s it!

There are some extremely nifty tools that come packaged with Playwright that I’m going to write about in the coming days, including the ability to wire up with Specflow for some BDD goodness. What I will say so far is that I like the fact that Playwright has hit the right balance between being an automation test framework *and* being able to do plain browser automation (For example to take a screenshot of a web page). Cypress clearly leans on the testing side, and Selenium I feel often doesn’t feel like a testing framework as much as it feels like a scripting framework that you can jam into your tests. So far, so good!

Next up, I wanted to take a look at the Playwright inbuilt “Trace Viewer”, check out that post here : https://dotnetcoretutorials.com/2022/05/24/using-playwright-e2e-tests-with-c-net-part-2-trace-viewer/

Visual Studio 2022 17.2 shipped the other day, and in it was a handy little feature that I can definitely see myself using a lot going forward. That is the IEnumerable Visualizer! But before I dig into what it does (And really it’s quite simple), I wanted to quickly talk about why it was so desperately needed.

Let’s imagine I have a class like so :

class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
          

And somewhere in my code, I have a List of people with a breakpoint on it. Essentially, I want to quickly check the contents of this list and make sure while debugging that I have the right data in the right place.
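So the setup is nothing more exotic than something like this (names invented purely for the demo), with a breakpoint just after the list is built :

var people = new List<Person>
{
    new Person { Id = 1, FirstName = "John", LastName = "Smith" },
    new Person { Id = 2, FirstName = "Jane", LastName = "Doe" }
};
// Breakpoint here, then inspect "people" in the debugger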

Our first port of call might be to simply do the “Hover Results View” method like so :

But… As we can see it doesn’t exactly help us to view the contents easily. We can either then go and open up each item individually, or in some cases we can override the ToString method. Neither of which may be preferable or even possible depending on our situation.

We can of course use the “Immediate Window” to run queries against our list if we know we need to find something in particular. Something like :

? people.Where(x => x.FirstName == "John")
          

Again, it’s very adhoc and doesn’t give us a great view of the data itself, just whether something exists or not.

Next, we can use the Autos/Watch/Locals menu which does have some nice pinning features now, but again, is a tree view and so it’s hard to scroll through large pieces of data easily. Especially if we are trying to compare multiple properties at once.

But now (Again, you require Visual Studio 2022 17.2), notice how in the Autos view we have a little icon called “View” right at the top of the list there. Click that and…

This is the new IEnumerable visualizer! A nice tabular view of the data, that you can even export to excel if you really need to. While it’s a simple addition and really barebones, it’s something that will see immediate use in being able to debug your collections more accurately.

I was recently asked by another developer about the difference between making a method virtual/override, and simply hiding the method using the *new* keyword in C#.

I gave him what I thought to be the best answer (For example, you can change the return type when using the “new” keyword), and yet while showing him examples I managed to bamboozle myself into learning something new after all these years.

Take the following code for instance, what will it print?

Parent childOverride = new ChildOverride();
childOverride.WhoAmI();

Parent childNew = new ChildNew();
childNew.WhoAmI();

class Parent
{
    public virtual void WhoAmI()
    {
        Console.WriteLine("Parent");
    }
}

class ChildOverride : Parent
{
    public override void WhoAmI()
    {
        Console.WriteLine("ChildOverride");
    }
}

class ChildNew : Parent
{
    public new void WhoAmI()
    {
        Console.WriteLine("ChildNew");
    }
}
          

At first glance, I assumed it would print the same thing either way. After all, I’m basically newing up the two different types, and in *both* cases I am casting it to the parent.

When casting like this, I like to tell junior developers that an object “Always remembers who it is”. That is, my ChildOverride can be cast to a Parent, or even an object, and it still remembers that it’s a ChildOverride.

So what does the above code actually print out?

ChildOverride
Parent
          

So our Override method remembered who it was, and therefore it’s “WhoAmI” method. But our ChildNew did not… Kinda.

Why you might ask? Well it actually is quite simple if you think about it.

When you use the override keyword, it’s overriding the base class and there is a sort of “linkage” between the two methods. That is, it’s known that the child class is an override of the base.

When you use the new keyword, you are saying that the two methods are in no way related. And that your new method *only* exists on the child class, not on the parent. There is no “linkage” between the two.

This is why when you cast to the parent class, the overridden method is known, and the “new” method is not.
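You can prove the “Kinda” above to yourself. Keep the compile-time type as the child itself and the hidden method is picked again :

ChildNew childNew = new ChildNew();
childNew.WhoAmI(); // Prints "ChildNew" because the compile-time type is ChildNew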

With all that being said, in many many years of programming in C# I have seldom used the new keyword to hide methods like this. Not only is there very little reason to do so, but it breaks a core SOLID principle, the Liskov Substitution Principle : https://dotnetcoretutorials.com/2019/10/20/solid-in-c-liskov-principle/

Here’s another one from the vault of “Huh, I guess I never thought I needed that until now”

Recently I was trying to write a unit test that required me to paste in some known JSON to validate against. Sure, I could load the JSON from a file, but I really don’t like File IO in unit tests. What it ended up looking like was something similar to :

var sample = "{\"PropertyA\":\"Value\"}";
          

Notice those really ugly backslashes in there trying to escape my quotes. I get this a lot when working with JSON or even HTML string literals, and my main method for getting around it is loading it into notepad with a quick find and replace.

Well, starting from C# 11, you can now do the following!

var sample = """ {"PropertyA":"Value"} """;
          

Notice those (ugly) three quote marks at the start and end. That’s the new syntax for “Raw String Literals”. Essentially allowing you to mix in unescaped characters without having to start backslashing like a madman.

Also supported are multi line strings like so :

var sample = """ {     "PropertyA" : "Value" } """;
          

While this feature is officially coming in C# 11 later this year, you can get a taste for it by adding the following to your csproj file.

<LangVersion>preview</LangVersion>
          

I would say that editor support is not all too great right now. The very latest Visual Studio 2022 seems to handle it fine, however inside VS Code I did have some issues (But it still compiled just fine).

One final thing to note is about the absence of the “tick” ` character. When I first heard about this feature, I just assumed it would use the tick character as it’s pretty synonymous with multi line raw strings (at least in my mind). So I will include the discussion from Microsoft about whether they should use the tick character or not here : https://github.com/dotnet/csharplang/blob/main/proposals/raw-string-literal.md#alternatives

With the final decision being

In keeping with C# history, I think " should continue to be the string literal delimiter

I’m less sure on that. I can’t say that three quote marks make any more sense than a tick, especially when it comes to moving between languages so… We shall see if this lasts until the official release.

User Secrets (Sometimes called Secret Manager) in .NET has been around for quite some time now (I think since .NET Core 2.0). And I’ve always *hated* it. I felt like they encouraged developers to email/slack/teams individual passwords or even entire secret files to each other and call it secure. I also didn’t really see a reason why developers would have secrets locally that were not shared with the wider team. For that reason, a centralized secret storage such as Azure Keyvault was always preferable.

But over the past few months. I’ve grown to see their value… And in reality, I use User Secrets more for “this is how local development works on my machine”, rather than actual secrets. Let’s take a look at how User Secrets work and how they can be used, and then later on we can talk more about what I’ve been using them for.

Creating User Secrets via Visual Studio

By far the easiest way to use User Secrets is via Visual Studio. Right click your entry project and select “Manage User Secrets”.

Visual Studio will then work out the rest, installing any packages you require and setting up the secrets file! Easy!

You should be presented with an empty secrets file which we will talk about later.

Even if you use Visual Studio, I highly recommend at least reading the section below on how to do things from the command line. It will explain how things work behind the scenes and will likely answer any questions you have about what Visual Studio is doing under the hood.

Creating User Secrets via Command Line

We can also create User Secrets via the command line! To do so, we need to run the following command in our project folder :

dotnet user-secrets init
          

The reality is, all this does is generate a guid and place it into your csproj file. It looks a bit like so :

<UserSecretsId>6272892f-ffcd-4039-b82a-b60874e91fce</UserSecretsId>
          

If you really wanted, you could generate this guid yourself and place it here, there is nothing special about it *except* that between projects on your machine, the guid must be unique. Of course, if you wanted projects to share secrets then you could of course use the same guid across projects.

From here, you can now set secrets from the command line. It seems janky, but unfortunately you *must* create a secret via the command line before you can edit the secrets file in a notepad. It seems annoying but.. That’s how it works. So in your project folder run the following command :

dotnet user-secrets set "MySecret" "12345"
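At any point you can verify what’s currently stored for the project with :

dotnet user-secrets list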
          

So.. What does this actually do? It’s quite simple actually. On Windows, you will have the following file :

%APPDATA%\Microsoft\UserSecrets\{guid}\secrets.json
          

And on Linux :

~/.microsoft/usersecrets/{guid}/secrets.json
          

Opening this file, you’ll see something like :

{     "MySecret" : "12345" }
          

And from this point on you can actually edit this file in notepad and forget the command line all together. In reality, you could also even create this file manually and never use the command line to add the initial secret as well. But I just wanted to make note that the file *does not* exist until you add your first secret. And, as we will see later, if you have a user secret guid in your csproj file, but you don’t have the corresponding file, you’ll actually throw errors which is a bit frustrating.

With all of this, when you use Visual Studio, it essentially does all of this for you. But I still think it’s worth understanding where these secrets get stored, and how it’s just a local file on your machine. No magic!

Using User Secrets In .NET Configuration

User Secrets follow the same paradigm as all other configuration in .NET. So if you are using an appsettings.json, Azure Keyvault, Environment Variables etc. It’s all the same, even with User Secrets.

If you installed via the Command Line, or you just want to make sure you have the right packages, you will need to install the following nuget package :

Install-Package Microsoft.Extensions.Configuration.UserSecrets
          

The next step is going to depend on whether you are using .NET 6 minimal APIs or .NET 5 style Startup classes. Either way, you probably by now understand where you are adding your configuration to your project.

For example, in my .NET 6 minimal API I have something that looks like so :

builder.Configuration.AddEnvironmentVariables()
                     .AddKeyVault()
                     .AddUserSecrets(Assembly.GetExecutingAssembly(), true);
          

Notice I’m passing “true” as the second parameter for AddUserSecrets. That’s because in .NET 6, User Secrets were made “required” by default, and by passing true, we make them optional. This is important, as if users have not set up the user secret file on their machine yet, this whole thing will blow up if not made optional. The exception will be something like :

System.IO.FileNotFoundException: The configuration file 'secrets.json' was not found and is not optional
          

Now, our User Secrets are being loaded into our configuration object. Ideally, we should place User Secrets *last* in our configuration pipeline because it means they will be the last overwrite to happen. And… That’s it! Pretty simple. But what sort of things will we put in User Secrets?
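From application code there’s nothing special to do either; the secret resolves through configuration like any other value :

// Resolves from whichever configuration source registered the key last
var mySecret = builder.Configuration["MySecret"];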

What Are User Secrets Good For?

I think contrary to the name, User Secrets are not good for secrets at all, but instead user specific configuration. Let me give you an example. On a console application I was working with, all but one developer were using Windows machines. This worked great because we had local file path configuration, and this obviously worked smoothly on Windows. However, the Linux user was having issues. Originally, the developer would download the project, edit the appsettings, and run the project fine. When it came time to check in work, they would have to quickly revert or ignore the changes in appsettings so that they didn’t get pushed up. Of course, this didn’t always happen, and while it was typically caught in code review, it did cause another round of branch switching and changes to be pushed.

Now we take that same example and put User Secrets over the top. Now the Linux developer simply edits their User Secrets to change the file paths to suit their machine. They never touch appsettings.json at all, and everything works just perfectly.

Take another team I work with. They had in the past worked with a shared remote database in Azure for local development. This was causing all sorts of headaches when developers were writing or testing SQL migrations. Often their migrations would break other developers. Again, to not break the existing developers’ flow, I created User Secrets and showed the team how they could override the default SQL connection string to instead use their local development machine, so we could slowly wean ourselves away from using a shared database.
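As a sketch of what that override looks like, assuming the app reads a standard ConnectionStrings section (the key names here are just an assumption), each developer’s secrets.json simply shadows the shared value :

{
    "ConnectionStrings": {
        "DefaultConnection": "Server=localhost;Database=MyAppDb;Trusted_Connection=True;"
    }
}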

Another example in a similar vein. The amount of times I’ve had developers install SQL Server on their machine as either /SQLExpress or /MSSQLSERVER rather than a non-named instance. It happens all the time. Again, while I’m trying to help these developers out, sometimes it’s easier to just say, please add a user secret for your specific set up if you need it and we can resolve the issue later. It almost becomes an unblocking mechanism whereby developers can actually control their own configuration.

What I don’t think User Secrets are good for are actual secrets. So for example, while creating an emailing integration, a developer put a Sendgrid API key in their User Secrets. But what happens when he pushes that code up? Is he just going to email that secret to developers that need it? It doesn’t really make sense. So anything that needs to be shared should not be in User Secrets at all.

Imagine an Ecommerce system that generates a unique order number each time a customer goes to the checkout. How would you generate this unique number?

You could :

  • Use the primary key from the database
  • Select the MAX order number from the database and increment by one
  • Write a custom piece of code that uses a table with a unique constraint to “reserve” numbers
  • Use SQL Sequences to generate unique values

As you’ve probably guessed by the title of this post, I want to touch on that final point because it’s a SQL Server feature that I think has gone a bit under the radar. I’ll be the first to admit that it doesn’t solve all your problems (See limitations at the end of this post), but you should know it exists and what it’s good for. Half the battle when choosing a solution is just knowing what’s out there after all!

SQL Sequences are actually a very simple and effective way to generate unique incrementing values in a threadsafe way. That means as your application scales, you don’t have to worry about two users clicking the “order” button on your ecommerce site at exactly the same time, and being given the exact same order number.

Getting Started With SQL Sequences

Creating a Sequence in SQL Server is actually very simple.

CREATE SEQUENCE TestSequence START WITH 1 INCREMENT BY 1
          

Given this syntax, the different options available are probably obvious to you. You can, for example, always increment the sequence by 2 :

CREATE SEQUENCE TestSequence START WITH 1 INCREMENT BY 2
          

Or you can even descend instead of ascend :

CREATE SEQUENCE TestSequence START WITH 0 INCREMENT BY -1
          

And to get the next value, we just need to run SQL like :

SELECT NEXT VALUE FOR TestSequence
          

It really is that simple! Not only that, you can view Sequences in SQL Management Studio as well (Including being able to create them, view the next value without actually requesting it etc). Simply look for the Sequences folder under Programmability.
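You can also check where a sequence is up to without consuming a value in plain SQL, via the sys.sequences catalog view :

SELECT current_value FROM sys.sequences WHERE name = 'TestSequence'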

Entity Framework Support

Of course you are probably using Entity Framework with .NET/SQL Server, so what about first class support there? Well.. It is supported but it’s not great.

To recreate our sequence as above, we would override the OnModelCreating of our DbContext (e.g. Where we would put all of our configuration anyway). And add the following :

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.HasSequence("TestSequence", x => x.StartsAt(1).IncrementsBy(1));
}
          

That creates our sequence, but how about using it? Unfortunately, there isn’t really a thing to “get” the next value (For example if you needed it in application code). Most of the documentation revolves around using it as a default value for a column such as :

modelBuilder.Entity<Order>()
    .Property(o => o.OrderNo)
    .HasDefaultValueSql("NEXT VALUE FOR TestSequence");
          

If you are looking to simply retrieve the next number in the sequence and use it somewhere else in your application, unfortunately you will be writing raw SQL to achieve that. So not ideal.
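For what it’s worth, here’s a rough sketch of that raw SQL approach with EF Core (assuming a DbContext instance called context; sequences created without an AS clause are bigint, hence the cast to long) :

var connection = context.Database.GetDbConnection();
await connection.OpenAsync();

using var command = connection.CreateCommand();
command.CommandText = "SELECT NEXT VALUE FOR TestSequence";

// ExecuteScalarAsync returns the single value from the query
var nextOrderNo = (long)(await command.ExecuteScalarAsync());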

With all of that being said however, if you use Entity Framework migrations as the primary way to manage your database, then the ability to at least create sequences via the ModelBuilder is still very very valuable.

Limitations

When it comes to generating unique values for applications, my usage of SQL Sequences has actually been maybe about 50/50. 50% of the time it’s the perfect fit, but 50% of the time there are some heavy “limitations” that get in the way of it being actually useful.

Some of these limitations that I’ve run into include :

When you request a number from a sequence, no matter what happens from that point on, the sequence is incremented. Why this is important: imagine you are creating a Customer in the database, and you request a number from the sequence and get “156”. When you go to insert that Customer, a database constraint fails and the customer is not inserted. The next Customer will still be inserted as “157”, regardless of the previous insert failing. In short, sequences are not part of transactions and do not “lock and release” numbers. This is important because in some cases, you may not wish to have a “gap” in the sequence at all.

Another issue is that sequences cannot be “partitioned” in any way. A good example is a system I was building that required unique numbers *per year*. And each year, the sequence would be reset. Unfortunately, orders could be backdated, and therefore simply waiting until Jan 1st and resetting the sequence was not possible. What would be required is a sequence created for, say, the next 10 years, with each of these managed independently. It’s not too much of a headache, but it’s still another bit of overhead.

In a similar vein, multi tenancy can make sequences useless. If you have a single database in an ecommerce SAAS product supporting say 100 tenants. You cannot use a single sequence for all of them. You would need to create multiple sequences (One for each tenant), which again, can be a headache.

In short, sequences are good when you need a single number incrementing for your one tenant database. Anything more than that and you’re going to have to go with something custom or deal with managing several sequences at once, and selecting the right one at the right time with business logic.