Tuesday, December 25, 2018

Hero-U: Rogue to Redemption

In 2015 I was so happy to hear that Cory and Lori Cole, former game designers at Sierra, were making games again, using Kickstarter to fund their work. I was particularly happy that they were working on something very similar to Quest for Glory, one of my very favorite game series ever. Well, the game was finally released in the summer of 2018 and I just had to play it. Short conclusion: I had a lot of fun, but not everything was perfect.

The game is an adventure role playing game called Hero-U: Rogue to Redemption and it's about a small-time thief who meets a mysterious bearded figure right after he successfully breaks into a house and steals, as per contract, a "lucky coin". The man gives him the opportunity to stop thieving and instead enroll in Hero University as a Rogue, rogues being a kind of politically correct thief, taking from the rich and giving to the poor and all that. You spend the next 40-50 hours playing this kid at the strange university and finally getting to be a hero.

You have to understand that I played the Quest for Glory games, set in the same universe as Hero-U, when I was a kid. My love for the series reflects not only the quality of the games and the humor, or the nights without Internet when I had to figure out a puzzle by myself so I could brag to my friends who were doing the same at the time, but the entire experience of discovery and wonder that was childhood. My memories of the Sierra games are no doubt a lot better than the games themselves, and any attempt at something similar was doomed to harsh criticism. So, did the Coles destroy my childhood?

Nope. Hero-U was full of puns and entertainment and rekindled the emotions I had playing QfG. I recommend it! But it won't escape criticism, so here it is.

Update: I've finished the game again, going for the "epic" achievement called Perfect Prowler, which requires that you not kill anything. I recommend this for your first playthrough because, if you think about it a bit, it's the easier way to finish the game. To not kill anything you need to sneak past enemies, meaning maxing your stealth. To defeat your enemies (which is also NOT the rogue way as taught at the university) you need all sorts of defenses, combat skills, magical weapons or runes, etc. By focusing on stealth you actually focus on the story, even if it is sometimes annoying to spend ten minutes trying to get past flying skulls, saving and reloading repeatedly, until your stealth is high enough. Some hints for people doing this:
  1. Sleeping powder is your friend, as it instantly makes an enemy unresponsive and does not alert other enemies standing right next to it
  2. Sleeping powder works on zombies, for some reason
  3. Demolishing a wall with a Big Boom while guards are sleeping next to it does not hurt said guards; even better, they magically disappear, letting you plunder the entire room
  4. If someone else kills your enemy, you didn't kill anything :)
  5. The achievement only says you must not kill things; you can attack them at your leisure as long as you flee or use some other method to escape

Anyway, the second run made me even more respectful towards the creators of the game, as they thought of so many contingencies to keep you from getting stuck, whatever your style of play. And this in a game that had so many production issues. Congratulations, Transolar!

And now for the original analysis:

What is great about the game is that it makes you want to achieve as much as possible in a rather subtle way. It doesn't show you X points out of Y the way old Sierra games did, but it always hints at the possibility of doing more if you only "apply yourself". Yes, it feels very much like a school. And I liked it. What's wrong with me?

I liked the design of the game, although I wish there were a way to just open a door you often go through, rather than click on the door and then choose Open from a list of mostly useless options like Listen at the door or Look at the door. I liked that you had a lot of actions for the objects in the game, which made it costly to just explore every possible option, but also satisfying to find one that works in your favor.

And the game is big! A lot of decisions, a lot of characters and areas to explore, a lot of quests and a lot of puns. Although, in truth, even if I loved the QfG series for their puns, in Hero-U it feels like they tried a little bit too much. In fact, I will write a lot about what I didn't like, but those are general things that are easy to point out. The beautiful part is in the small details that are much harder to describe (and not spoil).

The biggest issue I had with the game was the time limits. The story takes the hero through a semester of 50 days at the university and he has to do as much as possible in that time. This part was good: it makes for a challenge and forces you to manage your time and choose between several options. You can't just train fighting skills for weeks and then start killing critters. However, each day has several other time limits, mainly breakfast/class, supper and sleep. You may be in the depths of the most difficult dungeon, having taken hours to get there, but if it's supper time, your "hero" will instantly find his way back so he can grab some grub. You don't have the option to skip meals or a night's sleep, which would have been a great addition for very little development effort, as he already has "tired", "hungry", "injured" and other states that influence his skills.

This takes me to the general issue of the linearity of the story. The best QfG games were wonderful because you had so many options: you could explore, do optional side quests that had little or nothing to do with the main story, solve puzzles in a multitude of ways (since in those games you got to choose your class). Hero-U feels very linear to me: a lot of timed quests with areas that only open up after specific events that have nothing to do with you, store items that change to reflect the point in time you are at, a choice of girls and boys to flirt with but really only one who will easily respond to your attempts at romance, a single possible ending with variations so small as to make them irrelevant, and so on. And many a time it is terribly frustrating to easily find a hidden door or secret passage, but be unable to do anything with it until "it's time". You carry these big bombs with you, but when you get to a blocked door you can't just demolish it. I already mentioned the many options you have to interact with random objects in the game, but the vast majority of them are useless and inconsistent. QfG had some of these issues, too, though.

An interesting concept is the elective classes, which are so easy to miss it's ridiculous. Do not miss the chance (as I did) to do science, magic or healing. It reminds me of QfG games you played as a fighter and then started again as a mage or thief. The point is that to pass your tests (you get the results a few days later) you need to know your stuff (i.e. read the text of the lectures and understand what the teachers are saying). Unfortunately, the classes don't do much to actually help you. Science gives you a lot of traps and explosives, healing gives you a lot of potions and pills, and magic gives you sense magic and some runes. You can easily finish the game without any of them, and it is always annoying to have to run from the end of your classes (at 14:00) to reach the elective classroom on another floor, dodging Terk on the way, while also considering that you might want to do work in the lock room, practice room, library, recreation room and reception, all in one hour (you have to get to the class by 15:00). And the elective eats two hours of your time, just in time for (the mandatory) dinner.

And then there is the plot itself. I had a hard time getting immersed in a story where young people study at a university the teachers know is infested with dangerous creatures that students fight, yet do nothing to either stop or optimize the process. Instead, everybody knows about the secret passages and the areas, but pretends not to. Students never party up to do a quest together. There are other classes at the university, not only Rogues learn there, but you never meet them. Each particular rogue student has a very personal reason to be at the university, which makes it amazing that the class has seven students; in other years there must have been a maximum of two. You get free food from all over the world, but you have to buy your own school supplies. There are two antagonists who really have absolutely no power over you and no back story, and you couldn't care less that they exist. Few of the characters in the game are sympathetic or even have believable motivations.

Bottom line: I remembered what it was like to be a child playing these games and enjoyed a few days of great fun. I felt like the story could have used more work, so that we care about the characters more and have more ways to play the game. The limits often felt very artificial and kept pulling me out of the fantastic world. It felt like a Quest for Glory game, but not one of the best.

It is worth remembering that this is the creators' first game since they worked at Sierra in the 1990s. They overcame a lot of new hurdles and learned a lot making Hero-U. The next installments or other games will surely go more smoothly, both in terms of story and playability. I have a lot of trust in them.

Some notes:
  • There is a Hero-U Student Handbook in PDF form.
  • Time is very important. It pays to save, explore an area, reload and go directly where you need to go.
  • Stealth is useful. There is an epic achievement to finish the game without killing anything. That feels a bit extreme, but it also shows that items and combat skills may be less relevant than expected.
  • Exams are important: save and pass the exams so you can take elective classes. Every part of the story felt excessively linear except the elective classes, which you can even miss completely, because you get no help with them from the teachers or the game mechanics.
  • Some doors towards the end cannot be opened and are reserved for future installments of the series.
  • You can lose a lot of time in the catacombs for no good reason. Don't be ashamed to create and use a map of the rooms.

Friday, December 21, 2018

Sonar Source rule 3898 Value types should implement "IEquatable"

This is another post discussing a static analysis rule that made me learn something new. SonarSource rule 3898 says: if you're using a struct, it is likely because you're interested in performance, but by failing to implement IEquatable<T> you're losing performance when comparisons are made, because without IEquatable<T>, boxing and reflection are used to make comparisons.

There is a StackOverflow entry that discusses just that, and the answer to this particular problem is not actually the accepted one. In pure StackOverflow fashion I will quote the relevant bit of the answer, just in case the site goes offline in the future: I'm amazed that the most important reason is not mentioned here. IEquatable<> was introduced mainly for structs for two reasons:
  1. For value types (read structs) the non-generic Equals(object) requires boxing. IEquatable<> lets a structure implement a strongly typed Equals method so that no boxing is required.
  2. For structs, the default implementation of Object.Equals(Object) (which is the overridden version in System.ValueType) performs a value equality check by using reflection to compare the values of every field in the type. When an implementer overrides the virtual Equals method in a struct, the purpose is to provide a more efficient means of performing the value equality check and optionally to base the comparison on some subset of the struct's fields or properties.
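
In code, implementing the interface on a struct looks something like this (Point is a hypothetical type of my own, used only to illustrate the rule):

```csharp
using System;

// A value type with a strongly typed Equals: no boxing, no reflection.
public struct Point : IEquatable<Point>
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) { X = x; Y = y; }

    // Called for typed comparisons; avoids the boxing of Equals(object).
    public bool Equals(Point other) => X == other.X && Y == other.Y;

    // Keep the object overload consistent with the typed one.
    public override bool Equals(object obj) => obj is Point p && Equals(p);

    public override int GetHashCode() => (X * 397) ^ Y;
}
```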

I thought this was worth mentioning, for those performance critical struct equality scenarios.

Saturday, December 15, 2018

Skyward (Skyward #1), by Brandon Sanderson

Skyward is Brandon Sanderson at his best... and worst. Yes, his best characters have always been young rebellious loudmouths with a penchant for over-the-top lines and punny jokes. And yes, this is a young adult novel with a classically clichéd plot. I feel guilty for liking it so much, but hey, it apparently works! Personally I feel it's a shame Sanderson spends a year writing a book and I finish it in two days, but at least he's not George R. R. Martin!

The whole idea revolves around a society of humans driven underground by an alien force. They live on a planet surrounded by a sphere of debris few can get through, attacked periodically by alien fighters and bombers that the humans must repel in order to survive. And here is this heroic little girl who dreams of becoming a fighter pilot despite her father being universally despised as a coward who left the field of battle. Determined to clear her father's name and her own, she enrolls in a school for fighter cadets and discovers she has what it takes to protect her friends and save humankind.

Sound familiar? It should; every story lately seems to be about the same character. Is it an interesting and engaging character? Yes. Is the world weird and familiar enough to be enjoyed? Yes. If this is all you need, you will love the book. And of course, it's the first book in a series. I need a little more, though, and I feel that the twists were terribly predictable and there were holes everywhere in the world building. If you only focus on the characters, as the author did, you enjoy the book. But as soon as you try to imagine yourself there, things start to make little sense, and whatever you would do, it would not be what the characters in the book do. Plus... that fighter! Deus Ex Machina much?

Bottom line: lovely book to read in a few days and feel you are a reader, but the story is as standard as they come and the only nice thing about it is that Sanderson wrote it.

Friday, December 14, 2018

Static analysis of code in Visual Studio, Rule sets, etc

Intro


Visual Studio has a very interesting feature called Rule Sets. You basically create a file where you declare which warnings from analyzers will be ignored or displayed as info, warning or error. With the built-in code analysis, but also with the help of a plethora of extensions and NuGet packages, this can be a very powerful tool. I am using VS2017 Professional for this post.

Create a rule set


Let's start by creating a new project (a .NET Framework console app) called Rulesets, which will create the standard Program.cs file and a solution for the project. Next, right click the solution in Solution Explorer, go to Add → New Item, go to the General category and select Code Analysis Rule Set. Name it whatever you want (I will call it default.ruleset), then save it.



At this point you should be in a rule set editor, showing you a list of rules grouped by code analyzer id. Press F4 or go to the little wrench icon so that the Properties window is open, then give your ruleset a name (Default Rule Set) and save the file (ctrl-S).



You can create as many of these files as you want; they will be saved in the Solution Items or as a file in a project (I recommend the former) and can be associated with any number of projects. Let's assign the new rule set to our project: go to the solution properties, select Common Properties → Code Analysis Settings, then select as many projects as you want in the list on the right. Then click on the little dropdown arrow and you should be prompted with a list of possible rule sets, including Default Rule Set. Select it.



Obviously, you can just choose a Microsoft included rule set instead, but those do not take into account your own extensions/packages.

Note: in order to start an incremental process, let's say starting with the minimum recommended settings from Microsoft and then adding stuff to it, use the Include element in a .ruleset file. The files coming by default with Visual Studio can be found at %ProgramFiles(x86)%/Microsoft Visual Studio/2017/Professional/Team Tools/Static Analysis Tools/Rule Sets. Example:
<Include Path="minimumrecommendedrules.ruleset" Action="Default" />
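
Put together, a derived ruleset file might look something like this (the two rule IDs are just examples of rules you might override):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Default Rule Set" Description="Company-wide rules" ToolsVersion="15.0">
  <!-- Start from Microsoft's minimum recommended rules... -->
  <Include Path="minimumrecommendedrules.ruleset" Action="Default" />
  <!-- ...then override individual rules per analyzer -->
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
    <Rule Id="CA1707" Action="None" />
    <Rule Id="CA2007" Action="Warning" />
  </Rules>
</RuleSet>
```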

From the Visual Studio GUI you can click on the folder icon in the rule set editor top bar to include other sets and the wrench icon to open the settings.

This helps a lot with having small variations between your projects. For example a tests project might have different settings. Or the data access layer project might have auto generated files that don't respect your coding standards. You can just create a new ruleset that includes the default, then disables some of the rules, like mandatory class documentation.

Note that the rule set editor is not perfect. It only shows the rules as defined in the current file, ignoring the included sets. That is why, if the default for a rule is None, you set it to Warning in your base set, and you then include that set in another set where you set the rule back to None, the override will not be saved correctly. Some manual checks are required to ensure correctness.
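
When that happens, the manual fix is to add the explicit override to the derived file yourself (the analyzer and rule IDs here are hypothetical):

```xml
<!-- tests.ruleset: re-disable a rule that the included base set enabled -->
<RuleSet Name="Tests Rule Set" ToolsVersion="15.0">
  <Include Path="default.ruleset" Action="Default" />
  <Rules AnalyzerId="SomeAnalyzer" RuleNamespace="SomeAnalyzer.Rules">
    <Rule Id="XX0001" Action="None" />
  </Rules>
</RuleSet>
```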

A list of analyzers


Now, I've gone through a number of extensions for Visual Studio that use this system. Here is a list of the ones I thought were good enough, free and useful.
Visual Studio extensions:
  • Microsoft Code Analysis 2017 - Live code analysis rules and code fixes addressing API design, performance, security, and best practices for C# and Visual Basic.
  • Security Code Scan - Detects various security vulnerability patterns: SQL Injection, Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), XML eXternal Entity Injection (XXE), etc.
  • MetricsAnalyzer - analyzer extension that checks whether your code follows metrics rules
  • Moq.Analyzers - Visual Studio extension that helps to write unit tests using Moq mocking library by highlighting typical errors and suggesting quick fixes
  • Code Cracker for C# - analyzer library for C# that uses Roslyn to produce refactorings, code analysis, and other niceties
  • Visual Studio Intellicode - this is interesting in the sense that it uses AI to improve your code and intellisense
  • Roslynator 2017 - A collection of 500+ analyzers, refactorings and fixes for C#, powered by Roslyn.
  • SonarLint for Visual Studio 2017 - Roslyn based static code analysis: Find and instantly fix nasty bugs and code smells in C#, VB.Net, C, C++ and JS.
  • clean-code-net - Set of C# Roslyn analyzers to improve code correctness
  • CommentCop - Analyzes (mostly) xml comments and provides code fixes. Uses Roslyn C# code analyser.

NuGet packages:
  • StyleCop.Analyzers - there is also a StyleCop extension, but weirdly it does not use the Rule Set system; you run it manually and it gives you warnings that you can only control through the extension configuration
  • Public API Analyzer - An analyzer for packages with public APIs.
  • UnityEngineAnalyzer - Roslyn Analyzer for Unity3D
  • AsyncAwaitAnalyzer - A set of Roslyn Diagnostic Analyzers and Code Fixes for Async/Await Programming in C#.
  • ef-perf-analyzer - EntityFramework Performance Analyzer
  • Asyncify-CSharp - an analyzer and codefix that allows you to quickly update your code to use the Task Asynchronous Programming model.

The packages above are NuGets you install in your project. Just check out the list from NuGet: NuGet packages containing Analyzer. Many extensions also have a NuGet counterpart, so you can choose not to bloat your Visual Studio and instead install analyzers on a per project basis. There is one more advantage to using project packages: the analysis runs at build time, so you can enforce that code does not compile unless it follows the rules in the set.
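
For reference, wiring a project to both a ruleset and a NuGet analyzer is just a few lines in the .csproj, assuming your project uses PackageReference (the version number will vary):

```xml
<PropertyGroup>
  <!-- point the project at the shared ruleset -->
  <CodeAnalysisRuleSet>..\default.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
<ItemGroup>
  <!-- analyzers are development-only dependencies -->
  <PackageReference Include="StyleCop.Analyzers" Version="1.1.118" PrivateAssets="all" />
</ItemGroup>
```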

Curating your rule sets


So you've learned how to choose a rule set for your projects, how to create your own, but what do you use them for? My suggestion is to work on a complete rule set (or sets) for your entire company and then you can just enforce a coding style without having to manually code review everything.

If you are like me, then you probably installed everything in that list above and then tried it on your project... and you got tens of thousands of warnings and errors. How do you curate a rule set without having to go through every single message? I see several ways of doing this.

The perfect project


Some people/companies have a flagship technical project that they are very proud of. It uses all the latest technologies, it is perfectly written, thoroughly code reviewed by all the members of the team. It can do no wrong. If you have such a project, just enable all possible rules, then disable all that make suggestions for change. In the end you will have a rule set enforcing your coding style, for better or worse.

Start from scratch


The other solution is to start with a new project, then review every message until you get none. Then start coding. An iterative process, for sure, one you will never finish, but it will be good enough after a while and it will also engage your team in technical discussions on how to improve their code, which can't possibly hurt.

Add analyzers one by one


Start with a rule set where all rules are disabled, then review them one by one, choosing either to disable each rule or to refactor your code. This solution may be the worst, because it gives you an excuse to stop the process midway, but it might be the only one available. Either way, try to install extensions and packages one by one, too.

Start from the coding standards


Perhaps you have a document describing the coding standards in your team. You might start from it, then look for the rules that enforce it. I think that this will only make you see how woefully inadequate your coding standards are, but it might work.

Other notes


I've had the situation where I created derived rulesets from the default one (using Include) and somehow they ended up with an absolute path for the default ruleset file. It might be an issue with how the editor saves the file.

By default, rules in the ruleset editor are grouped by analyzer ID, but multiple analyzers might manage the same rules, so always manage rules individually, else you will see that enabling or disabling an entire group will change other groups as well and you won't know where you started from.

The ruleset editor is not perfect. One very annoying issue is with ruleset inheritance (it doesn't load the parent rules). One example: you want a general ruleset that does NOT include an analyzer (let's say the XUnit one) and then you want something inheriting the base ruleset that DOES include the XUnit analyzer. While you can do it by hand, the ruleset editor will not allow you to make these changes, as the original ruleset will have every XUnit rule disabled, but the editor for the unit test one will not know this. There are three solutions for this:
  1. Have a very inclusive basic rule set, then remove rules from the others. This is not perfect, as there could be circular needs (one set has one rule and not the other, while the other has them the other way around)
  2. Have a lot of basic rule sets, split on topic. Then manage every rule set that you need as just includes. This works in every situation, but requires you edit the used rulesets by hand. In the case above you could have a special DoNotUseXunitAnalyzers.ruleset, for example.
  3. A third solution that I do not recommend is to never include anything, instead just copy paste the content of the basic ruleset in every inheriting one. While this works and allows the editor and whatever engine behind to work well, it would be a nightmare to maintain.

Conclusion


I've discussed how to define and control static code analysis in your Visual Studio projects. If nothing of what is available is up to your standards, Roslyn now allows making your own code analyzers in a very simple way. Using code analyzers (and refactorings) can improve productivity, engage the team in technical analysis of standards, enforce some coding standards and help you find hard to detect errors in your code.

Hope it helps.

Thursday, December 13, 2018

Constant vs Static Readonly

I was playing with code analysis rule sets in Visual Studio (see my blog post about it) and I got hit by some conflicting rules. I will discuss only SonarSource rules, but a lot of other analyzers have similar rules.

OK, one of them is something that I intuitively thought was universally good: RSPEC-3962: "static readonly" constants should be "const" instead. Makes sense, right? A constant is compiled better, integrated faster, it's a constant! No overhead, nothing changes it. This rule was marked as a minor improvement to the code, anyway.

Then, bam!, RSPEC-2339: Public constant members should not be used. Critical rule! Basically it says the opposite: turn your constant into static readonly. What's going on?!

This is not one of those pairs of rules that contradict each other based on user preference, like using var instead of the type name when the type is obvious and vice versa. These are two different, apparently conflicting, yet complementary concepts.

But what is really the difference between a static readonly field and a constant, other than that constants can only be value types (or strings)? Constant values are baked in at compile time, as an optimization, since they are not expected to change, while static readonly values are read at runtime. This means that if you use a library in your project, the constants it declares will be incorporated into your application when you compile it. If you then swap in a different .dll of the library, you get inconsistent results: the readonly statics will have the changed values, but the constants will not.

Here is an example. In the creatively named project Library there is a Container class with a public constant ingeniously named Constant and a public static readonly field with the same value as Constant.
namespace Library
{
    public class Container
    {
        public const int Constant = 1;
        public static readonly int StaticReadonly = Constant;
    }
}

Then there is a program that uses these two values to display them:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine($"Container.Constant: {Container.Constant} Container.StaticReadonly: {Container.StaticReadonly}");
        Console.ReadKey();
    }
}

The expected output is Container.Constant: 1 Container.StaticReadonly: 1. Now change the value of Constant to 2, right click the Library project and build only it, not the program. Then take the resulting .dll, copy it into the bin folder of the program and run the program manually. The output is now... Container.Constant: 1 Container.StaticReadonly: 2, and that from code like StaticReadonly = Constant;.

Conclusion: public constants should be avoided if they are used between projects and since you don't know where they will be used, better to avoid them at all times. This will really annoy people who like to create separate classes to store constants, but that's OK, because the feeling is mutual.
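
In other words, the fix the critical rule asks for is a one-line change, keeping const only where it cannot cross an assembly boundary (field names here are my own):

```csharp
namespace Library
{
    public class Container
    {
        // Read at runtime, so a swapped .dll behaves consistently.
        public static readonly int Value = 1;

        // Still fine: private constants never leak to other assemblies.
        private const int BufferSize = 4096;
    }
}
```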

FormattableString and string interpolation

So I was watching this Entity Framework presentation and I noticed one example that looked like this:
db.ExecuteSqlCommand($"delete from Log where Time<{time}");

Was this an invitation to SQL injection? Apparently not, since the resulting SQL was something like DELETE FROM Log WHERE Time < @_p0. But how could that be? Enter FormattableString, a class implementing the venerable IFormattable interface, available in .NET Framework only from version 4.6 and in .NET Core from the very beginning. When an interpolated string is assigned to a FormattableString, it is compiled into an instance holding the format and all the values from the string before formatting. In our case ExecuteSqlCommand has a FormattableString overload. Note that the method is an extension method from RelationalDatabaseFacadeExtensions, not Database.ExecuteSqlCommand.

Let's test this with a little program:
class Program
{
    static void Main(string[] args)
    {
        var timeDisplay = new TimeDisplay();
        Test($"Time display:{timeDisplay}");
        Console.ReadKey();
    }
 
    private static void Test(string text)
    {
        Console.WriteLine(text);
    }
 
    private class TimeDisplay
    {
        public override string ToString()
        {
            return DateTime.Now.ToString("s");
        }
    }
}

Here I create an instance of TimeDisplay and then use it in an interpolated string which is then sent to the Test method, which Console.WriteLines it. The ToString method of TimeDisplay is overridden to display the current time. The result is predictable: Time display:2018-12-13T11:24:02. I will then change the type of the parameter of Test to be FormattableString. It still works and it displays the same thing. Note that if I have both a FormattableString and a string version of the same method, string will be used first when an interpolated string is sent as a parameter!

But what do I get in that instance? Let's change the Test method even more:
private static void Test(FormattableString text)
    {
        Console.WriteLine($"Format: {text.Format} " +
            $"ArgumentCount: {text.ArgumentCount} " +
            $"Arguments: {string.Join(", ",text.GetArguments())}");
    }

The displayed result of the program is now Format: Time display:{0} ArgumentCount: 1 Arguments: 2018-12-13T11:28:35. Note that the argument is in fact a TimeDisplay instance and it is displayed as a time stamp because of the ToString override.

What does this mean?

Well, we can do great things like Entity Framework does, interpreting the intent of the developer and providing a more informed output. I am considering this as a solution for logging. Logger.LogDebug($"{someObjectWithAHeavyToString}") now doesn't have to execute the ToString() method of the object unless the Debug log level is enabled, for example.
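
Such a logger is easy to sketch (this Logger is hypothetical, not an existing API): because the parameter type is FormattableString, the interpolated string's arguments are captured, but their ToString() only runs when the message is actually formatted.

```csharp
using System;

// Hypothetical logger that defers formatting until the level is enabled.
public static class Logger
{
    public static bool DebugEnabled { get; set; }

    public static void LogDebug(FormattableString message)
    {
        if (!DebugEnabled) return; // heavy ToString() calls never happen
        Console.WriteLine(message.ToString());
    }
}
```

So Logger.LogDebug($"State: {heavyObject}") pays the formatting cost only when debug logging is on.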

But we can also really mess things up. I will get past the possible yet unlikely security problem where you believe you pass an object as .ToString() and in fact it is passed as the entire object, allowing a malicious library to do whatever it wants with it. Let's consider more probable scenarios.

One is that a code reviewer will tell you to "put magic strings in their own variables or constants", so you take the string sent to Test and extract it into a local variable (which Visual Studio will create as a FormattableString), then you replace that with var (because the type is obvious, right?). Suddenly the variable is a string.

Another is even worse, although if you decided to code like this you have other issues. Let's get back to something similar to the original example:
db.ExecuteSqlCommand($"delete from Log where Id = {id}");

And let's change it:
var sql=$"delete from Log where Id = {id}";
db.ExecuteSqlCommand(sql);

Now sql is a string, its value is computed from the id, which might be provided by the user. Replace this with Bobby Tables and you got a nice SQL injection.

Conclusion: an interesting, if somewhat confusing, concept. Other than the logging idea, which I admit is pretty interesting, I am yet to find a good place to use it.

Monday, December 10, 2018

The Everything Creative Writing Book, by Wendy Burt-Thomas

The name says it all, maybe too much: "The Everything Creative Writing Book: All you need to know to write novels, plays, short stories, screenplays, poems, articles, or blogs". In this book, Wendy Burt-Thomas takes a holistic approach to writing, discussing everything from how to write poetry, children's books, blogs and technical specs to how to find an agent, self publish and so on. It covers writing techniques and editing advice, writer's block solutions, how to deal with rejection (or success, for that matter) and much more. In that regard, the book is awesome: it shows everything you might want to know a little about in order to decide what you actually choose to do. But, as in that Nicholas Butler quote, "An expert is one who knows more and more about less and less until he knows absolutely everything about nothing", the book is probably not very useful to someone who has already started working on things.

That said, the book is compact, to the point and can help a lot at the very beginning of the writer's journey. It can be used as a reference, so that whenever a particular subject or concern appears, you just flip to that chapter and see what Wendy recommends. Is it good advice? I have no idea. I've certainly read books that go more in depth about topics that interested me more, like how to write a novel or how to set up a scene, but a panoramic view of the business is not bad either. The material also felt a little dated for something released in 2010, especially in the technical sections.

You choose if you find it useful or not.

Thursday, December 06, 2018

How to build an Adapter for similar libraries in .NET

Intro


An adapter is a software pattern that exposes functionality through an interface different from the original one. Let's say you have an oven with the function Bake(int temperature, TimeSpan time) and you expose a MakePizza() interface. It still bakes at a specific temperature for an amount of time, but you use it differently. Sometimes we have similar libraries with a common goal but different scopes, and one is tempted to hide them under a common adapter. You might want to just cook things, not bake or fry.
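The oven example could be sketched like this; a toy sketch, where the temperature and time are made up:

```csharp
using System;

// The original, general-purpose interface.
class Oven
{
    public string Bake(int temperature, TimeSpan time) =>
        $"baking at {temperature} degrees for {time.TotalMinutes} minutes";
}

// The adapter: a narrow MakePizza() interface over the general Bake() one.
class PizzaOven
{
    private readonly Oven _oven = new Oven();

    public string MakePizza() => _oven.Bake(250, TimeSpan.FromMinutes(12));
}
```

The caller only sees MakePizza(); the baking details become an implementation concern.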

So here is a post about good practices of designing a library project (complete with the use of software patterns, ugh!).



Examples


An example in .NET would be the WebRequest.Create method. It receives a URI as a parameter and, based on its type, returns a different implementation that will handle the resource in the way declared by WebRequest. For HTTP it will use an HttpWebRequest, for FTP an FtpWebRequest, for file access a FileWebRequest and so on. They are all implementations of the abstract class WebRequest, which would be our adapter. The Create method itself is an example of the factory method pattern.
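In code, the factory method looks like this (the URIs are just examples; on modern .NET the API is marked obsolete but still works):

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        // The same factory method returns different implementations
        // depending on the URI scheme.
        var http = WebRequest.Create("http://example.com");
        var file = WebRequest.Create("file:///tmp/test.txt");
        Console.WriteLine(http.GetType().Name); // HttpWebRequest
        Console.WriteLine(file.GetType().Name); // FileWebRequest
    }
}
```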

But there are issues with this. Let's assume that we have different libraries/projects that each handle a specific resource scope. They may be so different as to be managed by different organizations: one team works on files, another on HTTP, and the FTP one is an open source third party library. Your team works on the WebRequest class and has to consider the implications of having a Create factory method. Is there a switch in there? "If the URI starts with http or https, return a new HttpWebRequest"? In that case, your WebRequest library would need to depend on the library that contains HttpWebRequest! And that's just not possible, since it would be a circular reference. Even if your project controlled all the implementations, it would still be a bad idea to let a base class know about a derived class. If you move the factory into a factory class, your adapter library still has to depend on every implementation of the common interface. As Joe Armstrong put it: "You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

So how did Microsoft solve it? Well, they moved the implementation of the factory into separate creator classes that implement IWebRequestCreate. Then they used configuration to associate a prefix with an implementation of WebRequest. Guess you didn't know that, did you? You can register your own implementations via code or configuration! It's such an obscure feature that if you Google WebRequestModulesSection you mostly get links to source code.
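Registering your own scheme works roughly like this; MyWebRequest here is a made-up, minimal implementation, just enough to show the mechanism:

```csharp
using System;
using System.Net;

// A minimal custom WebRequest; a real implementation would override
// GetResponse and friends as well.
class MyWebRequest : WebRequest
{
    private readonly Uri _uri;
    public MyWebRequest(Uri uri) { _uri = uri; }
    public override Uri RequestUri => _uri;
}

// The creator class, equivalent to what HTTP/FTP/file register internally.
class MyWebRequestCreator : IWebRequestCreate
{
    public WebRequest Create(Uri uri) => new MyWebRequest(uri);
}

class Program
{
    static void Main()
    {
        // Associate the "my" prefix with our creator, in code.
        WebRequest.RegisterPrefix("my", new MyWebRequestCreator());
        var request = WebRequest.Create("my://some-resource");
        Console.WriteLine(request.GetType().Name); // MyWebRequest
    }
}
```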

Another very successful example of an adapter library is jQuery. Yes, the one they now say you don't need anymore; it only took the industry 12 years to catch up, after all. Anyway, at the time there were very different implementations of what people thought a web browser should be. The way the DOM was represented, the Javascript objects and methods, the way they actually worked compared to the way they should have worked: everything was different. So developers were often either favoring one browser over the others or were forced to write code for each possible version. Something like "if Internet Explorer, do A; if Netscape, do B". The problem with this is that if you tried to use a browser that was neither Internet Explorer nor Netscape, the page would either break or show you one of those annoying "browser not supported" messages.

Enter jQuery, which abstracted access over all these different interfaces with a common (and very nicely designed) one. Not only did it have a fluent interface that allowed you to do multiple things with a single target (stuff like $('#myElement').show().css({opacity:0.7}).text('My text');), but it was extensible, allowing third parties to add modules that would allow even more functionality ($('#myElement').doSomethingCool();). Sound familiar? Extensibility seems to be an important common feature of well designed adapters.

Speaking of jQuery, one heavily used feature was jQuery.browser, which told you what browser you were running. It contained very sophisticated and complex code to get around the quirks of every browser out there. Now you had the ability to do something like if ($.browser.msie) say('OMG! You like Microsoft, you must suck!'); Guess what: the browser extension was removed in jQuery 1.9, and not because it was not inclusive. Well, that is the actual reason, but a technical one, not political correctness. You see, now you had this brand new interface that worked great on all browsers, and yet your browser still couldn't access a page correctly. It was either an untested version of a particular browser, or a different type of browser altogether, or the conditions for letting the user in were too restrictive.

The solution was to rely on feature detection, not product versions. For example, you use another Javascript library called Modernizr and write code like if (Modernizr.localstorage) { /* supported */ } else { /* not supported */ }. There are so many possible features to detect that Modernizr lets you pick and choose the ones you need and then builds a library that handles just those, instead of bundling it all in one huge package. It is itself extensible. You might ask what all this has to do with libraries in .NET. I am getting there.

The last example: Entity Framework. This is a hugely popular framework for database access from Microsoft. It abstracts the type of the database behind a very nice (also fluent) .NET interface. But how does it do that? I mean, what if I need SQL Server? What if I want MongoDB or PostgreSQL?

The way is having different "providers" to translate .NET code Expressions into whatever the storage needs. The individual providers are added as dependencies to your project, without the need for Entity Framework to know about them. Then they are configured for use in code, because they implement some common interfaces, and they are ready for use.

Principles for adapters


So now we have some idea about what is good in an adapter:
  • Ease of use
  • Common interface
  • Extensibility
  • No direct dependency between the interface and what is adapted
  • An interface per feature

Now that I wrote it down, it sounds kind of weird: the interface should not depend on what it adapts. It is correct, though. In the case of Entity Framework, for example, the provider for MySQL is an adapter between MySQL's own interface and the .NET interfaces declared by Entity Framework; interfaces are just declarations of what something should do, not implementations.

Picture time!


The factory and the common interface live together in one library, and your project will use that library. Each individual adapter depends on it as well, but your project doesn't need to know about any adapter until it is needed.

Now, it's your choice whether you register the adapters dynamically (say, you load a .dll, extract the objects that implement a specific interface, and they declare themselves what they apply to, like FtpWebRequest for ftp: URIs) or you add dependencies on individual adapters to your project and then register them manually, in a strongly typed way. The important thing is that referencing the factory library doesn't automatically force all the possible implementations into your project.
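The dynamic registration option could be sketched like this; the names (ITargetAdapter, TargetRegistry, the adapter classes) are mine, purely illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Hypothetical common interface: each adapter declares the prefix it handles.
interface ITargetAdapter
{
    string Prefix { get; }
}

class FtpAdapter : ITargetAdapter { public string Prefix => "ftp"; }
class HttpAdapter : ITargetAdapter { public string Prefix => "http"; }

static class TargetRegistry
{
    private static readonly Dictionary<string, Type> Adapters
        = new Dictionary<string, Type>();

    // Dynamic registration: scan an assembly for adapter implementations
    // and let each one say what it applies to.
    public static void RegisterFrom(Assembly assembly)
    {
        var adapterTypes = assembly.GetTypes()
            .Where(t => typeof(ITargetAdapter).IsAssignableFrom(t)
                        && !t.IsInterface && !t.IsAbstract);
        foreach (var type in adapterTypes)
        {
            var instance = (ITargetAdapter)Activator.CreateInstance(type);
            Adapters[instance.Prefix] = type;
        }
    }

    public static Type Resolve(string prefix)
        => Adapters.TryGetValue(prefix, out var type) ? type : null;
}

class Program
{
    static void Main()
    {
        TargetRegistry.RegisterFrom(typeof(Program).Assembly);
        Console.WriteLine(TargetRegistry.Resolve("ftp")?.Name);  // FtpAdapter
        Console.WriteLine(TargetRegistry.Resolve("http")?.Name); // HttpAdapter
    }
}
```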

It seems I've covered all points except the last one. That is pretty important, so read on!

Imagine that the things you want to adapt are not really that similar. You want to force them into a common shape, but there will be bits that are specific to one domain only and you might want them. Now here is an example of how NOT to do things:
var target = new TargetFactory().Get(connectionString);
if (target is SomeSpecificTarget specificTarget) {
  specificTarget.Authenticate(username, password);
}
target.DoTargetStuff();
In this case I use the adapter for Target, but then bring in the knowledge of a specific target called SomeSpecificTarget and use a method that I just know is there. This is bad for several reasons:
  1. For someone to understand this code they must know what SomeSpecificTarget does, invalidating the concept of an adapter
  2. I need to know that for that specific connection string a certain type will always be returned, which might not be the case if the factory changes
  3. I need to know how SomeSpecificTarget works internally, which might also change in the future
  4. I must add a dependency to SomeSpecificTarget to my project, which is at least inconsistent as I didn't add dependencies to all possible Target implementations
  5. If different types of Target will be available, I will have to write code for all possibilities
  6. If new types of Target become available, I will have to change the code for each new addition to what is essentially third party code

And now I will show you two different versions that I think are good. The first is simple enough:
var target = new TargetFactory().Get(connectionString);
if (target is IAuthenticationTarget authTarget) {
  authTarget.Authenticate(username, password);
}
target.DoTargetStuff();
No major change other than I am checking if the target implements IAuthenticationTarget (which would best be an interface in the common interface project). Now every target that requires (or will ever require) authentication will receive the credentials without the need to change your code.

The other solution is more complex, but it allows for greater flexibility:
var serviceProvider = new TargetFactory()
                             .GetServiceProvider(connectionString);
var target = serviceProvider.Get<ITargetProvider>()
                             .Get();
serviceProvider.Get<ICredentialsManager>()
   ?.AddCredentials(target, new Credentials(username, password));
target.DoTargetStuff();
So here I am not getting a target, but a service provider (which is another software pattern, BTW), based on the same connection string. This provider will give me implementations of a target provider and a credentials manager. Now I don't even need to have a credentials manager available: if it doesn't exist, this code does nothing. If I do have one, it will decide by itself what to do with the credentials for that target. Does it need to authenticate now or later? You don't care. You just add the credentials and let the provider decide what needs to be done.

This last approach is related to the concept of inversion of control. Your code declares intent while the framework decides what to do. I don't need to know of the existence of specific implementations of Target or indeed of how credentials are being used.

Here is the final version, using extension methods in a method chaining fashion, similar to jQuery and Entity Framework, in order to reinforce that Ease of use principle:
// your code
var target = new TargetFactory()
                  .Get(connectionString)
                  .WithCredentials(username, password);
 
 
// in a static extensions class
 
public static Target WithCredentials(this Target target, string username, string password)
{
  target.Get<ICredentialsProvider>()
        ?.AddCredentials(target, new Credentials(username, password));
  return target;
}
 
public static T Get<T>(this Target target)
{
  return target.GetServiceProvider()
           .Get<T>();
}
This assumes that a Target has a method called GetServiceProvider which will return the provider for any interface required so that the whole code is centered on the Target type, not IServiceProvider, but that's just one possible solution.

Conclusion


As long as the principles above are respected, your library should be easy to use and easy to extend, without the need to change existing code or consider individual implementations. The projects using it will only use the minimum amount of code required to do the job and will themselves depend only on interface declarations. As long as those are respected, the code will work without change. It's really meta: if you respect the interface described in this blog post, then all interfaces will be respected in the code all the way down! Only some developer locked in a cellar somewhere will need to know how things actually get done.

Sunday, December 02, 2018

The Mist

No natural phenomenon, except maybe fire, seems more alive than the mist. But while fire is young, angry, destructive, mist is an old grumpy creature, moving slowly, hiding itself in contradictions. It doesn't hide distant features as much as it reveals close ones through contrast, it doesn't absorb light as much as it lets itself glow around sources of illumination, it makes sounds crystal clear by covering the constant hum of far off noise, dense and yet immaterial, its blanket-like qualities offset by its cold embrace. Never more lifelike than when it clings to a still surface of water: the slightest gust of wind prompts annoyed tendrils, and every move of another living thing elicits mirror acts, dreamlike, half-finished motions forgotten before they even end. If mist could only remember...

Tuesday, November 20, 2018

The Things They Carried, by Tim O'Brien

book cover I started reading The Things They Carried as a recommendation for writing style and it is, indeed, a very deep personal work. Tim O'Brien writes about the Vietnam war in most if not all of his work, and this book is a collection of short stories, all brought together under the mantle of a sort of confession. It's a mosaic: each piece beautiful, but together creating the artistic vision of the true war.

I liked the subtlety, most of all. The characters are not overly complex, but they are portrayed in a very personal manner, with details that are important for the overall meaning of the book. I loved how O'Brien described soldiers going to war (instead of running away to Canada, as he almost did) because they were too embarrassed not to. Died in the war because they were afraid to die of shame. Too cowardly to run.

At just 150 pages, the book shows not how the training went, or how the shooting was, it presents everything from the viewpoint of the people there. How it takes over every feeling you have, how it changes you into this creature that is completely different from the man (or woman) who left. It's not about maneuvers or tactical prowess or strategies of survival. They are all meaningless. The important part is to keep a semblance of sanity.

The title refers to the trinkets people carry to remind them of who they are. And they carry much more: hopes, wounds, fears, diseases, the ever growing arsenal of pointless weapons and ammunition, and so on. A bit depressing, but a damn good read.

Friday, November 16, 2018

Adding logging to your objects without changing them (much)! Using MarshalByRefObject and RealProxy.

I was attempting to add very detailed logging (tracing) to my application. In order to do that, I had to change hundreds of objects. That wouldn't do. Fortunately, .NET has a nice feature called RealProxy. Let's see a quick and dirty example:
    class Program
    {
        static void Main(string[] args)
        {
            JsonConvert.DefaultSettings = () => new JsonSerializerSettings
            {
                ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
                NullValueHandling = NullValueHandling.Ignore,
                Formatting = Formatting.Indented
            };

            Test test = new Test();
            test.field = "test";
            test.Property = "Test";
            string outObject;
            string refObject = "ref";
            var result = test.Method("test1", "test2", ref refObject, out outObject);
            Console.WriteLine(JsonConvert.SerializeObject(new object[] {
                    test,
                    result,
                    refObject,
                    outObject
                })
            );
            Console.ReadKey();
        }
 
    }
 
    class Test
    {
        public string field;
        public string Property { get; set; }
        public string Method(string parameter1, string parameter2, 
                                ref string refObject, out string outObject)
        {
            if (parameter1 == null)
                throw new ArgumentNullException("Parameter one cannot be null");
            refObject += " reffed";
            outObject = "outed";
            return $"{parameter1} : {parameter2}";
        }
    }

So I defined an object with a field, a property and a method. That's what my application does. It then displays the object and the result of the method call. Here is the result:
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

I would like to log everything that happens with my Test object. Well, as such I can't do anything, I need to change the code a bit, like this:
    class Program
    {
        static void Main(string[] args)
        {
            JsonConvert.DefaultSettings = ()=>new JsonSerializerSettings
            {
                ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
                NullValueHandling = NullValueHandling.Ignore,
                Formatting = Formatting.Indented
            };

            Test test = (Test)new LoggingProxy(new Test()).GetTransparentProxy();
            test.field = "test";
            test.Property = "Test";
            string outObject;
            string refObject = "ref";
            var result = test.Method("test1", "test2", ref refObject, out outObject);
            Console.WriteLine(JsonConvert.SerializeObject(new object[] {
                    test,
                    result,
                    refObject,
                    outObject
                })
            );
            Console.ReadKey();
        }
 
    }
 
    class Test: MarshalByRefObject
    {
        public string field;
        public string Property { get; set; }
        public string Method(string parameter1, string parameter2, 
                                ref string refObject, out string outObject)
        {
            if (parameter1 == null)
                throw new ArgumentNullException("Parameter one cannot be null");
            refObject += " reffed";
            outObject = "outed";
            return $"{parameter1} : {parameter2}";
        }
    }
 
    class LoggingProxy : RealProxy
    {
        private readonly object _target;
 
        public LoggingProxy(object obj) : base(obj?.GetType())
        {
            _target = obj;
        }
 
        public override IMessage Invoke(IMessage msg)
        {
            if (msg is IMethodCallMessage methodCall)
            {
                var arguments = methodCall.Args.ToArray();
                var result = methodCall.MethodBase.Invoke(_target, arguments);
                return new ReturnMessage(
                                result, 
                                arguments, 
                                arguments.Length, 
                                methodCall.LogicalCallContext, 
                                methodCall);
            }
            return null;
        }
    }

So this is what I did above:
  • I've inherited my Test class from MarshalByRefObject
  • I've created a LoggingProxy class that inherits from RealProxy and implements the Invoke method
  • I've replaced new Test(); with (Test)new LoggingProxy(new Test()).GetTransparentProxy();

Running it we get the same result.

Time to add some logging. I will write stuff on the Console, too, for this demo. Here are the changes to the LoggingProxy class:
    class LoggingProxy : RealProxy
    {
        private readonly object _target;
 
        public LoggingProxy(object obj) : base(obj?.GetType())
        {
            _target = obj;
        }
 
        public override IMessage Invoke(IMessage msg)
        {
            if (msg is IMethodCallMessage methodCall)
            {
                var arguments = methodCall.Args.ToArray();
                string typeName;
                try
                {
                    typeName = Type.GetType(methodCall.TypeName).Name;
                }
                catch
                {
                    typeName = methodCall.TypeName;
                }
                try
                {
                    Console.WriteLine($"Called {typeName}.{methodCall.MethodName}" +
                                              $"({JsonConvert.SerializeObject(arguments)})");
                    var result = methodCall.MethodBase.Invoke(_target, arguments);
                    Console.WriteLine($"Success for {typeName}.{methodCall.MethodName}" +
                                              $"({JsonConvert.SerializeObject(arguments)}): " +
                                              $"{JsonConvert.SerializeObject(result)}");
                    return new ReturnMessage(
                                    result,
                                    arguments,
                                    arguments.Length,
                                    methodCall.LogicalCallContext,
                                    methodCall);
                }
                catch (Exception exception)
                {
                    Console.WriteLine($"Error for {typeName}.{methodCall.MethodName}" +
                                              $"({JsonConvert.SerializeObject(arguments)}): " +
                                              $"{exception}");
                    return new ReturnMessage(exception, methodCall);
                }
            }
            return null;
        }
    }

It's the same as before, but with a try/catch block and some extra Console.WriteLines. Here is the output:
Called Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
])
Success for Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.set_Property([
  "Test"
])
Success for Test.set_Property([
  "Test"
]): null
Called Test.Method([
  "test1",
  "test2",
  "ref",
  null
])
Success for Test.Method([
  "test1",
  "test2",
  "ref reffed",
  "outed"
]): "test1 : test2"
Called Object.GetType([])
Success for Object.GetType([]): "RealProxyTests2.Test, RealProxyTests2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
Called Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  null
])
Success for Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.get_Property([])
Success for Test.get_Property([]): "Test"
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

A bounty of information. The first lines are the setting of the field, Property and the execution of Method. But then there are a GetType, a field getter and a Property getter. What's that about? That's JsonConvert, serializing the Test proxy.
Warning: I've copied methodCall.Args into a local variable called arguments. At first glance that might appear superfluous, but methodCall.Args is not really a writable array of object, even if it appears that way in the debugger. It is read-only: changing any of its items has no effect.

Remember how we defined the proxy in Program.Main? Instead of that awkward cast, we can add a static helper method to our proxy class; let's call it Wrap:
    public static T Wrap<T>(T obj)
    {
        if (obj is MarshalByRefObject marshalByRefObject)
        {
            return (T)new LoggingProxy(marshalByRefObject).GetTransparentProxy();
        }
        return obj;
    }

Conclusion:

While this works, there are a series of issues related to it:
  1. RealProxy is only available for .NET Framework, not .NET Core (see DispatchProxy for that)
  2. Serialization is not as straightforward as in this demo (see my blog post about Newtonsoft serialization)
  3. You need to inherit from MarshalByRefObject, so this solution doesn't work for classes that already have a base class
  4. Performance wise, you should make sure nothing is serialized or executed unless the correct log level is set (something like logger.LogTrace(JsonConvert.SerializeObject(...)) would not work as intended, because the JSON serialization happens regardless, before LogTrace even checks the log level)
  5. Also, this wrapping is not free. With a simple proxy that did nothing but execute the code, calls took 43 times longer to run. Of course, that's because the actual work of setting a property or returning a string is basically zero; when I added a Thread.Sleep(1) in the method, the wrapped and unwrapped versions took almost the same time. Just don't use this in performance sensitive applications
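Point 4 above is worth a sketch. This is a generic illustration with a made-up minimal logger, not the proxy code itself: by accepting a Func<string> instead of a string, the expensive serialization is deferred until the level check passes.

```csharp
using System;

enum LogLevel { Trace, Info }

// A made-up minimal logger, to illustrate deferring expensive work.
class Logger
{
    public LogLevel MinimumLevel { get; set; } = LogLevel.Info;
    public bool IsEnabled(LogLevel level) => level >= MinimumLevel;

    // Taking a Func<string> instead of a string means the (expensive)
    // serialization only runs when tracing is actually enabled.
    public void LogTrace(Func<string> expensiveMessage)
    {
        if (IsEnabled(LogLevel.Trace))
            Console.WriteLine(expensiveMessage());
    }
}

class Program
{
    static void Main()
    {
        var logger = new Logger(); // minimum level is Info
        var invocations = 0;
        logger.LogTrace(() => { invocations++; return "huge JSON here"; });
        Console.WriteLine(invocations); // 0: the serialization never happened
    }
}
```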

Final thoughts: in the same namespace as RealProxy there is a ProxyAttribute class that at first glance seems even better: you just decorate a class with the attribute and BANG! instant AOP. But it's not that simple. First of all, it works only on objects that inherit from ContextBoundObject, which itself inherits from MarshalByRefObject. And while it seems like a good idea to just replace MarshalByRefObject with ContextBoundObject in the code above, know that no generic class can inherit from it. There might be other restrictions, too. If you make Test inherit from ContextBoundObject, the debugger will already show new Test() as a transparent proxy, without you wrapping it in any code. It might still be usable in certain conditions, though.

Thursday, November 15, 2018

Newtonsoft Json OutOfMemoryException

I was trying to log some stuff (a lot of stuff) and I noticed that my memory went to the roof (16GB in a few seconds), then an OutOfMemoryException was thrown. I've finally narrowed it down to the JSON serializer from Newtonsoft.

First of all, some introduction on how to serialize any object into JSON: whether you use JsonConvert.SerializeObject(yourObject, new JsonSerializerSettings { <settings> }) or new JsonSerializer { <settings> }.Serialize(writer, object) (where <settings> are some properties set via the object initializer syntax), you will need to consider these properties: Formatting, ReferenceLoopHandling, NullValueHandling and PreserveReferencesHandling.
We will use these classes to test the results:
class Parent
{
    public string Name { get; set; }
    public Child Child1 { get; set; }
    public Child Child2 { get; set; }
    public Child[] Children { get; set; }
    public Parent Self { get; internal set; }
}
 
class Child
{
    public string Name { get; set; }
    public Parent Parent { get; internal set; }
}

For this piece of code:
JsonConvert.SerializeObject(new Child { Name = "other child" }, settings)
you will get either
{"Name":"other child","Parent":null}
or
{
  "Name": "other child",
  "Parent": null
}
based on whether we use Formatting.None or Formatting.Indented. The other properties do not affect the serialization (yet).

Let's set up the following objects:
var child = new Child
{
    Name = "Child name"
};
var parent = new Parent
{
    Name = "Parent name",
    Child1 = child,
    Child2 = child,
    Children = new[]
    {
        child, new Child { Name = "other child"}
    }
};
parent.Self = parent;
child.Parent = parent;

As you can see, not only does parent have multiple references to child and one to himself, but the child also references the parent. If we try the code
JsonConvert.SerializeObject(parent, settings)
we will get an exception Newtonsoft.Json.JsonSerializationException: 'Self referencing loop detected for property 'Parent' with type 'Parent'. Path 'Child1'.'. In order to avoid this, we can use ReferenceLoopHandling.Ignore, which tells the serializer to ignore circular references. Here is the output when using
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    ReferenceLoopHandling=ReferenceLoopHandling.Ignore
};

{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child",
      "Parent": null
    }
  ]
}

If we add NullValueHandling.Ignore we get
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child"
    }
  ]
} 
(the "Parent": null bit is now gone)

The default for the ReferenceLoopHandling property is ReferenceLoopHandling.Error, which throws the serialization exception above, but besides Error and Ignore we can also use ReferenceLoopHandling.Serialize. In that case we get a System.StackOverflowException: 'Exception of type 'System.StackOverflowException' was thrown.' as it tries to serialize ad infinitum.

PreserveReferencesHandling is rather interesting. It creates extra properties for objects like $id, $ref or $values and then uses those to define objects that are circularly referenced. Let's use this configuration:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Objects
};

Then the result will be
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": [
    {
      "$ref": "2"
    },
    {
      "$id": "3",
      "Name": "other child"
    }
  ],
  "Self": {
    "$ref": "1"
  }
}

Let's try PreserveReferencesHandling.Arrays:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    ReferenceLoopHandling=ReferenceLoopHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Arrays
};

The result will then be
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": {
    "$id": "1",
    "$values": [
      {
        "Name": "Child name"
      },
      {
        "Name": "other child"
      }
    ]
  }
}
which annoyingly adds an $id to the Children array. There is one more possible value, PreserveReferencesHandling.All, which causes this output:
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": {
    "$id": "3",
    "$values": [
      {
        "$ref": "2"
      },
      {
        "$id": "4",
        "Name": "other child"
      }
    ]
  },
  "Self": {
    "$ref": "1"
  }
}

I personally recommend using PreserveReferencesHandling.Objects, which doesn't need setting the ReferenceLoopHandling property at all. Unfortunately, it adds an $id to every object, even if it is not circularly defined. However, it creates an object that can be safely deserialized back into the original, but if you just want a quick and dirty output of the data in an object, use ReferenceLoopHandling.Ignore with NullValueHandling.Ignore. Note that object references cannot be preserved when a value is set via a non-default constructor such as types that implement ISerializable.

Warning, though: this is still not enough! In my logging code I had used ReferenceLoopHandling.Ignore, and the exception was quite different, an OutOfMemoryException. It seems that even with circular references checked, JsonSerializer will still mess up sometimes.

The culprits? Task<T> (or async lambdas sent as parameters) and an Entity Framework context object. The solution I employed was to check the type of the objects I send to the serializer and, if they were one of the offending types, replace them with the full names of their types.
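That workaround might look something like the sketch below. The helper name and the exact type checks are my own assumptions (in particular, the EF context is detected via its namespace prefix, which assumes EF6); the original code is not shown in the post.

```csharp
using System;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class SafeLogger
{
    // Types that were observed to make JsonSerializer choke (assumed checks).
    private static bool IsOffending(object value) =>
        value is Task            // covers Task and Task<T>
        || value is Delegate     // async lambdas passed around as parameters
        || value?.GetType().FullName?.StartsWith("System.Data.Entity.") == true; // EF6 context (assumption)

    public static string ToLogString(object value)
    {
        if (value == null) return "null";

        // Replace problematic objects with just their type name...
        if (IsOffending(value)) return value.GetType().FullName;

        // ...and serialize everything else with the lenient settings.
        return JsonConvert.SerializeObject(value, new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            NullValueHandling = NullValueHandling.Ignore
        });
    }
}
```

A fuller version would also walk the properties of composite objects, since an offending instance nested inside an otherwise harmless parameter triggers the same problem.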

Hope it helps!

Monday, November 12, 2018

Wix (Windows Installer) changing the color of a control

This is a simple gotcha related to changing the color of a control. Let's say you have a label that you want to present in a different color. Normally you would do something like this:
<!-- I put it somewhere in Product.wxs -->
<TextStyle Id="WixUI_Font_Normal_Red" FaceName="Tahoma" Size="8" Red="255" Green="55" Blue="55" />
 
<!-- somewhere in your UI -->
<Control Id="LabelRed" Type="Text" X="62" Y="200" Width="270" Height="17" Property="MYPROPERTY">
     <Text>{\WixUI_Font_Normal_Red}!(loc.MYPROPERTY)</Text>
</Control>

Yet for some reason this doesn't work when the control is, for example, a checkbox. The simple explanation is that this is by design: only Text controls can change color. The solution is to split the control in two: the checkbox itself with no text, plus a separate Text control next to it styled with the color you need.

Here is an example of a checkbox whose label changes color based on the check value:
        <Control Id="DoNotRunScriptsCheckbox" Type="CheckBox" X="45" Y="197" Height="17" Width="17" Property="DONOTRUNSCRIPTS" CheckBoxValue="1"/>
 
        <Control Id="DoNotRunScriptsLabel" Type="Text" X="62" Y="200" Width="270" Height="17" CheckBoxPropertyRef="DONOTRUNSCRIPTS">
          <Text>!(loc.DoNotRunScriptsDescription)</Text>
          <Condition Action="hide"><![CDATA[DONOTRUNSCRIPTS]]></Condition>
          <Condition Action="show"><![CDATA[NOT DONOTRUNSCRIPTS]]></Condition>
        </Control>
 
        <Control Id="DoNotRunScriptsLabelRed" Type="Text" X="62" Y="200" Width="270" Height="17" CheckBoxPropertyRef="DONOTRUNSCRIPTS">
          <Text>{\WixUI_Font_Normal_Red}!(loc.DoNotRunScriptsDescription)</Text>
          <Condition Action="hide"><![CDATA[NOT DONOTRUNSCRIPTS]]></Condition>
          <Condition Action="show"><![CDATA[DONOTRUNSCRIPTS]]></Condition>
        </Control>