
Wednesday, September 02, 2015

FreedomTM, by Daniel Suarez

FreedomTM is the sequel, or rather the second part, of Daniel Suarez's Daemon, which I've reviewed previously. While Daemon spooked me with its realism, FreedomTM does away with all that and changes pace, scope and plot alike. I guess Suarez had this in his head from the very beginning, but I didn't see it coming. Be warned: if you have not read Daemon, this review is going to contain some serious spoilers.

You see, from a technological thriller the book shifts straight into socio-economic commentary, and from a dumb AI engine that treats the world as a computer game we get an Agent Smith meets Emperor of Dune kind of entity, one that recognizes humanity as the scourge it is and assumes the role of its solution. Suspension of disbelief is almost impossible as you watch "the good guys" surviving death (repeatedly), the bad guys being bad just because they can and being defeated by deus ex machina solutions, and technology solving every problem humanity ever had or could have. FreedomTM is the software developer's wet dream, where the algorithm that rules all other algorithms is not only possible, but implemented and bug free.

That doesn't mean that the book is bad. Far from it. I liked it a lot. However, compared with Daemon, it's like an American blockbuster's cop-out from a situation that is dramatic and full of tension: everything is going to be all right. Instead of maintaining the tension and keeping the reader on the edge of the seat, so to speak, everything gets explained in the first part of the book and the rest is dedicated to epic conflict. Oh, and to some completely unnecessary and quite hard to believe romance. In fact, quite paradoxically, I suggest you do not read FreedomTM immediately after Daemon. Instead, live with the daemon inside your head, let it make you think about possibilities and wonder about what could be coming next; then, maybe, read the second part.

Friday, August 28, 2015

How to translate T-SQL Datetime2 to Float or Real

The new Datetime2 data type introduced in Microsoft SQL Server 2008 has several advantages over the old Datetime type. One of them is a precision of 100 nanoseconds rather than coarse milliseconds; another is that it has a larger date range. It has disadvantages, too, like the difficulty of translating it into numerical values. It was a classic hack to CONVERT/CAST a Datetime into a Float in order to get a numerical value that you could manipulate (like truncating it to an integer to get the date without the time, which is now accomplished by converting it to Date, another type introduced in SQL Server 2008). There are many reasons why one might need to translate a datetime into a numerical value; I won't get into them here. So here is how to convert a Datetime2 value into a Float.

First solution:
DATEDIFF(SECOND,{d '1970-01-01'}, @Time)+DATEPART(nanosecond,@Time)/1.0E+9
- returns a value in seconds, with nanosecond precision, from the beginning of the year 1970. Advantage: simple to use and understand. Disadvantage: the result is not comparable with the classic Datetime-to-Float conversion, which counts days from 1900, not seconds from 1970.

Second solution:
DATEDIFF(DAY,{d '1900-01-01'}, @Time)+DATEPART(HOUR,@Time)/24.0+DATEPART(MINUTE,@Time)/(24.0*60)+DATEPART(SECOND,@Time)/(24.0*60*60)+DATEPART(nanosecond,@Time)/(24.0*60*60*1.0E+9)
- returns a value that is similar to the float conversion of a datetime. Advantage: doesn't lose precision like converting to a Datetime and then to Float. Disadvantage: look at the length of that!

Final solution:
25567+(DATEDIFF(SECOND,{d '1970-01-01'}, @Time)+DATEPART(nanosecond,@Time)/1.0E+9)/86400.0
- combines the two solutions above. It is easy to read and understand. It computes the number of seconds (with the nanosecond fraction) since 1970, divides by 86400 to get the number of days, then adds 25567, which is the number of days between 1900-01-01 and 1970-01-01.
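The arithmetic behind the final solution can be sanity-checked outside SQL Server. Here is a small Python sketch (Python only because T-SQL is awkward to run standalone; the function names are mine, not part of any SQL API) that mirrors the long second formula and the compact final formula and confirms they agree, including the 25567-day offset:

```python
from datetime import datetime

EPOCH_1900 = datetime(1900, 1, 1)
EPOCH_1970 = datetime(1970, 1, 1)

def datetime2_to_float(t: datetime) -> float:
    """Mirror of the final T-SQL solution: seconds since 1970
    (fractional part included), divided by 86400, plus 25567 days."""
    seconds_since_1970 = (t - EPOCH_1970).total_seconds()
    return 25567 + seconds_since_1970 / 86400.0

def datetime2_to_float_long(t: datetime) -> float:
    """Mirror of the long (second) T-SQL solution: whole days since 1900
    plus separate hour/minute/second/sub-second fractions.
    Python datetimes stop at microseconds, so that is the precision here."""
    days = (t.date() - EPOCH_1900.date()).days
    return (days
            + t.hour / 24.0
            + t.minute / (24.0 * 60)
            + t.second / (24.0 * 60 * 60)
            + t.microsecond / (24.0 * 60 * 60 * 1.0e6))

# 25567 really is the day count between the two epochs,
# and 1900-01-01 maps to 0.0, matching the Datetime zero point.
assert (EPOCH_1970 - EPOCH_1900).days == 25567
assert datetime2_to_float(EPOCH_1900) == 0.0
```

Both functions agree to within floating-point error for any timestamp, which is exactly the equivalence the final solution rests on.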

Wednesday, August 26, 2015

The story of the LEFT JOIN in T-SQL (and why I learned to love the NOT EXISTS)

As a software developer - and by that I mean someone writing programs in C#, Javascript and so on, and only occasionally using databases when something needs to be stored somewhere - I have an instinctive fear of the Arrow Anti-pattern. Therefore I really dislike constructs like NOT EXISTS(SELECT 1 FROM Something). However, recent tests have convinced me that this is the best solution for testing for existing records. I am not going to reinvent the wheel here, so let me just summarize the findings:

Let's say you want to insert into a table all records from another source that do not already exist in the table. You have several options, but the two most commonly used are:
SELECT SourceTable.*
FROM   SourceTable
LEFT JOIN DestinationTable
ON     SomeCondition
WHERE  DestinationTable.Id IS NULL
and
SELECT SourceTable.*
FROM   SourceTable
WHERE  NOT EXISTS(SELECT 1 FROM DestinationTable WHERE SomeCondition)

Personally I prefer the first version, for readability reasons as well as having had the mantra "Never do selects in selects" drilled into me all my life. However, it turns out that the second version is a lot more efficient. The simple reason is that for the first query Microsoft SQL Server will first join the two tables in memory and only then filter. If multiple combinations of records satisfy the join condition, this results in huge memory and CPU usage, especially if you have no indexes defined and, sometimes, precisely because you have some indexes defined. The second option uses NOT EXISTS, one of the few constructs guaranteed to short-circuit: it disqualifies a record at the very first match and stops scanning.

Other options involve using the EXCEPT or INTERSECT operations in SQL, but they are not really helping. Intersecting ids, for example, then inner joining with SourceTable works, but it is somewhere in between the two solutions above and it looks like crap as well. Join hints don't help either.
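The size of that intermediate join result is easy to see with a toy sketch (Python rather than T-SQL, and of course SQL Server's engine is far more sophisticated than nested lists; the point is only how many rows the join materializes before filtering, versus the early exit of the existence test):

```python
# Toy illustration of why the anti-semi-join (NOT EXISTS) beats
# join-then-filter when the join condition matches many rows.
source = [1, 2, 3, 4]
destination = [2, 2, 2, 3]  # duplicate matches on purpose

# LEFT JOIN ... WHERE Id IS NULL: materialize every matching pair
# first, then keep only the source rows that produced no pair at all.
pairs = [(s, d) for s in source for d in destination if s == d]
matched = {s for s, _ in pairs}
left_join_result = [s for s in source if s not in matched]

# NOT EXISTS: probe the destination and stop at the first match
# (any() short-circuits), never building the intermediate pair list.
not_exists_result = [s for s in source if not any(s == d for d in destination)]

assert left_join_result == not_exists_result == [1, 4]
assert len(pairs) == 4  # the join built 4 intermediate rows (3 just for id 2)
```

Same answer both ways, but the join path paid for every duplicate match before discarding them all.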

Friday, August 21, 2015

How to OUTPUT columns that you did not INSERT in T-SQL

The OUTPUT clause is a very useful tool in Microsoft SQL, allowing for getting automatically inserted columns in the same command as the INSERT. Imagine you have a table with an identity column and you need the generated ids as you insert new records. It would look like this:
CREATE TABLE MyTable 
(
     Id    INT IDENTITY(1, 1) PRIMARY KEY,
     Value NVARCHAR(100) 
)

CREATE TABLE AnotherTable 
(
     Value               NVARCHAR(100), 
     AnotherValue        NVARCHAR(100), 
     SomeConditionIsTrue BIT 
)

CREATE TABLE #ids 
(
     Id           INT,
     AnotherValue NVARCHAR(100)
)

INSERT INTO MyTable (Value) 
OUTPUT inserted.Id INTO #ids (Id) 
SELECT Value 
FROM   AnotherTable 
WHERE  SomeConditionIsTrue = 1 

-- Do something with the inserted Ids 

However, what do you do if you want to also insert the column AnotherValue to the #ids table? Something like this does not work:
INSERT INTO MyTable (Value) 
OUTPUT inserted.Id, AnotherTable.AnotherValue INTO #ids (Id, AnotherValue) 
SELECT Value 
FROM   AnotherTable 
WHERE  SomeConditionIsTrue = 1 

Enter the often ignored MERGE, which can help us translate the query above into:
MERGE INTO MyTable
USING (
    SELECT Value, AnotherValue
    FROM   AnotherTable
    WHERE  SomeConditionIsTrue = 1 
) t ON 1=0 --FALSE, so the MERGE never matches
WHEN NOT MATCHED THEN
    INSERT (Value) VALUES (t.Value)
OUTPUT Inserted.Id, t.AnotherValue INTO #ids (Id, AnotherValue);

Note the 1=0 condition so that the merge never "matches" and how the select from the first query now contains all the columns needed to be output, even if only some of them are inserted in the insert table.

This post was prompted by a StackOverflow answer that, as great as it was, didn't make it clear what to do when you get your values from a more complicated select. The answer is simple: put it all in the 'using' table.

Thursday, August 20, 2015

No Longer Human (Ningen Shikkaku), by Osamu Dazai

I read this short novel from start to end in under a day. Osamu Dazai writes from the point of view of a sociopathic young man who cannot seem to understand the human condition and fears all the people around him, mostly because he expects to be found out at any moment. The title of the book can be translated in several ways: the English one relates to the protagonist's feeling of losing his humanity, while the literal translation reads as "disqualified from being human", implying a societal judgement. Imagine a Japanese version of The Stranger, by Albert Camus, and you get a good picture of the plot and feel of the book. Both books were written in roughly the same period, but while Camus probably imagined his character, many believe Dazai was writing about himself - he committed suicide soon after.

No Longer Human is the second best rated Japanese book and has been adapted into both film and manga. It is difficult to imagine those adaptations surpassing the dry accounting of the inner turmoil of the character, who starts as a little boy devising "clowning" as a method of passing the test of humanity: outwardly fun and good natured, inwardly terrified of being discovered as a fraud and punished by the society of strange human beings he cannot understand or empathize with. I highly recommend it.