Monday, May 30, 2016

Bookmark Explorer is now stable.

Bookmark Explorer, a Chrome browser extension that allows you to navigate inside bookmark folders on the same page, saving you from a deluge of browser tabs, has now reached version 2.4.0. I consider it stable, as I have no new features planned for it and the only changes I envision in the near future are switching to ECMAScript 6 and updating the unit tests (in other words, nothing that concerns the user).

Let me remind you of its features:

  • lets you go to the previous/next page in a bookmark folder, allowing sequential reading of selected news or research items
  • has context menu, popup buttons and keyboard shortcut support
  • shows a page with all the items in the current bookmark folder, allowing selection, deletion, importing/exporting of simple URL lists
  • shows a page with all the bookmarks that were deleted, allowing restoring them, clearing them, etc.
  • keyboard support for both pages
  • notifies you if the current page has been bookmarked multiple times
  • no communication with the Internet - it works just as well offline, assuming the bookmarked links themselves work offline, like local files
  • absolutely free
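For the curious, the core of the previous/next feature can be sketched roughly like this. This is a minimal illustration, not the extension's actual code: in a real extension the `items` array would come from the `chrome.bookmarks` API (e.g. `chrome.bookmarks.getChildren`), and the function name is purely hypothetical.

```javascript
// Hypothetical sketch of prev/next navigation within a bookmark folder.
// `items` is a list of bookmark nodes; folders have no `url` property,
// so they are filtered out before locating the current page.
function getSiblingBookmarks(items, currentUrl) {
  const urls = items.filter(item => item.url).map(item => item.url);
  const index = urls.indexOf(currentUrl);
  if (index === -1) {
    // Current page is not a bookmark in this folder.
    return { prev: null, next: null };
  }
  return {
    prev: index > 0 ? urls[index - 1] : null,
    next: index < urls.length - 1 ? urls[index + 1] : null
  };
}
```

The extension itself would then navigate the current tab to `prev` or `next` when the corresponding button, context menu item or keyboard shortcut is used.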

Install it from Google's Chrome Web store.

Sunday, May 29, 2016

The problem with competition

It became obvious to me that one of the most popular and common ways of "winning" consists in changing the definition of what that means. See capitalism for example, boasting that a group will benefit if each of its members attempts to improve their lives. The "winners" will pull everything up and will expand while the "losers" will just fade gently into the background. Allegedly, the greatest demonstration of this is the victory of capitalism over socialism and communism. But it's all a fallacy, as their reasoning can be translated as follows: "We measure success in capital, others don't. In the end, we have more capital, so we win". It's not who is better, but how you ultimately define "better". I find it disturbing that an economic model that attempts to optimize happiness has not emerged at any point in history.

This happens not only at a macro level, between countries or economic systems; it happens between people as well. "Successful people" proudly announce their recipe for success to people who wouldn't really consider that a good thing. See people that cheat and corrupt and kill to "get ahead". One might covet their resources or power status, but how many of the "losers" would actually condone their behavior, take the same risks or appreciate the situation one gets when employing such tactics? The same applies to heroes. We want to save the world, but we are more afraid of trying and failing. Heroes push past that fear, maybe not out of courage, but because that is their set goal.

Yet competition is the engine of evolution. Doesn't that prove competition is the solution? I say not. Look at the successful animals in nature: they are perfect for their niche. Crocodiles spend huge amounts of time motionless just beneath the surface of the water only to jump and snatch their prey when it comes to the watering hole; cheetahs are faster than anything with legs, catching their prey in a matter of minutes; sharks roam the water, peerless in their domain. And yet all of these creatures are far from perfect. They age, they get sick, they don't build anything lasting longer than their own lives, their only legacy is offspring just as flawed as they are. And guess what? All of these creatures are dwindling because of humans: weak, pathetic, inoffensive hairless monkeys who can achieve more than any others just by banding together and sharing their resources and their results. If competition were the ultimate solution, there would be a creature that is strong, tough, intelligent and immortal. Yet there isn't one.

I submit that competition is great only if two conditions are fulfilled: a) you have the ability to evolve, to improve; b) there is someone better or at least equal to compete against. If b is not available, complacency will turn competitiveness towards the weak. Instead of getting better, you will stop others from getting to where you are. It's a simple application of available force. If a is missing, you will be the one that a stronger competitor stifles. And yet, what I am describing is not competition, but having a purpose. Behind the appearance of competition, when you try to catch up with someone better, you actually set a goal for yourself, one that is clearly defined. It is irrelevant whether the target is a person or whether they even consider themselves in competition with you. One might just as well choose an arbitrary goal and improve oneself by reaching it.

Why am I writing about this? For several reasons.
One is simply to make evident that if you envy someone for their success, it is either because you can't possibly get there or because you won't - you have determined that taking that path would take away something you value more. Comfort, for example. People envy the position of others, but are basically not prepared to make the effort required to get there. Yet laziness doesn't disappear. Why? Because one reaches a goal after many attempts and failures, not in a straight line, and once someone has gotten there, it is much easier to follow their path sans the potholes, the setbacks and the mistakes.
Another is to show that the purpose defines the path, not the other way around. Setting a goal defines both success and failure, and that is why many people with responsibility prefer not to set one. However, without a goal, people just stagnate, go around in circles. Look at space exploration: each successive US administration comes up with another idea, abandoning what their predecessors did, going nowhere. When did they do anything that mattered? When they had a clear goal of doing better than the Russians. If someone were to go and colonize Titan and start living there, they wouldn't find it so expensive and pointless to go to the Moon, asteroids and Mars. Without someone to do that, though, they don't do anything.

Laziness is in our nature. Evolution is lazy. Competition is ultimately lazy. You can get comfortable in your lead, while occasionally shooting other racers in the foot when they get close enough. The opposite of laziness is not work, but direction. Once you set a goal, you know how far you have to go and how fast you are getting there. A group benefits more when all its members work towards a common goal. Funnily enough, in such group scenarios competition between members is often cancerous. I also find it amusing that there is always someone better or at least equal to compete against: yourself.

Friday, May 27, 2016

Comments switched to Disqus

I have been using Disqus as a commenter for some time and yes, it is a bit bloated and yes, it writes all kinds of errors in the console, but it has one big advantage: all your comments and replies are in a single place. When you go to a Disqus-enabled web site you see a warning that you have unread messages. So from now on, my blog is using Disqus as its comment engine. While doing this I've also updated some layout and code, so let me know if anything is wrong.

So there it is. Tell me what you think! Preferably in the comment section.

Liu Cixin: Devourer, The Three Body Problem and The Dark Forest

Liu Cixin is Chinese, which makes reading his work not only a pleasant science fiction pastime, but also an intercultural experience. That is because Chinese people are weeeeird :). Just kidding, but it does make for an interesting experience. Even with a good English translation, the mindset behind the writing is clearly different from that of the Western writers I usually read.

I read Devourer first, a short story, to see if I enjoyed the writing style, then I read the first two books in the Remembrance of Earth's Past series: The Three-Body Problem and The Dark Forest. It is clear that the author likes to think big; some have even compared him to Arthur C. Clarke. In both stories Earth comes into contact with alien species which are vastly superior and devastatingly indifferent to the fate of the human race. While Devourer really reads like a Chinese story, you know, with the emperor and the adviser and so on, it retains the same fear of irrelevance as the huge books in the trilogy.

To me it felt like The Three-Body Problem was more accessible, while The Dark Forest has a change of pace and style, but that very well may be because of the translator. It was easier to imagine scenes from Asian movies - with people shouting hysterically at each other to prove their loyalty to one group or the other and "generals" and all that jazz - while reading the second book than the first. Even so, throughout the reading I sometimes had a weird feeling of wrongness when things happened a certain way because the protagonists were Chinese. Yet this was not relevant to the story or the enjoyment of the books. Also, many Chinese cultural references were both instructive and eye-opening. As an example, The Three-Body Problem starts in the middle of the Chinese Cultural Revolution, which is just as appalling as, if not more so than, the Nazi era.

I cannot talk about the stories without spoiling them too much, so I won't. Enough to say that they are all hard sci-fi, even if some of the elements there are not so scientifically accurate. Clearly for Liu Cixin the story took precedence over high technology, which is a good thing.

The third book in the trilogy, Death's End, is allegedly due in September 2016 from Tor Books. However, I have mixed feelings about it. The story almost ended with the second book. Do I really care what happens next? Will it be relevant, or did the typical three-book publishing deal force the author's hand? There are some questions that remain unanswered and I would be glad to see them clarified in this upcoming book, but will they be enough to flesh out a great story?

Wednesday, May 25, 2016

Trolling on the Moon - a short story

When Bzolrlg was hungry, he was also angry. In fact, just as their names are incomprehensible and often change in time, troll emotions were out of whack as well, so if trolls had any interest in cataloging feelings they would have one called hanger, combining their two most natural states: hunger and anger. Blzorg did not have an interest in emotions, though, he just wanted to eat, and the microwave oven was cooking the food too slowly. With typical troll logic, he decided to hurry up the process by bashing it with one giant hairy hand. He called them both Harry and it doesn't really matter which one it was, anyway. The important part is that the device was smashed like the cheap orcish knockoff it was.
It so happened that the distorted shape of the oven serendipitously concentrated microwaves from one end to the other, creating enough force to generate thrust and push the entire wreck up in the air. Blzarg decided to hang on to his food, which meant hanging on to the microwave, which meant going up as well. The force generated by the tiny magnetron should not have been enough to lift the entire oven and the giant creature holding it - not without propellant anyway - but the strength of the troll's blow didn't only deform the interior shape of the microwave, it also tore a wire loose, thus generating enough computational error to allow not only for the force, but also for its conservation even after the power cable was torn from the wall socket.
Thus, Blozrg went through his hut's roof, up into the atmosphere, further up, reached space and continued up until up made no sense and further still, until up became down again and he crashed into the Moon. Bzorlg survived - trolls are sturdy like that - but his food didn't. His hanger overwhelmed him, causing widespread, devastating damage to the population of N'na'vi living there. In fact, he just continued in a straight line, punching and smashing until he reached the same spot where he had landed, and then he continued on anyway. Meanwhile, the inhabitants, overly confused by the entire incident, decided to place food on the line the troll rampaged on, so that he wouldn't feel the need to change direction. This, incidentally, explains why the Moon has a ring of dust when you look up at it.

The general council of the N'na'vi held an emergency meeting after the confusion turned into acceptance. They needed to understand what had happened, which was, by all their knowledge, impossible. First of all, the Earth could not sustain life. It was mostly blue and the N'na'vi, divided as they usually are, were united in hating the toxic color. It was one of the reasons why they lived on the other side of the Moon, so they didn't suffer all kinds of ailments having to look at the horror in their sky. Obviously, living on the surface would be impossible. But even if they could have conceived of creatures that would withstand the color blue, surely they would have been destroyed by the layer of corrosive atmosphere containing oxygen or drowned by the huge quantities of water in it or squashed by its enormous pressure. In the old days there was something called religion which posited that all bad N'na'vi would go there to suffer eternal blue torment, but reason had since triumphed and such preposterous beliefs were beneath even a child. It was as ridiculous as believing life was possible on Neptune!
The conclusion was obvious: the troll, if it even existed, was not alive but a natural phenomenon, akin to the cloud of disintegrating comets that was always changing the planet Earth. Once every few decades curiosity got the better of the N'na'vi and they sacrificed some of their scientists, forcing them to look at the planet with a telescope. The changes were always great, so great in fact, that the logical explanation seemed terribly improbable. However, using N'hair's black hole theorem, it was proven that it was the only one: cosmic impacts were continuously reshaping the Earth, probably helped by the corrosive atmosphere, causing not only the weird structured shapes they observed - some seeming to move for a long time before stopping due to the energy of the impact - but also the massive changes in atmospheric particulates and global temperature.
Even so, something had to be done regarding this land orbiting troll phenomenon, which meant scientists would need more data. They already had Small Data and Big Data, but in this case they needed more, as clearly what they had was not enough for the massive computational machines of the N'na'vi, so they decided to organize an expedition to the planet Earth.
Clearly, it would have been too expensive - and blue - to send real N'na'vi on the planet, so they started constructing a fake N'na'vi, one that could withstand the air and the water and even be able to destroy pesky comet fragments threatening it. It wouldn't have worked on the troll, naturally, since as far as they knew, he might have been indestructible and they couldn't risk damaging the Moon. They christened the fake N'na'vi as N'N'na'vi, because even if it was, it wasn't. They worked as hard as they could, yet the N'N'na'vi still hated the color blue, so they had to dismantle its eyes. And therein was the problem: how could their machine tell them what was going on down there without images? In a rush, they decided to install two spectrometers instead of its eyes: a near Infrared one and an X-ray spectrometer. Thus, N'N'na'vi could determine the composition of items on Earth.

Leaving the Moon was relatively easy, all you had to do was jump high enough. Landing on Earth was a problem, but N'N'na'vi was sturdy. The inhabitants of the Moon had studied the troll and had built a skin analog for their explorer. The radiation belts around the planet were a much bigger issue, since they knew not if they would affect N'N'na'vi. Fortunately, science being so very advanced on the Moon, they had also studied the belts for a long time and they had discovered a way through: all they had to do was unbuckle the belt as they went through, making sure to buckle it back when safe. It was a very risky proposition, though, as any failure could leave the belts unbuckled, free to fall away from Earth and let the blue escape, hitting the Moon. It was such a large risk that the mission almost didn't go through. Yet a courageous N'na'vi scientist, the only living survivor of previous Earth surveys, wearing a patch over the eye he had used to look upon the planet, spoke up. Only a dozen had ever gazed upon Earth and most had succumbed to the terrible color.
"How can we be sure that another troll won't arrive on our world? Maybe an even bigger, meaner one? One that could bring an end to the N'na'vi. You know why The Man on the Moon is gone? Because he couldn't make a bit of a difference, even if he had known of the terrible fate awaiting him. He didn't have a space N'N'na'vi, that is why! We need to find and catalog all the trolls, at least the big dangerous ones, before we end up like him!"
The speech was inspiring and so the project moved on to the launch phase. N'N'na'vi jumped and headed towards Earth. It would have taken around two days to get there, plenty of time to observe the planet as it approached, yet misfortune made it so that an Elven rocket stumbled onto the same trajectory as the exploring machine. Mistaking it for a cometary fragment, the N'N'na'vi destroyed it, thus causing widespread panic on Earth.

Elondriel stood up in the council room and calmly, coolly, yet with an occasional weird nervous laughter that expressed the strongest elven emotion there was - slight annoyance, started speaking. He spoke to the people gathered hastily to address the issue of the invading space fleet that had destroyed a rocket, but started with a joke. He glided to the middle of the room, in that slightly lilting way elves use to declare the inner energy they choose to restrain out of politeness and civilized social responsibility, he looked at Bazos the dwarf and said the words "at first I thought the dwarves might have something to do with this". He laughed the small laugh and continued: "But they couldn't possibly have gone beyond geosynchronous orbit". The dwarves threw poisoned looks at the speaker while pretending to smile, as befitting their natural competition against elves in all things technical.
"However, it seems that the threat is, indeed, extraterrestrial. Therefore I believe it is obvious that the rocket was not destroyed as an attack against my company, but against the entire planet. So while I will certainly need to be compensated for the loss, the countermeasures should be taken by all of us, as a united front, for if we don't act now against this threat that apparently originated on the Moon we are all in danger. Elves, dwarfs, humans... even dark elves," he said, glancing toward the group of secretive dark-haired people conferring in a corner, slanted eyes betraying their deep suspicion of the speaker, "we all must band together and fight back!"
"A preposterous idea," jumped the human representative. "There is no water on the Moon. How could anything but the most primitive life survive there... in that vast desolate gray desert?".
"I only said it appears to have originated on the Moon," continued the elf, "but surely it must have come from further away. Probably Mars. I believe we need to go there immediately! But first, let's decide how to manage the threat coming from this space machine"
The troll representative interjected in hanger: "Members of the council, the solution should be obvious to you all: we should nuke it!", causing everybody to speak at the same time, being either strongly for or strongly against it. It so happened that the crisis was unfolding at the same time when a very rare yet powerful mineral had been discovered, promising fission bombs that would dwarf in power - pun not intended - even fusion bombs. An unopinium nuclear explosion would certainly have been enough to destroy the alien invader, but the polarizing radiation emitted by the material made any consensus on the issue almost impossible.
An ent raised a branch, focusing attention on her large wooden body. She was an olive ent and the representative of CGI, the union of lesser races. Immediately she tried to suggest a diplomatic approach: "Perhaps instead of jumping to hasty reactions alternatives should be considered. Surely a committee of the Union races could appoint a group of highly specialized experts to a research center that would analyse the ... errr... entity and propose communication solutions to be then discussed in the Middle Earth Commission". This stopped everybody in their tracks, causing all to think hard upon the ent's words. It took them several minutes to understand what was actually said and some more to try to figure out what the words meant.
"You mean talk to it?", a vampire interrupted the silence. All present knew of the provincial directness of races living near the dark forests of Transylvania, but they all felt a bit offended at the curtness of the sentence. In civilized high council meetings, phrases needed not only weight and gravitas, but length as well. Otherwise, who would take them seriously? Yet vampires were known for their ability to find solutions where others did not. Hidden in dark places, away from the light of the sun, they devised ingenious things that profited many. And all they requested in return was blood, which was always enough and cheap to boot. "Yes, I did mention the word communication, didn't I?", replied the ent, olive branches all crossed over her trunk.
"You are so right, of course, madam representative," the human spoke again, "but we must consider the budget. Nuking the intruder is certainly cheaper than talking to it, not to mention faster. Communication with the alien is the responsibility of SETI - perhaps not even them, since their purview is searching for extraterrestrial intelligence, not actually communicating with it. Their budget is limited and we need to discuss in congress if we want to increase it. Defense, on the other hand, has enough budget and discretion - since per the Constitution everything is under their purview". "The Human Constitution," he added hastily when eyes suddenly turned dark towards him, making sure his words conveyed the big capital letters he meant. "Wait, what are you typing there?" he asked the vampire, alarmed.
"You said she was right, so I sent a ping to the alien craft using the protocols of the emergency radio broadcast system. Something like 'Hi!'", the vampire replied. Vampires were not considered a separate race, since originally they had started as humans, but not really human either. However, that meant that the human representative was responsible for what the hacker did. Red-faced, he asked in anger, "How did you know the secret code required to activate the broadcast system?!"
"Oh, that wasn't an issue. Defense generals kept forgetting the password so they reset everything to eight zeroes. Even nuclear launch codes!". The troll made half a move to enter the conversation, but thought differently once everybody threw him severe looks. He shrugged, dejectedly.
Chaos ensued in the discussion, people trying to shift blame from one to the other. Meanwhile, N'N'na'vi heard the message loud and clear. It just didn't get it.

The N'na'vi machine had already carefully passed through the radiation belts, carefully buckling them back, and now headed towards the planet. Unfamiliar with atmospheric reentry, the Moon civilization had neglected to take into account the overheating of their machine, but the natural shape of the N'na'vi - also imparted on the explorer - accidentally eliminated the threat. The reentry heat just gave the amalgamation of thin tentacles a slight glow while slowing N'N'na'vi to a delicate float. All the residents of Earth could see it shine brightly while descending towards the surface, causing widespread panic and despair, the only exceptions being children too young to understand, geezers too old to care and Pastafarians, who actually got to rejoice.
In its descent, N'N'na'vi intercepted a clear radio message which it immediately ran through the complex translation machinery it had been equipped with by the brilliant Moon scientists. "High!", the message said. "Yes, I am high!", it answered, unable to determine the source of the message, but assuming it came from its creators. Pastafarians rejoiced yet again, to everyone's chagrin. Communication stopped for a while, then another message arrived. The first translation started with "We, the people of Earth...", which it immediately dismissed as incorrect, since there could be no people on Earth. The only people it knew of were the N'na'vi, whose name literally meant "Not those people", for reasons forgotten by history. Still, people. It got even more confusing when a dwarven rocket launched in order to get more information about the alien machine. N'N'na'vi did a spectral analysis of the rocket, as it didn't behave like a cometary fragment at all: it rose from the ground up in a way that orbital mechanics could not explain. The analysis determined that the surface of the object was stained with the color blue (translating mechanisms even suggested the shape of a word that could have meant blue). Terrified, N'N'na'vi destroyed the rocket.
Moving its tentacles, the machine learned to guide its descent, marginally avoiding hitting the water - which would have been disastrous - and instead crashing at night in the desert. N'N'na'vi had here the opportunity to calm down as everything looked - so to speak - as expected: sand everywhere, which its instruments analysed to be a safe yellowish white, the air relatively dry for the hellish planet, the surface even showing signs of cosmic impacts. Back home, scientists from the control room felt just as calm, maybe a little bored, as they had hoped some of their well established theories would be challenged, so that they would get to prove them all over again.
Chaos ensued as the first helicopter arrived, luckily a nice radar-scattering black, as the fake N'na'vi had determined there was life on Earth, which pushed everybody into a frenzy to determine how the programming could have failed so severely. After a brief hope that the "safe mode" of the machine would somehow determine the flaw and fix it, communication was interrupted and the mission scrubbed. N'N'na'vi was clearly beyond salvation and a new mission needed planning.

It was lucky that the landing had happened at night, since the international council had decided, in case the machine proved as violent and destructive as it had been thus far, to send the Transylvanian vampire to continue efforts to pacify the alien. It would have been ridiculous to die of sun exposure before getting to see such an interesting thing. Humans called the vampire Nosferatu, unable to pronounce his name correctly. In reality, his name meant "annoying one" in the vampire native language and he had always, proudly, lived up to it. Less brutal than most of his brethren, Nosferatu had always been motivated by interesting experiences, as long as they didn't require a lot of effort on his part. This, as far as he was concerned, topped them all. An actual alien device: it was science fiction come true! He considered what the first words should be. "Hi!" hadn't worked so well, so he was cycling through alternatives: "Hello!", "Welcome!", maybe "Greetings and salutations!", which sounded oh-so very cool.
Nosferatu did not really fear destruction at the hand (tentacle?) of the alien, since he didn't fear death. Technically he had no life, of course, but it went deeper than that: he considered everything a game. When the game ended, it just ended. He never considered it as a threat or something to be afraid of. The only real fear was that of losing. Defined as such, the purpose of the game was to successfully establish communication with the alien and maybe convince it to not destroy the Earth. Although, he had to admit, seeing the world end was also interesting.
When the helicopter arrived, the large mass of moving tentacles moved frantically, menacingly even, like a bowl of furiously boiling pasta. Nosferatu instantly disliked the alien's appearance. As they approached and eventually landed close to the device, the frothing stopped, the alien froze, the tentacles drooped, then just collapsed. The vampire approached, touched the long slender appendages, he even kicked some in frustration. The alien visitor had died, as if from an insidious disease: it had been agitated, then had collapsed and finally had stopped reacting. The odds of that, Nosferatu thought, were so low that it made it all ridiculous, pathetic even. The world getting destroyed would have been better.
The newly created Department of Earth Defense got most of the machine, for it was determined that it was indeed a machine, even if it looked like a living creature. The energy source, the weapons, the method of locomotion, they would all be carefully researched by international teams, the knowledge shared freely and equally among the eight most powerful races. The only part they couldn't care less about was the brain of the machine. It was obviously flawed, having caused the machine to fail, but also impossible to trust. The best possible solution for any alien intelligence was to dispose of it as soon as possible. Nosferatu was tasked with doing this, mostly because it was against his express advice and everybody hated his kind anyway. He obediently filled out all the paperwork, talked to all the people, personally delivered a weird looking device to the Hazardous Devices department and witnessed its destruction. His direct supervisor accompanied him at every step, verifying that it all went according to orders and enjoying every moment of the vampire's anguish.
Luckily, Nosferatu's boss didn't know an alien device from a geek garage project and so N'N'na'vi's brain was saved a fiery death. Back in his garage, the vampire would attempt to finish the game.

Back on the Moon, the N'na'vi had come up with a theory that explained everything that had happened. Clearly their glorious civilization had been blindsided by someone as devious as them, if not more so - a scary thought. Others had had the same idea, creating a fake that was able to explore the blue planet. Their obvious purpose had been to stage a covert attack on the Moon from the very location the N'na'vi would never assume an attack was even possible. Devious indeed, but not clever enough to fool them! They had learned a lot from creating the artificial N'na'vi, even if they had lost it in an obvious ruse. No matter, they could build others, and better.
The second model was larger, even more powerful, and designed as a hybrid of a N'na'vi and the (now they knew) alien machine that was still ravaging the narrow corridor around the Moon. It had limbs, like the alien, but it had tentacles like a Moon resident, some located around the "head" of the device while others closely knit together to form a programmable flexible sheet of material that would allow the machine to glide through atmospheres. This sheet was located on the back of the model, so as not to hinder movement or obscure sensors. The first mission of this model, called the N3 as a clear reminder that it was not N'N'na'vi, was to grab Blozarg and throw him back to Earth, much to his hanger.
Various scientific workgroups were created in order to ascertain the origin location of the attackers. It was obvious, when you thought about it: there was no blue there. The sneak attack, probably just something to test their defenses, must have originated from Mars, or perhaps from a more habitable place, like one of the Martian moons. As soon as all the other theories were ridiculed into oblivion, a Martian offensive remained the only logical scientific theory explaining everything that had happened. A counter attack strategy was devised and plans were put in motion. The Mars moons were too insignificantly small to invade and there was the remote possibility that an extremophile form of life might even exist on the surface of the red planet. The simplest solution, as N'hair would have said himself, was to destroy the planet completely and for that they would need to upgrade the energy weapons on N3.

When DED was created, all the respective budgets of the other existing defensive departments were merged into one. When even the black budgets were added up, the resources allowed immediate research and development of space transportation and weaponry. The first ship, borrowing construction secrets from the Martian device valiantly captured by the Earth military forces, was christened Falas, after the place where it was built. An elven name for sure, but simple enough not to cause anger among the other races. Military strategists developed plans to protect Earth from further attack, then went further, devising a way to get to Mars itself. Equipped with five huge bombs, containing all the unopinium ever mined, Falas was redesigned as an attack vessel, capable of destroying the entire planet if need be or, if self-destructing, a huge space fleet. Unexpectedly, having all the unopinium moved into orbit made Earth races much more amenable to compromise with each other. As such, and fueled by a most grievous action on the part of the enemy, they quickly reached an agreement on how to proceed.
Days after the launch of Falas, a troll came crashing onto the planet, burning through the upper atmosphere, as an insult to the joined Earth defensive force. The trolls immediately decided the only possible reply was the complete destruction of their enemy. Even Blorzgl, ravenously devouring a boar's roast while recovering from his ordeal, agreed with a hangry but dignified "Mmm-hmm" to the proposal to obliterate Mars, since he never actually paid attention to where he was during his furious devastation. A quick analysis of the existing legal framework concluded that obliterating Mars would not contaminate it, so international law not only allowed but actually supported the action, especially since further contamination would become impossible after the mission.
Some protests came from cultural organizations claiming Mars as an important historical site. In the winter, a council of supporters met in Elrond and almost succeeded in derailing the plan, but for the subsequent assassination of the organizer, otherwise an honest and honorable man. "People like him always end up dead", the powers that be decided. Some concerns were raised that breaking up a planet would lead to asteroid bombardment of Earth, but thanks to the technology gleaned from the alien attacker, asteroids could be destroyed. Some astronomers complained about the knowledge that would be lost when tearing apart a planet we mostly knew nothing about, but were mollified when given seats on the Falas, so they could observe the pieces when Mars broke up. "It's more efficient than drilling the surface", one of the scientists was heard saying. Elondriel committed suicide.

Meanwhile, Nosferatu had been working clandestinely and furiously on reactivating the alien machine processors. He had large quantities of blood stored in a special closet where he had chained, but not killed, a really fat man. As long as he fed him, the vampire always had a fresh supply of blood and, as the man was fat, he would survive quite a long time on water alone. At first, he identified several security vulnerabilities through which he could infiltrate the programming, but he had trouble getting past the five-times-redundant antihacking mechanisms. Being an advanced civilization, the N'na'vi had long since cemented the practice of protecting their intellectual property first, no matter how little intellect had actually been used to create it. Even so, the vampire persevered and ultimately prevailed.
Turning the machine on was also a problem, since the power source had been taken by the defense department for reverse engineering. He ended up stealing as much power from his neighbors as possible, for fear of raising red flags with the power company.
The next problem he had to overcome, after he managed to interface Earth hardware with Moon hardware, was quite unexpected. He had linked all the inputs and outputs of the brain to a text console. However, in the interest of communication, he had also connected a video camera as an input. The problem? Nosferatu had blue skin. It took some time to realize that the reason the alien machine was behaving so violently was his skin color. He had at first suspected wrong interface connections. When he had eliminated that, he believed the alien to be racist. By the time he figured it out and installed a simple color filter on the camera, the N'na'vi electronic brain was partially insane. However, being an artificial brain, it was less sensitive than a real Moon inhabitant and easier to fix by a hacker with the skills of Nosferatu.
In the end, communication was finally possible, while Nina (the vampire had decided N'N'na'vi was taking too long to type or pronounce) was by then a half and half Earth-Moon artificial intelligence.

Both civilizations independently decided to cloak their attack to the best of their abilities. After all, they only had one chance at it. Surely, generals on both sides thought, if we tarry too long or if we attack and fail, the next move of the enemy would be to totally destroy Earth. It would only make sense, coming from such a mindlessly aggressive opponent. Thus striking first was not only prudent, it was necessary, regardless of how it felt. History, in the end, would be the judge of their present decisions, they all said. With so much stealth and helped by orbital mechanics, the two fleets headed towards Mars at full speed, each oblivious of the existence of the other. Afraid of intelligence infiltration, they were also under strict radio silence. By the time Nosferatu busted several security firewalls to stop the international council from ignoring his calls and messages, and by the time the Moon communication relay was convinced the disabled N'N'na'vi unit was not disabled, it was already too late. There was no stopping the destruction of Mars, even if either side had acknowledged the possibility that the other existed.
What actually happened was that Earth and Moon both decided they were under electronic and propagandist attack and actively protected themselves from any messages from the vampire. Nosferatu was considered compromised by DED and immediately an order for his arrest was issued. The N'na'vi knew him as Dracula, because the last audio communication they received repeated the word several times before they were able to block it. Luckily for the vampire, he had made sure the origin of his messages remained hidden.
Convinced that at any moment evil Martians might destroy them, Earth and Moon worked continuously for the following six months on their defenses, as well as on space observatories focused on Mars. When the time came, the entire Earth system was watching the red planet as the fleets prepared for attack. The light of the destructive forces made Mars the brightest star in the firmament, yet no confirmation message came from either Falas or N3. Horrifyingly, Mars remained unscathed.

The Mars Hegemony ruled the Solar System for two centuries before an unfortunate solar event brought change to the situation. Seeing their most powerful attack stopped with no effort whatsoever, Earth capitulated. After several weeks of deliberation, a message from all the races in Middle Earth and the Dark Land, speaking as one, declared unconditional surrender to the forces of Mars. A similar message had been sent from the Moon almost immediately after the loss of contact with N3. When Mars responded, people breathed (or didn't breathe, depending on where they were) easier. The message was simple: "From now on, you will serve the Mars Hegemony. We will establish a base next to your planetary body to keep you in check. Any misstep and you will be destroyed. As a reward for your absolute obedience, we give you the Solar System. All these worlds are yours, except Mars. Attempt no landings there!".
It was hubris or maybe boredom that made Nosferatu risk the sun one day. He died two centuries after he had successfully formed the unknowing Earth-Moon alliance and no one will ever know why or how. He died in the sun and people never gave it a second thought, other than dismissing the very old but still active warrant for his arrest. With the help of the alien brain he had hacked N3, managing to see what had happened. The people on Falas were terrified to see a huge ship in the form of a troll with a heroic cape on its back. The most unsettling thing was the impossible blond curly hair it had on its ugly head. Without hesitation they fired the unopinium bombs at it. As for the N3, it had been thoroughly programmed and tested to avoid past mistakes. Faced with a giant space vessel of clear Earth origin, the machine could only surmise that its programming had become corrupted by the enemy. Life on Earth was hardcoded as impossible, so the only logical explanation was that Martians were tampering with its software. It immediately fired its powerful beam weapons and then self-destructed. On the completely disabled Falas, the artificial intelligence installed specifically for this purpose decided it was a no-win scenario and initiated self-destruct. Both fleets were annihilated instantly.
The weird combination of Moon and Earth technology in Nosferatu's basement allowed him to receive both capitulation messages and also fake the origin of the reply. For two hundred years he manipulated the two civilizations towards exploring and later colonizing the Solar System, all while each thought they were working with and under the most powerful Martians. Nosferatu, aka Dracula, died, but it wasn't the end, only the beginning. With him gone, Nina continued to control the Mars Hegemony, in its own mechanical way, growing and becoming more and more capable at every step. The only reason why the most powerful intelligence in the history of the Solar System didn't assimilate every living creature as a part of itself - the logical conclusion of its AI programming to optimize peace, exploration and the accumulation of knowledge - was its firm belief, fused somewhere in its basic circuitry where even a vampire hacker could not reach, that life on Earth and the Moon is ultimately... impossible.

Tuesday, May 24, 2016

Bluetooth keeps turning on by itself on my Android phone

I have been plagued by this thing for a few weeks: every time I turn the Wi-Fi on, then turn it off, something starts turning the Bluetooth on. Turn it off and it goes back up in a minute. The only solution was to turn the phone off and then on again without turning Wi-Fi or Bluetooth on. Strangely enough, there is no way to permanently disable Bluetooth on the phone and no way to know what turned it on last.

As an investigation, I tried something called Event Logger, which logs when Bluetooth is turned on or off, but fails to tell you what did it. In fact, I am still not sure how I was supposed to determine what software did it, and this demonstrates a systemic issue with mobile phones: you have no real control over, or even knowledge of, what happens inside them.

Long story short, I've uninstalled a lot of applications just to see if the problem would go away. In the end it was Firechat! After uninstalling it, the problem went away. Apparently I am not the only one experiencing this; one of the few pages on the Internet regarding Firechat and Bluetooth claims Firechat does not turn it on by itself, but numerous reviews on the app's Google Store page say otherwise.

Thursday, May 19, 2016

The shortcuts that kill the storyteller

When I was young I occasionally wrote short stories that were moderately well received by my friends, but I have never attempted to do anything "real"; I would just get some weird idea in my head and it would materialize after an afternoon of furious writing. There was nothing to it in terms of technique or studying the classics or anything, just telling a story. In fact, trying to rewrite it afterwards would ruin it, betraying the underlying lack of craft. After a while, I just stopped, but I held tight to the belief that some day I might actually do this well, like write a novel. Not for money and fame, but because I would like to "be that guy".

Recently I have revisited that belief and decided to take it further: actually plan a novel, write it, see what I am truly capable of. So far, it has not been going well, but I've learned a lot. Hopefully I will retain the level of interest required to carry it through. However, in this post I want to explain some of the things that I have come to understand about writing stories, one in particular: the shortcuts.

Many a time the story needs to go somewhere, but in real life terms getting there would be boring or prohibitive in terms of time. In that case a shortcut is taken, either by some gimmick, by montage or, as is more often the case, through camera work. How many times have you watched an actor looking intensely for a threat, their face or person taking over the whole screen, only to be caught off guard by someone or something that suddenly comes in from outside the frame? And if you think about it just a little, it would have been impossible to be blindsided by someone coming from there because, even if we don't see them, the person the camera is pointed at would! In a typically evolutionary way, someone tried it, it worked, it caught on, and now finding it irritating is seen as nitpicking. "Well, they needed to make it happen, it doesn't have to make sense".

That thing, right there, when common sense is sacrificed for expediency, is killing - a tiny bit - the story. And while it works on camera, it is much more complicated in writing, because what you don't realize while going through the motions of empathizing with a character and joining them in their adventure is that the writer needs to know and understand everything that happens, not only what is "in the scene". If the murderer suddenly appears next to the victim and kills her, the writer might decide to not explain how he got there, but they need to know! If not, the story gets hurt.

To build my experience, I've decided to practice on writing something that seemed easy at the time: a Star-Trek novel. I love Star Trek, I've watched almost everything there is, including fan made videos, and most of the time I've felt like I would have made the story a little better. In fact, I was acting like a tester, considering that every single error the developer makes is an affront to common sense and anyone would have done better. I've decided to put my writing where my mouth was, at least give all those screenwriters a chance to get vindicated (and, boy, did they!). My thinking was that Star Trek has a constraining mythos that would force me to use already existing concepts - thus restricting me from thinking of so many things that I would never start and also allowing me to not need to reinvent or explain them - as well as a positive vibe, that would force me from writing depressing "everybody dies" stories. Well, guess what, in my story almost everybody dies anyway; take that, Star Trek!

My point is that trying to write that way revealed the many flaws in Star Trek storytelling. Every time there is a "problem", someone comes up with a device or algorithm or alien power - usually named something like "problem remover" - that just takes the pesky technical aspects away from the narrative and helps the viewer focus on the important part: the characters and the plot. I mean, while people still debate the limitations of phase cannons - which at least attempt to appear grounded in science - no one says anything about stuff like "inertial dampeners", which pretty much means "that thing that removes that kink that no one actually knows how to get rid of". This is just the beginning. Let's stick with Star Trek: The Next Generation for now, the one that put Star Trek back on the map and had the most compelling characters and storylines. Think of your favorite characters there: Picard, Data, Worf, maybe Deanna Troi. How did they get there? What was their childhood like? What are they doing when they are not on duty? The show has tried to touch on that, but only with the "whatever is needed for the story" approach. A more direct and obvious way to demonstrate this: there are no toilets in Star Trek. No one needs one, either - have you seen how the brig looks?

As characters go, everybody on that ship comes from Starfleet Academy, but what do they learn there? What are the paths they need to take in order to graduate? How do they reconcile vast differences in culture, language and learning speed for all the races in the Federation? I mean, on screen they are all human with some stuff on their face and some extra makeup, but the background story, as something different from merely what you "see", needs all that information. The Star Trek universe survives on this loose network of things that, taken separately and given some deeper context, might make sense, but taken together just contradict each other. And again comes the nitpicker label, to stop you from ruining the experience for everybody else.

This brings me to the shortcut side effects. As a reader and especially as a viewer, you enjoy them because they take you faster through the story. They remove what is not relevant to you. Well, emotionally relevant, but that's another can of worms altogether. As a writer, though, as a storyteller, these things are slow-acting poison. After decades of watching Hollywood films, trying to write something feels like stepping barefoot on glass shards. You feel dumb, not only because it is impossible to write what characters do without a deeper understanding of who they are, not because you realize that even the smallest attempt at writing results in way too many questions to answer on paper - although you need to know the answers - but also because you start seeing how shallow your interest was in all those characters you actually loved watching on the screen. It's like that moment when you realize your lover has a secret life and it hurts because you know it's you who didn't notice or take interest in it, it's all you.

That's not bad. It makes it obvious that you casually ignore some layers of reality, and it can lead to appreciating them in the future. The difficulty I feel comes from never having trained for it. In fact, I have been taught to avoid it, by passively watching just the surface of everything, never attempting to infer what the depths hide. And when I try, at my age, to change the way I see the world, my way of... being me, it's fucking difficult. Even simple stuff like mentally trying to describe a place or a person when you first see them, in terms of senses and emotions and comparisons with common concepts and - hardest of all - putting it in actual words... all of this is hard! It feels like an operating theater in which I perform while others watch and judge. I feel anger and frustration because it conflicts with the original story, the one where I was good at writing.

There was a very stupid movie where Kate Beckinsale was Adam Sandler's girlfriend (I mean, impossible to suspend disbelief, right?) and he would get annoyed with all the touchy-feely aspects of their relationship and instead use a "problem remover" remote to fast-forward past them. Then he comes to regret going through important bits of his life like a senseless robot and what it does to him. The movie might have been bad, but the underlying idea becomes very real when you attempt to write stories. Your characters are your lovers, your children, your spawn. Ignoring them is a crime against the story.

Think of the classical iceberg metaphor: just the tip is visible. It also applies to stories. The writer needs to have all that cool stuff hidden under the surface of the book just in order to show the reader the tip. Characters need backstories that you will only hint at, but that you must know. Stuff that is excruciatingly boring to discuss in real life - like what the light in a room makes you think of, which no one ever takes the time to do - you must put on paper, because you know how it feels; but how do you translate that to another person, with another mind, culture, references, upbringing?

There is no real end to this post; I could write a lot on the subject - I am writing about how hard writing is, I know: ironic - but I will stop here. Most readers have probably done that a while back, anyway. To the obstinate who got to this part, I salute you. Who knows, perhaps not taking the short path while reading this post has somehow enriched your story. I am not a writer; these insights have come to me just from attempting to write. Perhaps that is the best reason to try new things: besides feeling like a complete moron, you gain valuable new insight every time you do.

Monday, May 16, 2016

Replacing desktop applications with large Javascript-driven single pages is stupid.

For a very long time the only commonly used expression of software was the desktop application. Whether it was a console Linux thing or a full-blown Windows application, it was something that you opened to get things done. In case you wanted to do several things, you either opted for a more complex application or used several of them, usually transferring partial work via the file system, sometimes in more obscure ways. For example, to publish a photo album you take all the pictures, process them with image processing software, then save them and load them into a photo album application. For all intents and purposes, the applications are black boxes to each other; they only connect via inputs and outputs and need not know what goes on inside one another.

Enter the web and its novel concept of URLs, Uniform Resource Locators. In theory, everything on the web can be accessible from the outside. You want to link to a page, you have its URL to add as an anchor in your page and boom! A web site references specific resources from another. The development paradigm for these new things was completely different from big monolithic applications. Sites are called sites because they should be a place for resources to sit in; they are places, they have no other role. The resources, on the other hand, can be processed and handled by specific applications like browsers. If a browser is implemented in all operating systems in the same way, then the resources get accessed the same way, making the operating system - the most important part of one's software platform - meaningless. This brings us to this day and age, when an OS is there to restrict what you can do rather than provide you with features. But that's another story altogether.

With increased computing power, storage space and network speeds, and with the introduction and refining of Javascript - now considered a top contender for the most important programming language ever - we are now able to embed all kinds of crazy features in web pages, so much so that we have reached a time when writing a single page application is not only possible, but the norm. They had to add new functionality to browsers just to let the page tweak the browser address without reloading the page, and that is a big deal! And a really dumb one. Let me explain why.

The original concept was that the web would own the underlying mechanism of resource location. The new concept forces the developer to define what a resource locator means. I can pretty much make my own natural language processing system and have URLs that look like: me that post ranting about the single page apps. And yes, the concept is not new, but the problem is that the implementation is owned by me. I can change it at any time and, since it all started from a desire to implement the newest fashion, it is destined to change. The result is chaos, and that is presuming that the software developer thought of all contingencies and the URL system is adequate to link to resources from the page... which is never true. If the developer is responsible for interpreting what a URL means, then it is hardly "uniform".
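To make the point concrete, here is a minimal sketch of what every single page app ends up reinventing. Everything in it is illustrative - the route table, the names, the messages are mine, not any real framework's - but the shape is the same everywhere: the app, not the web, decides what a URL means.

```javascript
// Hypothetical route table of a single page app. Outside this table,
// the "uniform" resource locator means nothing at all.
var routes = {
  "/": "home page",
  "/posts/single-page-apps": "that post ranting about single page apps"
};

function resolve(path) {
  // the developer, not the web, interprets the URL
  return routes.hasOwnProperty(path)
    ? routes[path]
    : "no idea - this app never defined that URL";
}

// In a browser, the app would pair this with something like
// history.pushState(state, "", path) to change the address bar without
// a reload; change the table and every old link silently breaks.
```

Note that the fallback case is exactly the chaos described above: any link made against yesterday's fashion of the route table resolves to nothing once the developer redesigns it.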

Another thing that single page apps lead to is web site bloat. Not only do you have to load the stuff that is now on every popular website, like large pointless images, big fonts and large empty spaces, but also the underlying mechanism of the web app, which tells us where we are, what we can do, what gets loaded and so on. That's extra baggage that no one asked for. A single page app is also hard to parse by a machine - and I don't care about SEO here, it's all about the way information is accessible.

My contention is that we are going backwards. We got to the point where connectivity is more important than functionality, where being on the web is more important than having complex, well done features in a desktop app. It forced us to open up everything: resources, communication, protocols, even the development process and the code. And now we are going back to the "one app to rule them all" concept. And I do understand the attraction. How many times did I dream of adding mini games to my blog or making a 3D interface and a circular corner menu and so on? These things are cool! But they are only useful in the context of an existing web page that has value without them. Go to single page websites and try to open them with Javascript disabled. Google has a nice search page that works even then, and you know what? The same page with Javascript is six times larger than the one without - and this without large differences in display. Yes, I know that this blog has a lot of stuff loaded with Javascript and that this page would probably be much smaller without it, but the point is that the blog is still usable. For more on this you should take the time to read The Web Obesity Crisis, which is not only terribly true, but immensely funny.

And I also have to say I understand why some sites need to be single page applications, and that is because they are more application than web site. The functionality trumps the content. You can't have an online image processing app work without Javascript, that's insane. You don't need to reference the color panel inside the photo editor, you don't need to link to the image used in the color picker and so on. But web sites like Flipboard, for example, which displays a blank page when seen without Javascript, are supposed to be news aggregators. You go there to read stuff! It is true we can now decide how much of our page is a site and how much an application, but that doesn't mean we should construct abominations that are neither!

A while ago I wrote another ranty rant about how taking over another intuitively common web mechanism - scrolling - helps no one. These two patterns go hand in hand and are slowly polluting the Internet. Last week Ars Technica announced a change in their design and implemented it at the same time. They removed the way news was read by many users - sequentially, one after the other, by scrolling down and clicking on the one you liked - and resorted to a magazine format where news items sat side by side on a big white page with large design placeholders that looked cool yet did nothing but occupy space and display the number of comments for each. Content took a backseat to commentary. I am glad to report that two days later they reverted their decision, in view of the many negative comments.

I have nothing but respect for web designers, as I usually do for people who do things I am incapable of; however, their role should always be to support the purpose of the site. Once things look cool just for the sake of it, you get Apple: a short-lived bloom of user friendliness, followed by a vomitous explosion of marketing and pricing, leading to the immediate creation of cheaper clones. Copying a design because you think it is great is normal, copying a bunch of designs because you have no idea what your web page is supposed to do is direct proof you are clueless, and copying a design because everyone else is doing it is just blindly following clueless people.

My advice, as misguided as it could be, is forget about responsiveness and finger sized checkboxes, big images, crisp design and bootstrapped pages and all that crap. Just stop! And think! What are you trying to achieve? And then do it, as a web site, with pages, links and all that old fashioned logic. And if you still need cool design, add it after.

Sunday, May 15, 2016

Chrome Extension: Facebook Filter

Update 17 June 2016: I've changed the focus of the extension to simply change the aspect of stories based on status, so that stories with content are highlighted over simple shares. I am currently working on another extension that is more adaptive, but it will be branded differently.

Update 27 May 2016: I've published the very early draft of the extension because it already does a cool thing: putting original content in the foreground and shrinking the reposts and photo uploads and feeling sharing and all that. You may find and install the extension here.

Have you ever wanted to decrease the spam on your Facebook page but couldn't do it in any way that would not make you miss important posts? I mean, even if you categorize all your contacts into good friends, close friends, relatives and acquaintances, then unfollow the ones that really spam too much and hide all the posts that you don't like, you still have no control over how Facebook decides to order what you see on the page. Worse than that, try refreshing your Facebook page repeatedly and watch the wildly oscillating results: posts appear, disappear, reorder themselves. It's a mess.

Well, true to my word, I have started work on a Chrome extension to help me with this. My plan is pretty complicated, so before I publish the extension on the Chrome Webstore, like I did with my previous two efforts, I will publish it on GitHub while I am still working on it. So, depending on where I am, this might be alpha, beta or stable. At the moment of this writing - the first commit - alpha is a pretty big word.

Here is the plan for the extension:
  1. Detect the user has opened the Facebook page
  2. Inject jQuery and extension code into the page
  3. Detect any post as it appears on the page
  4. Extract as many features as possible
  5. Allow the user to create categories for posts
  6. Allow the user to drag posts into categories or out of them
  7. Use AI to determine the category a post most likely belongs to
  8. Alternatively, let the user create their own filters, a la Outlook
  9. Show a list of categories (as tabs, perhaps) and hide all posts under the respective categories
This way, one might skip the annoying posts, based on personal preferences, while still enjoying the interesting ones. At the time of this writing, the first draft, the extension only works on the main Facebook page, not on any subpages; it extracts the type of each post and sets a CSS class on it. It also injects a CSS file which makes posts dimmer and smaller based on category. Mouse over a post to get back the normal size and opacity.
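The type-to-CSS-class step of the first draft can be sketched roughly as below. This is an illustrative reconstruction, not the extension's actual code: the function name, the `fbf-` class prefix, the post type list and the `detectType` helper are all my assumptions.

```javascript
// Map a detected post type to a CSS class that the injected stylesheet
// can use to dim, shrink or highlight the post. The type list and the
// "fbf-" prefix are hypothetical placeholders.
function cssClassFor(postType) {
  var known = ["status", "share", "photo", "event"];
  return known.indexOf(postType) >= 0 ? "fbf-" + postType : "fbf-unknown";
}

// In the page, the injected script could watch the feed with a
// MutationObserver and tag each post as it appears, e.g.:
//
// new MutationObserver(function (mutations) {
//   mutations.forEach(function (m) {
//     Array.prototype.forEach.call(m.addedNodes, function (node) {
//       if (node.nodeType === 1 && detectType(node)) { // detectType: hypothetical
//         node.classList.add(cssClassFor(detectType(node)));
//       }
//     });
//   });
// }).observe(document.body, { childList: true, subtree: true });
```

A MutationObserver fits here because Facebook loads posts dynamically; a one-time scan of the page would miss everything that appears as you scroll.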

How to make it work for you:
  1. In Chrome, go to Manage Extensions (chrome://extensions/)
  2. Click on the Developer Mode checkbox
  3. Click on the Load unpacked extension... button
  4. Select a folder where you have downloaded the source of this extension
  5. Open a new tab and load Facebook there
  6. You should see the posts getting smaller and dimmer based on category.
Change statusProcessor.css to select your own preferences (you may hide posts altogether or change the background color, etc).

As usual, please let me know what you think and contribute with code and ideas.

Friday, May 13, 2016

DNS Resolver - A Chrome extension to help you with malfunctioning DNS or IP blocking.

I've written another Chrome extension that I consider in beta, but so far it works. Really ugly makeshift code, but for now I am gathering data about the way I use it, and then I am going to refactor it, just as I did with Bookmark Explorer. You may find the code at GitHub and the extension at the Chrome webstore.

This is how it works: every time you access anything with the browser, the extension will remember the IPs for any given host. It will hold a list of the IPs, in reverse order (last one first), that you can just copy and paste into your hosts file. The hosts file is found at C:\Windows\System32\drivers\etc\hosts on Windows and at /etc/hosts on Linux. Once you add a line in the format "IP host" to it, the computer will resolve that host with the provided IP. Every time there is a problem with DNS resolution, the extension will add the latest known IP into the hosts text. Since the extension doesn't have access to your hard drive, you need to edit the file yourself. The icon of DNS Resolver will show the number of hosts that it wants to resolve locally, or nothing if everything is OK.
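As an illustration of the "IP host" format, here is a sketch (a hypothetical helper, not the extension's actual code) of turning a stored host-to-IPs map into hosts-file lines, using the latest known IP for each host:

```javascript
// data maps each host to its known IPs, in reverse order (last one first),
// mirroring the list the extension keeps.
function toHostsLines(data) {
    var lines = [];
    for (var host in data) {
        if (data.hasOwnProperty(host) && data[host].length) {
            // take the most recent IP for the host
            lines.push(data[host][0] + ' ' + host);
        }
    }
    return lines.join('\n');
}
```

Pasting lines like these into the hosts file makes the operating system resolve those hosts locally, bypassing DNS entirely.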

The extension allows manual selection of an IP for a host and forced inclusion in or exclusion from the list of IP/host lines. Data can be erased (all at once, for now) as well. The extension does not communicate with the outside, but it does store a list of all the domains you visit, so it is a slight privacy risk - although if someone has access to the local store of a browser extension, it's already too late. There is also an option for the extension to replace the host with the IP directly in browser requests, but this only works inside the browser and fails in cases where the host name is important, such as multiple servers using the same IP, so I don't recommend using it.

There are two scenarios for which this extension is very useful:
  • The DNS server fails for some reason or gives you a wrong IP
  • Someone removed the IP address from DNS servers or replaced it with one of their own, as in the case of government censorship

I have some ideas for the future:
  • Sharing of working IP/host pairs - have to think of privacy before that, though
  • Installing a local DNS server that can communicate locally with the extension, so no more hosts editing - have to research and create one
  • Upvoting/downvoting/flagging shared pairs - with all the horrible headache this comes with

As usual, let me know what you think here, or open issues on GitHub.

Seveneves, by Neal Stephenson

Neal Stephenson is known for writing speculative science fiction with a focus on technological advancement, and Seveneves is all about space. He thought of the idea in 2006, while he was an adviser with Blue Origin, and let it fester for years while getting feedback from all kinds of people knowledgeable about and invested in space technology, like Planetary Resources, so at least the science is good. Personally, I believe that he gathered so much material that he just had to write the book, regardless of whether he had a story to tell or not. Never have I read a book that is so obviously written by an engineer, with long descriptions of how space stuff works, what a culture is like or how people solve problems. It's all about the how, never about the why or the who. As such, I consider it a failed book, because it could have been so much better as a well thought out, well edited trilogy of books, with compelling characters, rather than a humongous enumeration of space technologies.

The story is split into three mostly unconnected parts: the two years between the event that dooms Earth and the solution found by the people of the planet, the cataclysm itself and what people do afterwards, and the aftermath, 5000 years into the future.

What happens is that the Moon suddenly gets splintered apart by some unknown agent, possibly a miniature black hole, which just breaks it into seven pieces (it already starts with the number 7) that are destined to further break in collisions with each other and cause a catastrophic meteor bombardment of Earth, heating its atmosphere and boiling and smashing away all life. People decide to invest everything into expanding the International Space Station, having a few thousand people escape certain death by going into space. Everything is done very orderly and the book focuses exclusively on what people do to reach the stars with today's technology. Nothing about what 7 billion people (see? I can use seven all over the place, too) feel or do when faced with certain doom. The book moves quickly over the inevitable deaths and accidents caused by rushing into something that is not really researched, proceeding towards a part of the story where almost everything just works, as if by magic. The devastating problems that people would face in space are solved quickly by engineering solutions, ignoring the unsolvable ones.

So far the book does have a sort of main character, a woman working with robots, sent to the ISS as part of a partnership with an asteroid mining company. Before we know enough about her, the story shifts into its second part, which splits attention between several important characters. At this point it is almost impossible to empathize with anyone, a problem compounded by the use of personalities "slightly towards the Asperger side of the spectrum", as the author points out several times.

To continue explaining the story is pointless and would spoil it; suffice it to say that even though I am an engineer, always complaining that there is not enough science in science fiction, I got really bored reading this book. Long, long (mobile) pages of two or three paragraphs each, containing no dialogue, explaining things that had nothing to do with the story, puny and underfed as it was. The only thing that made me react emotionally was the villain of the second part, who was written well enough to make me hate him. To add insult to injury, after fighting through the 880 (normal) pages, the third part just abruptly ends, as if the author was simply tired of writing now that the tech was all explained away, just when some human story was finally there.

Bottom line: as someone interested in the technology necessary to colonize the Solar System, I should have found this book gold. Instead, I caught myself skimming over the long descriptions, just wanting the book to end. Too bad, since the subject could have easily been split into three or even more books, each with their own story to tell in a well structured fictional universe. Also, while the author swears he was "peer reviewed" on the concepts, he also admits making huge leaps of faith about what would work and what wouldn't.

Wednesday, May 11, 2016

Unit testing Javascript: QUnit

I have started writing Chrome extensions, mainly to address issues that my browser is not solving, like opening dozens of tabs and, lately, DNS errors/blocking and ad blocking. My code writing process is chaotic at first: just writing stuff and changing it until things work, until I get to something I feel is stable. Then I feel the need to refactor the code, organizing and cleaning it and, why not, unit testing it. This opens the question of how to do that in Javascript and, even though I knew it once, I needed to refresh my understanding with new work. Without further ado: QUnit, a Javascript testing framework. Note that all code here will be in ES5 or earlier, mainly because I have not studied ES6 and I want this to work with most Javascript.


QUnit is something that has withstood the test of time. It was first launched in 2008, but even now it is easy to use, with a simple design and clear documentation. Don't worry, you can use it even without jQuery. In order to use it, create an HTML page that links to the Javascript and CSS files from QUnit, then create your own Javascript file containing the tests and add it to the page together with whatever you are testing.

Already this raises the issue of having Javascript code that can be safely embedded in a random web page, so consider how you may encapsulate the code. Other testing frameworks can run the code in a headless Javascript engine, so if you want to be as generic as possible, also remove all dependencies on an existing web page. The oldest and simplest way of doing this is to use the fact that an orphan function in Javascript has its own scope and (in non-strict mode) has this pointing to the global object - in the case of a web page, this would be window. So instead of something like:
var i=0;
while (i<+(document.getElementById('inpNumber').value)) {
  // do something
  i++;
}
do something like this:
(function() {

  var global=this;

  var i=0;
  while (i<+(global.document.getElementById('inpNumber').value)) {
    // do something
    i++;
  }

})();

It's a silly example, but it does several things:
  • It keeps variable i in the scope of the anonymous function, thus keeping it from interfering with other code on the page
  • It clearly defines a global object, which in case of a web page is window, but may be something else
  • It uses global to access any out of scope values

In this particular case, there is still a dependency on the default global object, but if instead one would pass the object somehow, it could be abstracted and the only change to the code would be the part where global is defined and acquired.
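To make that abstraction concrete, here is a minimal sketch (the names are mine, for illustration) of code that receives the global object as a parameter, so a test can pass in a mock instead of the real window:

```javascript
// Instead of reaching for window directly, the code asks for a "global" object,
// which in production is window and in tests can be a simple mock.
function makeReader(global) {
    return function () {
        return +(global.document.getElementById('inpNumber').value);
    };
}

// in a test, no real page is needed:
var mockGlobal = {
    document: {
        getElementById: function (id) { return { value: '42' }; }
    }
};
var read = makeReader(mockGlobal);
// read() returns 42
```

In production code one would call `makeReader(window)`; the code itself never needs to know which one it got.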

Let's start with QUnit. Here is a Hello World kind of thing:
QUnit.test("Hello World", function (assert) {
    assert.equal(1+1, 2, "One plus one is two");
});
We put it in 'tests.js' and include it into a web page that looks like this:
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width">
  <title>Unit Tests</title>
  <link rel="stylesheet" href=""> <!-- the QUnit CSS file -->
  <script src=""></script> <!-- the QUnit Javascript file -->
</head>
<body>
  <div id="qunit"></div>
  <div id="qunit-fixture"></div>

  <script src="tests.js"></script>
</body>
</html>

The result:

As you can see, we declare a test with the static QUnit.test function, which receives a name and a function as parameters. Within the function, the assert object will do everything we need, mainly checking to see if a result conforms to an expected value or if a block throws an exception. I will not go through a detailed explanation of simple uses like that. If you are interested, peruse the QUnit site for tutorials.


What I want to talk about are slightly more advanced scenarios. The first thing I want to address is the concept of modules. If we declare all the tests, regardless of how many scripts they are arranged in, the test page will just list them one after another, in a huge blob. In order to somehow separate them into regions, we need a module. Here is another example:
QUnit.module("Basic arithmetic"); // any module name will do

QUnit.test("One plus one", function (assert) {
    assert.equal(1+1, 2, "One plus one is two");
});

QUnit.test("Two by two", function (assert) {
    assert.equal(2*2, 4, "Two by two is four");
});
resulting in:

It may look the same, but a Module: dropdown appeared, allowing one to choose which module to test or visualize. The names of the tests also include the module name. Unfortunately, the resulting HTML doesn't have containers for modules, something one can collapse or expand at will. That is too bad, but it can be easily fixed - that is not the scope of this post, though. A good strategy is just to put all related tests in the same Javascript file and use QUnit.module as the first line.


Another interesting issue is asynchronous testing. If we want to test functions that return asynchronously, like setTimeout or ajax calls or Promises, then we need to use assert.async. Here is an example:
QUnit.config.testTimeout = 1000;

QUnit.module("Asynchronous tests");

QUnit.test("Called after 100 milliseconds", function (assert) {
    var a=assert.async();
    setTimeout(function() {
        assert.ok(true, "Assertion was called from setTimeout");
        a();
    }, 100);
});
First of all, we needed to declare that we expect a result asynchronously, therefore we call assert.async() and hold a reference to the result. The result is actually a function. After we make all the assertions on the result, we call that function in order to finish the test. I've added a line before the test, though, which sets the testTimeout configuration value. Without it, an async test that fails would freeze the test suite indefinitely. You can easily test this by setting testTimeout to less than the setTimeout duration.

Asynchronous tests raise several questions, though. The example above is all nice and easy, but what about cases when the test is more complex, with multiple asynchronous code blocks that follow each other, like a Promise chain? What if the assertions themselves need to be called asynchronously, like when checking the outcome of a click handler? If you run jQuery(selector).click(), an immediately following assertion would fail, since the click handler is executed in another context, for example. One can imagine code like this, but look how ugly it is:
QUnit.test("Called after 500 milliseconds", function (assert) {
    var a = assert.async();
    setTimeout(function () {
        assert.ok(true, "First setTimeout");
        setTimeout(function () {
            assert.ok(true, "Second setTimeout");
            setTimeout(function () {
                assert.ok(true, "Third setTimeout");
                setTimeout(function () {
                    assert.ok(true, "Fourth setTimeout");
                    a();
                }, 100);
            }, 100);
        }, 100);
    }, 100);
    setTimeout(function () {
        assert.notOk(true, "Test timed out");
    }, 500);
});

In order to solve at least this arrow antipattern I've created a stringFunctions function that looks like this:
function stringFunctions() {
    if (!arguments.length)
        throw 'needs functions as parameters';
    var f = function () {};
    var args = arguments;
    for (var i = args.length - 1; i >= 0; i--) {
        (function () {
            var x = i;
            var func = args[x];
            if (typeof(func) != 'function')
                throw 'parameter ' + x + ' is not a function';
            var prev = f;
            f = function () {
                setTimeout(function () {
                    func();
                    prev();
                }, 100);
            };
        })();
    }
    f();
}
which makes the previous code look like this:
QUnit.test("Called after 500 milliseconds", function (assert) {
    var a = assert.async();
    stringFunctions(function () {
        assert.ok(true, "First setTimeout");
    }, function () {
        assert.ok(true, "Second setTimeout");
    }, function () {
        assert.ok(true, "Third setTimeout");
    }, function () {
        assert.ok(true, "Fourth setTimeout");
    }, a);
    setTimeout(function () {
        assert.notOk(true, "Test timed out");
    }, 500);
});

Of course, this is a specific case, but at least in a very common scenario - the one when the results of event handlers are checked - stringFunctions with 1ms instead of 100ms is very useful. Click on a button, see if a checkbox is available, check the checkbox, see if the value in a span has changed, stuff like that.

Testing average jQuery web code

Another thing I want to address is how to test Javascript that is intended as a web page companion script, with jQuery manipulations of the DOM, event listeners and all that. Ideally, all this would be stored in some sort of object that is instantiated with parameters specifying the test context, the various mocks and so on. Since it is not an ideal world, I want to show you a way to test a typical such script, one that executes a function at DOMReady and does everything in it. Here is an example:
$(function () {

    $('#btnSomething').click(function () {
        $('#divSomethingElse').empty();
    });

});
The code assumes $ is jQuery, then it adds a handler to a button click that empties another element. Think about how this should be tested:
  1. Declare a QUnit test
  2. In it, execute the script
  3. Then make some assertions

I was a bit lazy and changed the scripts themselves to check if a testContext exists and, if so, use it. Something like this:
(function ($) {

    var global = this;
    var jQueryContext = global.testContext && global.testContext.document ? global.testContext.document : global.document;
    var chrome = global.testContext && global.testContext.chrome ? global.testContext.chrome : global.chrome;
    // etc.

    $(function () {

        $('#btnSomething', jQueryContext).click(function () {
            $('#divSomethingElse', jQueryContext).empty();
        });

    });

})(jQuery);

which has certain advantages. First, it makes you aware of all the uses of jQuery in the code, yet it doesn't force you to declare everything in an object and refactor everything. Funny how you need to refactor the code in order to write unit tests in order to be able to refactor the code; automated testing gets like that. It also solves some problems with testing Javascript offline - directly from the file system - because all you need to do now is define the testContext, then load the script by creating a script tag in the testing page and setting its src attribute:
var script = document.createElement('script');
script.onload = function () {
    // your assertions here
};
script.src = "";
document.body.appendChild(script);
In this case, even if you are running the page from the filesystem, the script will be loaded and executed correctly. Another, more elegant solution would be to load the script as a string and execute it inside a closure where jQuery was replaced with something that uses a mock document by default. This means you don't have to change your code at all, but you need to be able to read the script as text, which is impossible on the filesystem. Some really messy script tag creation is needed:
QUnit.test("jQuery script Tests", function (assert) {

    var global = (function () {
        return this;
    })();

    function setIsolatedJquery() {
        global.originalJquery = jQuery.noConflict(true);
        var tc = global.testContext.document;
        global.jQuery = global.$ = function (selectorOrHtmlOrFunction, context) {
            if (typeof(selectorOrHtmlOrFunction) == 'function')
                return global.originalJquery.apply(this, arguments);
            var newContext;
            if (!context) {
                newContext = tc; //if not specified, use the testContext
            } else {
                if (typeof(context) == 'string') {
                    newContext = global.originalJquery(context, tc); //if context is a selector, use it inside the testContext
                } else {
                    newContext = context; // use the one provided
                }
            }
            return global.originalJquery(selectorOrHtmlOrFunction, newContext);
        };
    }

    function restoreJquery() {
        global.jQuery = global.$ = global.originalJquery;
        delete global.originalJquery;
    }

    var a = assert.async();

    global.testContext = {
        document : jQuery('<div><button id="btnSomething">Something</button><div id="divSomethingElse"><span>Content</span></div></div>')
    };

    setIsolatedJquery();

    var script = document.createElement('script');
    script.onload = function () {

        assert.notEqual($('#divSomethingElse').children().length, 0, "SomethingElse has children");
        $('#btnSomething').click();
        setTimeout(function () {
            assert.equal($('#divSomethingElse').children().length, 0, "clicking Something clears SomethingElse");
            restoreJquery();
            a();
        }, 1);
    };
    script.src = "sample.js";
    document.body.appendChild(script);
});


There you have it: an asynchronous test that replaces jQuery with a version bound to an isolated context, loads a script dynamically, performs a click in that isolated context and checks the results. Notice the generic way in which the value of the global object is obtained in Javascript.
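The "load the script as a string" alternative mentioned above can be sketched as follows. This is a hypothetical helper of my own naming, assuming the script text has already been obtained somehow; it uses the Function constructor to run the text in a scope where jQuery and $ are whatever double we provide:

```javascript
// Execute script text inside a closure where jQuery and $ are parameters,
// so the script under test uses the provided double without any code changes.
function runScriptWithJquery(scriptText, fakeJquery) {
    var fn = new Function('jQuery', '$', scriptText);
    fn(fakeJquery, fakeJquery);
}

// example: the "script" just records that it touched jQuery
var fake = { calls: [] };
runScriptWithJquery('jQuery.calls.push("ready");', fake);
// fake.calls is now ["ready"]
```

The trade-off is that the script no longer runs at global scope, which matters if it declares variables it expects to become global.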

Bottom-Up or Top-Bottom approach

A last point I want to make is more theoretical. After some consultation with a colleague, I've finally cleared up some confusion I had about the direction of automated tests. You see, once you have the code - or even before writing it, in TDD - you know what every small piece of code does and also the final requirements of the product. Where should you start in order to create automated tests?

One solution is to start from the bottom and check that your methods call everything they need to call in the mocked dependencies. If your method calls 'chrome.tabs.create' and you have mocked chrome, your tabs.create mock should count how many times it is called and your assertion should check that the count is 1. This has the advantage of being straightforward, but it also tests details that might be irrelevant. One might refactor the method to call some other API and then the test would fail, as it tested the actual implementation details, not a result. Of course, methods that return the same result for the same input values - sometimes called pure - are perfect for this type of testing.
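A bottom-up test of the kind described above might use a counting mock like this sketch. The shape mimics chrome.tabs.create; the helper name and everything else is illustrative:

```javascript
// Build a minimal chrome mock that counts calls to tabs.create.
function makeChromeMock() {
    var mock = {
        tabs: {
            createCallCount: 0,
            create: function (createProperties) {
                mock.tabs.createCallCount++;
            }
        }
    };
    return mock;
}

// the assertion in the test then checks the count:
var chromeMock = makeChromeMock();
chromeMock.tabs.create({ url: 'http://example.com' });
// chromeMock.tabs.createCallCount is now 1
```

In a real test the mock would be handed to the code under test in place of the global chrome object, and the assertion would verify the expected call count.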

Another solution is to start from the requirements and test that the entire codebase does what it is supposed to do. This makes more sense, but the number of possible test case combinations increases exponentially and it is difficult to spot where the problem lies when a test fails. This would be called acceptance testing.

Well, the answer is: both! It all depends on your budget, of course, as you need to take into consideration not only the writing of the tests, but their maintenance as well. Automated acceptance tests would not need to change a lot, only when requirements change, while unit tests would need to be changed whenever the implementation is altered or new code is added.


I am not an expert on unit testing, so what I have written here describes my own experiments. Please let me know if you have anything to add or to comment. My personal opinion on the matter is that testing provides a measure of confidence that minimizes the stress of introducing changes or refactoring code. It also forces people to think in terms of "how will I test this?" while writing code, which I think is great from the viewpoint of separation of concerns and code modularity. On the other hand it adds a relatively large resource drain, both in writing and (especially) in maintaining the tests. There is also a circular kind of issue where someone needs to test the tests. Psychologically, I also believe automated testing only works for certain people. Chaotic asses like myself like to experiment a lot, which makes testing a drag. I don't even know what I want to achieve and someone tries to push testing down my throat. Later on, though, tests would be welcome, if only my manager allows the time for it. So it is, as always, a matter of logistics.

More info about unit testing with QUnit on their page.