Have you ever had an “Internet Afternoon”? I mean, one of those days where a 10-minute search turns into hours of falling into follow-the-link rabbit holes, assisted by Wikipedia. If you can relate to this xkcd webcomic then you know what I mean:
Let me tell you the tale of my Internet Afternoon Time Vortex. Be advised that I will pepper this story with links to Wikipedia and other sites, as an in-joke and lame attempt to make you fall into the same trap. I am also going to use a lot of pictures for your amusement.
It went like this…
Once upon a time, I had this idea that it would be possible to use a Raspberry Pi to run a full bitcoin node. That thought led to the impulse purchase of said Raspberry Pi, which of course I made before checking out the specs of a full bitcoin node.
Hold it right there! I know what you are thinking, and I am not that dumb! Of course I can’t mine with a Raspberry Pi! I only wanted a full node which would listen to the network and record transactions, so I could try the examples in Andreas Antonopoulos’s Mastering Bitcoin book (2nd Edition, to be precise). I blame my impulse purchase on the kit’s good looks. Isn’t it nice?
As it usually happens with impulse purchases for ill-considered ideas, I had a very good time unboxing the kit, followed by the realization that “This will not work!”, and then “Meh, time to do something else”. The kit ended up in a drawer keeping company with all the other ill-fated gadgets.
What is the plural of Raspberry Pi? Is it Raspberry Pis? Does not seem right. Raspberry Pies? Yummy, but no. I can’t figure it out! Oh well… please just put up with ideas of a urine-related electronic device or something edible, your preference.
Time went by and so the Pi slept (what do gadgets dream of?), until the day I was looking to block some annoying ads which were slowing down my browsing. Thanks to the power of Google I found what seemed to be a good solution: the Pi-Hole(tm), trademark included, as the makers insist on it. The only problem was I did not have a Raspberry Pi.
Don’t get ahead of me! Let me tell my story! To answer the obvious: yes, I remembered the sleeping Pi. After searching many drawers, and discovering many sleeping gadgets, I triumphantly put the Kit together, booted the thing and had a working Raspberry Pi with Raspbian on it.
I also found DIY instructions for an excellent setup that would not only disable ads, but would also not share DNS lookups with my ISP (link at the bottom). The install was a breeze, and I had the thing running and sending those annoying ads into the void.
Now that I had the Raspberry Pi doing its thing, I did some more browsing on the official Pi site, just to see what else was there. My thinking was: if one Pi project went so well, maybe there are other cool things I could try (and feed impulse purchases).
I envisioned many Raspberries all busy doing their thing, whatever that could be. Maybe a Raspberry Certificate Authority, a Raspberry Mail Server, or some home automation. There are many Raspberry projects out there.
One of the items on the site caught my attention: the Raspberry Pi Compute Module 3. That’s a mouthful! It is also very skinny, since it is designed for industrial applications. Just look at the thing, it is almost like a memory stick. This form factor is called “gumstick” for obvious reasons. It probably comes in a foil candy wrapper.
As I admired the Compute Module 3, this thought surfaced: imagine a Beowulf cluster of these! If you are wondering how I made that leap, then you are not a reader of Slashdot.org, so I will clarify. “Imagine a Beowulf cluster of these” is a typical Slashdot meme, applied to anything and everything without any regard for whether the thing computes or not.
You should read Slashdot just for kicks, although it is not what it used to be. Even XKCD agrees with me:
Back from the Slashdot detour… Where were we? Ah… imagining rows upon rows of Compute Module 3 boards in some sort of fantastic supercomputer. I am probably not the only one with such ideas, so more Googling ensued, which confirmed I am not the only one with such preoccupations. Just look at the images in the search results.
By now I had forgotten about DNS, ad-blocks and the Compute Module 3. My only thought was “what can I do with this?”. It turns out that it cannot do much in the realm of “supercomputing”, and this is because the Raspberry has an input/output bottleneck between its memory and the CPU. More on this down the storyline, be patient.
The bottleneck is compounded by the use of network interfaces to connect the cluster, which will be outperformed by just a couple of GPUs. This is why Bitcoin mining is not done with clusters of Raspberry Pis, but with a bunch of gaming video cards.
So, why are the Pi clusters being built? Well, it seems that they are very useful for modelling a real supercomputer and testing the software before actually building one. Those things are expensive! You can read all about it here: https://www.raspberrypi.org/blog/raspberry-pi-clusters-come-of-age/. Clever.
That bottleneck thing bothered me. Back in the day when I was going to school (the Uni) there was something that did not have that bottleneck, allegedly. It was called the Transputer. From what I remembered, the idea was to have CPUs that could be connected together just like transistors, using 4 links each, into somewhat odd configurations. The interfaces were serial, not parallel, which reduced wiring and was hip at the time, as parallel ports were square. Like round glasses to square glasses.
Transputers were supposed to be the Next Big Thing that would usher in an age of massively parallel computing, yet here I am years later, barely remembering they existed. I do remember what they looked like, and small they were not.
These things failed! But why? Reading page after page about the whys and why-nots of Transputers, it seems they were way ahead of their time in multi-core and parallel programming. The market was not ready for it.
After I mourned the passing of the Transputer, for about 2 minutes, I had this itch that I needed to scratch. Just like the Transputer, what other unique and non-standard things are or were out there? There must be more, and I needed to know.
If you are wondering why I needed to know, here is why: I have always been in awe of the Babbage machine. Just think about it. Back when steam engines were hot stuff (literally and figuratively), and way before the first digital computer (claimed to be the ENIAC), Charles Babbage designed a fully working mechanical, general-purpose computer. To top that, there was even a programmer for it, as Ada Lovelace “wrote” some code for the machine.
Just imagine for a moment the following “what-if”: what if Charles Babbage had managed to complete the Analytical Engine? As I see it, we would have had something very close to a steampunk civilization. Maybe the transistor would not have been invented. Who knows! The problem was, Babbage tried to do this without an IPO. Babbage should have used Bitcoin instead, or done an ICO from the Analytical Engine whitepaper.
We missed that narrowly, in my opinion, as the machining equipment at the time did not have the precision required for producing the gears. I added a link at the bottom to Steampunk fiction exploring the what-if, written by William Gibson and Bruce Sterling. And some movies, just in case you don’t read.
To close this wonderful sidetrack in my tale, here is a video of a fully functional Difference Engine, which is the beta version of the Analytical Engine. Don’t forget to turn the volume down as the engine is very noisy. For the impatient, the action starts at 15:32.
At this point there was no stopping. I was possessed by the need to find out more. All those what-ifs! Time to clear out my screen and search like a maniac.
According to Wikipedia’s generous definition, an analog computer can use various sources of “computation”. That means not only electricity but water, air, or hamster wheels. As long as the calculation is done with continuous and not discrete variables, all is good.
As I see it, the oldest continuous “variable” is muscle power. That means the oldest analog computer was probably hand powered (or hamster powered). That brings me to the Antikythera Mechanism, which is a couple of thousand years old and the only one I can think of or find in Google (I chose the one with the most rust):
It looks like something that needs to be taken to the scrap yard. Considering how old it is, and what it did (allegedly), I would call this a “computer”, because it computed. What did it compute? It computed things very useful for people who believe in the Zodiac: tracking the movements of the Sun and Moon across constellations, and predicting eclipses.
Here is the modern version of that device, as per a reconstruction:
According to Google and Wikipedia, there is more to Analog Computing than just a gentleman adoringly looking at his recreation of an ancient hamster wheel. In terms of old or weird ways to compute I found these:
Graphical, aka Nomograms. I think I used these in school, as I did not have batteries for the digital one. It just took forever to calculate anything, but it made an excellent sword for playing Musketeers, even though it hurt to get hit with it. Nomograms were used extensively to calculate ballistics fairly fast, assuming they were not badly scratched; otherwise all bets were off.
There is also microfluidics, which is what I imagine happens when Moore’s Law drinks too much coffee. Unlike regular-size fluidics, the micro side is new and under research, as it can enable all sorts of miniaturized machines such as the “lab-on-a-chip”. I think this is what Theranos over-promised and under-delivered. It did not compute.
The odd thing about fluidics is that it is still with us (think hydraulic cranes and excavators), and is bound to come back again as we go further into biomimicry, since nature makes good use of fluidics.
By now I had completely forgotten where I started, as the land of Raspberry Pi was just a faint memory. Like Odysseus, I went from one weird encounter to another, the difference being that he knew where he was going and probably had the Antikythera device.
Just when I thought I had seen enough, Wikipedia managed to throw a curveball that makes analog computing seem downright ordinary. Here are the weirdos of the “compute” family: Reversible, Chaos and Stochastic computing (I think the last two are cousins).
I am going to try to explain those, even though I don’t seem to be able to form long-term memories from my readings. I am also partially to blame, as I fell into the trap of following the embedded links, for further whack-a-mole.
Reversible Computing: this must have been invented by Doctor Who, since he is the only character I can think of who would worry about computation working backwards in time. It must be how the TARDIS is steered, using “Janus”, the time-reversible programming language.
The definition includes many references to “adiabatic circuits”, which seem to be a real thing. Wikipedia even has a helpful definition which states: adiabatic circuits are low-power circuits which use “reversible logic” to conserve energy. Notice the quotation marks on “reversible logic”. I see your irony there, Wikipedia!
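To make “reversible logic” a bit less ironic, here is a tiny sketch of my own (not from the Wikipedia article) of the Toffoli gate, the classic universal reversible gate. Since no input state is ever merged with another, in theory no information, and therefore no heat, has to be thrown away:

```python
# The Toffoli (CCNOT) gate: flips the third bit only when the first two
# are 1. It is a bijection on 3-bit states, so running it twice returns
# the original input -- nothing is erased along the way.

def toffoli(a, b, c):
    """Reversible AND: (a, b, c) -> (a, b, c XOR (a AND b))."""
    return a, b, c ^ (a & b)

states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# Every 3-bit input maps to a unique output...
assert sorted(toffoli(*s) for s in states) == sorted(states)
# ...and the gate is its own inverse.
assert all(toffoli(*toffoli(*s)) == s for s in states)
```

Compare that with a plain AND gate, which squashes four inputs into two outputs and can never be run backwards.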
There are further explanations on “adiabatic” being of Greek origin and associated with classical thermodynamics. My advice: whenever “thermodynamics” comes up in the context of computing, it is a good time to stop reading, unless you make a habit of dwelling in rabbit holes.
Here is a good question: is the Antikythera device an Adiabatic computer? I assume the crank can be turned forwards and backwards, so it must be. On the other hand, the rusted version would require a lot of power, but that can be fixed with cold beer.
Chaos Computing: somehow I believe useful computation needs deterministic outcomes. If I run the thing and get X, then you should also get an X when you run the thing. That’s how it is supposed to work. Whenever you hear the words “non-deterministic” being uttered by a programmer, you know someone is seriously pissed.
So what is the purpose of this thing? Uh… none that I can see, but MIT does not agree with my assessment. The expert cited in the article says: “The common notion that chaotic systems are unstable and unpredictable is not accurate”. What?!
There is a description of the “ChaoGate”, which for some reason sounds familiar. It was developed by the same people cited in the MIT article. I see what’s happening here.
I was lucky, as Wikipedia spared me from further “See also” hyperlink torment. It has this “Chua’s circuit” that does not look any different from all the other circuits I had to deal with before: one resistor, two capacitors, an inductor and… what is that? The last thing is a “nonlinear negative resistance”. I don’t think I have ever heard of that. I don’t like the word “nonlinear”, as someone once called me that with the intent to offend.
More reading produces what you expect when “chaos” is brought up, in the form of something called a “Strange Attractor”. I don’t like it, as I am not attracted to strangers. It looks like the next picture, but not exactly, as I am taking some artistic license. If you like strangers, it can be reproduced with Fractint.
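If you prefer code to Fractint, Chua’s circuit can be simulated in a few lines. This is a generic sketch using the textbook dimensionless equations and parameter values, nothing specific to the ChaoGate people:

```python
# A back-of-the-envelope simulation of Chua's circuit, the simplest
# electronic circuit known to behave chaotically. Plain Euler
# integration is crude but good enough for a picture.

def chua_trajectory(steps=50_000, dt=0.001,
                    alpha=15.6, beta=28.0, m0=-8/7, m1=-5/7):
    """Integrate the dimensionless Chua equations from a nudge off 0."""
    x, y, z = 0.7, 0.0, 0.0
    points = []
    for _ in range(steps):
        # h(x) models the "nonlinear negative resistance" (Chua diode)
        h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
        x, y, z = (x + dt * alpha * (y - x - h),
                   y + dt * (x - y + z),
                   z + dt * (-beta * y))
        points.append((x, y, z))
    return points

pts = chua_trajectory()
# The trajectory stays bounded but never settles to a single point.
assert all(abs(x) < 10 for x, _, _ in pts)
assert max(x for x, _, _ in pts) - min(x for x, _, _ in pts) > 1
```

Plot x against z and the famous “double scroll” strange attractor appears, orbiting two lobes without ever repeating itself.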
Side note: Fractals are a serious rabbit hole. More like a black hole, with a beautiful event horizon from which you don’t feel like escaping at all. I spent many hours playing with fractals in Fractint, generating all sorts of amazing stuff. Thanks Benoit Mandelbrot for your fractal!
I have done all I can to spare you the fractal rabbit hole (not really). I included the links as some people like to test their sanity.
And I just realized why the “ChaoGate” was tickling my brain! It went from “ChaoGate” -> Heaven’s Gate -> cult -> religion -> mock religion -> Sacred Chao. That is how my brain works, as you may have figured out by now.
Let me just give you a glance into a rabbit hole I fell into years ago, and from which I never fully recovered. The Sacred Chao was first described in the pamphlet Principia Discordia, the foundation text for the mock religion of Discordianism. It has some real followers, so be careful.
You have two choices here: acquire and read the pamphlet (not recommended), or just read the Wikipedia page (also not recommended). I would rather eat soap than read that again. If you buy it, it also comes with a recommendation to buy “The Book of the SubGenius: The Sacred Teachings of J.R. ‘Bob’ Dobbs”. I don’t recommend doing that either.
I just read that “recommendation system” definition again. It seems I have a problem as Amazon believes I would like these other books given my purchase history. It recommends I read about Dudeism, but this dude will not abide.
There is a third choice that is more enjoyable and has surprisingly not been banned: just read “The Illuminatus! Trilogy”. It won’t be long before you are examining a dollar bill and looking at this with new interest. If you buy any of those books, which I am not telling you to, there could be consequences.
Stochastic Computing: now that we have come this far, we may as well continue.
Stochastic computing is defined as “a collection of techniques that represent continuous values by streams of random bits”. Wikipedia also goes on to state that it is not the same as a Randomized algorithm, which is not to be confused with Algorithmic randomness. I think someone at Wikipedia is having too much fun with the words “random” and “algorithm”.
Some more reading, and it seems the weaknesses of Stochastic computing are due to its random nature. Luckily there are no “see also” sections, but I have been throwing the word “stochastic” out way too many times, so I may as well look it up.
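Before looking it up, the bit-stream definition is easy to sketch. This toy example is my own illustration (the function names are made up), showing the party trick of stochastic computing: multiplying two values with nothing but an AND gate per bit pair:

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as n random bits with P(bit=1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_stream(bits):
    """Decode a bit stream back to a value: the fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(42)   # seeded, so the "random" result is repeatable
n = 100_000
a = to_stream(0.5, n, rng)
b = to_stream(0.4, n, rng)

# Bitwise AND of two independent streams multiplies the encoded values.
product = from_stream([x & y for x, y in zip(a, b)])
assert abs(product - 0.5 * 0.4) < 0.01   # ~0.20, up to random noise
```

The random-nature weakness shows up right there: precision improves only with the square root of the stream length, which is why the streams above are so long for a mere two digits of accuracy.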
When reading about Stochastic it does not take long before Artificial Intelligence comes up. It says verbatim: “In artificial intelligence, stochastic programs work by using probabilistic methods to solve problems, as in simulated annealing, stochastic neural networks, stochastic optimization, genetic algorithms, and genetic programming. A problem itself may be stochastic as well, as in planning under uncertainty”. What a minefield that is! It includes links to artificial idiots.
I managed to extricate myself from the temptation of looking into “genetic algorithms” and “stochastic neural networks” by pulling my stack back to Stochastic computing.
I went over all I had read so far, and one name came up again and again: John von Neumann. I know who that is, of course, and I also know his presence looms large in computing. His namesake architecture introduced the notion of the stored program, which still bedevils modern digital CPUs. He also had some troubles with the necks of bottles.
From what I gather, Computing was a minor hobby for John von Neumann. He was really busy with other scientific endeavours, like calculating Pi in his head and memorizing phone books.
This would be a good time to stop reading, get something to drink and stretch your legs. We are almost in the home stretch. Just play this video and do what it recommends, minus the sugar. Or the processed food. Or the atomic bomb watching. Or the weekend activity.
Oh hell! Here is an alternate video if the one above offended you somehow, which is very likely.
Now that you are back from munching on celery sticks, let’s continue, and bring this home.
In addition to the von Neumann architecture, there are references to non-von Neumann architectures. The descriptions sound like some sort of rebel alliance trying to overthrow the empire of stored programs and code-centric computing.
How would that work? Replace the stored programs and control-flow architecture with X, where X is something other than code. What would be the result? Behold, Dataflow Architecture! Apparently not very useful, unless used for signal processing.
And then, just like I did with steam computing, this thought popped in: what if this Dataflow thing had been the one to dominate computing? I did not see a reason to stop there and not expand that thought into: if von Neumann had not been around to dictate his bottlenecks, how would modern computing be?
As you can see, I like what-ifs. I also like Science Fiction, which is all about major what-ifs. Time travel is a regular trope in Science Fiction. It is natural (for me, that is) to wonder what would happen if I went back and made sure the von Neumann architecture did not see the light of day (the link says nothing good will come out of it, as shown by the Simpsons). I could also help Babbage spend his money and build a steam-powered iPhone.
Lucky for me, I did not have to think hard about it, as someone smarter explained the notion on Quora. Here is the shortened URL if you have a phobia of long URLs. The answer is focused on Dataflow Architecture, which is convenient. The author also dinged von Neumann a couple of times, which is fine by me.
TL;DR what-if number 1: replace the list-of-instructions model with a dataflow model, where instructions form a Directed Acyclic Graph, with each instruction indicating explicitly which others it depends on. If two instructions both have all their inputs available, they can be executed in parallel, if the hardware allows it.
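The what-if above can be sketched as a toy interpreter. Everything here (the instruction names, the wave-based scheduler) is made up for illustration, but it captures the idea: no program counter, just instructions firing whenever their inputs arrive:

```python
# A toy dataflow "machine": a program is a DAG of instructions, each
# naming the instructions it depends on. Anything whose inputs are all
# ready fires in the same wave -- our stand-in for parallel hardware.

def dataflow_run(program):
    """program: {name: (func, [dependency names])} -- must be a DAG."""
    values, waves = {}, []
    while len(values) < len(program):
        ready = [n for n, (_, deps) in program.items()
                 if n not in values and all(d in values for d in deps)]
        if not ready:
            raise ValueError("cycle in the graph -- not a DAG")
        for n in ready:
            func, deps = program[n]
            values[n] = func(*(values[d] for d in deps))
        waves.append(sorted(ready))
    return values, waves

# Compute (2 + 3) * (2 - 1): the add and the subtract share no edge,
# so they fire together in the second wave.
prog = {
    "two":   (lambda: 2, []),
    "three": (lambda: 3, []),
    "one":   (lambda: 1, []),
    "sum":   (lambda a, b: a + b, ["two", "three"]),
    "diff":  (lambda a, b: a - b, ["two", "one"]),
    "out":   (lambda a, b: a * b, ["sum", "diff"]),
}
values, waves = dataflow_run(prog)
assert values["out"] == 5
assert waves == [["one", "three", "two"], ["diff", "sum"], ["out"]]
```

Notice there is no instruction pointer anywhere: the shape of the graph, not a list of steps, decides what runs when.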
I like Directed Acyclic Graphs. I like all sorts of graphs, and what can be done with them. And also Graph Theory and its applications. There are many references to the computing advantages of DAGs with Big Data. This is a rabbit hole I could just dive into.
I was getting ready to grok that post and all the other proposals, with follow-ups on DAGs and Graphs, when my brain went off the rails: Directed Acyclic Graph -> DAG -> Dagger Hashimoto -> Satoshi Nakamoto -> Bitcoin -> Blockchain -> Hashgraph. Quite a derailment.
Based on the brain blip I just mentioned, it would be natural for me to wonder if Hashgraph is just a form of non-von Neumann architecture, one that computes inputs and outputs (transactions) into states in a DAG, using dataflow algorithms based on network gossip and consensus. It would be a massively parallel yet very slow blockchain-like computer. Or complete nonsense.
I am not surprised about blockchains coming up. This blog is generally and allegedly about blockchains (among other things), and they are very hard to escape since we are somewhere near the apex of the hype cycle.
I am going to spare you a lot of reading about blockchain. If I wrote all that then I would not have anything to blog about. Let’s put the train back on the DAG track and continue there.
From my readings, it seems DAGs are good for scheduling, data processing networks, and also Bayesian Networks. Ah, Bayesian Networks! I was wondering where they would show up, as they make good material to dump in ICO whitepapers.
I am just going to say this: I don’t like AI. There, I said it. I have some issues with the conflation of AI with Machine Learning with Deep Neural Networks (aka Deep Learning). The objective of creating the intelligence has been lost to the hype and the shower of money being dumped into AI (deep learning) and onto the people doing it.
I am not saying that experimenting is not part of the scientific method, just that researchers seem to have forgotten what AI was all about. Too much empirical falsification and not a lot of theory coming out of the experiments. As Karl Popper saw it, science is about developing scientific theories. Experimentation without theory is alchemy.
I am feeling righteous here! Just let me rant a little bit more and I will be done.
Why am I bothered? Well… I did work on AI way back in the day. I will date myself by mentioning that I worked on expert systems (yes, there are Bayesian networks involved). I have seen this hype before.
Let’s say you wanted an “artificial doctor”. The solution at the time was to “extract” all sorts of knowledge from prominent doctors and codify it into a knowledge base, from which an inference engine would deduce your ailment. You provide the symptoms, such as nausea, headache, and vomiting, and the AI doctor would diagnose pregnancy even when you are not equipped for it.
Just like a real doctor (well, not really). You can see the cost-reduction appeal, but it did not work. Knowledge and experience are fuzzy things, and carbon-based units have a hard time expressing those rules coherently. It was an attempt at Knowledge Engineering.
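For flavour, the whole scheme fits in a few lines of forward chaining: a knowledge base of IF-THEN rules plus an inference loop that keeps firing rules until nothing new can be deduced. The rules below are my rendering of the joke diagnosis above, not medicine, and all the names are invented:

```python
# A miniature rule-based "expert system": facts in, rules fire,
# conclusions become new facts, repeat until a fixed point is reached.

RULES = [
    ({"nausea", "headache", "vomiting"}, "possible pregnancy"),
    ({"possible pregnancy", "not equipped for it"}, "bad diagnosis"),
]

def infer(facts, rules=RULES):
    """Forward chaining: fire rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"nausea", "headache", "vomiting", "not equipped for it"})
assert "possible pregnancy" in result
assert "bad diagnosis" in result   # chained from the first conclusion
```

The hard part was never this loop; it was filling RULES with thousands of coherent entries extracted from fuzzy human experts, which is exactly where the whole enterprise bogged down.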
So what became of those? The world moved on. There were valuable lessons learned about Knowledge Engineering, and the rest went into the heap. You can still see the footprint of the “artificial doctors” in Clinical Decision Support Systems, which are still under AI research.
There are other problems where a rigid and codified knowledge base could work, and where you may see the successors of the expert systems at work. An example would be an expert system for car diagnostics.
Closing on expert systems: the work I did used Arity Prolog, which is very funky as “programming languages” go. I wasted some time looking for the AI and found a whole lot of nothing. I solved the “monkey and banana” problem, and let’s leave it at that.
And that is the end of my long-winded tale of how I spent a lazy afternoon following links in Wikipedia, while reminiscing about things future and past. I did not record all my ins and outs, just the highlights. I skipped over the “AI+Blockchain” topic, which deserves its own post.
The whole thing lasted about six hours while it rained outside. Not a bad way to spend a rainy and gray day. Glad I could share it with you.
If any of the topics above are of interest, here are some additional links for future time-devouring.