Top of the S-Curve

This short essay focuses entirely on the end of Moore’s Law and on Ray Kurzweil and his S-curves.

As of my writing this in 2019, it’s become very clear that computer circuits are bottoming out, or topping out, depending on how you want to phrase it. What I mean is that the once famed (and still often parroted) Moore’s Law is at an end (I think it technically ended a couple of years ago by the strictest definition, but that doesn’t really matter for this essay’s purposes).

Ray Kurzweil is quite famous for pointing out the exponential growth that things like Moore’s Law embody, and he often uses this to trace what he sees as the path to the singularity. I won’t be focusing on the singularity here; I only bring up Kurzweil to introduce one of the first terms I learned from reading his books: the S-curve.

Essentially, if you were to plot something like Moore’s Law on a line graph (x: time, y: transistor count), you would be presented with most of an S, though skewed a bit: the bottom shows the slower early growth, the middle is where growth seems almost out of control, and the projected top of the S is where growth slows to a crawl and essentially plateaus.
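
To make the shape concrete, here is a minimal sketch in Python (with made-up parameter values, purely for illustration, not real transistor counts or dates) of a logistic S-curve plotted against the pure exponential it resembles at the bottom:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative numbers only -- not real transistor counts or dates.
years = np.linspace(1970, 2030, 300)
ceiling = 1e10   # assumed plateau in "transistors per chip"
midpoint = 2005  # assumed year of fastest growth
rate = 0.25      # assumed growth rate per year

# Logistic (S-curve): exponential at first, then it flattens out.
s_curve = ceiling / (1 + np.exp(-rate * (years - midpoint)))

# A pure exponential with the same starting point and rate:
# indistinguishable at the bottom of the S, wildly different at the top.
exponential = s_curve[0] * np.exp(rate * (years - years[0]))

plt.plot(years, s_curve, label="logistic (S-curve)")
plt.plot(years, exponential, label="pure exponential")
plt.yscale("log")
plt.xlabel("year")
plt.ylabel("transistors per chip (illustrative)")
plt.legend()
plt.show()
```

On a log scale the exponential is a straight line forever, while the S-curve peels away from it and levels off, which is the whole point of the plateau argument.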

I don’t really think transistors follow a perfect S-curve, but it makes for a powerful visual. And after reading Michio Kaku’s recent book, where he directly references Moore’s Law as ending soon, I thought I would write my thoughts on what this actually means for computers and computation.

In his book, Kaku offhandedly comments that the end of transistor development would turn Silicon Valley into a ‘rust belt’. This comment, in truth, is what spurred me to write this as a sort of response, because I found it absurd.

So assuming that transistors, in their current form, are reaching their peak, what does it mean for the tech industry as a whole, for our development toward the singularity, and for everyday consumers?

I would argue that a plateau in physical technological development may prove to be a sort of godsend. There have been so many innovations in both CPUs and GPUs that it is hard, as just a consumer, to keep up with them. Keeping up with these innovations has also been a real issue for software engineers tasked with utilizing new features such as multi-threading, higher and higher definition displays, and a myriad of other hardware advances. As a result, software has truly lagged behind hardware for some time. I personally have held off on buying new hardware that could enhance things like my gaming experience because of issues with the actual games that could utilize it!

Obviously the early bird gets the worm, and the software that adapts to new hardware first can do the best, make the most cash, and push the hardware further still. But how much does that actually matter? I personally believe the hardware we have today could yield magnificent things for years to come, especially since a more consistent hardware base could finally allow developers to ‘catch up’.

I should mention this is a sensitive issue and I personally know people who will disagree, but I also know that for many projects, especially open source ones, keeping up with each new round of hardware innovation is an uphill battle. Considering how much needs to be added or updated for even a base operating system, chasing new hardware is a huge investment, at least if you want to do more than simply keep older software working, regardless of optimization.

However, I think there are huge gains to be had in software efficiency, and mobile development has kind of shown this! Because mobile CPUs are intentionally low power, anything built for them really needs to be optimized, which has pushed developers, especially on Google’s Android OS, to be more efficient, use less power, and give more overhead to the user’s apps rather than to running the base system. This trend is likely to continue: my current low-end smartphone has a relatively small battery, yet it still lasts all day with a good amount of use, because it kills anything sucking power that I’m not explicitly using at the moment (this has its upsides and downsides, obviously, but it is a far cry from my previous phone, which would run all the apps I had, pretty much all the time).

There are a lot of clever tricks these smartphones use to stay snappy and power efficient, and it has become the sign of a garbage Android app if it runs in the background when it really doesn’t need to, or asks for rights to hardware it doesn’t obviously need (a game needing access to your contacts, the ability to control your camera, and permission to send SMS messages, for example).

For computer operating systems, Microsoft has made major leaps with Windows (love it or hate it, as of my writing, Windows 10 runs better on my current hardware than Windows 7 ever did, boots faster than previous versions, and seemingly uses fewer resources overall!). Now imagine if companies like Microsoft were certain of what next year’s hardware market would look like, and didn’t have to worry about whether a new hardware feature (such as multithreading, or who knows what) would be adopted or not, and could instead focus a bit more on optimizing the software for a narrower range of hardware.

This idea isn’t new. If you look at the bottom end of the S-curve, when computers were first being developed, the software written then was arguably as efficient as it could have been, far more so than software probably could ever be today. Given its relative simplicity, it was written to run very well on one specific piece of hardware, or on a couple of similar machines.

Obviously today there is far too much complexity and competition for hyper-efficiently coded software (tailored entirely to the hardware) to exist. But there is a lesson here, which is that there is plenty of room for growth in efficiency. And indeed this is what we’ve seen as Moore’s Law has slowed down: software is already being made better and better as the architecture it runs on sticks around longer and longer, and as more standards are implemented.

But I still argue that a plateau in hardware will spur innovation in software, and may eventually reshape much of the market! It’s hard to think of a good analogue for software plus hardware development, but let’s go with another consumer product: the car. Cars have barely innovated in any meaningful way in a couple of decades (note I’m talking about combustion engines, not EVs or self-driving cars; both of those are still extreme minorities in the car market today). Fuel injection, airbags, even computer control circuits, are all not new. The aesthetic of cars has changed over the last couple of decades, and there have been modest gains in MPG (or km/L, I guess, for the metric users out there). However cars, even 10-year-old cars, do the same thing today as before: they drive, highway or city street, it doesn’t matter. They are all optimized to meet consumer needs, and choosing a brand is more about preference and style, and less about which car is more car than another. There are obviously advanced features and not all cars are created equal, but once you have a price range, the kinds of cars you can buy in that range are largely the same as ones made 5 or 10 years ago (I suppose there are some minor things like a computer display or more Bluetooth support, but largely nothing you couldn’t do with a tablet and some duct tape).

And there’s nothing wrong with this, aside from environmental concerns about internal combustion engines and pollution. People like buying cars every decade or so, depending on socio-economic position and their need or desire for a new vehicle. But what consumers shopping for cars won’t see are drastically different setups they need to independently verify will work for the roads they want to drive on. Instead, a consumer can confidently go into a dealership in their country and know the car they are buying will drive on all the roads, have the safety features we have all come to expect as standard, and operate in the same fashion as their last car, i.e. gas pedal, steering wheel, etc.

To an extent you can argue that the same is in fact true of computers, but there are some big differences. Consider trying to play a game made in 2019 on a machine that was made in 2010! It may be possible, but the ‘road’, to use our analogy, has changed so much it doesn’t really seem reasonable. But in a world where we’ve reached the top of the hardware development path, in its current medium, an older computer could likely run a new game, even a more demanding one, with little more than a software update!

And this is where I really want to drive my point home: hardware stagnation, in terms of Moore’s Law or any of the other hardware-based laws and trends, does not in any way mean that software will also die, and software, for the foreseeable future, is where the big bucks will continue to be!

Think about our analogy again: have you ever bought a custom speedometer? Maybe if you are a car modder or you race cars, the answer is yes. But for almost everyone else, there’s no need; a speedometer does one thing well, and it’s barely something you ever think about when you are using your car. So why is software for our computers not quite the same yet? How many versions of something like a web browser do we really need? There are a lot of versions of a lot of software out there, and they aren’t always compatible (I personally get very annoyed trying to use LibreOffice and Microsoft Office together!). But really, why? Do I really need to keep getting updates for my word processor? And yet new versions just keep coming out. And they likely will continue to for years. But if the hardware isn’t as much of a concern, what will updates look like? And what will newer applications look like?

I suspect we could reach a place where we don’t really need more updates for some basic software, aside from security patches and the like, as there just isn’t much point: the software tops out, in a sense, running great on your hardware and using very little in the way of resources. And without the need to chase compatibility with newer and newer hardware, this might be a great place to end up, because then hardware can start to push limits in new ways.

Imagine having the newest version of, say, Android, as efficient as we humans seem to be able to make it for a hardware set that has been around for about 10 years, and now imagine it also has several ‘versions’ depending on the device, each one containing only what is needed to run on the very specific hardware it will use. Think about smart glasses, which today just use modded versions of Android (the Android-based ones, anyway). In a future where the hardware is a matter of preference rather than an ever-changing set of possible hardware combinations, the software could be truly optimized and contain only what you need to run your AR setup, greatly increasing battery life and speed. Then for your tablet or smartphone, it could be the same idea, running only what you need for that hardware setup, which would further open up more robust applications, as, again, the extra freed-up overhead can go to user applications, which themselves can be further optimized for the hardware environment!
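
As a toy sketch of what I mean (the device and module names here are entirely hypothetical, and this is nothing like how Android builds actually work), a per-device ‘version’ could amount to little more than a profile listing which subsystems get included at all:

```python
# Hypothetical per-device profiles: each "version" of the OS ships only
# the subsystems that the specific hardware actually uses.
# Device and module names are made up for illustration.
PROFILES = {
    "smart_glasses": ["microdisplay", "camera", "bluetooth", "voice_input"],
    "smartphone": ["oled_display", "camera", "bluetooth", "wifi",
                   "cellular", "gps", "touch_input"],
    "tablet": ["lcd_display", "bluetooth", "wifi", "touch_input"],
}

def build_image(device: str) -> list[str]:
    """Return the subsystems to include in the image for this device."""
    if device not in PROFILES:
        raise ValueError(f"no profile for {device!r}")
    # Anything not listed is simply never built or loaded, leaving the
    # freed-up memory and battery for user applications.
    return sorted(PROFILES[device])

print(build_image("smart_glasses"))
```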

This could also have another side effect: disposable apps, user-created apps built basically from prebuilt components or very high-level coding languages to serve the user’s needs. Highly efficient high-level languages (here I mean things like today’s Scratch but made as efficient as possible, or other visually based languages that are meant to be easily understood and used, even by children) could turn everyone into a coder of sorts, and together with more predictable use cases and more standardization in software development, could mean things like smart homes being taken more seriously (they are somewhat serious today, but are still quite a novelty), as configuring even complex setups becomes a few simple if-thens.
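
To give a feel for how simple that kind of user-level ‘programming’ could be, here is a rough sketch with made-up device names, sensor readings, and actions (not tied to any real smart-home platform):

```python
# Hypothetical smart-home rules: each one is just "if this, then that".
# Device names, readings, and actions are invented for illustration.

def evaluate_rules(state: dict) -> list[str]:
    """Given the current sensor readings, return the actions to take."""
    actions = []
    if state["motion_in_hallway"] and state["hour"] >= 22:
        actions.append("turn on hallway light at 20% brightness")
    if state["outdoor_temp_c"] > 28 and state["someone_home"]:
        actions.append("set thermostat to 24C")
    if not state["someone_home"] and state["front_door_unlocked"]:
        actions.append("lock the front door")
    return actions

# Example: late evening, someone walking down the hall.
print(evaluate_rules({
    "motion_in_hallway": True,
    "hour": 23,
    "outdoor_temp_c": 21,
    "someone_home": True,
    "front_door_unlocked": False,
}))
```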

Lastly, I would also argue that this hardware plateau could aid things like artificial intelligence, as the substrate for an AI becomes standardized and the less interesting aspects of developing an AI, such as operating system overhead, hardware compatibility, and hardware bottlenecks, are addressed once and don’t need to be readdressed, because the hardware hasn’t leaped ahead again since they were solved. This would allow developers and researchers to focus more on the AI itself, identifying inefficiencies in running it and getting the actual software behind the AI to run better and better on a static hardware environment (which will still be top of the line, of course, with end-of-the-S-curve processors no less!).

There are quite a few objections that could be made to what I am saying, and some I don’t fully disagree with. Specifically, I don’t think hardware innovation will end any time soon, but perhaps we will still find ourselves with a much more static environment, at least for a time. And this static environment could force hardware manufacturers to start meeting the consumer rather than the other way around (I have hesitated to upgrade my current PC because I would need a whole new motherboard, as the current CPUs on the market cannot be installed on my existing motherboard!). Others will argue that the software out there today is already really good and wouldn’t be easily improved in terms of efficiency, and I agree with this: it wouldn’t be easy, but if you are innovating you are by definition not taking the easy road.

So to conclude, I think the end of Moore’s Law is not the end of Silicon Valley or of technological innovation, but may lead to new innovations and show us just how much our current hardware truly can do!