A List of Computing Predictions

RAKTHEUNDEAD

I, like many technophiles, maintain an interest in a lot of hardware and software developments. Recently, however, I've noticed a number of technological developments that I'm convinced will turn out to be dead ends. With that in mind, I'd like to present a list of my own computing predictions.

- There will be such a drastic increase in computing power over the next five years that the software industry will be unable to keep up.

With the advent of multi-core processors, personal computing no longer strives to do one thing faster than before, but to do many things at once. Symmetric multiprocessing and multi-threaded applications have existed for a long time in technical computing and supercomputing, but these developments have only infrequently been applied to personal computer operating systems. With quad-core processors now on the commercial market, and octo-core processors on the way, PC hardware finally has the chance to offer useful multiprocessing capabilities.

That is, if programmers could apply their skills to multi-threaded applications. We're seeing problems with this already, with computer games having increasingly protracted development periods and operating systems ballooning out of control with bloat, yet the multi-core processor has gone largely untouched in terms of consumer applications. Adding multithreading support to software is difficult, as the sketch below illustrates, and will likely stretch games developers' already protracted development periods even further.
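
Part of the difficulty is how easily shared state goes wrong once two threads touch it. Here's a minimal C sketch of the classic pitfall, a data race on a shared counter (POSIX threads; the counter and iteration count are purely illustrative). Run unmodified, it will usually print less than the expected total, and making it correct means guarding every such access with a mutex or an atomic operation - multiply that by every shared structure in a game engine and the schedule problem becomes clear.

/* A minimal sketch of the classic multithreading pitfall: a data race.
   Compile with: gcc -O0 -pthread race.c -o race */
#include <pthread.h>
#include <stdio.h>

#define INCREMENTS 1000000

static long counter = 0;                  /* shared state, deliberately unprotected */

static void *racy(void *arg)
{
    (void)arg;
    for (long i = 0; i < INCREMENTS; i++)
        counter++;                        /* read-modify-write: updates can be lost */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, racy, NULL);
    pthread_create(&b, NULL, racy, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* Usually prints less than 2,000,000; fixing it means wrapping each
       increment in pthread_mutex_lock()/unlock() or using an atomic. */
    printf("counter = %ld (expected %d)\n", counter, 2 * INCREMENTS);
    return 0;
}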

Another area which presents potential problems is that of increasing hardware miniaturisation. As technology gets smaller, people are logically going to try to apply it to smaller devices, including portable music players, mobile phones and handheld consoles. While I'm highly optimistic about having more power in the palm of my hand, and greatly enjoy using my current smartphone, increasing complexity in these devices has rarely been received well. We already have massive problems with the Luddites going, "Why can't a phone just be a phone?" (I strongly object to your opinions, and disagree vehemently with your objections on principle, BTW), and with people finding it difficult to navigate interfaces on portable devices. While this is improving, with clearer interfaces and bigger screens than before, including those on hybrid slider and touch-screen phones, there's a long way to go before these devices are appreciated in the same way as a PC. The problem is on the software side, not the hardware side, and that's something that's going to have to improve.

- Cloud computing will not catch on within the next ten years, and will remain a niche application.

Ah, cloud computing. I've heard so much about this, with applications like Google Docs delivering office software over the internet. It's completely overrated. You know, it reminds me of something I've read about, something that was beginning to die out around the time I was born: a concept called time-sharing.

You see, back in the 1950s and 1960s, when electronic computers really started to come into their own, they were hugely expensive devices, only within the financial reach of scientists, universities, businesses, governments and military organisations. They were crude, often accepting their input through Hollerith punch cards and front-panel switches, and later through mechanical teletypes, loud machines vaguely resembling a typewriter. The problem was that most of these control mechanisms only allowed one person to use the computer at a time, and so the idea of time-sharing was devised. With a time-sharing operating system, several terminals could connect to a single computer at once, and the operating system would divide processor time among the connected users. This persisted throughout the 1960s and 1970s on increasingly powerful mainframes and minicomputers, to the point where some of these computers could support hundreds of people at a time.

Then, during the late 1970s and 1980s, came a development which drastically changed the face of computing. The personal computer meant that people no longer had to rely on a massive centralised minicomputer or mainframe for many of their applications, and time-sharing began to die out as the PC became more powerful. The personal computer was, in effect, developed to get people away from the concept of a massive centralised computing facility.

And therein lies my objection to cloud computing. Right now, the computer I'm typing on has more power than the fastest supercomputer of the 1980s. My computer at home would have been in the TOP500 list all the way up to the mid-1990s. Why then, when we have computers that can do so much, would we willingly move ourselves metaphorically back in time to the idea of a centralised application server? It's not even as though most of these consumer-targeted programs are any faster than the ones we have on our home computers. Indeed, because many of them are written in JavaScript, and because our internet connections are generally so woefully inadequate for using a fully-featured office suite, these applications tend to be slower!

Now, I can just about understand the idea of internet archiving, although I still think it would be more logical to carry around a USB stick. But I do not understand why you'd want to move most of your workload to a slow, inadequate office suite or imaging program, and so I must conclude that cloud computing will not find success outside of niche applications, and that it won't even catch on for those within ten years. People are making too much of a technology which was effectively rendered obsolete more than twenty years ago.

- There will be no input technology within the next ten years that will displace the keyboard and mouse.

We've all seen the success of the Wii, with its technically inferior but still groundbreaking motion controls, and we've seen massive success for touch-screen phones, despite the notable inadequacies of the iPhone and many of its competitors. With all this buzz around new input methods, it would be easy to presume that we'll soon have new input devices which will displace the ones we're all using right now.

I'm not so convinced.

You see, people have been predicting virtual reality and new input methods for years, and yet devices developed several decades ago are still going strong. The mouse was first developed in the 1960s, and the keyboard can trace its lineage back to the mid-1800s, via visual display terminals and mechanical teleprinters. The fact remains that there is no mainstream technology faster for entering text than the keyboard. Specialist stenographic keyboards are quicker, but they still operate on many of the same principles as a typewriter's keyboard. The mouse, too, has advantages which are hard to ignore: it has the sort of sensitivity, accuracy and precision which motion controls and touch screens would kill for, were they personified.

I have mobile devices with both a QWERTY-style keypad and an on-screen touch-sensitive keyboard. When it comes to entering data quickly, the keypad completely destroys the touch screen in a speed test, and that's against a more precise stylus-based resistive touch screen as well. I'd absolutely loathe to type an entire review on an iPhone, something I've actually done on my Nokia E71.

There are other reasons why I feel that touch screens aren't going to displace the keyboard. Using a touch screen with your finger or thumb feels even worse than using a chiclet-style keyboard, of the sort so derided when it appeared on the IBM PCjr and ZX Spectrum. There are reasons why people still buy typewriters for hard-copy writing, and why people spend over $100 on twenty-year-old IBM Model M keyboards: the superior tactile feel of these devices, and the audible response the keys give when they're successfully depressed. That tactile feedback is almost completely eliminated when you try to engage a touch screen with your finger.

I think, once again, that people are missing the big picture, and that's why I'm predicting that I'll still be considered normal for using my keyboard on my computer in 2020.

- Social networking is a fad and its use shall have sharply declined in three years' time.

And finally, we move on to my most controversial point. I don't like social networking. Not one bit; I've even considered writing an essay entitled "Social Networking Considered Harmful". You see, I reckon it's a fad, just like all the other fads I've grown up with. When I speak to people at college, I don't find many who actually want to engage with technology on any level beyond the internet and office software. I don't find many who even want to use Photoshop, or who admit to using a computer to aid their fiction writing. Perhaps I'm talking to the wrong people, but these don't seem like people who are particularly interested in computers, and that makes me inclined to believe that they're not going to continue using computers in the same way they do now.

The problem is that internet communication is often devoid of any real meaning. The limitations of text in e-mails and electronic messages strip most of the emotion from a message, which is perfect for business e-mails and acceptable in personal ones, but not so adequate when it comes to expressing yourself on a social level. For that reason, when you look at a Bebo or MySpace page, it generally looks something like a GeoCities page, circa 1998. Clashing colours, poorly-chosen backgrounds and hideous spelling and grammar lend the impression of something that's been hacked together. There's a reason why the web pages of sites like Google, and even the W3C, maintainers of the World Wide Web standards, stick to minimalist designs. Well-designed corporate websites stick to clean designs. Social networking pages, however, do not.

And that's even before we get into the actual content of the pages. It strikes me as quite frightening that this is the impression that some people actually want to create of themselves. That these pages are checked by companies for background information is even more frightening. When your pages don't exactly give the impression that you are even fully literate, let alone a potentially intelligent and creative employee, you are well and truly fornicated up the rectal cavity.

As somebody who's never signed up to a social networking service, it's difficult for me to interpret the snippets of insider information I hear from the news, often distorted because the newscasters fail to understand the true nature of the technology. Still, I find it impossible to understand why people spend their time virtually befriending others they have no intention of ever contacting outside the internet, even though I spend my time writing reviews, articles and technological rants for people I've never met. Maybe there's some sort of leap of faith that has to be made, but I'm still convinced that social networking will be a phenomenon on life support within three years, much as happened when the dot-com bubble burst.
 
I agree with you on all points but the last.

I believe social networking sites are here to stay. It's fair to say each individual one is a fad; here in the UK, I remember when Bebo was big, then it got swapped for MySpace, and then finally Facebook.

While the individual networking sites may change over the years, I think the concept itself is here to stay. The content on them is shallow, but that's just a reflection of the majority of users - teenagers and people in their 20s.

I think it's highly stupid to actually write anything of importance on them, because, as you have put it, companies, possible employers and perhaps even the government will find out. What people fail to realise (being the first generation with access to such technology) is that they are leaving an online footprint, and trust me, this will really kick off in the next 5-10 years, when people start realising that all the time they have wasted online will come back to get them.

The stories we see these days on the news about people losing their jobs over online remarks are only the tip of the iceberg.
 
The reason why multithreading and multicore support haven't picked up at the speed they should have is that software development houses have very little motivation to provide them.

In server-level applications, where raw performance determines whether you sell or not, you see that most software that can benefit from multiple cores already supports them. Database apps, number crunching and several other applications have multicore support.
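
As a concrete (and simplified) illustration of the sort of number crunching that already scales across cores, here's a short C sketch using OpenMP; the workload is made up, but the pattern - independent chunks of work plus a reduction at the end - is what database engines and scientific code exploit.

/* A sketch of "embarrassingly parallel" number crunching with OpenMP.
   Hypothetical workload; compile with: gcc -fopenmp sum.c -o sum */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const long n = 50000000;   /* arbitrary amount of work */
    double sum = 0.0;

    /* Each core sums its own slice; OpenMP combines the partial results. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += i * 0.5;

    printf("sum = %.0f (computed with up to %d threads)\n",
           sum, omp_get_max_threads());
    return 0;
}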

The only hope we have of changing this unfortunate trend in consumer apps is to pressure and boycott software developers that do not improve their code.

Microshaft is the most notorious in that respect. XP has little to no support for multicore (the cores are seen, but most of the time they are idling). Five years later, they release Vista and you still don't see good multicore support. Five years. Looks like they got their priorities mixed up.

Microshaft seems to be trapped in an endless cycle. They know that to develop a natively multicore operating system, they would have to spend a long time and a lot of money. It would require a major rewrite of the Windows code, and in some cases the code would have to be redesigned because of scaling problems. They won't take that step because of the cost, so they take the idiotic approach of adding bells and whistles to their software, knowing that they are bloating the code and that they won't be held responsible in the eyes of the consumer if the app runs like shit.

Consumers, however, have started to catch on to Microsoft's dirty little game. They are starting to realize that most of their hardware is capable of running an operating system with great performance if only it wasn't bloated with unnecessary crap.

I recall reading an article that said several software houses were pressuring Intel to focus on clock speed instead of optimizing the execution pipeline and adding cores. I won't focus on the article itself (mainly because I have been unable to find it for your perusal) but on the fact that software houses were terrified of Intel's new road map. They knew that their bloated crapware would run poorly on multi-core chips (or at least with very little improvement) and that they would be the ones the finger would be pointed at. The article also mentioned a software developer complaining about people expecting software to magically run in parallel, sarcastically asking for some magical compiler that turns sequential code into parallel-optimized code.
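
There's also a simple back-of-the-envelope way to see why they were terrified: Amdahl's law says the speedup on n cores is limited by whatever fraction of the program stays sequential. The C sketch below just evaluates that formula; the "parallel fraction" figures are hypothetical, not measurements of any real product.

/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction
   of the program that can actually run in parallel. The values of p below
   are hypothetical, chosen only to illustrate the trend. */
#include <stdio.h>

static double amdahl(double p, int cores)
{
    return 1.0 / ((1.0 - p) + p / cores);
}

int main(void)
{
    const double fractions[] = { 0.25, 0.50, 0.90 };
    const int cores[] = { 2, 4, 8 };

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            printf("parallel fraction %.2f on %d cores: %.2fx speedup\n",
                   fractions[i], cores[j], amdahl(fractions[i], cores[j]));

    /* With only a quarter of the code parallelizable, even 8 cores manage
       roughly 1.3x, so a "magical" parallelizing compiler wouldn't rescue
       mostly-sequential bloatware anyway. */
    return 0;
}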

There is one very strong reason to focus on parallelism: if we take the increase in transistor count as a measure of progress, in less than a decade we might be looking at a dead end, development-wise. As transistor fabrication processes shrink, factors such as leakage currents and quantum tunneling become more important than ordinary electrical laws and properties. The fabrication roadmap goes from 45nm to 32nm, then to 22nm, then to 16nm. The barrier seems to be somewhere between 11nm and 6nm, depending on who you ask. At distances smaller than that, components would be only a few dozen atoms across.
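
To put rough numbers on that last point, here's a small C sketch that divides each process node mentioned above by the silicon lattice constant (about 0.543 nm, a textbook figure); the arithmetic is back-of-the-envelope only, but it shows how few atomic spacings are left to play with.

/* Rough arithmetic: how many silicon lattice constants (~0.543 nm) fit
   across the process nodes mentioned above. Back-of-the-envelope only. */
#include <stdio.h>

int main(void)
{
    const double lattice_nm = 0.543;                      /* silicon lattice constant */
    const double nodes_nm[] = { 45, 32, 22, 16, 11, 6 };  /* nodes from the roadmap   */
    const int count = sizeof nodes_nm / sizeof nodes_nm[0];

    for (int i = 0; i < count; i++)
        printf("%4.0f nm node: about %4.1f lattice constants across\n",
               nodes_nm[i], nodes_nm[i] / lattice_nm);

    /* At 6 nm that's only ~11 unit cells, a few dozen atoms, which is why
       tunneling and leakage stop being rounding errors. */
    return 0;
}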

This causes problems in every facet of processor fabrication: we still don't have processes accurate enough to let us pattern silicon dies at these densities reliably. Photolithography and chemical vapor deposition are not precise enough. There have been some advances in that respect, using x-rays, other types of radiation and metamaterial lenses that can focus at those scales, so this might not be the strongest reason. Quantum tunneling proves a different beast to tame: how can you control leakage currents and electron flow if your electrons can pass straight through the surrounding material? If you can't direct or control the electron flow, you can't use it to transmit meaningful signals. Optical computing might possibly help alleviate this with the development of metamaterials.

In short, we will see a temporary increase in computer performance until we hit the fabrication limit of the transistor. Then it's up to the software houses to get their heads out of their asses and do their part.
 
Roflcore said:
so basically you are saying that everything seems to stay the same?!

Not everything, but finding the places where technology will progress is the hard part. I'm not a complete pessimist, and I think there are areas in which technology is going to improve rapidly within a relatively short period. But those will have to wait for a later post, because I'm still trying to work out which technologies actually will work. Mobile computers (handhelds, laptops, netbooks, etc.) seem to be one of them.
 
Chancellor Kremlin said:
I believe social networking sites are here to stay.

I would have to agree. For better or for worse, they aren't going anywhere. I have a younger sister who is in high school. Her generation is growing up using these sites as one of their main forms of communication, and they will reach adulthood having learned to communicate through them. If anything, I expect these sites to become more advanced and more popular.
 