The Future of Computing (According to Me)

Started by PabloMack, May 22, 2019, 07:29:52 PM


PabloMack

Hi All,

I haven't been doing TG lately, but I sure have enjoyed the experience. Don't get me wrong: I don't plan to go away any time soon. Last August I started on a Gen6 of my development tools, beginning with the editor. After that I decided to write an AMD64 assembler, and I followed that with a library manager and then a linker. After becoming an expert on the x86-64 instruction set, I thought to myself that I could design an architecture a lot better than that. So about two weeks ago I started designing a computer architecture for what I call an Array Processor. I call it the ϕEngine.

Thinking about what I would like to have in a CPU architecture is nothing new for me. For the past 30 years I have often sat down and started to design a CPU architecture and instruction set; it is one of my favorite things to do. I have programmed many different computers in assembly language, including the 68K, PDP-11, VAX, and several smaller processors. I've looked at PowerPC (which is almost incomprehensible), SPARC, MIPS, ARM, and most recently the Itanium (now called the Itanic). I've learned a lot about what the designers of those systems did right and what I really want to avoid. I am not a fan of RISC; I consider it just a special-purpose technology for special niches.

But now I'm getting seriously interested in parallel-processing hardware in a design role. One of my most important personal experiences with this technology is 3D image rendering. TG has been a major part of that experience, and as I move forward, TG will be on my mind as an example of a target application to support on this new system.

Matt

That's interesting. Have you written about your work in more detail anywhere? Will your new architecture more closely resemble CPUs or GPUs, and how will it compare to them?
Just because milk is white doesn't mean that clouds are made of milk.

Dune

This is already incomprehensible to me, but the gist of it seems very intriguing. If I understand it right (parallel processing), you might find a way to get extremely fast rendering. Good luck!

PabloMack

Quote from: Matt on May 22, 2019, 08:44:59 PM
That's interesting. Have you written about your work in more detail anywhere? Will your new architecture more closely resemble CPUs or GPUs, and how will it compare to them?

All I've publicized so far is purely software-tool related. I don't think anything will replace the GPU any time soon. Without going into too much detail (which may change anyway), the idea is to "upgrade" the CPU's role into a CPU(few)/EPU(many) combination. The CPU will be able to do scalar processing as usual and will have hardware-assisted array processing that is aimed at SIMD in a different way than is currently done with SSE and AVX. Data registers of a specific size, and the instruction extensions to handle them, will be gone. Instead, Array Units will look at arrays (with vectors as the trivial case) as data structures in memory. Execution acceleration will come from hardware that is not visible to the programmer. With hardware version upgrades, the programmer's view will not change, but the hardware behind the acceleration could make execution faster without the trouble we've had with explicit hardware and instruction-set extensions as in the x86. The implementation will look something like x86's repeat instructions, except that this feature will apply to many instructions, not just a few.
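To give a rough flavor of what I mean, here is a C sketch (every name in it is made up for illustration; none of this is the final design):

    #include <stddef.h>

    /* Instead of fixed-width data registers (SSE's 128 bits, AVX's
       256), the operand is an array descriptor in memory, and
       invisible hardware decides how many elements to process at
       once. */
    typedef struct {
        float  *base;    /* where the array lives      */
        size_t  length;  /* how many elements it holds */
    } f32_array;

    /* Conceptually one instruction, like a repeat prefix applied to
       ADD: the same binary runs faster on wider future hardware with
       no recompile and no new instruction-set extension. Modeled
       here as a plain loop. */
    void array_add(f32_array dst, f32_array a, f32_array b)
    {
        for (size_t i = 0; i < dst.length; i++)
            dst.base[i] = a.base[i] + b.base[i];
    }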

But SIMD will not handle all parallel-processing needs, because different array elements often require different paths of program execution for their computation. This is called MIMD. So this is where the EPU (Element Processing Unit) comes in. It is similar to a GPU, but it is more tightly coupled with the CPU cores and is for general-purpose programming. Each of the EPUs has its own dedicated private memory and is more RISC-like, with a reduced instruction set and fewer registers. But it has direct access to the CPU's virtual address space via Argument Registers. Its own private registers are only used for accessing its private stack and memory. Its program, though, also comes directly from the CPU's address space on demand and does not have to be downloaded explicitly by the CPU; it is just mapped in and cached. I may give the CPU a control bit in the Control Register to make it run in EPU mode, so that it can assist in processing the arrays. Also, the first implementation may have only one core, which serves as both the CPU and the EPU. This is vaguely similar to the ARM/THUMB mode feature. When the final element of an array is to be processed, it makes sense for the CPU to jump in and assist the dedicated EPUs (if there are any) to finish. The CPU's application will not know how many EPUs are available for use at any one time. Dispatching of EPUs will be invoked by a CPU instruction in hardware. When it sees that there are no free EPUs, it can assign the CPU to work as an EPU for one of the elements of an array (or for all of them, one at a time, in a low-end design).
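Again as a hypothetical C sketch (names invented), this is the kind of per-element work where lockstep SIMD falls down and the EPUs would earn their keep:

    #include <stddef.h>
    #include <math.h>

    /* Per-element computation with a data-dependent branch. SIMD
       lanes running in lockstep handle this poorly; MIMD lets each
       element follow its own path. */
    static float shade_element(float x)
    {
        if (x < 0.0f)
            return 0.0f;           /* cheap path     */
        return sqrtf(x) * 0.5f;    /* expensive path */
    }

    /* Stand-in for the hardware dispatch instruction: each element
       would be handed to a free EPU, and when none is free (or on a
       single-core design) the CPU drops into EPU mode and works an
       element itself. Modeled here as the CPU-only fallback, i.e. a
       serial loop. */
    void dispatch_array(float *dst, const float *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = shade_element(src[i]);
    }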

The whole ϕSystem is a co-design of hardware, software, and operating system. Unlike the DEC Alpha, which was designed to be "software and programming language neutral", the ϕEngine is part of the co-design. To many, the source code may look like something that came out of Area 51 (Hangar 18) because it is fully Unicode-based. But ASCII, with which all programmers are familiar, is a trivial subset of Unicode. I foresee no problems running other programming languages under ϕOS, since this environment is really a superset of what we are using today.

Asterlil

If I understand you correctly, and I haven't programmed in ASM since the days of segment-offset addressing, your design will bring about a machine that will be natively disposed to 3D artwork?

PabloMack

Quote from: Asterlil on May 23, 2019, 01:17:35 PM
If I understand you correctly, and I haven't programmed in ASM since the days of segment-offset addressing, your design will bring about a machine that will be natively disposed to 3D artwork?

3D artwork is inherently array-oriented, so it could benefit a great deal from this technology. The architecture is aimed at many different applications, but I think 3D artwork is a major one.
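For example, most of a render frame is the same operation swept across big arrays of independent elements, such as transforming every vertex by one matrix. Plain C for illustration:

    #include <stddef.h>

    typedef struct { float x, y, z; } vec3;

    /* Transform every vertex by the same 3x3 matrix. Each element is
       independent, which is exactly the shape of work an array unit
       (or a pool of EPUs) can chew through in parallel. */
    void transform_vertices(vec3 *out, const vec3 *in, size_t n,
                            const float m[3][3])
    {
        for (size_t i = 0; i < n; i++) {
            out[i].x = m[0][0]*in[i].x + m[0][1]*in[i].y + m[0][2]*in[i].z;
            out[i].y = m[1][0]*in[i].x + m[1][1]*in[i].y + m[1][2]*in[i].z;
            out[i].z = m[2][0]*in[i].x + m[2][1]*in[i].y + m[2][2]*in[i].z;
        }
    }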

PabloMack

Quote from: PabloMack on May 23, 2019, 08:48:28 AM
To many, the source code may look like something that came out of Area 51 (Hangar 18) because it is fully Unicode-based.

I really meant this as a joke, not to alienate people. When you think about it, it was fortunate that the IBM keyboard, which gave us the limited (QWERTY) ASCII character set we use to write nearly 100% of our source code, was enough to get us where we are today. But I don't think it is enough to get very far beyond where we are now. We all know what it is like to look at machine code that is all 1's and 0's, and we know it is not easy to comprehend. Future programmers will look at our 21st-century ASCII code and feel like they are looking at 1's and 0's, because we reuse the same few characters over and over to mean different things. That leaves lots of room for ambiguity and doesn't do a lot for communicating higher abstract ideas. In large part, what C++ does with its object orientation is hide what you are doing, so that anyone looking at the code has little idea what is going on, because the meaning is buried in class libraries. I don't call this abstract so much as cryptic.
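A concrete case of the reuse I mean, in plain C:

    #include <stdio.h>

    int main(void)
    {
        int  a = 6, b = 7;
        int *p = &a;         /* '*' declares a pointer; '&' takes an address */
        int  c = a * b;      /* '*' means multiply                           */
        int  d = *p & 0xFF;  /* '*' dereferences; '&' is bitwise AND         */
        printf("%d %d\n", c, d);
        return 0;
    }

One character, '*', plays three unrelated roles, and '&' plays two more. With a full Unicode character set, each concept could have its own glyph (multiplication as ×, for instance) instead of borrowing the same one over and over.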

Asterlil

I lived pretty close to Roswell for a long time, and took you at your word.  ;D

The idea of working in Unicode sounds about as daunting as learning Russian or Arabic or Hebrew, though. Not only are you learning to speak the language and master its syntactic intricacies, but you have to learn the alphabet. There really would have to be a keyboard just for programmers ... and don't other alphabetic sets (like, say, Japanese kanji keyboards) use different subsets of the Unicode symbol set now? Perhaps that future keyboard will look more like an organ, the musical instrument, with two or more keyboards (called manuals on an organ)* offset one atop the other: one for the code and the other for the oh so much more necessary comments.

At the moment my laptop is considered a gaming machine, which makes it good for 3D artwork. I really like the idea of your tech flipping that, such that gamers are drawn to use the ϕEngine-based machines.


*Disclosure: my mother was an organist. Organs have always had programmable function keys too, called stops.

PabloMack

Quote from: Asterlil on May 24, 2019, 12:00:27 PM
Perhaps that future keyboard will look more like an organ, the musical instrument, with two or more keyboards (called manuals on an organ) offset one atop the other: one for the code and the other for the oh so much more necessary comments.

Funny about the organ idea. It would be awesome to use just for fun, but I wouldn't have any room on my desk for anything else. I had literally been using a large character set for over 30 years before I migrated to Unicode in 2009. I don't need the large physical keyboard because I have about 50 "soft keyboards" for different purposes. There are 26 keyboards (A~Z) that I can program any way I want. I keep the A Keyboard standard ASCII. The P Keyboard is my usual programming keyboard, and it is active by default. I also have about 22 special operator keypads, along with the Unicode Keypad, which I can use to look at the whole blooming thing if I want to hunt for something. It really isn't as bad as you might think, because the vast majority of characters in Unicode are not part of the ϕPPL language. You can put any Unicode text in quotes and it doesn't care. But the character set that the language actually uses is not terribly large, and I can get to all of it easily with my soft keyboards and keypads. I have programmed the Russian alphabet into the R Keyboard, as you can see. And the source code you can see in the window really doesn't look as alien as the characters in the movie.
[attachment: screenshot of the editor showing the R (Russian) soft keyboard and ϕPPL source code]

PabloMack

It seems everyone these days is after speed of execution. The ϕSystem's primary goal is not speed of execution but improving the architecture that programmers have to deal with: raising the programmer's mind to a higher level so that the job becomes a lot easier. When building software is a nightmare, development takes a lot longer, it requires a lot more labor, you get a lot more bugs, and the system ends up being slower because of the terribly complex software the hardware has to run.

End users of software may not seem to know the difference, but when new software takes long cycles to become available, is buggy and bloated, crashes easily, and makes them worry about things like "Do I have the right DLL version?", then I think they do care, because it affects them directly. Even some of the nastiness hidden below bubbles up, and end users still have to deal with it anyway; DLLs are a good example. But end users don't necessarily understand that their lives are made more complicated by all the dirt that has been swept under the rug. Yes, I'm talking about the Wintel/Unix-tel systems we've all been using. Current systems are nasty and patched together with duct tape and baling wire. It's not pretty. Linux and Windows experts might like the mess because it makes them feel needed, and their roles are more exclusive because they know end users are helpless without them. But I think it can change for the better for those who are willing to go beyond this mess.

Asterlil

I was only a journeyman programmer, not brilliant, but I loved doing it. When PCs came out, it was as if we were all Model T owners, expected to know about and fix what was under the hood. And crank-start Windows! You have lucidly described what our machines have become.

masonspappy

Quote from: PabloMack on May 25, 2019, 11:33:51 AM
Current systems are nasty and patched together with duct tape and baling wire.
Wait... at least you can depend on duct tape and baling wire...
;D

PabloMack

Quote from: masonspappy on May 25, 2019, 09:15:11 PM
Wait... at least you can depend on duct tape and baling wire... ;D

I'm not one to advocate that anyone give up their PC tomorrow. I'm certainly going to need mine to build what comes next. Technological progress doesn't happen by throwing out everything and starting over; the Khmer Rouge in Cambodia proved that doesn't work very well. And even after I have the next platform, I'll still have a lot of applications on the old one that I'll want to use.

Trying to move duct tape after it has sat there sweltering for a while is a real mess. The sticky residue needs some serious solvent to get off, and if the solvent destroys what the tape was stuck to, well? And baling wire acquires some interesting kinks when you try to reuse it; it never goes back on as straight after you remove it and reapply it. Best to leave it where it is until something better comes along.

;D

N-drju

All of what you say, PabloMack, seems extremely complicated, and I'm afraid I don't understand most of it. You seem to be after a computing mechanism created from scratch, rather than Windows or Linux with all the (yes, agreed) mess these systems have buried somewhere between the lines of code, slowing many systems down.

It would sure be great to have computing done fast, cheap, and bug-free. But if it's some new system that does the computing... what will happen to our favorite programs? And I mean apart from TG.

This was actually the same story with Apple computers. Some users value them a lot, but you could never run your favorite Windows game on them "just like that" unless you actually went to the trouble of installing Boot Camp.

Progress, fine. But I still want to use other applications without having to turn my system upside down or create special HDD partitions... Don't you think that trying to run a program on a system it was not designed for will inevitably cause problems and make errors pop up anyway? And that on something that was supposed to clear away the "dust" of contemporary systems.

Also, and in good faith too, I am at a loss trying to understand why you would publish such an idea, in its nascent stages, out here on the Internet...
"This year - a factory of semiconductors. Next year - a factory of whole conductors!"

PabloMack

Sorry, N-drju. This is the wrong place to get into these philosophical (and especially technical) issues.