Planetside Software Forums

General => Open Discussion => Topic started by: PabloMack on May 22, 2019, 07:29:52 PM

Title: The Future of Computing (According to Me)
Post by: PabloMack on May 22, 2019, 07:29:52 PM
Hi All,

I haven't been doing TG lately but I sure have enjoyed the experience. Don't get me wrong. I don't plan to go away any time soon. Last August I started on a Gen6 of my development tools, starting with the editor. After that I decided to write an AMD64 assembler. Then I followed that with a library manager and then a linker. After becoming an expert on the x86-64 instruction set I thought to myself that I could design an architecture a lot better than that. So about two weeks ago I started to design a computer architecture for what I call an Array Processor. I call it ϕEngine. Thinking about what I would like to have in a CPU architecture is not a new thing for me. For the past 30 years I have often sat down and started to design a CPU architecture and instruction set. It is one of my favorite things. I have programmed many different computers in assembly language including the 68K, PDP11, VAX, and several smaller processors. I've looked at PowerPC (which is almost incomprehensible), SPARC, MIPS, ARM and most recently the Itanium (now called the Itanic). I've learned a lot about what the designers of those systems did right and what I really want to avoid. I am not a fan of RISC and I consider it just a special-purpose technology for special niches.

But now I'm getting seriously interested in parallel processing hardware in a design role.  One of my most important personal experiences for use of this technology is 3D image rendering. TG has been a major part of that experience and as I move forward, TG will be on my mind as an example of a target application to support on this new system.
Title: Re: The Future of Computing (According to Me)
Post by: Matt on May 22, 2019, 08:44:59 PM
That's interesting. Have you written about your work in more detail anywhere? Will your new architecture more closely resemble CPUs or GPUs, and how will it compare to them?
Title: Re: The Future of Computing (According to Me)
Post by: Dune on May 23, 2019, 01:07:36 AM
This is already incomprehensible to me, but the gist of it seems very intriguing. You might find a way for extremely fast rendering, if I get it right (parallel processing). Good luck!
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on May 23, 2019, 08:48:28 AM
Quote from: Matt on May 22, 2019, 08:44:59 PM
That's interesting. Have you written about your work in more detail anywhere? Will your new architecture more closely resemble CPUs or GPUs, and how will it compare to them?

All I've publicized so far is purely software-tool related. I don't think anything will replace the GPU any time soon. Without going too much into detail (which may change anyway), the idea is to "upgrade" the CPU's role into a CPU(few)/EPU(many) combination. The CPU will be able to do scalar work as usual and will have hardware-assisted array processing that is aimed at SIMD in a different way than is currently done with SSE and AVX. Data Registers of a specific size, and the instruction extensions to handle them, will be gone. Instead, Array Units will look at arrays (and vectors as the trivial case) as data structures in memory. Execution acceleration will come from hardware that is not visible to the programmer. With hardware version upgrades, the programmer's view will not change, but the hardware behind the acceleration could make execution faster without the trouble we've had with explicit hardware and instruction set extensions as in the x86. The implementation will look something like x86's repeat instructions except that this feature will apply to many instructions, not just a few.
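
For illustration, here is a rough C sketch of the contrast I have in mind. This is not ϕEngine code (there is no public syntax for it yet); it only shows how today's explicit SIMD ties source code to a specific register width, while the array-unit idea keeps the programmer at the level of the whole-array description and leaves the width to the hardware, much like a generalized repeat prefix.

#include <immintrin.h>
#include <stddef.h>

/* Today: an explicit AVX path is pinned to the 256-bit register width.
   If wider hardware ships, this code has to be rewritten against new
   intrinsics (AVX-512 and so on). Compile with -mavx. */
void add_arrays_avx(const float *a, const float *b, float *out, size_t n)
{
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; i++)          /* scalar tail */
        out[i] = a[i] + b[i];
}

/* The array-unit idea, as I see it: the programmer only ever writes the
   whole-array operation; whatever acceleration exists, and however wide
   it is, stays invisible behind the hardware. */
void add_arrays_plain(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}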

But SIMD will not handle all parallel processing needs because oftentimes different array elements require different paths in program execution for their computation. This is called MIMD. So this is where the EPU (Element Processing Unit) comes in. It is similar to a GPU but it is more tightly coupled with the CPU cores and is for general-purpose programming. Each of the EPUs has its own dedicated private memory and is more RISC-like, with a reduced instruction set and fewer registers. But it has direct access to the CPU's virtual address space via Argument Registers. Its own private registers are only used for accessing its private stack and memory. Its program, though, also comes directly from the CPU's address space on demand and does not have to be downloaded explicitly by the CPU. It is just mapped in and cached. I may make the CPU have a control bit in the Control Register to make it run in EPU mode. This is so that it can assist in processing the arrays. Also, the first implementation may have only one core which serves as both the CPU and EPU. This is vaguely similar to the ARM/THUMB mode feature. When the final element of an array is to be processed, it makes sense for the CPU to jump in and assist the dedicated EPUs (if there are any) for completion. The CPU's application will not know how many EPUs are available for use at any one time. Dispatching of EPUs will be invoked by a CPU instruction in hardware. When it sees that there are no free EPUs, it could assign the CPU to work as an EPU for one of the elements of an array (or all of them, one at a time, in a low-end design).
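
In software terms, the behavior I am describing looks something like the C stand-in below. The names are made up for illustration only; in the real design the dispatch loop would be a single hardware instruction, each element would be handed to a free EPU, and the CPU itself would pick up elements when no EPU is available.

#include <stddef.h>

/* A per-element kernel. Each element can take a different control path,
   which is exactly why plain SIMD is not enough (this is MIMD). */
static double element_kernel(double x)
{
    if (x < 0.0)
        return -x * x;      /* one path */
    else
        return x * 0.5;     /* another path */
}

/* Serial stand-in for the proposed hardware dispatch: in the ϕEngine
   idea, this whole loop collapses into one dispatch instruction and
   element_kernel runs on the EPUs (or on the CPU when none are free). */
void process_elements(const double *in, double *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = element_kernel(in[i]);
}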

The whole ϕSystem is a co-design of hardware, software and operating system. Unlike the DEC Alpha, which was designed to be "software and programming language neutral", the ϕEngine is part of the co-design. To many, source code may look like something that came out of Area 51 (Hangar 18) because it is fully Unicode-based. But ASCII, with which all programmers are familiar, is a trivial subset of Unicode. I foresee no problems using other programming languages running under ϕOS since this environment is really a superset of what we are using today.
Title: Re: The Future of Computing (According to Me)
Post by: Asterlil on May 23, 2019, 01:17:35 PM
If I understand you correctly, and I haven't programmed in ASM since the days of segment-offset addressing, your design will bring about a machine that will be natively disposed to 3D artwork?
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on May 23, 2019, 03:09:08 PM
Quote from: Asterlil on May 23, 2019, 01:17:35 PMIf I understand you correctly, and I haven't programmed in ASM since the days of segment-offset addressing, your design will bring about a machine that will be natively disposed to 3D artwork?

3D artwork is inherently array-oriented, so it could benefit from this technology a lot. This architecture is aimed at many different applications, but I think 3D artwork is a major one.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on May 24, 2019, 09:28:59 AM
Quote from: PabloMack on May 23, 2019, 08:48:28 AMTo many, source code may look like something that came out of Area 51 (Hangar 18) because it is fully Unicode-based.

I really meant this as a joke, not to alienate people. When you think about it, it was fortunate that the IBM keyboard that led to our limited (QWERTY) ASCII character set, with which we write nearly 100% of our source code, was enough to get where we are today. But I don't think it is enough to get very far beyond where we are now. We should all know what it is like to look at machine code that is all 1's and 0's and know that it is not easy to comprehend. Future programmers will look at our 21st-century ASCII code and feel like they are looking at 1's and 0's, because we use and reuse the same characters in different combinations to mean different things. It leaves lots of room for ambiguity and doesn't do a lot for communicating higher abstract ideas. In large part, what C++ does in its object-orientation is to hide what you are doing, so that anyone looking at the code has little idea what is going on because the meaning is hidden in class libraries. I don't call this abstract so much as cryptic.
Title: Re: The Future of Computing (According to Me)
Post by: Asterlil on May 24, 2019, 12:00:27 PM
I lived pretty close to Roswell for a long time, and took you at your word.  ;D

The idea of working in Unicode sounds about as daunting as learning Russian or Arabic or Hebrew, though. Not only are you learning to speak the language and master its syntactic intricacies, but you have to learn the alphabet. There really would have to be a keyboard just for programmers ... and don't other alphabetic sets (like, say, Japanese kanji keyboards) use different subsets of the Unicode symbol set now? Perhaps that future keyboard will look more like an organ, the musical instrument, with two or more keyboards (called manuals on an organ)* offset one atop the other: one for the code and the other for the oh so much more necessary comments.

At the moment my laptop is considered a gaming machine, which makes it good for 3D artwork. I really like the idea of your tech flipping that, such that gamers are drawn to use the ϕEngine-based machines.


*Disclosure: my mother was an organist. Organs have always had programmable function keys too, called stops.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on May 24, 2019, 05:21:54 PM
Quote from: Asterlil on May 24, 2019, 12:00:27 PM
I lived pretty close to Roswell for a long time, and took you at your word.  ;D

The idea of working in Unicode sounds about as daunting as learning Russian or Arabic or Hebrew, though. Not only are you learning to speak the language and master its syntactic intricacies, but you have to learn the alphabet. There really would have to be a keyboard just for programmers ... and don't other alphabetic sets (like, say, Japanese kanji keyboards) use different subsets of the Unicode symbol set now? Perhaps that future keyboard will look more like an organ, the musical instrument, with two or more keyboards (called manuals on an organ)* offset one atop the other: one for the code and the other for the oh so much more necessary comments.

At the moment my laptop is considered a gaming machine, which makes it good for 3D artwork. I really like the idea of your tech flipping that, such that gamers are drawn to use the ϕEngine-based machines.

*Disclosure: my mother was an organist. Organs have always had programmable function keys too, called stops.

Funny about the organ idea. It would be awesome to use just for fun but I wouldn't have any room on my desk for anything else. I had literally been using a large character set for over 30 years before I migrated to Unicode in 2009. I don't need the large physical keyboard because I have about 50 "soft keyboards" for different purposes. There are 26 keyboards (A~Z) that I can program any way I want. I keep the A Keyboard standard ASCII. The P Keyboard is my usual programming keyboard and it is active by default. I also have about 22 special operator keypads along with the Unicode Keypad, which I can use to look at the whole blooming thing if I want to hunt for something. It really isn't as bad as you might think because the vast majority of characters in Unicode are not part of the ϕPPL language. You can put any Unicode text in quotes and it doesn't care. But the character set that the language actually uses is not terribly large. I can get to all of them easily with my soft keyboards and keypads. I have programmed the Russian alphabet into the R Keyboard as you can see. And the source code you can see in the window really doesn't look as alien as the characters in the movie.
[attach=1]
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on May 25, 2019, 11:33:51 AM
It seems everyone these days is after speed of execution. The ϕSystem's primary goal is not speed of execution but improving the architecture that programmers have to deal with. It is to raise the programmer's mind to a higher level so that their job is a lot easier. When building software is a nightmare, development takes a lot longer, it requires a lot more labor, you get a lot more bugs and the system ends up being slower because of the terribly complex software that the hardware has to run. End users of software may not seem to know the difference, but when it takes long cycles for new software to become available, it is buggy, it is bloated, it crashes easily and they have to worry about such things as "Do I have the right DLL version?" and nonsense like that, then I think they care because it affects them directly. Even some of the nastiness hidden below bubbles up and end users still have to deal with it anyway. DLLs are a good example. But end users don't necessarily understand that their lives are made more complicated because of all of the dirt that has been swept under the rug. Yes, I'm talking about the Wintel/Unix-tel systems we've all been using. Current systems are nasty and patched together with duct tape and baling wire. It's not pretty. Linux and Windows experts might like the mess because it makes them feel needed and their roles are more exclusive because they know that end users are helpless without them. But I think it can change for the better for those who are willing to go beyond this mess.
Title: Re: The Future of Computing (According to Me)
Post by: Asterlil on May 25, 2019, 04:12:28 PM
I was only a journeyman programmer, not brilliant, but I loved doing it. When PCs came out, it was like we were all Model T owners, expected to know about and fix what was under the hood. And crank-start Windows! You have lucidly described what our machines have become.
Title: Re: The Future of Computing (According to Me)
Post by: masonspappy on May 25, 2019, 09:15:11 PM
Quote from: PabloMack on May 25, 2019, 11:33:51 AM
Current systems are nasty and patched together with duct tape and baling wire.
Wait...at least you can depend on duct tape and baling wire.....
;D
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on May 27, 2019, 11:17:10 AM
Quote from: masonspappy on May 25, 2019, 09:15:11 PMWait...at least you can depend on duct tape and baling wire..... ;D

I'm not one to advocate anyone giving up their PCs tomorrow. I'm certainly going to need mine to build what comes next. Technological progress doesn't happen by throwing out everything and starting over. The Khmer Rouge in Cambodia proved that doesn't work very well. And even after I have the next platform, I'll still have a lot of applications on the old one that I'll want to use.

Trying to move duct tape after it has sat there sweltering for a while becomes a real mess. The sticky residue needs some serious solvent to get it off and if the solvent destroys what it is stuck to, well?  And baling wire acquires some interesting kinks when you try to reuse it. It never goes back on as straight after you remove it and try to reapply it. Best leave it where it is until something better comes along.

;D
Title: Re: The Future of Computing (According to Me)
Post by: N-drju on May 28, 2019, 06:06:37 AM
All of what you say, PabloMack, seems extremely complicated and I'm afraid I don't understand most of it. You seem to be after a computing mechanism that is created from scratch rather than using Windows or Linux and all the (yes, agreed) mess that these systems have, buried somewhere between the lines of code, slowing many systems down.

It would sure be great to have computing done fast, cheap and bug-free. But if it's some new system that does computing... what will happen to our favorite programs? And no, I mean apart from TG.

This was actually the same story with Apple computers. Some users value them a lot. But you could never run your fav Windows game on it "just like that." Unless you actually go to the trouble of installing Boot Camp.

Progress - fine. But I still want to use other applications without having to turn my system upside down or create special HDD sectors... Don't you think that trying to run a program on a system that it was not designed for will, inevitably, cause problems and errors to pop up anyway? And on something that was supposed to remove the "dust" of the contemporary systems.

Also, and in good faith too, I am at a loss trying to understand why you would publish such an idea, in its nascent stages, out here on the Internet...
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on May 28, 2019, 10:24:14 AM
Sorry N-drju. This is the wrong place to get into these philosophical (and especially technical) issues.
Title: Re: The Future of Computing (According to Me)
Post by: WAS on May 31, 2019, 03:44:39 PM
You seem to have similar interests as me, though it seems you're decades more experienced in the fields. I just research and occasionally dabble with CPUs through emulation and OSes.

Particularly I love your interest in TG as a target application. Definitely interested in hearing about updates here.

I've recently had my interest piqued by Windows 10 on ARM64. I just started figuring out what's wrong with some drivers and trying to diagnose them so I can write/wrap my own drivers. My goal is to get Windows 10 running on my older LG with its ARM64 Cortex-A53 in preparation for use on the Cortex-A76 (my LG G8 ThinQ). I am hoping to get a native installation going, solve driver issues for what is needed... and then try to install Terragen so I can see all its errors and what it needs (if it's not compatible; I'm not sure what all it's based on, but I know some Win32 apps work fine on ARM64).

I want to take TG with me anywhere, and honestly, the Cortex-A76 with its 8 cores, 8 threads, at 1,800 MHz, 2,420 MHz, 2,840 MHz is pretty adequate for TG and could get decent performance on mobile machines. And with Microsoft targeting ARM64 (and ditching exclusive phones) for new small-end devices, it's something I'd like to see.

I know you're not hot on RISC, and neither am I in the end, but I'd love to see more conventional architecture in mobile devices, and even computers, so your post here really piques my interest.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 03, 2019, 03:06:34 PM
Quote from: WASasquatch on May 31, 2019, 03:44:39 PMYou seem to have similar interests as me,

Thanks for the post. Now I don't feel so lonely. I have written a couple of recent
articles on LinkedIn that you might be interested in. I don't think I need to replicate
that information here.

https://www.linkedin.com/pulse/hypocrisy-hollow-promises-risc-paul-mckneely/
https://www.linkedin.com/pulse/lessons-learned-cpu-design-paul-mckneely/

I was just looking at the ARM64 manual and I'd hate to have to write a code
generator for that monster. I really miss Motorola's manuals because they
were so much better than anyone else's. What I want for the average
programmer is something that they can easily understand. I think it is a
tragedy when CPUs are so complicated that it puts the control of the whole
market into the hands of a few billion-dollar companies.
Title: Re: The Future of Computing (According to Me)
Post by: WAS on June 04, 2019, 12:37:21 PM
Quote from: PabloMack on June 03, 2019, 03:06:34 PM
I think it is a tragedy when CPUs are so complicated that it puts the control of the whole
market into the hands of a few billion-dollar companies.

I agree! That's been a serious issue of mine as well. "Research Teams" and "Labs" for researching CPU architecture and innovation, like it's national-security-level development. LOL

I remember a guy who brought a CPU he made in to school; he was even attempting his own CGA graphics. I was just simply amazed that his apparatus was running, and even ran basic ASCII games (fairly well). That was the real start of my interest, beyond just "hey this tech is cool, I want it".

Checking out your papers now.
Title: Re: The Future of Computing (According to Me)
Post by: N-drju on June 05, 2019, 07:36:48 AM
Quote from: PabloMack on June 03, 2019, 03:06:34 PM
I think it is a tragedy when CPUs are so complicated that it puts the control of the whole
market into the hands of a few billion-dollar companies.

On the other hand, this is just business and this is how things work. You can't blame them for winning a market or two.

Do you seriously think these companies have turned "billion-dollar" overnight? Back in the days, they had to make some "first steps" too!
Title: Re: The Future of Computing (According to Me)
Post by: WAS on June 05, 2019, 12:01:30 PM
Quote from: N-drju on June 05, 2019, 07:36:48 AM
Quote from: PabloMack on June 03, 2019, 03:06:34 PM
I think it is a tragedy when CPUs are so complicated that it puts the control of the whole
market into the hands of a few billion-dollar companies.

On the other hand, this is just business and this is how things work. You can't blame them for winning a market or two.

Do you seriously think these companies have turned "billion-dollar" overnight? Back in the days, they had to make some "first steps" too!


Intel's 4004 research was actually quite costly; additionally it came in 4 variants, and each chip was about $5 in production cost... That would be like developing a chip with a fab cost of about $31 today. By contrast, today Intel is punching out roughly $10-15 dies. The rest is all markup based on the fab and the consumer market.

Most of these companies are actually invested in. They're hardly ever out of pocket.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 05, 2019, 05:26:16 PM
Quote from: N-drju on June 05, 2019, 07:36:48 AM
On the other hand, this is just business and this is how things work. You can't blame them for winning a market or two.
Do you seriously think these companies have turned "billion-dollar" overnight? Back in the days, they had to make some "first steps" too!


I think you know I never said otherwise so I don't know where it is you think we disagree.

Quote from: WASasquatch on June 05, 2019, 12:01:30 PMIntel's 4004 research was actually quite costly; additionally it came in 4 variants, and each chip was about $5 in production cost... That would be like developing a chip with a fab cost of about $31 today. By contrast, today Intel is punching out roughly $10-15 dies. The rest is all markup based on the fab and the consumer market.

This is interesting. The engineering technology that is available today to you and me makes
designing processors far easier than it was back in the 1970 time frame when this took place.
That's pretty exciting, don't you think?
Title: Re: The Future of Computing (According to Me)
Post by: N-drju on June 06, 2019, 06:50:29 AM
Quote from: PabloMack on June 05, 2019, 05:26:16 PM

I think you know I never said otherwise so I don't know where it is you think we disagree.


The first quote.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 06, 2019, 07:12:41 AM
Quote from: N-drju on June 06, 2019, 06:50:29 AM
Quote from: PabloMack on June 05, 2019, 05:26:16 PM

I think you know I never said otherwise so I don't know where it is you think we disagree.


The first quote.


I'm still puzzled. Of course what you wrote is true and I haven't written anything to the
contrary. You do imply that I said that billion-dollar companies don't deserve market share
and that they happen overnight, but of course I never said any such things. I will accept
that as a misunderstanding on your part. So I guess we don't have an issue, it seems to me.
What is it that is bothering you that you are not saying? ;)
Title: Re: The Future of Computing (According to Me)
Post by: N-drju on June 06, 2019, 07:35:05 AM
I just thought that it was hard for you to accept the fact that the aforesaid companies are leading CPU development and are not eager to share their findings with anyone else. The "tragedy" bit gave me this impression.

CPU architecture is not a trivial thing in general, so, in a way, I don't consider it "lamentable" that this equipment is complicated and some companies consider it (rightfully, I'd say) their "specialty". But then again, at the end of the day, you have millions of users who don't care at all and will just buy their PCs and laptops with whatever CPU on board is offered.

I'm not trying to be a jerk, please understand. I just think that considerations about regular CPUs are not relevant when a CG-dedicated CPU is what you try to investigate. These are two completely different concepts that can co-exist, regardless of what happens to the other concept.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 06, 2019, 07:43:46 AM
N-drju,

I think we are on the same page as they say. What you say is all true. But I
just noticed your Personal Text and I think it expresses what I have been
trying to say:

"When dictatorship is a fact, revolution becomes a right."

Thank you for saying it for me.  :D

Because you are not a programmer I think you may not fully appreciate that programmers are
under duress because CPUs are made unnecessarily complex. It is not just you who thinks
they are not easy to deal with. Mega-corporations are known for killing small companies so that
they can take their market share.

One time my sister married a bank president because she wanted a husband that could provide
her with lots of money. This guy had scary ties to the Mafia. She said one time he was drunk
and told a story about his "boss" who hired a hit-man to murder a competitor. The crime was
never solved. Of course it wouldn't bode well for a company like Intel to become publicly
embroiled in a scandal like that. But lots of large companies still do unethical things that they
are sure they can get away with. It happens.

One tactic that really works is that they can add a lot of unnecessary complexity to technology
they control so that only they can bear the burdens. Of course, it is not always intentional and
it often happens because of poor designs that come out of "design by committee". I saw this
happen on a large scale when ARM came out. A lot of small software tool vendors folded. Of
course, this strategy can backfire when a small company finds a better way to do something.
Many innovations often happen with small companies because they can make decisions much
more quickly and don't carry the baggage that comes with large companies. That's why we
need our small businesses to remain healthy. But that's also a reason why the big companies
don't want them around. Many big companies have strong lobbies in Congress so that they
can build barriers that keep small businesses out. When a few large corporations control a
market it is a form of dictatorship. As in political dictatorships, the average bum doesn't want
to rock the boat because of the trouble he can see that it could bring to him personally. He's
not concerned about anyone else like you say. But this bum doesn't understand that the other
things that put him in chains (I'm speaking figuratively) are indirectly caused by the very things
that he doesn't see any reason to change.

I have a client who is having difficulty getting his products certified because there are people
on the standards committee that have adopted standards that are aimed at killing their
competition. They expressly will not allow my client to use a technique that is very effective
and safe because the companies they work for don't do it that way. My client became a member
of the standards committee so he can get the rules changed. He talked with the people who
were responsible for the rules and they admitted that they made the rules so they could
eliminate their competitors and not for any logical reasons that relate to safety or anything
else that the rules exist for. Only large companies can afford to send their employees around
the world to participate in this form of dictatorship.
Title: Re: The Future of Computing (According to Me)
Post by: N-drju on June 07, 2019, 05:55:16 AM
Quote from: PabloMack on June 06, 2019, 07:43:46 AM

"When dictatorship is a fact, revolution becomes a right."

Thank you for saying it for me.  :D


Touché.  ;)

I get your point. You are right - I do not know much about the computer industry and its business intricacies but I can believe that companies like Intel would lobby to secure a customer base for themselves. However, note one thing - don't you think it's telling that a billion-dollar company is afraid of small tech companies' influence and ideas? This reminds me of the other discussion down here that I had with archonforest and other guys where it was agreed that once people get used to something, they lose interest in seeking alternatives...

Just a rhetorical question - would it be possible at all to cooperate with a big company, in order to capitalize on a concept? Even if for a 15% profit share?

On a different note, I would like to remind you of an interesting story that perfectly illustrates how small companies can beat the big ones.

I am sure you know Maxis - the company which made millions from the "Sims" and "Sim City" series.

Maxis was a monopolist and a pioneer as far as the city-building genre is concerned. Somewhat like Intel now. They scored one success after another since SC 2000. SC 3000 (and its "Unlimited" edition) were cash cows plain and simple. Note that all of this has been intertwined with new editions of Sims and games like "Spore" which were also increasing the firm's popularity and prestige.

And then, SimCity 5 came. The game was a spectacular flop from day one and was plagued with problems. Overloaded servers (the game was online-exclusive at one point, unlike other titles) and a ludicrously small building area for a city (one square mile? seriously?) were among some of the most serious. Maxis tried hard to salvage the title with new updates and patches but it was a mess beyond belief.

A few months later, a small, virtually unknown company named "Colossal Order" (don't be fooled by the name - it's just about 10 people!!) reluctantly released their own city-building game - "Cities: Skylines." And guess what? Every gamer out there jumped right on this train as the game proved to be more interesting and technically superior to its counterpart. The CO's release was a complete sucker punch to Maxis and believe me when I say they lost a lot of their customer base back in 2013.

So as you can see - there is hope and a lot of possibilities for small companies. Sure, sometimes it means that you need to wait until they screw something up.  ;) But I generally think that consumers are not the sheeple that businesses would like to think, and one should also count on their support. A good example of that appeared a couple of days ago when Apple announced their new touchscreen product and declared that one can optionally buy a $999 stand for it. Contrary to what Apple spin-doctors assumed, the audience was not reaching for their wallets...
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 08, 2019, 09:38:29 PM
N-drju,

I am with you on what you wrote.  ;)
Title: Re: The Future of Computing (According to Me)
Post by: Oshyan on June 08, 2019, 09:40:38 PM
Looks like the EU has a venture to "reinvent" CPU architecture as well:
https://www.techspot.com/news/80425-european-union-cpu-development-branch-delivers-first-designs.html

Any thoughts?

- Oshyan
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 09, 2019, 11:03:45 AM
Quote from: Oshyan on June 08, 2019, 09:40:38 PM
Looks like the EU has a venture to "reinvent" CPU architecture as well:
https://www.techspot.com/news/80425-european-union-cpu-development-branch-delivers-first-designs.html

Any thoughts?

- Oshyan

Thank you for the link. I was unaware of such a thing.

According to the article, it is nothing but a re-branding of ARM and RISC-V.
Apparently, Advanced RISC Machines has been lobbying in government
to garner more (mandatory) support for its products. No surprise there. And
RISC-V is license free. No wonder about that. The article also says that it will
not be used by the consumer markets but only in embedded commercial
applications. That's the part of the article that I like.

To my knowledge, there has been no new development of a 64-bit CISC
architecture since the whole world jumped onto the RISC bandwagon.
RISC proponents would claim that RISC performance is better than CISC
on the basis of comparing 30-year-old CISC processors to new RISC ones.
How objective is that? There are many flaws in the arguments by RISC
promoters and one is that RISC processor instruction sets are more
orthogonal than CISC. That is just a lie. Because RISC instructions are
forced to be all the same size, they can't be as orthogonal as a well-
designed CISC processor CAN BE. Nobody has seen what CISC can do
with modern methods and component densities because nobody has
tried in 30 years. The x86 is an old architecture that has had many face-
lifts, but current implementations perform admirably when compared to
modern RISC machines. Imagine how well a MODERN CISC machine
could perform when implemented using modern methods. Nobody
knows, because nobody has tried, because of herd mentality.

While almost all processors are targeted at using current software technology
(an understandably short-sighted strategy), ϕEngine is a co-design effort to
correct past mistakes and bring about changes that can't happen unless the
ways we do things are changed simultaneously. The attitude has been "I'm
not going to change unless you do it first." So we fail to move forward because
nobody wants to take the risk (no pun intended).

But change HAS happened and not for the advancement of computer technology
itself. It has happened to accommodate ancient writing techniques. It is called
Unicode. A while back I wrote this article that you might find interesting.

https://www.linkedin.com/pulse/what-most-people-dont-know-ascii-good-bad-unicode-paul-mckneely/

My dream has been to develop a system that implements many advanced features
that have been lacking in our computers since the beginning. This system will be
used mostly by enthusiasts who want a better way of doing things. The machine
will be easy to write code for because it will make sense. It will not depend on
hardware and compilers that can only be developed by multi-billion dollar corporations
for obscure, incomprehensible or undocumented processor architectures.  Apple has
always been a pretty closed system and they soon plan to dump Intel and will be
using processors of their own design. This seems to be a good strategy for
getting control over all of the aspects of its design so that 3rd-party developers can
be left out in the cold. You will no longer be able to get documentation on how
to program the processor because it will all be locked away as confidential information.
So all you can do is to choose to buy what they offer or go somewhere else.

My vision is to do something like Linux has done in the way of open-source operating
system development. But Linux has gone in so many directions and it carries a
lot of old baggage. It would be too difficult for it to evolve into what I see as the
future computer. The ϕSystem is more revolutionary. It will incorporate many of
the good standards we are already using but it will discard things that should be
relegated to the past. Of course, we will have to use our current systems to develop
the new platform.
Title: Re: The Future of Computing (According to Me)
Post by: Oshyan on June 09, 2019, 01:43:03 PM
Thanks for the detailed reply. I sort of wondered if you might be able to seek funding from them given the stated goals of the EPI itself (separate from the current choice of approach).

- Oshyan
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 09, 2019, 02:16:02 PM
Quote from: Oshyan on June 09, 2019, 01:43:03 PM
I sort of wondered if you might be able to seek funding from them given the stated goals of the EPI itself (separate from the current choice of approach).

- Oshyan

What does this part mean? (I'm a native Texan. You can connect the dots.)

"To fund all that research, it hopes to become profitable in a few years though it will remain owned and operated as a relative of the European Union."
Title: Re: The Future of Computing (According to Me)
Post by: Oshyan on June 09, 2019, 02:17:32 PM
Ohh, somehow I thought you were in Europe. Ah well :D

- Oshyan
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 09, 2019, 02:23:32 PM
Quote from: Oshyan on June 09, 2019, 02:17:32 PM
Ohh, somehow I thought you were in Europe. Ah well :D
- Oshyan

It's probably because I talk so much about Unicode. When I say the word to most Americans,
I just get a blank stare. "Uni-what?". Americans in particular are quite happy with ASCII. They
don't understand why everybody else doesn't just dump their character sets and do everything
the way they do it.  After all, the 'A' in ASCII stands for American! :o
Title: Re: The Future of Computing (According to Me)
Post by: WAS on June 09, 2019, 02:58:13 PM
Quote from: PabloMack on June 09, 2019, 02:23:32 PM
Quote from: Oshyan on June 09, 2019, 02:17:32 PM
Ohh, somehow I thought you were in Europe. Ah well :D
- Oshyan

It's probably because I talk so much about Unicode. When I say the word to most Americans,
I just get a blank stare. "Uni-what?". Americans in particular are quite happy with ASCII. They
don't understand why everybody else doesn't just dump their character sets and do everything
the way they do it.  After all, the 'A' in ASCII stands for American! :o

Ok, I gotta share this with a programming group. They're going to bust up xD
Title: Re: The Future of Computing (According to Me)
Post by: Matt on June 09, 2019, 09:11:23 PM
Hi Paul,

Quote from: PabloMack on June 03, 2019, 03:06:34 PM
I was just looking at the ARM64 manual and I'd hate to have to write a code
generator for that monster. I really miss Motorola's manuals because they
were so much better than anyone else's. What I want for the average
programmer is something that they can easily understand. I think it is a
tragedy when CPUs are so complicated that it puts the control of the whole
market into the hands of a few billion-dollar companies.

With current architectures I think most "average programmers" expect the compiler to do the work of translating from the high level language to the CPU so that the CPU doesn't need to be easy to understand. This should free CPU designers to make choices based on what executes faster rather than what's easier to understand. There will always be enough smart people to write compilers for difficult-to-understand CPUs (not just in multi-billion dollar companies, but also independents) so that the rest of us don't have to understand them, but they are not "average programmers" IMO. I gather that you would like to bring programmers closer to the hardware by designing new language(s) and CPUs simultaneously. That sounds like a noble goal, and a fun one. But I believe there will always be a plethora of high level languages out there to solve the ease-of-use problem. High level languages often make things easier with no performance penalty. But they can't make things faster than the hardware allows. So, what happens at the low level should emphasize raw execution speed (as well as all the other physical goals such as low power consumption, size, etc.) because software can insulate the user from the implementation, as it has been doing so increasingly since the beginning of computing. If you can make the CPU and the lowest level language one and the same thing while simultaneously finding a faster and more efficient execution paradigm, that would be a win. But I suspect that in the long run that bump would end up being dwarfed by the other layers of abstraction which will be built on top of it.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 10, 2019, 02:55:40 PM
Quote from: Matt on June 09, 2019, 09:11:23 PM
There will always be enough smart people to write compilers for difficult-to-understand CPUs (not just in multi-billion dollar companies, but also independents) so that the rest of us don't have to understand them, but they are not "average programmers" IMO.

When I was in 1st grade, my elementary school held a softball game between the faculty and sixth-graders. I thought (as did all of the other 1st-graders) "That should be an even match seeing that sixth-graders are basically adults". Of course I thought very differently after I actually reached adulthood myself. As a child, I thought everything above my level was over my head but that they were all about the same. Perhaps this is the basis for the Dunning–Kruger Effect. I had to reach those levels before I realized how vastly different the complexities were. It reminds me of something my dad once said. When he started medical school, he thought that once he climbed to the top of that ladder he would have it made. Instead he found himself at the bottom of another ladder that he couldn't see when climbing the first one. It didn't take very many ladders before he realized that he could probably never reach "the top" because his career just became an endless series of ladders. So experience blew away my notions such as "child vs. adult" and "those who understand how computers work vs. those who don't". So it's easy to dismiss casualties and say that's just part of business because it was "just one of them and not one of us."

Computer complexity comes in at least two varieties. One is easy to manufacture but difficult to understand (RISC). The other is easy to understand but difficult to manufacture (CISC). During the last 20 years, all new designs have been RISC. But there are basic principles that say that "smarter is better". CISC processors have smarter instructions, so shouldn't they run faster?
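
A concrete (x86) example of what I mean by a smarter instruction: REP MOVSB hands an entire block copy to the hardware in one instruction, and how fast it actually runs is entirely up to the implementation (modern chips have a "fast rep movsb" path). Here is a small sketch using GCC inline assembly, purely as an illustration of the principle and nothing to do with ϕEngine:

#include <stddef.h>

/* Copy n bytes from src to dst with a single CISC instruction.
   REP MOVSB consumes RDI (destination), RSI (source) and RCX (count),
   so all three are declared as read-write operands. */
static void copy_rep_movsb(void *dst, const void *src, size_t n)
{
    __asm__ volatile ("rep movsb"
                      : "+D"(dst), "+S"(src), "+c"(n)
                      :
                      : "memory");
}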

I have another story about when I was working for a large corporation. I was part of a department called "Advanced Process Control". We used DEC computers to automate data collection while they were using an IBM mainframe (AS/400). The IS department hated us because we could do it better than they could. As the saying went "Nobody got fired for choosing IBM." One time my boss (a DEC proponent) attended a meeting between IS people and IBM. He laughed at our IS management because they would say "This is really complicated and we can't understand it. It must be good!"

Movie-makers: "The complexity of CG software is not important because the artists will deal with that to make life simpler for us. That is their job. There will always be enough artists to fill that need."

Paul
Title: Re: The Future of Computing (According to Me)
Post by: Matt on June 10, 2019, 05:34:45 PM
Dunning-Kruger effect is probably in full effect here in my reply  ;D

Quote from: PabloMack on June 10, 2019, 02:55:40 PM
Computer complexity comes in at least two varieties. One is easy to manufacture but difficult to understand (RISC). The other is easy to understand but difficult to manufacture (CISC). During the last 20 years, all new designs have been RISC. But there are basic principles that say that "smarter is better". CISC processors have smarter instructions, so shouldn't they run faster?

For the "ease of use" goal, from my layman's perspective it appears that this might result in a shift of responsibilities between the CPU and the compiler. Within the definition of "compiler" I would also include a code generator whose input is some very low-level language that's more in tune with the needs of the CPU but whose instructions might not map 1:1 to the CPU itself. Outsourcing some of the implementation from the hardware to the first software layer (as long as it's compiled) doesn't seem unreasonable to me. To put it another way, you could define a virtual CPU that is easy to program. It may be fully implemented in hardware, or there might be a software layer. The boundary between hardware and software would be decided by the implementation, not the specification.

I tend to think that the RISC vs. CISC argument on the basis of ease-of-use alone *at the hardware level* is not something that can be fully answered without considering how it's actually programmed by the majority of programmers. I would also say that the majority of programmers will be using high level languages regardless of interface to the CPU, unless the interface to the CPU looks like a very expressive high level language that makes others unnecessary. Perhaps your ϕPPL is that language? If so, that could be very significant.

For the speed of execution goal, I'm definitely not qualified to have an opinion on RISC vs. CISC.

Quote
I have another story about when I was working for a large corporation. I was part of a department called "Advanced Process Control". We used DEC computers to automate data collection while they were using an IBM mainframe (AS/400). The IS department hated us because we could do it better than they could. As the saying went "Nobody got fired for choosing IBM." One time my boss (a DEC proponent) attended a meeting between IS people and IBM. He laughed at our IS management because they would say "This is really complicated and we can't understand it. It must be good!"

Hah, well I would never claim that complicated and difficult to understand is a good thing in isolation.

Quote
"Movie-makers don't care about how complex CG software is because the artists will deal with that to make life simpler for us. That is their job. There will always be artists to fill that need."

I agree that this would be the wrong way for movie makers to think about CG software if they were the ones choosing, because artists cost more than software and simpler software reduces the cost spent on artists.

If you have simpler CG software that's built upon lower level CG libraries, is it bad that the low level libraries (e.g. Embree, CUDA, RTX etc.) are not directly usable by the CG artists when there are developers who can use the libraries? We need fewer developers to understand the libraries than the number of artists who will use the higher level software. Is it necessarily a bad thing to have a system built upon lower levels that require more specialized knowledge, or is this how we gain more efficiency overall?

Perhaps that's not where the debate lies. I suppose it depends on whether the very lowest level can be made simple enough to make some of the intermediate levels unnecessary without other compromises. Of course that would be a good thing.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 10, 2019, 05:55:47 PM
Matt, this is good stuff. Thanks for the thoughts.

My assembler is twisting my brain because it is so different from anything I've used before.
I am struggling to free my mind from the chains of experience using standard systems.
Hahaha....I'm in Heaven!

Paul
Title: Re: The Future of Computing (According to Me)
Post by: Asterlil on June 10, 2019, 06:19:00 PM
I don't want to make a discussion-stopping comment, but this is all such interesting stuff -- keep talkin'!

Since I retired from programming (sort of)(you can check out any time you want, but you can never leave), I've noticed that a gazillion script- or high-level languages seem to have sprung up. When the fact that programming above assembler level is more or less CPU-independent was mentioned upthread by Paul, it got me to thinking about all those noob languages, and I wondered why languages are sprouting like weeds...

...and I wonder if the reason might be hardware-based. I mean, I don't have to know microcode to write javascript, but does my Intel citizenship steer me toward javascript?
Title: Re: The Future of Computing (According to Me)
Post by: Matt on June 10, 2019, 07:03:11 PM
Quote from: Asterlil on June 10, 2019, 06:19:00 PM
...and I wonder if the reason might be hardware-based. I mean, I don't have to know microcode to write javascript, but does my Intel citizenship steer me toward javascript?

I think the size of the Javascript ecosystem is what leads people towards Javascript. I can't imagine why else anybody would choose it  :P (j/k, kind of)
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 10, 2019, 09:35:23 PM
Quote from: Asterlil on June 10, 2019, 06:19:00 PM
Since I retired from programming (sort of)(you can check out any time you want, but you can never leave), ...

When you were programming, what languages did you use before the noob ones came out?

JavaScript is really part of the HTML web-based group of languages, so its strength is portability.
It and others like it (Python comes to mind) are like an RV where you take everything with you as
you go. But your programs will be plodding along like tortoises. Languages like C, C++ and ϕPPL
are more like Ferraris or Corvettes that make good use of the machine's performance; to keep the
animal comparison, they are more like cheetahs or falcons.
Title: Re: The Future of Computing (According to Me)
Post by: WAS on June 10, 2019, 10:40:10 PM
Quote from: PabloMack on June 10, 2019, 09:35:23 PM
Quote from: Asterlil on June 10, 2019, 06:19:00 PM
Since I retired from programming (sort of)(you can check out any time you want, but you can never leave), ...

When you were programming, what languages did you use before the noob ones came out?

JavaScript is really part of the HTML web-based group of languages, so its strength is portability.
It and others like it (Python comes to mind) are like an RV where you take everything with you as
you go. But your programs will be plodding along like tortoises. Languages like C, C++ and ϕPPL
are more like Ferraris or Corvettes that make good use of the machine's performance; to keep the
animal comparison, they are more like cheetahs or falcons.

In web development we were specifically taught through various books that JavaScript was a "TOOL" and not to be abused. Boy have things changed. jQuery (which was praised at first for its tools, and then everyone started doing whole websites requiring it), Node.js, etc., all go against the principles we were originally taught. This may be from a security standpoint. Back in the late 90s and early 2000s you could play some havoc on machines with JS, which is why the popularity of no-JS browsers grew.

An entire new generation of web developers spawned through JavaScript, and honestly ruined the industry. Website/tools/data generators everywhere. CPU wastage everywhere. Flashy stuff fading in and out and sliding around...  It sucked having to assimilate into the new age.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 11, 2019, 11:12:26 AM
Quote from: Matt on June 10, 2019, 05:34:45 PM
For the "ease of use" goal, from my layman's perspective it appears that this might result in a shift of responsibilities between the CPU and the compiler. Within the definition of "compiler" I would also include a code generator whose input is some very low-level language that's more in tune with the needs of the CPU but whose instructions might not map 1:1 to the CPU itself. Outsourcing some of the implementation from the hardware to the first software layer (as long as it's compiled) doesn't seem unreasonable to me. To put it another way, you could define a virtual CPU that is easy to program. It may be fully implemented in hardware, or there might be a software layer. The boundary between hardware and software would be decided by the implementation, not the specification.

The overlying CPU programming model would likely be compatible with the underlying RISC hardware, so it seems to be a viable strategy if that was all you wanted to do. It might be comparable to applying a ceramic roof on top of a composite to make it look different. However, the ϕEngine's virtual memory model is much more sophisticated than any RISC system (or any CPU I've seen, for that matter). Trying to make a RISC virtual memory model look like ϕEngine's would require replacing the foundation of the building. Programming models are directly visible to the programmer but virtual memory is not. It has to work underneath and out of sight. So I am back to having to design a new hardware system from the ground up anyway. The alternative would perform on a race track like Frankenstein's Monster and would look just as beautiful.

The goal of the ϕSystem is to surpass what Kernighan, Ritchie, Pike and company did at Bell Labs when C and UNIX were co-developed. C was developed to write operating systems as well as applications. But quite a bit of assembly was also needed because C/C++ knows almost nothing about the hardware. The goal of the ϕSystem is to do this again but eliminate the need for assembly language in the source code for the operating system and associated drivers. But assembly still has to remain available because (1) it may take some time before all of the features of ϕPPL are working, (2) it is needed for optimization, (3) it can meet various unforeseen needs, and (4) it is an educational tool for teaching students the principles of assembly language. I think there is no better tool for teaching engineers and programmers how computers work than writing code in assembly language. HLLs can never get you there. In order for this to work, though, ϕPPL will have to be aware of the hardware, and that makes it a lot different from most HLLs. It will have to be able to set up and tear down stacks, call its supervisors, satisfy service calls from its subordinates and program the special-purpose resources that systems have and that are only visible to operating system software. And those resources will have to be there, and they won't be at the application programming level. No other language to my knowledge does those kinds of things. Most HLLs were designed to hide these things and are only for writing applications. So their awareness is stripped away to simplify their programming models. So they are dumb and blind when it comes to system software in general.
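
As a concrete example of what I mean by C knowing almost nothing about the hardware: even something as basic as reading a control register or masking interrupts is outside the language entirely and has to be smuggled in through compiler-specific inline assembly. A small x86-64 sketch in GCC syntax (kernel-mode operations, shown only to make the point; the idea is that ϕPPL would be able to express these directly):

#include <stdint.h>

/* Reading CR3 (the page-table base register) is one privileged
   instruction, but standard C has no concept of it, so it has to be
   wrapped in compiler-specific inline assembly. */
static inline uint64_t read_cr3(void)
{
    uint64_t value;
    __asm__ volatile ("mov %%cr3, %0" : "=r"(value));
    return value;
}

/* Same story for masking interrupts around a critical section. */
static inline void interrupts_off(void) { __asm__ volatile ("cli" ::: "memory"); }
static inline void interrupts_on(void)  { __asm__ volatile ("sti" ::: "memory"); }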
Title: Re: The Future of Computing (According to Me)
Post by: Asterlil on June 11, 2019, 01:51:37 PM
Languages I have known?
BAL, the assembly language for IBM-370s, was my first language, if you don't count BASIC on a land line to a mainframe in Milton Keynes! With Assembler, I hand wrote my code sitting in Coventry (I was living in England then), and the coding sheets were delivered to Liverpool, where they were transcribed by keypunch operators. I then got a job deck of Hollerith cards, and if there were errors, I punched a new card by hand.

Does anybody here remember Convergent Technologies, and their operating system CTOS? What a beautiful machine that was. 8086-based, then 286. I wrote in PL/M-86 and COBOL, and I knew my chip and motherboard architecture. Then it was consumed by Burroughs, and some descendant of that marriage was used by the post office for some time, maybe still is.

Then PCs, and I wrote in ASM-286 and Lattice C. When I became a caregiver to my parents (in Florida) I lost my career mobility. After a stint of heritage code maintenance in COBOL/SQL on a Wang minicomputer, I got back to PCs and developed a DBMS in FoxPro DOS, which I then converted to FoxPro for Windows. And then I got out of the bidness, while OOP was getting a proper foothold.

So I did get back into it and did some web development in the unholy triad of HTML/CSS/JavaScript, and it seemed so loosey-goosey compared to the languages of my career. And then, just as I was mastering jQuery, all these other process libraries started popping up, like Angular, and languages I know nothing of for performing specific tasks like handling website backend stuff, or expediting yet other languages, and I just backed off, basically feeling confused. ActionScript, anybody? I stick to my triad and jQuery in Dreamweaver, and ignore the rest. Well. I might learn Python, seeing as it's the language of Blender, but that's it. All the other stuff seems faddish; I completely agree with WASasquatch.

Oh, I still have the Wang boot floppy! It's pinned to my corkboard.

And don't get me started on the loss of creativity and teamwork after business/marketing gained ascendancy over computers.

Anyway, I would gladly go back to Ferraris and Corvettes if I were still in the career.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 11, 2019, 04:03:08 PM
Asterlil,

You have a very interesting and varied background. COBOL gives me the creeps because it is so wordy. I think doing most of everything with operators makes a lot more sense and source code is a lot more concise. So you do have some C and assembly experience. That's down close to the hardware, where I like to be. But I find the standard Microsoft syntax a bit ambiguous. I spent many years with embedded Motorola 68K and the syntax was very clear and easier for me to understand. I've only done a little with GNU.

The Web-based languages (HTML/XML/CSS, JavaScript, PHP, and even Python and Java) are indeed very loosey-goosey, as you say. They were never planned very well; they just grew over and into each other until they became what we have now. The Web is built on them, so there is no escape if you want your work to run on the web. I am more of a hardware guy, so I am happy just doing little bits of Web here and there. The language I am currently learning is Verilog, which is an HDL (Hardware Description Language) and not really a programming language. I used ABEL many years ago and had great fun, but there is very little support for it now. So Verilog/SystemVerilog will be my main focus for the foreseeable future. It has become an IEEE standard, and there seems to be a movement away from VHDL (another Ada-like HDL).
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 24, 2019, 06:20:57 PM
I just ordered a book that tells the story of the IBM AS/400 and its System/34-38 ancestors.
It is a very strange story and a very strange architecture, developed in isolation from other
parts of IBM, including their mainframes. The location was Rochester, Minnesota. This was a
business system that, apparently, virtually mapped all of the data that it would ever access.
Once data was written, it could never be changed (or presumably deleted), ostensibly so that
data could not be altered, to ensure integrity.

Back in 1988, I was working for a large corporation that had an IS department that had
been using a System /34 upgraded to a /38 and was being upgraded to an AS/400
(because nobody got fired for choosing IBM). The department I worked for was called
Advanced Process Control. We used a lot of DEC PDP/11 and VAXes as well as PCs but IS
was all IBM. My boss had to deal with the IS department for many reasons. After IS had
been using the new AS/400 for a while, my boss said that they could never figure out how
to do a backup so they just had to keep adding memory and hard drives. I never knew
how much RAM and disk storage they ended up having but it must have been huge.
They had a row of big racks full of just RAM and hard drives.

It will be interesting reading about it in more detail. There is still a lot of secrecy around
this mysterious line of computers called the iSeries which was eventually replaced with
the POWER series. I am not clear about how this relates to the PowerPC microprocessor
architecture but I have one or more books about that as well. It is also strange.

https://www.amazon.com/gp/product/1882419669/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1

I now have the basic programmer's model worked out for ϕEngine and a working assembler.
Today I started writing a document that will define how the HW works internally.
Title: Re: The Future of Computing (According to Me)
Post by: masonspappy on June 25, 2019, 04:18:18 AM
Interesting comments about the AS/400. I worked for IBM for 25 years (1979-2002) and spent 3 years as a member of their mid-range support group that included support for AS/400s (which was later renamed "iSeries"). After IBM I worked with an IBM Business Partner as a Project Manager until 2007.

Not sure why backups couldn't be done - that was pretty routine in my customer accounts.

AS/400s were the darlings of data processing in my part of the country (which was predominantly industrial manufacturing in remote/rural areas) because they could be installed in less-than-pristine environments and they simply ran. And Ran. And Ran. The Operating System was written specifically to the unique physical and logical architecture of the AS/400. Much of my AS/400 work was related to various attempts to keep the box relevant by installing a special board containing an Intel chipset and using some of the AS/400's memory and other resources as its own. Basically, a portion of the AS/400 was turned into a PC server. It was a cool concept but never really caught on.

I've been to the plant in Rochester several times prior to 2007. It's a lovely area and the people were nice, although often a bit odd because they seemed so isolated from the "outside world" (i.e., any computing platform that was not AS/400). And I believe that this isolation contributed to the AS/400/iSeries decline and end.

There were a lot of things right and wonderful about the AS/400, but there were shortcomings as well. And customers who loved their AS/400/iSeries were very vocal about those shortcomings. But the changes didn't happen fast enough or often enough, and customers believed IBM simply wasn't paying attention. Once IBM realized that customers were increasingly deserting the platform, there were some frantic attempts to staunch the bleeding (that's where I came in - I was a PC server/network/project manager), but by then it was too late. IBM's strategy was to move the AS/400/iSeries to the PowerPC architecture, which allowed it to run on the same hardware as the RS/6000 (the AIX platform). This in turn allowed the two OSs to be consolidated into "IBM Power Systems", which gave customers more latitude in what OS they chose to use.

By this time there were a lot of former AS400/iSeries support folks looking for new jobs.

I had pursued a Project Manager ticket (through IBM) around 1994, so I had another horse to ride long before leaving IBM in 2002 (and I rode that horse hard until my retirement 5 months ago). But the downward spiral pattern I observed is repeated constantly in this industry, and in other industries as well:
- Company releases a great, innovative product that does its job well but has a few shortcomings
- Customers repeatedly point out shortcomings but increasingly feel that needed changes don't happen quickly enough or are only given lip service.
- Frustrated customers increasingly desert the product in favor of another, competing product.
- A bit of panic as the Company realizes they are losing/have already lost a sizeable chunk of their customer base.
- Company attempts to shore up the product but frequently it's "too little too late".
- {sound of bugle playing 'Taps'}


Old joke told at an iSeries marketing meeting:

Young woman sits down with her husband-to-be the night before their wedding and explains that she's been married 3 times before but none of the marriages were ever consummated.
"How is that even possible?" exclaims the bewildered man.
The woman explains: "My first husband was pulling wheelies on his motorcycle during the wedding reception and flipped the bike and killed himself.  My second husband was this really old guy who died of a heart attack as he carried me over the threshold at our honeymoon hotel. And the third guy was an IBM iSeries sales rep who just sat on the edge of the bed night after night telling me how great it was going to be, and I shot him out of sheer frustration."

Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 25, 2019, 11:49:52 AM
Quote from: masonspappy on June 25, 2019, 04:18:18 AM
I worked for IBM for 25 years (1979-2002) and spent 3 years as a member of their mid-range support group that included support for AS/400s...

Very interesting background. Thank you for sharing.

Quote from: Matt on June 09, 2019, 09:11:23 PM
There will always be enough smart people to write compilers for difficult-to-understand CPUs (not just in multi-billion dollar companies, but also independents) so that the rest of us don't have to understand them...

Apparently, the complexity of writing compilers for Itanium even broke the backs of Intel and HP.

https://softwareengineering.stackexchange.com/questions/279334/why-was-the-itanium-processor-difficult-to-write-a-compiler-for
Title: Re: The Future of Computing (According to Me)
Post by: WAS on June 27, 2019, 01:54:37 AM
Quote from: masonspappy on June 25, 2019, 04:18:18 AM
Interesting comments about the AS/400.  I worked for IBM for 25 years (1979-2002) and spent 3 years as a member of their mid-range support group that included support for AS/400s...

Wow, interesting background you have. Thanks for sharing your history with us.

Are those old machines worth anything? My best friend has this AS/400 E series desktop tower in his garage.
Title: Re: The Future of Computing (According to Me)
Post by: masonspappy on June 27, 2019, 05:30:40 AM
Quote from: WASasquatch on June 27, 2019, 01:54:37 AM
Are those old machines worth anything? My best friend has this as400 E series desktop tower in his garage.

Probably only as scrap or museum piece.   ;)
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 27, 2019, 10:22:58 AM
Quote from: WASasquatch on June 27, 2019, 01:54:37 AM
Are those old machines worth anything? My best friend has this AS/400 E series desktop tower in his garage.

I think they would be worth more if their internal architecture had been made public, but IBM has a habit of keeping their internals very secret. I remember when we looked at the /3X IBM mini that was being replaced. We took out the boards, and all of the chips had their labels blacked out with a magic marker to hide what they were. I'm sure that IBM uses a lot of third-party components, but they don't like the labels their manufacturers put on them. These computers rely on a pile of software from IBM and would be pretty useless without it. They were business machines, so it's not like there is any other kind of application available for them.
Title: Re: The Future of Computing (According to Me)
Post by: WAS on June 27, 2019, 01:55:21 PM
We did some digging online and it seems this specific tower goes for $600-1000. He's pulling it apart now and testing components.

Here is one listed right now for 800: https://www.ebay.com/itm/IBM-AS-400-Computer/312566218959?hash=item48c66600cf:g:PeMAAOSwaplcnYap

Considering what it is, and the fact that it has basically no use today (no PC-type utilities like browsers, etc.), that is very expensive.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on June 28, 2019, 05:19:55 PM
Quote from: WASasquatch on June 27, 2019, 01:55:21 PM
We did some digging online and seems this specific tower goes for 600-1000.

I seem to recall that the IS people were offering to give this larger-than-a-washing-machine
system to anyone who wanted it. Considering the size, lack of documentation, etc., I turned
them down. If I had been a salesman I probably could have found a buyer. But I'm not much
of a salesman.
Title: Re: The Future of Computing (According to Me)
Post by: WAS on June 29, 2019, 03:03:14 AM
Quote from: PabloMack on June 28, 2019, 05:19:55 PM
Quote from: WASasquatch on June 27, 2019, 01:55:21 PM
We did some digging online and seems this specific tower goes for 600-1000.

I seem to recall that the IS people were offering to give this larger than a washing machine
system to anyone who wanted it. Considering the size, lack of documentation etc. I turned
them down. If I had been a salesman I probably could have found a buyer. But I'm not much
of a salesman.

They definitely seem to be selling as novelty pieces, based purely on nostalgia.
Title: GP Addressing Modes can now Reach Full Address Space
Post by: PabloMack on July 24, 2019, 04:46:05 PM
Just an update. CPU architectures involve trade-offs between a large number of things, such as the number of registers, the addressing modes, the kinds of operations that are available, etc. In trying to maintain high levels of things like coding efficiency, simplicity and capability, you generally have to give up some of one thing in order to get more of another. Designing a CPU architecture is a complicated problem. Today I decided to make a change to the standard addressing modes that are used with most of the general-purpose instructions in ϕEngine.

The x86 was given a new lease on life when AMD defined AMD64. It was somewhat of a stop-gap solution, though I admire what they accomplished. However, even though the x86 now has a pretty full set of 64-bit data operations, its addressing is still pretty weak and little improved. The architecture was basically upgraded to give some breathing room in its address space. In this regard, it is more like a 32-bit architecture with a 64-bit address space that can accommodate multiple 32-bit images. Programs are still limited to 2 GB unless you do a lot of back-flips to address more than that. The ModRM byte has only a 2-bit address-mode field, which accommodates displacements of either 8 or 32 bits. That is it! To reach farther than that requires a lot of manual address calculations, and you'll get little help from the linker or link-loader. Jumps have the same limitations.
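To put a number on that 2 GB ceiling, here is a small sketch of my own (an illustration, not from the thread): under the default x86-64 "small" code model, GCC and the linker assume all code and static data fit within +/-2 GB of the instruction pointer because the encodings only carry 8-bit or 32-bit displacements, so a single static object bigger than that typically fails at link time unless you reach for -mcmodel=medium or -mcmodel=large (the "back-flips" mentioned above). The file name and exact error text are illustrative; messages vary by toolchain.

/* big.c - illustrative only */
#include <stdio.h>

/* 3 GB of static data: larger than any 32-bit displacement can reach. */
static char big_table[3ULL * 1024 * 1024 * 1024];

int main(void)
{
    /* gcc big.c                  -> usually "relocation truncated to fit: R_X86_64_PC32" */
    /* gcc -mcmodel=medium big.c  -> links; large objects get 64-bit addressing           */
    big_table[0] = 1;
    printf("table starts at %p\n", (void *)big_table);
    return 0;
}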

ϕEngine has double the number of addressing modes in its standard operations. What I did today was replace two auto-increment addressing modes with indirect-with-offset modes whose offsets can reach the whole address space, for pretty much the entire instruction set.

During the 1970s and '80s, Digital Equipment Corp. was a major player in the minicomputer market. Their operating system (VMS) was pretty powerful and was used by a lot of industrial companies. When DEC was acquired by Compaq (which later merged with HP), different pieces of the company went in different directions. Intel ended up with rights to the Alpha, while VMS, by then renamed OpenVMS, was eventually spun off to a separate company and is still in use today. As is the case with Microsoft, they are at the mercy of CPU hardware manufacturers and have found themselves like a polar bear on a melting iceberg. Support for x86-64 is now in development. For the computer-philes among you, I found this talk so interesting that I watched it twice.

https://www.youtube.com/watch?v=FZN6LjuEgdw
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on April 11, 2021, 09:11:25 AM
Another update. I have been making extensive use of a program I wrote (which I call "CPU") to manage the instruction set of ϕEngine. It spits out source code for the assembler, which makes that job a lot easier. I have reworked the ISA many times and now feel it is in pretty good shape and lays out an excellent plan. Friday I officially started work on the hardware design. I anticipate spending the lion's share of my time over the next several months in a simulator that I bought a license for a few days ago. It will be cool to finally be using tools that express true parallelism, which programming languages do not, since they are inherently sequential. I am also planning to expand "CPU" to automate code generation for the hardware design. This is interdisciplinary co-design on steroids. I feel like I was born to do this.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on October 05, 2022, 03:45:52 PM
Another update. Just before Russia's invasion of Ukraine, I started work on the next-generation parser, and from there moved on to a start on the code generator. I also reworked the syntax, which meant I had to go back to the assembler for some weeks. I reworked the vector instructions and they are much better than they were. So now I am working on the HDL again, and CPU is expanding to fill some needs in anticipation of execution. CPU now spits out an instruction decoder in Verilog. It compiles without error; seeing whether it will simulate is another matter.
Title: Re: The Future of Computing (According to Me)
Post by: PabloMack on October 27, 2022, 05:13:57 PM
Update. I'm really getting into the hardware design of ϕEngine now. I can't remember when I've had so much fun. Things are moving along smoothly so far. This first implementation will be microcoded. I bought an FPGA-based SBC from Digilent that will run a soft-core version of ϕEngine. Hopefully the FPGA will have enough logic and memory for it. It will be a new experience to have a computer with a user-definable processor. Even if it only has the performance of a 68030/80386, I will be ecstatic.
Title: Re: The Future of Computing (According to Me)
Post by: WAS on October 31, 2022, 04:43:22 PM
Quote from: PabloMack on October 27, 2022, 05:13:57 PMUpdate. I'm really getting into the hardware design of ϕEngine now. I can't remember when I've had so much fun. Things are moving along smoothly so far. This first implementation will be micro-coded. I bought an FPGA-based SBC from Digilent. It will run a soft-core version of ϕEngine. Hopefully the FPGA will have enough logic and memory for it. It will be a new experience to have a computer with a user-definable processor. Even if it only has the performance of a 68030/80386, I will be ecstatic.
That sounds awesome. Even if its performance is small by today's standards, that's still a huge step in the right direction. I'd argue there is still a lot you could do with that sort of power anyhow. I'm pretty sure the original DART mission's central CPU was a 68030.

Quote from: ExtremeTech
At heart, the system consists of one Motorola 68030 CPU, two 68302 I/O processors, 2MB of SRAM, and 1MB of Flash RAM
which was capable of real-time "Advanced Video Guidance Sensor" processing.
Title: Re: The Future of Computing (According to Me)
Post by: WinterLight on February 11, 2023, 01:05:42 AM
Quote from: WAS on October 31, 2022, 04:43:22 PM
That's pretty close to the processing power of the computers in the original space shuttle.