The Future of Computing (According to Me)

Started by PabloMack, May 22, 2019, 07:29:52 PM


PabloMack

Quote from: Oshyan on June 09, 2019, 01:43:03 PM
I sort of wondered if you might be able to seek funding from them given the stated goals of the EPI itself (separate from the current choice of approach).

- Oshyan

What does this part mean? (I'm a native Texan. You can connect the dots.)

"To fund all that research, it hopes to become profitable in a few years though it will remain owned and operated as a relative of the European Union."

Oshyan

Ohh, somehow I thought you were in Europe. Ah well :D

- Oshyan

PabloMack

Quote from: Oshyan on June 09, 2019, 02:17:32 PM
Ohh, somehow I thought you were in Europe. Ah well :D
- Oshyan

It's probably because I talk so much about Unicode. When I say the word to most Americans,
I just get a blank stare. "Uni-what?" Americans in particular are quite happy with ASCII. They
don't understand why everybody else doesn't just dump their character sets and do everything
the way they do it. After all, the 'A' in ASCII stands for American! :o

WAS

Quote from: PabloMack on June 09, 2019, 02:23:32 PM
Quote from: Oshyan on June 09, 2019, 02:17:32 PM
Ohh, somehow I thought you were in Europe. Ah well :D
- Oshyan

It's probably because I talk so much about Unicode. When I say the word to most Americans,
I just get a blank stare. "Uni-what?" Americans in particular are quite happy with ASCII. They
don't understand why everybody else doesn't just dump their character sets and do everything
the way they do it. After all, the 'A' in ASCII stands for American! :o

Ok, I gotta share this with a programming group. They're going to bust up xD

Matt

#34
Hi Paul,

Quote from: PabloMack on June 03, 2019, 03:06:34 PM
I was just looking at the ARM64 manual and I'd hate to have to write a code
generator for that monster. I really miss Motorola's manuals because they
were so much better than anyone else's. What I want for the average
programmer is something that they can easily understand. I think it is a
tragedy when CPUs are so complicated that it puts the control of the whole
market into the hands of a few billion-dollar companies.

With current architectures I think most "average programmers" expect the compiler to do the work of translating from the high-level language to the CPU, so the CPU doesn't need to be easy to understand. This should free CPU designers to make choices based on what executes faster rather than what's easier to understand. There will always be enough smart people to write compilers for difficult-to-understand CPUs (not just in multi-billion dollar companies, but also independents) so that the rest of us don't have to understand them, but they are not "average programmers" IMO.

I gather that you would like to bring programmers closer to the hardware by designing new language(s) and CPUs simultaneously. That sounds like a noble goal, and a fun one. But I believe there will always be a plethora of high-level languages out there to solve the ease-of-use problem. High-level languages often make things easier with no performance penalty, but they can't make things faster than the hardware allows.

So what happens at the low level should emphasize raw execution speed (along with the other physical goals such as low power consumption, size, etc.), because software can insulate the user from the implementation, as it has done increasingly since the beginning of computing. If you can make the CPU and the lowest-level language one and the same thing while simultaneously finding a faster and more efficient execution paradigm, that would be a win. But I suspect that in the long run that bump would be dwarfed by the other layers of abstraction built on top of it.
Just because milk is white doesn't mean that clouds are made of milk.

PabloMack

#35
Quote from: Matt on June 09, 2019, 09:11:23 PM
There will always be enough smart people to write compilers for difficult-to-understand CPUs (not just in multi-billion dollar companies, but also independents) so that the rest of us don't have to understand them, but they are not "average programmers" IMO.

When I was in first grade, my elementary school held a softball game between the faculty and the sixth-graders. I thought (as did all of the other first-graders), "That should be an even match, seeing that sixth-graders are basically adults." Of course I thought very differently after I actually reached adulthood myself. As a child, I thought everything above my level was over my head, and it all seemed about the same. Perhaps this is the basis for the Dunning–Kruger effect. I had to reach those levels before I realized how vastly different the complexities were.

It reminds me of something my dad once said. When he started medical school, he thought that once he climbed to the top of that ladder he would have it made. Instead he found himself at the bottom of another ladder that he couldn't see while climbing the first one. It didn't take very many ladders before he realized that he could probably never reach "the top" because his career had become an endless series of ladders.

So experience blew away my notions such as "child vs. adult" and "those who understand how computers work vs. those who don't". And it's easy to dismiss casualties and say that's just part of business because it was "just one of them and not one of us."

Computer complexity comes in at least two varieties. One is easy to manufacture but difficult to understand (RISC). The other is easy to understand but difficult to manufacture (CISC). During the last 20 years, all new designs have been RISC. But there are basic principles that say "smarter is better". CISC processors have smarter instructions, so shouldn't they run faster?
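To make the trade concrete, here is one line of C with roughly what each style of machine turns it into (hand-written, approximate output; the exact instructions vary by compiler and flags):

    /* One C statement, lowered two ways. The CISC encoding folds the
       read-modify-write into a single "smarter" instruction; the RISC
       target spells out each step with simpler instructions. */
    void bump(long *p, long x)
    {
        *p += x;
        /* x86-64 (CISC): add %rsi, (%rdi)   -- one instruction */
        /* ARM64  (RISC): ldr x2, [x0]       -- load            */
        /*                add x2, x2, x1     -- add             */
        /*                str x2, [x0]       -- store           */
    }

The question is whether folding those steps into one instruction buys real speed, or just moves the same work inside the chip.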

I have another story, from when I was working for a large corporation. I was part of a department called "Advanced Process Control". We used DEC computers to automate data collection while the IS department was using an IBM midrange system (an AS/400). The IS department hated us because we could do it better than they could. As the saying went, "Nobody got fired for choosing IBM." One time my boss (a DEC proponent) attended a meeting between our IS people and IBM. He laughed at our IS management because they would say, "This is really complicated and we can't understand it. It must be good!"

Movie-makers: "The complexity of CG software is not important because the artists will deal with that to make life simpler for us. That is their job. There will always be enough artists to fill that need."

Paul

Matt

#36
Dunning-Kruger effect is probably in full effect here in my reply  ;D

Quote from: PabloMack on June 10, 2019, 02:55:40 PM
Computer complexity comes in at least two varieties. One is easy to manufacture but difficult to understand (RISC). The other is easy to understand but difficult to manufacture (CISC). During the last 20 years, all new designs have been RISC. But there are basic principles that say "smarter is better". CISC processors have smarter instructions, so shouldn't they run faster?

For the "ease of use" goal, from my layman's perspective it appears that this might result in a shift of responsibilities between the CPU and the compiler. Within the definition of "compiler" I would also include a code generator whose input is some very low-level language that's more in tune with the needs of the CPU but whose instructions might not map 1:1 to the CPU itself. Outsourcing some of the implementation from the hardware to the first software layer (as long as it's compiled) doesn't seem unreasonable to me. To put it another way, you could define a virtual CPU that is easy to program. It may be fully implemented in hardware, or there might be a software layer. The boundary between hardware and software would be decided by the implementation, not the specification.

I tend to think that the RISC vs. CISC argument on the basis of ease-of-use alone *at the hardware level* is not something that can be fully answered without considering how it's actually programmed by the majority of programmers. I would also say that the majority of programmers will be using high level languages regardless of interface to the CPU, unless the interface to the CPU looks like a very expressive high level language that makes others unnecessary. Perhaps your ϕPPL is that language? If so, that could be very significant.

For the speed of execution goal, I'm definitely not qualified to have an opinion on RISC vs. CISC.

Quote
I have another story, from when I was working for a large corporation. I was part of a department called "Advanced Process Control". We used DEC computers to automate data collection while the IS department was using an IBM midrange system (an AS/400). The IS department hated us because we could do it better than they could. As the saying went, "Nobody got fired for choosing IBM." One time my boss (a DEC proponent) attended a meeting between our IS people and IBM. He laughed at our IS management because they would say, "This is really complicated and we can't understand it. It must be good!"

Hah, well I would never claim that complicated and difficult to understand is a good thing in isolation.

Quote
"Movie-makers don't care about how complex CG software is because the artists will deal with that to make life simpler for us. That is their job. There will always be artists to fill that need."

I agree that this would be the wrong way for movie makers to think about CG software if they were the ones choosing, because artists cost more than software and simpler software reduces the cost spent on artists.

If you have simpler CG software that's built upon lower-level CG libraries, is it bad that the low-level libraries (e.g. Embree, CUDA, RTX etc.) are not directly usable by the CG artists when there are developers who can use the libraries? Fewer developers need to understand the libraries than the number of artists who will use the higher-level software. Is it necessarily a bad thing to have a system built upon lower levels that require more specialized knowledge, or is this how we gain more efficiency overall?
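A trivial sketch of that layering, with everything invented for the example (a real stack would have something like Embree at the bottom):

    #include <stdio.h>

    /* Low-level layer: the part only a few developers need to understand.
       A crude ray/sphere hit test standing in for a real tracing kernel. */
    static int hit_sphere(float ox, float r)
    {
        /* Ray from (ox, 0, -5) along +z against a sphere of radius r
           at the origin: it hits iff the ray passes within r of the axis. */
        return ox * ox <= r * r;
    }

    /* High-level layer: the simple call the artist-facing tool exposes. */
    static char shade(float x)
    {
        return hit_sphere(x, 1.0f) ? '#' : '.';
    }

    int main(void)
    {
        for (float x = -2.0f; x <= 2.0f; x += 0.25f)
            putchar(shade(x));
        putchar('\n');
        return 0;
    }

The top layer stays simple precisely because the specialist layer underneath doesn't have to be.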

Perhaps that's not where the debate lies. I suppose it depends on whether the very lowest level can be made simple enough to make some of the intermediate levels unnecessary without other compromises. Of course that would be a good thing.
Just because milk is white doesn't mean that clouds are made of milk.

PabloMack

Matt, this is good stuff. Thanks for the thoughts.

My assembler is twisting my brain because it is so different from anything I've used before.
I am struggling to free my mind from the chains of experience using standard systems.
Hahaha....I'm in Heaven!

Paul

Asterlil

I don't want to make a discussion-stopping comment, but this is all such interesting stuff -- keep talkin'!

Since I retired from programming (sort of)(you can check out any time you want, but you can never leave), I've noticed that a gazillion script- or high-level languages seem to have sprung up. When Paul mentioned upthread that programming above assembler level is more or less CPU-independent, it got me thinking about all those noob languages, and I wondered why languages are sprouting like weeds...

...and I wonder if the reason might be hardware-based. I mean, I don't have to know microcode to write JavaScript, but does my Intel citizenship steer me toward JavaScript?

Matt

Quote from: Asterlil on June 10, 2019, 06:19:00 PM
...and I wonder if the reason might be hardware-based. I mean, I don't have to know microcode to write JavaScript, but does my Intel citizenship steer me toward JavaScript?

I think the size of the JavaScript ecosystem is what leads people towards JavaScript. I can't imagine why else anybody would choose it  :P (j/k, kind of)
Just because milk is white doesn't mean that clouds are made of milk.

PabloMack

#40
Quote from: Asterlil on June 10, 2019, 06:19:00 PM
Since I retired from programming (sort of)(you can check out any time you want, but you can never leave), ...

When you were programming, what languages did you use before the noob ones came out?

JavaScript is really part of the HTML web-based group of languages, so its strength is portability.
It and others like it (Python comes to mind) are like an RV where you take everything with you as
you go, but your programs will be plodding along like tortoises. Languages like C, C++ and ϕPPL
are more like Ferraris or Corvettes (or cheetahs and falcons): they make good use of the machine's
performance.

WAS

#41
Quote from: PabloMack on June 10, 2019, 09:35:23 PM
Quote from: Asterlil on June 10, 2019, 06:19:00 PM
Since I retired from programming (sort of)(you can check out any time you want, but you can never leave), ...

When you were programming, what languages did you use before the noob ones came out?

JavaScript is really part of the HTML web-based group of languages, so its strength is portability.
It and others like it (Python comes to mind) are like an RV where you take everything with you as
you go, but your programs will be plodding along like tortoises. Languages like C, C++ and ϕPPL
are more like Ferraris or Corvettes (or cheetahs and falcons): they make good use of the machine's
performance.

In web development we were specifically taught, through various books, that JavaScript was a "TOOL" and not to be abused. Boy, have things changed. jQuery (which was praised at first for its tools, until everyone started doing whole websites requiring it), Node.js, etc., all go against the principles we were originally taught. Those principles may have come from a security standpoint: back in the late 90s and early 2000s you could play some havoc on machines with JS, which is why the popularity of no-JS browsers grew.

An entire new generation of web developers spawned through JavaScript, and honestly ruined the industry. Website/tools/data generators everywhere. CPU wastage everywhere. Flashy stuff fading in and out and sliding around...  It sucked having to assimilate into the new age.

PabloMack

#42
Quote from: Matt on June 10, 2019, 05:34:45 PM
For the "ease of use" goal, from my layman's perspective it appears that this might result in a shift of responsibilities between the CPU and the compiler. Within the definition of "compiler" I would also include a code generator whose input is some very low-level language that's more in tune with the needs of the CPU but whose instructions might not map 1:1 to the CPU itself. Outsourcing some of the implementation from the hardware to the first software layer (as long as it's compiled) doesn't seem unreasonable to me. To put it another way, you could define a virtual CPU that is easy to program. It may be fully implemented in hardware, or there might be a software layer. The boundary between hardware and software would be decided by the implementation, not the specification.

The overlying CPU programming model would likely be compatible with the underlying RISC hardware, so it seems to be a viable strategy if that were all you wanted to do. It might be comparable to putting a ceramic roof on top of a composite one to make the building look different. However, the ϕEngine's virtual memory model is much more sophisticated than any RISC system's (or any CPU's I've seen, for that matter). Trying to make a RISC virtual memory model look like the ϕEngine's would require replacing the foundation of the building. Programming models are directly visible to the programmer, but virtual memory is not; it has to work underneath and out of sight. So I am back to having to design a new hardware system from the ground up anyway. The alternative would perform on a race track like Frankenstein's Monster, and would look just as beautiful.

The goal of the ϕSystem is to surpass what Kernighan, Ritchie, Pike and company did at Bell Labs when C and UNIX were co-developed. C was developed to write both operating systems and applications, but quite a bit of assembly was also needed because C/C++ knows almost nothing about the hardware. The goal of the ϕSystem is to do this again but eliminate the need for assembly language in the source code for the operating system and associated drivers.

Assembly still has to remain available, though, because (1) it may take some time before all of the features of ϕPPL are working, (2) it is needed for optimization, (3) it can meet various unforeseen needs, and (4) it is an educational tool for teaching students the principles of assembly language. I think there is no better tool for teaching engineers and programmers how computers work than writing code in assembly language. HLLs can never get you there.

In order for this to work, though, ϕPPL will have to be aware of the hardware, and that makes it a lot different from most HLLs. It will have to be able to set up and tear down stacks, call its supervisors, satisfy service calls from its subordinates, and program the special-purpose resources that systems have, which are visible only to operating-system software and not at the application-programming level. No other language to my knowledge does those kinds of things. Most HLLs were designed to hide these things and are only for writing applications, so their hardware awareness is stripped away to simplify their programming models. They are dumb and blind when it comes to system software in general.
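As a small example of the kind of thing C can't express on its own, here is a sketch using GCC-style inline assembly on x86-64 (the wrapper names are just mine, and both operations fault outside of kernel privilege):

    #include <stdint.h>

    /* C has no notion of privileged registers, so system code wraps
       them in inline assembly. */

    /* Read CR3, the page-table base register at the heart of the
       virtual memory system. */
    static inline uint64_t read_cr3(void)
    {
        uint64_t val;
        __asm__ volatile("mov %%cr3, %0" : "=r"(val));
        return val;
    }

    /* Write a model-specific register: wrmsr expects ECX = MSR number
       and EDX:EAX = the 64-bit value. */
    static inline void write_msr(uint32_t msr, uint64_t val)
    {
        uint32_t lo = (uint32_t)val, hi = (uint32_t)(val >> 32);
        __asm__ volatile("wrmsr" : : "c"(msr), "a"(lo), "d"(hi));
    }

A language that treated resources like these as first-class objects could check and manage them the way it does everything else, which is the gap ϕPPL is meant to close.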

Asterlil

Languages I have known?
BAL, the assembly language for the IBM 370, was my first language, if you don't count BASIC over a land line to a mainframe in Milton Keynes! With assembler, I hand-wrote my code sitting in Coventry (I was living in England then), and the coding sheets were delivered to Liverpool, where they were transcribed by key-punch operators. I then got back a job deck of Hollerith cards, and if there were errors, I punched a new card by hand.

Does anybody here remember Convergent Technologies, and their operating system CTOS? What a beautiful machine that was. 8086-based, then 286. I wrote in PL/M-86 and COBOL, and I knew my chip and motherboard architecture. Then it was consumed by Burroughs, and some descendant of that marriage was used by the post office for some time, maybe still is.

Then PCs, and I wrote in ASM-286 and Lattice C. When I became a caregiver to my parents (in Florida) I lost my career mobility. After a stint of legacy code maintenance in COBOL/SQL on a Wang minicomputer, I got back to PCs and developed a DBMS in FoxPro for DOS, which I then converted to FoxPro for Windows. And then I got out of the bidness, while OOP was getting a proper foothold.

So I did get back into it and did some web development in the unholy triad of HTML/CSS/JavaScript, and it seemed so loosey-goosey compared to the languages of my career. And then, just as I was mastering jQuery, all these other frameworks started popping up, like Angular, along with languages I know nothing of for handling specific tasks like website backends, or for expediting yet other languages, and I just backed off, basically feeling confused. ActionScript, anybody? I stick to my triad and jQuery in Dreamweaver, and ignore the rest. Well, I might learn Python, seeing as it's the language of Blender, but that's it. All the other stuff seems faddish; I completely agree with WASasquatch.

Oh, I still have the Wang boot floppy! It's pinned to my corkboard.

And don't get me started on the loss of creativity and teamwork after business/marketing gained ascendancy over computers.

Anyway, I would gladly go back to Ferraris and Corvettes if I were still in the career.

PabloMack

#44
Asterlil,

You have a very interesting and varied background. COBOL gives me the creeps because it is so wordy. I think doing most of everything with operators makes a lot more sense and makes source code a lot more concise. So you do have some C and assembly experience. That's down close to the hardware, where I like to be. But I find the standard Microsoft assembler syntax a bit ambiguous. I spent many years with embedded Motorola 68K, and its syntax was very clear and easier for me to understand. I've only done a little with GNU.

The Web-based languages (HTML/XML/CSS, JavaScript, PHP, and even Python and Java) are indeed very loosey-goosey, as you say. They were never planned very well; they just grew over and into each other until they got to be what we have now. The Web is based on them, so there is no escape if you want your work to run on the web. I am more of a hardware guy, so I am happy just doing little bits of Web work here and there. The current language I am learning is Verilog, which is an HDL (Hardware Description Language) and not really a programming language. I used ABEL many years ago and had great fun, but there is very little support for it now. So Verilog/SystemVerilog will be my main focus for the foreseeable future. It has become an IEEE standard, and there seems to be a movement away from VHDL (another Ada-like HDL).