VOGONS


Reply 20 of 43, by LSS10999

Rank: Oldbie
the3dfxdude wrote on 2023-05-21, 21:31:

Well, it's kind of like the ISA slot. First they made it "optional", and things like -5V were no longer required. One by one the manufacturers decided to pull it out and changed chipsets in ways that broke compatibility (you have no slot, so it doesn't need to work like a full ISA bus). From about 2020 on, ISA bus signaling has been getting removed completely, so even the integrated legacy stuff is long gone. They deleted the -5V line from the ATX spec like it never existed. 16/32-bit is coming to the same point; soon it will be impossible to support a large portion of the PC's existence. As for Secure Boot being optional, again, that is just a recommendation from Microsoft. All that has to happen is for the manufacturers to start locking down their products one by one. Even if it is an option, eventually you have no option, and they'll delete that line too, like it never happened. So I probably shouldn't say that Win12 will for sure be the deciding factor, but I'm telling you, many things are lining up for the x86 platform to soon become radically different. I wonder who will pull the trigger, but it will be very subtle.

I recall reading this very interesting quote on ReactOS forum regarding UEFI-only systems.

Any machine that does not support BIOS (AKA Legacy) booting is no longer truly "IBM PC" compatible and does not deserve to be called a "PC"

They are PCs, but IBM-incompatible PCs, so I call them post-IBM PCs.

This left a strong impression on me when I first saw it back then. That discussion started in 2014, and now, 9 years later, it has become reality: PCs are indeed drifting further and further away from the original "IBM PC", first by going UEFI-only, and now with x86S.

Reply 21 of 43, by Gmlb256

Rank: l33t

I see x86-S as an attempt to reduce CPU die size and, in theory, make the CPU more efficient by removing legacy stuff that isn't used in modern software development (x87 FPU, port-mapped I/O, 16-bit addressing mode, etc.). For now it is conceptual, and I don't think Intel will implement it in future mainstream CPUs, at least not in the short term.
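For anyone who has forgotten what one of those legacy features even looks like from software, here is a minimal sketch of port-mapped I/O (illustrative only: Linux on x86, needs root, and it just reads the CMOS RTC seconds register through the classic index/data ports 0x70/0x71):

#include <stdio.h>
#include <sys/io.h>   /* ioperm(), inb(), outb() -- x86 Linux only */

int main(void)
{
    /* Ask the kernel for access to I/O ports 0x70-0x71 (CMOS/RTC index and data ports). */
    if (ioperm(0x70, 2, 1) != 0) {
        perror("ioperm (run as root)");
        return 1;
    }

    outb(0x00, 0x70);               /* select CMOS register 0x00: RTC seconds */
    unsigned char sec = inb(0x71);  /* read it back through the data port */

    printf("RTC seconds register (BCD): 0x%02x\n", sec);
    return 0;
}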

It is funny how some people compare this to the Itanium (IA-64) architecture, which has nothing in common with x86 (let alone x86-64) except for its poor x86 emulation.

VIA C3 Nehemiah 1.2A @ 1.46 GHz | ASUS P2-99 | 256 MB PC133 SDRAM | GeForce2 GTS 32 MB | Voodoo2 12 MB | SBLive! | AWE64 | SBPro2 | GUS

Reply 22 of 43, by LSS10999

Rank: Oldbie
Gmlb256 wrote on 2023-05-23, 15:08:

I see x86-S as an attempt to reduce CPU die size and, in theory, make the CPU more efficient by removing legacy stuff that isn't used in modern software development (x87 FPU, port-mapped I/O, 16-bit addressing mode, etc.). For now it is conceptual, and I don't think Intel will implement it in future mainstream CPUs, at least not in the short term.

It is funny how some people compare this to the Itanium (IA-64) architecture, which has nothing in common with x86 (let alone x86-64) except for its poor x86 emulation.

We'll need to wait for an actual implementation (if one ever appears) to know how much die area x86S could really save. Die size and efficiency matter much more in embedded use cases.

As for the comparison... x86-64 was actually designed by AMD originally (as AMD64). Intel adopted it and implemented it as EM64T (now branded Intel 64).

Itanium (IA-64) was the 64-bit architecture originally designed by Intel. It was very different from x86 and was eventually displaced by x86-64.

Reply 23 of 43, by Big Pink

Rank: Member
Gmlb256 wrote on 2023-05-23, 15:08:

It is funny how some people compare this to the Itanium (IA-64) architecture, which has nothing in common with x86 (let alone x86-64) except for its poor x86 emulation.

Intel has a track record of trying to leave x86 behind (iAPX 432, i860, Itanium). Those all flopped, and the time has come for another attempt, this time by warping x86 into something else.

I thought IBM was born with the world

Reply 24 of 43, by Gmlb256

Rank: l33t
Big Pink wrote on 2023-05-23, 17:15:
Gmlb256 wrote on 2023-05-23, 15:08:

It is funny how some people compare this to the Itanium (IA-64) architecture, which has nothing in common with x86 (let alone x86-64) except for its poor x86 emulation.

Intel has a track record of trying to leave x86 behind (iAPX 432, i860, Itanium). Those all flopped, and the time has come for another attempt, this time by warping x86 into something else.

Yep, I have no doubt about it. The only difference is that x86-S still keeps the relevant opcodes, hence the name.

VIA C3 Nehemiah 1.2A @ 1.46 GHz | ASUS P2-99 | 256 MB PC133 SDRAM | GeForce2 GTS 32 MB | Voodoo2 12 MB | SBLive! | AWE64 | SBPro2 | GUS

Reply 25 of 43, by The Serpent Rider

Rank: l33t++
Gmlb256 wrote on 2023-05-23, 15:08:

I see x86-S as an attempt to reduce CPU die size and, in theory, make the CPU more efficient by removing legacy stuff that isn't used in modern software development (x87 FPU, port-mapped I/O, 16-bit addressing mode, etc.). For now it is conceptual, and I don't think Intel will implement it in future mainstream CPUs, at least not in the short term.

They can easily implement this in server space first. Just like Itanium.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 26 of 43, by javispedro1

Rank: Member
Gmlb256 wrote on 2023-05-23, 15:08:

I see x86-S as an attempt to reduce CPU die size and, in theory, make the CPU more efficient by removing legacy stuff that isn't used in modern software development (x87 FPU, port-mapped I/O, 16-bit addressing mode, etc.).

But the "die size" of this is completely insignificant -- it was already cheap as much as 20 years ago, and after dozens of process shrinks all the "legacy stuff" is likely to be a rounding error in terms of area. In fact, even the uncore parts of all Intel chips have multiple smaller x86 cores which do have all of this legacy support implemented (e.g. ME runs on an off-the-shelf Quark with real mode, vm86 and everything), so not even Intel thinks of it as much of an issue.

MMX and SSE are likely to be orders of magnitude more important in terms of die cost, as observed in previous posts.

I can, however, see a significant reduction in validation/verification cost for Intel, since each new execution mode grows the verification effort exponentially. But a reduction there is unlikely to be noticed by customers -- we won't see faster or more efficient chips, and likely not even cheaper ones.

And considering the bad memories that raising the idea of dropping x86 brings back for customers... I don't know why they'd do it.

Reply 27 of 43, by zyzzle

Rank: Member

Intel does it because they think they can get away with it. They've tried before (Itanium) and failed; now they just might succeed because they (and Microsoft) don't care about legacy support any more. They're "pressing" these updates via software enforcement as well, even when the hardware could happily run them (TPM 2.0, anyone? Mandatory "secure boot"?).

Way to go: destroy 30 years of your legacy code base and cripple your user experience! For what?! To better serve Intel, of course.

The old catchword, that crutch called "emulation", is worse than useless. Why should we be forced to limit speed to that of a 486 through emulation when it would be so simple to keep the legacy 16/32-bit modes?

I've already begun to stockpile older Intel chips. Now, how long before all hardware makers are "forced" to abandon us to Intel's whims and fancies? Yes, they'll say: emulation. No way, man. Bare metal is where it's at.

Reply 28 of 43, by LSS10999

Rank: Oldbie
zyzzle wrote on 2023-05-25, 00:51:

Intel does it because they think they can get away with it. They've tried before (Itanium) and failed; now they just might succeed because they (and Microsoft) don't care about legacy support any more. They're "pressing" these updates via software enforcement as well, even when the hardware could happily run them (TPM 2.0, anyone? Mandatory "secure boot"?).

Nowadays software vendors design their security principles around the assumption that "well-behaved" users should neither know about nor touch certain things, and they discriminate against or punish those who do, going well beyond just voiding the warranty.

Of course, legacy features that predate those security principles tend to give experienced users more access than vendors would like, so such features get removed when they are too hard to fix or control.

While there are ways to circumvent those security checks so you aren't immediately singled out, that can never last forever; sooner or later you'll be caught and lose access to everything.

There's little one can do about that. The only real option is to reject such products outright, but in a good number of cases they are too essential to abandon, as many real-life routines depend on them, with very few alternatives.

Big Pink wrote on 2023-05-23, 17:15:

Intel has a track record of trying to leave x86 behind (iAPX 432, i860, Itanium). Those all flopped, and the time has come for another attempt, this time by warping x86 into something else.

zyzzle wrote on 2023-05-25, 00:51:

Way to go: destroy 30 years of your legacy code base and cripple your user experience! For what?! To better serve Intel, of course.

The old catchword, that crutch called "emulation", is worse than useless. Why should we be forced to limit speed to that of a 486 through emulation when it would be so simple to keep the legacy 16/32-bit modes?

I've already begun to stockpile older Intel chips. Now, how long before all hardware makers are "forced" to abandon us to Intel's whims and fancies? Yes, they'll say: emulation. No way, man. Bare metal is where it's at.

x86-64 (AMD64) was originally AMD's idea. The 16-bit and 32-bit foundations of x86 were mostly, if not entirely, Intel's.

So yes, this can indeed be considered another attempt by Intel to throw away x86. With x86S, Intel can finally leave behind most, if not all, of the x86 legacy that was really theirs.

PS: A good amount of online games' anti-cheat software does not like users having developer-related tools installed, such as debuggers, IDEs, and VM hosts, in order to deter cheating or multiboxing. With x86S, VM hosts (which would also have to emulate more things than before) might become more common, at least during a transition period. Given the established security stance against VM hosts, this could become a headache for players and developers alike.

Reply 29 of 43, by Big Pink

Rank: Member
zyzzle wrote on 2023-05-25, 00:51:

The old catchword, that crutch called "emulation", is worse than useless. Why should we be forced to limit speed to that of a 486 through emulation when it would be so simple to keep the legacy 16/32-bit modes?

Apple changes architecture every decade, throws in an emulator and they're sitting on the biggest pile of cash imaginable! Intel, meanwhile, isn't doing so well.

I thought IBM was born with the world

Reply 30 of 43, by Start me up

Rank: Newbie
LSS10999 wrote on 2023-05-21, 14:00:

"Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use."

I wonder if this means under this architecture, 32-bit programs on Windows x64 will be able to access as much memory as 64-bit ones, rather than being capped at ~2GB?

No, dropping support for multiple segments is a step in the opposite direction. When a 32-bit program wants to use more than 2 GB of memory, there are several options:

  • Reduce the kernel space from 2 GB to 1 GB and increase the user space from 2 GB to 3 GB. This is a boot setting in Windows (the /3GB switch), and the program also has to be linked as large-address aware.
  • Use more than one segment (this option won't be available any more)
  • Use Address Windowing Extensions (AWE) to remap physical memory into your virtual address space at run time (a sketch follows below)

Both 32-bit and 64-bit programs can use all the available memory, even if you have 16 GB installed. This has been possible since 1995, when the Pentium Pro introduced 36-bit physical addressing, supporting up to 64 GB of RAM (except that 64-bit programs don't run on that processor). It also works on processors from AMD.
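Here is a rough sketch of what the AWE route looks like in practice (Win32 C, illustrative only: the window size and error handling are simplified, and the process needs the "Lock pages in memory" privilege for AllocateUserPhysicalPages to succeed):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Reserve a 256 MB virtual-address "window"; physical pages are attached to it later. */
    SIZE_T windowBytes = 256u * 1024u * 1024u;
    void *window = VirtualAlloc(NULL, windowBytes,
                                MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
    if (window == NULL) return 1;

    /* Allocate physical pages. A 32-bit process can own far more physical pages than
       fit in its address space, because only the pages currently mapped into the
       window consume virtual addresses. */
    ULONG_PTR pageCount = windowBytes / si.dwPageSize;
    ULONG_PTR *pfns = calloc(pageCount, sizeof(ULONG_PTR));
    if (pfns == NULL) return 1;
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns))
        return 1;  /* typically fails without the SeLockMemoryPrivilege */

    /* Map the pages into the window, use the memory, then unmap so that a different
       batch of physical pages could be mapped into the same window later. */
    if (!MapUserPhysicalPages(window, pageCount, pfns)) return 1;
    memset(window, 0xAB, pageCount * si.dwPageSize);
    MapUserPhysicalPages(window, pageCount, NULL);  /* NULL page array = unmap */

    FreeUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns);
    free(pfns);
    printf("mapped and released %lu pages\n", (unsigned long)pageCount);
    return 0;
}

The trick is the remapping: by swapping different sets of physical pages into the same virtual window, a 32-bit process can work with far more RAM than fits in its address space at any one time.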

The 64-bit operating mode offers no real benefit compared to the 32-bit operating mode:

  • It introduced a few more general-purpose registers, which hardly any compiler makes good use of.
  • It introduced some 64-bit instructions. But 64-bit and even wider 128-bit operations are also available on 32-bit processors with no 64-bit operating mode at all (see the sketch after this list). On the other hand, in 64-bit mode some basic, commonly used instructions are no longer available, because their opcodes were reassigned to the new 64-bit instructions (the single-byte INC/DEC encodings became the REX prefixes, for example).
  • 64-bit programs tend to use more memory, have a bigger file size, and run at about the same speed as 32-bit programs.
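To illustrate the point about 64-bit (and 128-bit) operations being usable without the 64-bit operating mode, here is a small sketch using C with SSE2 intrinsics; the build flags are only an example (e.g. gcc -m32 -msse2 produces pure 32-bit code that still performs two 64-bit additions per instruction):

#include <emmintrin.h>   /* SSE2 intrinsics, usable from 32-bit code on any SSE2-capable CPU */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t a[2] = { 3000000000LL, 5000000000LL };  /* values beyond the 32-bit range */
    int64_t b[2] = { 4000000000LL, 6000000000LL };
    int64_t out[2];

    __m128i va  = _mm_loadu_si128((const __m128i *)a);
    __m128i vb  = _mm_loadu_si128((const __m128i *)b);
    __m128i sum = _mm_add_epi64(va, vb);   /* two 64-bit adds (PADDQ), no 64-bit mode needed */
    _mm_storeu_si128((__m128i *)out, sum);

    printf("%lld %lld\n", (long long)out[0], (long long)out[1]);
    return 0;
}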

It has happened to me twice that I asked a programmer which CPU architecture he was programming for and he didn't even know what a CPU architecture is. So how should such programmers know how to use more than 2 GB of RAM, or 64-bit instructions in 32-bit mode?

Anyway, there is a trend at Intel toward reduced compatibility. For example, I've got a graphics chip that is about 10 years old, and Intel offers drivers for it only for Windows 7 and no other version of Windows. That wasn't the case 20 years ago, when you could often get drivers for many versions of Windows. This chip also has broken VGA support, so VGA can barely be used any more. The trend of declining compatibility in Intel products simply means the products become less capable.

Customers can vote with their wallet whether they like or dislike the reduced capabilities.

Reply 31 of 43, by RetroGamer4Ever

Rank: Oldbie

Intel has released the latest draft of its x86S spec, removing numerous 16/32-bit pieces.

https://t.co/vX9R6hINqt

Reply 32 of 43, by robertmo3

Rank: Oldbie
jmarsh wrote on 2023-05-21, 14:30:

64-bit operating systems will still run 32-bit apps exactly as they already do; there aren't any changes to that.

wondering if anything important has changed in that regard in version 1.2? 😉
I guess we're gonna have to track the changes from now on, you never know what's gonna happen in the coming days 😉

Reply 33 of 43, by robertmo3

Rank: Oldbie

btw, 16-bit support was dropped about 20 years after the introduction of 32-bit
we are now about 20 years after the introduction of 64-bit

Reply 34 of 43, by gerwin

Rank: l33t
Start me up wrote on 2023-11-05, 19:02:

Customers can vote with their wallet whether they like or dislike the reduced capabilities.

Voted AMD for the new company computers this week.
1 - Intel had their crappy Spectre/Meltdown mitigations.
2 - Intel recently had reliability issues with their processors and, as I read, kept affected users in the dark for a long time.
3 - Instead of giving reassurance on point 2, they tend to talk about how good their future processors will be at AI rubbish with NPU cores.
4 - What you wrote.

--> ISA Soundcard Overview // Doom MBF 2.04 // SetMul

Reply 35 of 43, by javispedro1

Rank: Member

After AMD's debacle with VME, do you think they won't drop x86 compatibility, probably even before Intel does?

Reply 36 of 43, by gerwin

Rank: l33t
javispedro1 wrote on 2024-09-30, 19:16:

After AMD's debacle with VME, do you think they won't drop x86 compatibility, probably even before Intel does?

No illusions there. Voting with the wallet between those two (Intel/AMD) won't change anything in their roadmaps.
Their history as competitors seems somewhat fabricated and unnatural to me anyway.

The VME bug: yes, I was generally aware of it, but I had forgotten the abbreviation, and I will have to dig up those topics again to give them another read. VME Broken on AMD Ryzen

--> ISA Soundcard Overview // Doom MBF 2.04 // SetMul

Reply 37 of 43, by DosFreak

Rank: l33t++

Yup, I won't touch Intel for a CPU for the next 5 years, so I'll see what they are up to in 2029. Don't fuck up in the next 5 years, AMD, and don't implement x86S in that timeframe, thanks.
Hopefully when I buy another laptop next year I can get one with an AMD processor that isn't shit compared to the Intel ones.
For their graphics cards it will be 2027, and I'll see what they are like then. I refuse to be a beta tester for Arc.

How To Ask Questions The Smart Way
Make your games work offline

Reply 38 of 43, by RetroGamer4Ever

Rank: Oldbie

AMD's Strix Point Halo APUs are looking like sexy beasts, with up to 128GB of onboard LPDDR5 and the ability to shift a huge chunk of that to the GPU, leaving the rest to feed a large pile of cores. Maybe we can squeeze a few more FPS out of Crysis, eh?

Reply 39 of 43, by jtchip

Rank: Member
robertmo3 wrote on 2024-09-30, 14:59:
jmarsh wrote on 2023-05-21, 14:30:

64-bit operating systems will still run 32-bit apps exactly as they already do; there aren't any changes to that.

wondering if anything important has changed in that regard in version 1.2? 😉
I guess we're gonna have to track the changes from now on, you never know what's gonna happen in the coming days 😉

No need to wonder, the document (the cdrdv2.intel.com link in the first post) has a revision history in section 1.2 on page 9.

The "removing numerous 16/32-bit features" comes from a misleading TechSpot article, those features were already gone in rev 1.0 of the document. Subsequent revisions simply clarify and document certain behaviours.