VOGONS


EISA Graphics card benchmark results


First post, by 386_junkie

Rank: Oldbie

This thread continues from the earlier look at EISA graphics cards and chipset differences:

EISA Graphics / Video Cards

Since I'm in the habit of linking threads right now... I'll continue in the same vein.

Due to compatibility issues on NT 3.1 with some of the cards / drivers, I plan to repeat these tests on another rig and OS (an upgraded 40MHz Deskpro M), but for now the testing was done on the Systempro with the following specifications:

Test Specs
Systempro EISA Motherboard @ 40MHz FSB
2x AMD 386DX40
DRAM board; 80MB / 80ns
Adaptec 2740 EISA Controller
1.44MB Floppy & 4GB IDE HDD


Results are also here:
https://www.dropbox.com/s/kgt1lq5usytws2c/results.jpg?dl=0

(Benchmark result screenshots attached.)

Below are the results I obtained when changing the CPU configuration; each set of results reflects a change of CPU, not of graphics chipset. The CPUs swapped in were:

2x TI 486SXL2-50 (no cache)... & 2x TI 486SXL2-50 (8KB cache)

(Result screenshots for the CPU-swap tests attached.)

Last edited by 386_junkie on 2016-10-17, 11:59. Edited 2 times in total.

Compaq Systempro; EISA Dual 386 ¦ Compaq Junkiepro; EISA Dual 386 ¦ ALR Powerpro; EISA Dual 386

EISA Graphic Cards ¦ EISA Graphic Card Benchmarks

Reply 1 of 50, by kixs

Rank: l33t

Nice results... For reference (if possible), put a standard ISA card in the mix - a Tseng ET4000 or a later Cirrus Logic.

Also, what is the Ti486SXL-50's actual clock? 1x50 or 2x25? I don't know if a 1x50 ever existed.

Requests here!

Reply 2 of 50, by 386_junkie

Rank: Oldbie
kixs wrote:

Nice results... For reference (if possible), put a standard ISA card in the mix - a Tseng ET4000 or a later Cirrus Logic.

Also, what is the Ti486SXL-50's actual clock? 1x50 or 2x25? I don't know if a 1x50 ever existed.

Yes, I am sorting out a different test rig (a Compaq Deskpro) for further testing, which will also cover ISA. The Systempro was quite limiting due to the OS (NT 3.1) and the drivers I could find for the cards.

For all the EISA card tests the FSB was 40MHz with an AMD DX40. The extra tests used only the ELSA Winner 1000 (the best all-round performer), but changed the CPU to a TI486SXL2-50 running at 40MHz.

(Photo attached: taken while building the modified Systempro.)

Compaq Systempro; EISA Dual 386 ¦ Compaq Junkiepro; EISA Dual 386 ¦ ALR Powerpro; EISA Dual 386

EISA Graphic Cards ¦ EISA Graphic Card Benchmarks

Reply 3 of 50, by Scali

Rank: l33t

Slightly offtopic, but I have an issue with people using the term 'GPU' for any kind of graphics chip, or even the entire videocard.
To draw the analogy with CPUs, not every circuit that can perform calculations is a CPU. There are more basic variations of circuits, such as the Arithmetic Logic Unit (ALU).
What makes a CPU different from just an ALU is that a CPU is a 'processing unit': it can process a list of instructions, it can execute a program.

Likewise, not all graphics chips are GPUs. Early graphics chips required the CPU to feed them instructions one-at-a-time. It wasn't until the era of the GeForce256 and the Radeon that you could actually have the graphics card process a list of instructions independently. That's when the term 'GPU' first came into use, to differentiate it from 'passive' 3D accelerators, which depended on the CPU.

In short, the graphics cards here do not contain chips that can be described as a 'GPU'.
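As a rough illustration of that 'fed one at a time' model, here is a minimal sketch using legacy OpenGL immediate mode purely as an analogy (none of the cards in this thread have OpenGL drivers, and the Vertex struct and function name are invented for the example): the CPU hands the chip every vertex of every triangle, every frame, and the chip merely draws what it is given.

#include <GL/gl.h>

struct Vertex { float x, y, z; };

// CPU-fed drawing: the graphics chip runs no program of its own,
// it only rasterises the primitives it is handed, one by one.
void draw_mesh_cpu_fed(const Vertex* verts, int triangle_count)
{
    glBegin(GL_TRIANGLES);                               // CPU opens a primitive batch
    for (int i = 0; i < triangle_count * 3; ++i)
        glVertex3f(verts[i].x, verts[i].y, verts[i].z);  // one call per vertex, per frame
    glEnd();                                             // CPU closes the batch
}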

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 4 of 50, by 386_junkie

Rank: Oldbie
Scali wrote:

not all graphics chips are GPUs. Early graphics chips required the CPU to feed them instructions one-at-a-time. It wasn't until the era of the GeForce256 and the Radeon that you could actually have the graphics card process a list of instructions independently. That's when the term 'GPU' first came into use, to differentiate it from 'passive' 3D accelerators, which depended on the CPU.

In short, the graphics cards here do not contain chips that can be described as a 'GPU'.

Hi Scali,

Thanks for the post, and sorry about the mistake. I made a couple of edits to take the GPUs out.

I had thought that graphics ASICs and GPUs were the same thing... I didn't realise the difference. I'm glad you posted and told me... I've learned something new today and will read into it more.

I hope the EISA card results were of some interest.

Cheers

Compaq Systempro; EISA Dual 386 ¦ Compaq Junkiepro; EISA Dual 386 ¦ ALR Powerpro; EISA Dual 386

EISA Graphic Cards ¦ EISA Graphic Card Benchmarks

Reply 5 of 50, by Scali

Rank: l33t

Well, if you want the true definition of GPU, then you can get it straight from the horse's mouth here: http://www.nvidia.com/object/gpu.html
It was a marketing term cooked up by NVidia for the introduction of the GeForce256. They wanted to signify the hardware T&L on it, which basically made it capable of rendering entire scenes without any CPU intervention (the "minimum of 10 million polygons per second" requirement is pure marketing, of course; it doesn't make much sense technically).
ATi tried to market a similar term, "VPU" or Visual Processing Unit, for their version of the same concept. For some reason GPU stuck and VPU did not.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 6 of 50, by Anonymous Coward

Rank: l33t

I finally had time to take a closer look at this thread. I am very disappointed that the "enhanced" version of the Compaq card is slower in DOS than the original!

"Will the highways on the internets become more few?" -Gee Dubya
V'Ger XT|Upgraded AT|Ultimate 386|Super VL/EISA 486|SMP VL/EISA Pentium

Reply 7 of 50, by Shadow Lord

Rank: Newbie

Excellent thread here. EISA video cards have been rarities, outside of the Compaq QVision ones that show up all the time. I find the results for the Mach32 a bit surprising; it was an excellent DOS card, and I am very happy with my 2MB MCA version. Of course I have an Elsa 4MB and a 4MB? Miro Crystal 32, so I'm not too worried 😉. Thanks again for the excellent info.

Reply 8 of 50, by dogchainx

Rank: Member

How do EISA cards compare to their ISA counterparts for gaming?

I'm playing around with my EISA Gateway 486 system and enjoying the fiddling with SCSI EISA, but there's no way I'm going to procure an EISA Mach32 anytime soon, so I'm left with ET4000AX ISA, Stealth PRO ISA, Stealth64 ISA.

I might have to do my own benchmarks. 🤣

386DX-40MHz-8MB-540MB+428MB+Speedstar64@2MB+SoundBlaster Pro+MT-32/MKII
486DX2-66Mhz-16MB-4.3GB+SpeedStar64 VLB DRAM 2MB+AWE32/SB16+SCB-55
MY BLOG RETRO PC BLOG: https://bitbyted.wordpress.com/

Reply 9 of 50, by brassicGamer

Rank: Oldbie

I have just acquired an EISA / VLB combo board. Although it came with an EISA SCSI controller, I have no other EISA cards. I've got about 3 VLB cards and a selection of ISA ones too, so as soon as I can hunt down some EISA equivalents, I'll be able to contribute to this cool project 😀

Check out my blog and YouTube channel for thoughts, articles, system profiles, and tips.

Reply 10 of 50, by kanecvr

Rank: Oldbie
Scali wrote:

Slightly offtopic, but I have an issue with people using the term 'GPU' for any kind of graphics chip, or even the entire videocard.
To draw the analogy with CPUs, not every circuit that can perform calculations is a CPU. There are more basic variations of circuits, such as the Arithmetic Logic Unit (ALU).
What makes a CPU different from just an ALU is that a CPU is a 'processing unit': it can process a list of instructions, it can execute a program.

Likewise, not all graphics chips are GPUs. Early graphics chips required the CPU to feed them instructions one-at-a-time. It wasn't until the era of the GeForce256 and the Radeon that you could actually have the graphics card process a list of instructions independently. That's when the term 'GPU' first came into use, to differentiate it from 'passive' 3D accelerators, which depended on the CPU.

In short, the graphics cards here do not contain chips that can be described as a 'GPU'.

Is it true that the Voodoo and Voodoo 2 could perform some geometry calculations, taking some load off the CPU? I remember reading that somewhere. Still, even if it is true, I don't think you could call them GPUs since they can't do 2D on their own.

Reply 11 of 50, by kixs

Rank: l33t
dogchainx wrote:

How do EISA cards compare to their ISA counterparts for gaming?

I'm playing around with my EISA Gateway 486 system and enjoying the fiddling with SCSI EISA, but there's no way I'm going to procure an EISA Mach32 anytime soon, so I'm left with ET4000AX ISA, Stealth PRO ISA, Stealth64 ISA.

I might have to do my own benchmarks. 🤣

Never saw or heard of a Stealth64-series card on the ISA bus!? 😕 Maybe you have a Speedstar64 ISA?

Requests here!

Reply 12 of 50, by dogchainx

Rank: Member
kixs wrote:
dogchainx wrote:

How do EISA cards compare to their ISA counterparts for gaming?

I'm playing around with my EISA Gateway 486 system and enjoying the fiddling with SCSI EISA, but there's no way I'm going to procure an EISA Mach32 anytime soon, so I'm left with ET4000AX ISA, Stealth PRO ISA, Stealth64 ISA.

I might have to do my own benchmarks. 🤣

Never saw or heard of a Stealth64-series card on the ISA bus!? 😕 Maybe you have a Speedstar64 ISA?

Oops. Yeah, Speedstar64 ISA. I also have a Diamond Stealth Pro ISA. I got the two confused. 😎

386DX-40MHz-8MB-540MB+428MB+Speedstar64@2MB+SoundBlaster Pro+MT-32/MKII
486DX2-66Mhz-16MB-4.3GB+SpeedStar64 VLB DRAM 2MB+AWE32/SB16+SCB-55
MY BLOG RETRO PC BLOG: https://bitbyted.wordpress.com/

Reply 13 of 50, by Scali

Rank: l33t
kanecvr wrote:

Is it true that the Voodoo and Voodoo 2 could perform some geometry calculations, taking some load off the CPU?

As far as I know they can only do things in post-perspective space.

kanecvr wrote:

I remember reading that somewhere. Still, even if it is true, I don't think you could call them GPUs since they can't do 2D on their own.

What makes something a 'GPU' is not whether it can do 2D or not, but rather whether it is an actual 'processing unit'. As in, you can feed it a program and it executes it by itself.
VooDoo chips cannot do this; they require the CPU to prepare each polygon and send it to the card. That makes them merely 'accelerators'.
The first card that was a proper GPU was the GeForce256:
The geometry is stored entirely in video memory, and the CPU just has to send the parameters to display it: mainly the matrices to place the objects in the world, set up the camera, and so on. In theory it would only have to do all this once, but then you would render the same image every time. So in practice you'll want to update the camera position, move some objects/actors, and perhaps lights, every frame.
The GPU performs the entire 3D transformation by itself, from its own memory. Just like how a CPU runs an application entirely by itself, and the user just gives the application some input.
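To make that concrete, here is a minimal sketch of the retained-geometry model, using fixed-function OpenGL with vertex buffer objects purely as an illustration (an assumption on the API: it requires headers or a loader exposing OpenGL 1.5 entry points, and upload_once / draw_frame are invented names). The geometry is uploaded to video memory once; per frame the CPU only supplies matrices and a draw call.

#include <GL/gl.h>

static GLuint vbo = 0;

// Upload the mesh once; after this it lives in video memory.
void upload_once(const float* xyz, int vertex_count)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertex_count * 3 * sizeof(float),
                 xyz, GL_STATIC_DRAW);
    glVertexPointer(3, GL_FLOAT, 0, nullptr);        // read vertices from the bound buffer
    glEnableClientState(GL_VERTEX_ARRAY);
}

// Per frame: only matrices (camera / object placement) and one draw call;
// transform & lighting run on the graphics processor from its own memory.
void draw_frame(const float* projection4x4, const float* modelview4x4, int vertex_count)
{
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(projection4x4);
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(modelview4x4);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
}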

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 14 of 50, by gdjacobs

Rank: l33t++

I generally agree, although the original Geforce had only fixed function TCL. At the time, NVidia defined GPU a little bit differently.

All hail the Great Capacitor Brand Finder

Reply 15 of 50, by Scali

Rank: l33t
gdjacobs wrote:

I generally agree, although the original Geforce had only fixed function TCL. At the time, NVidia defined GPU a little bit differently.

I don't see how fixed function or programmable shaders make a difference here.
Also, I don't think nVidia changed the definition of GPU over time.
This is how they explain it: http://www.nvidia.com/object/gpu.html

The technical definition of a GPU is "a single chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second."

The minimum throughput requirement is somewhat dubious, but other than that it makes perfect sense.
Sure, you can argue that today's GPUs go above and beyond that original definition. Then again, the same can be said for CPUs. CPUs have been around since the 1950s or so, and today's CPUs are way more advanced than what we had back then.
You can say that our perception of what a CPU or GPU is has changed, but the minimum requirements/original definition still hold.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 16 of 50, by gdjacobs

Rank: l33t++

I'm saying NVidia defined a GPU somewhat differently than how I read your definition. I feel it gels somewhat better to associate GPU with fully programmable pipelines. Using "general" to refer to an (even partially) fixed-function pipeline just doesn't sit right with me.

My personal ruminations aside, it's a moot point. NVidia coined the term, so they can use it however they please, estoppel laws notwithstanding.

All hail the Great Capacitor Brand Finder

Reply 17 of 50, by Scali

Rank: l33t
gdjacobs wrote:

I'm saying NVidia defined a GPU somewhat differently than how I read your definition. I feel it gels somewhat better to associate GPU with fully programmable pipelines. Using "general" to refer to an (even partially) fixed-function pipeline just doesn't sit right with me.

Who is referring to "general" then?
I don't see that word being used either in my post or in the nVidia quote.

I think you have to separate "programmable" into two things:
1) A "program" to render a set of source geometry onto a screen.
2) Programming certain parts of the pipeline.

The first is what a GeForce256 does: there's basically just a few instructions being fed to the GPU for "Render this batch of source geometry, using these matrices, to this viewport".
That can be seen as a "program".
Think of a GPU as having 'instructions' like "Transform & light vertexbuffer" and "draw T&L triangles".

The second is somewhat orthogonal to the first.
It allows you to also influence how certain parts of the pipeline process the geometry, rather than using a few 'pre-baked' algorithms in a state-machine. So basically your "Transform & light vertexbuffer" and "draw T&L triangles" 'instructions' can call programmable 'subroutines' themselves, rather than just evaluating certain states to pick some hardwired 'programs'.

However, that is a very blurry line there.
Firstly, even today's GPUs still have quite a bit of hardwired/state-machine logic (think of clipping polygons to the viewport, projecting from 3D to 2D, or performing 'hardwired' types of texture filtering such as bilinear, trilinear or anisotropic).
Secondly, you could even have a programmable pipeline, without fulfilling 1).
An example of that would be early DX9-capable hardware from Intel: they had programmable pixelshaders, yet they had no T&L whatsoever, so the CPU had to perform T&L and feed the data from system memory, much like on a VooDoo card.
In theory you could have a piece of hardware that implemented all types of shaders, yet had no ability to render a batch of geometry autonomously and required the CPU to feed it each triangle.
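For what it's worth, the two senses of "programmable" above can be sketched in OpenGL terms (my choice of API for illustration; it assumes a loader exposing OpenGL 2.0 entry points, and the function names are invented): the fixed-function path only flips states to pick a hardwired T&L 'program', while the programmable path supplies your own vertex-stage 'subroutine'.

#include <GL/gl.h>

// (1) Fixed function: state flips select one of the pre-baked, hardwired paths.
void pick_hardwired_program()
{
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
}

// (2) Programmable: the vertex stage now runs a 'subroutine' you wrote yourself.
GLuint install_vertex_subroutine()
{
    const char* vs_src =
        "void main() {\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "}\n";
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, nullptr);
    glCompileShader(vs);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glLinkProgram(prog);
    glUseProgram(prog);        // replaces only the fixed-function vertex stage
    return prog;
}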

GPU stands for "Graphics Processing Unit".
There is also the term GPGPU, "General Purpose Graphics Processing Unit", for GPUs that also support compute shaders (which aren't graphics-related in the strict sense).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 18 of 50, by gdjacobs

Rank: l33t++
Scali wrote:

Who is referring to "general" then?
I don't see that word being used either in my post or in the nVidia quote.

To differentiate from an ASIC.

All hail the Great Capacitor Brand Finder

Reply 19 of 50, by Scali

Rank: l33t
gdjacobs wrote:

To differentiate from an ASIC.

Again, where is ASIC coming from, and why do we need to differentiate in the first place?
You're making logical leaps here which I am unable to follow. You'll have to take smaller steps and explain yourself along the way.

I mean, are you saying a GPU can't be an ASIC? And if so, why not?
I don't see why. An ASIC can be a number of things; it's an umbrella term.
'AS' stands for application-specific, and I would argue that 'graphics' would certainly qualify for that.
Of course it doesn't necessarily work the other way around: there are ASICs aimed at graphics processing, which are too limited to be considered a GPU.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/