
graphics-card interview questions

Top frequently asked graphics-card interview questions

How do I determine which graphics card I'm using?

I recently read this answer on Gaming.SE, which made me realize that I actually have no idea how to tell which graphics card I have in my PC. Where can I find this information?
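On Windows, Device Manager and the dxdiag utility both report the adapter model. If you want to read it programmatically instead, here is a minimal sketch (assuming Windows and a C++ compiler; it simply walks the display adapters the OS reports):

    // Minimal sketch: list display adapters via the Win32 API.
    // Assumes a Windows build environment; link against user32.lib.
    #include <windows.h>
    #include <cstdio>

    int main() {
        DISPLAY_DEVICEA dd = {};
        dd.cb = sizeof(dd);
        // Index 0, 1, ... walks every adapter the OS knows about;
        // DeviceString holds the human-readable name (i.e. the GPU model).
        for (DWORD i = 0; EnumDisplayDevicesA(nullptr, i, &dd, 0); ++i) {
            std::printf("%lu: %s\n", i, dd.DeviceString);
        }
        return 0;
    }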


Source: (StackOverflow)

What exactly is a laptop "display audio" driver?

I've downloaded an Intel HD Graphics driver for my Dell laptop and the installer's welcome screen says it will install the following components:

  • Intel Graphics Driver
  • Intel Display Audio Driver

What exactly is "display audio"? Dell's and Lenovo's pages are spectacularly unhelpful.


Source: (StackOverflow)

ATI CrossFire instability and horizontal bands?

I recently added another ATI 5870 card to my system to experiment with ATI Crossfire (dual GPU) performance increases.

However, I've had a lot of intermittent stability problems, most seriously a set of oscillating horizontal bands which appear during gameplay and become quite severe, to the point that you can barely see the screen to exit the game!

It looks a little like this:

[screenshot: ATI CrossFire banding]

My system has an overclocked Sandy Bridge CPU that has been rock stable with a single 5870, but adding the second video card and enabling CrossFire seems to be problematic. The cards are both installed fine, fully seated with plenty of space between them, have both PCIe 6-pin power connectors connected, and my 850 W power supply should be ample.

The Catalyst hardware properties look fine:

Primary Adapter     
Graphics Card Manufacturer  Powered by AMD  
Graphics Chipset    ATI Radeon HD 5800 Series   
Device ID   6898    
Vendor  1002    

Subsystem ID    2289    
Subsystem Vendor ID 1787    

Graphics Bus Capability PCI Express 2.0 
Maximum Bus Setting PCI Express 2.0 x8  

BIOS Version    012.018.000.001 
BIOS Part Number    113-C00801-XXX  
BIOS Date   2010/02/08  

Memory Size 1024 MB 
Memory Type GDDR5   

Core Clock in MHz   875 MHz 
Memory Clock in MHz 1225 MHz    
Total Memory Bandwidth in GByte/s   156.8 GByte/s   

Linked Adapter      
Graphics Card Manufacturer  Powered by AMD  
Graphics Chipset    ATI Radeon HD 5800 Series   
Device ID   6898    
Vendor  1002    

Subsystem ID    2289    
Subsystem Vendor ID 1787    

Graphics Bus Capability PCI Express 2.0 
Maximum Bus Setting PCI Express 2.0 x8  

BIOS Version    012.020.000.001 
BIOS Part Number    113-C00801-100  
BIOS Date   2010/03/31  

Memory Size 1024 MB 
Memory Type GDDR5   

Core Clock in MHz   850 MHz 
Memory Clock in MHz 1200 MHz    
Total Memory Bandwidth in GByte/s   153.6 GByte/s

I've tried the following:

All to no avail!


Source: (StackOverflow)

How can I enable onboard graphics AND dedicated card simultaneously?

My PC (Compaq Presario) has an onboard Intel 3100 which is pretty lame but would be useful for testing, or for a 3rd monitor. I've also got an Nvidia PCIe card installed. I can't seem to find a way to turn both on at once... is it likely this is a BIOS limitation?

Running Windows 7.

The official page suggests I can't do this, but I wondered if there is a way?


Source: (StackOverflow)

Why do workstation graphics cards cost far more than equivalent consumer graphics cards?

An Nvidia GeForce GTX 780 Ti costs $700, while a Quadro K6000 costs $4000—yet they use the same underlying GK110 core!

The same can be said for other workstation GPUs from both Nvidia and AMD.

What exactly does this price difference pay for with a workstation GPU? It is my understanding that they have specially-tuned drivers for CAD and other intensive business applications, sacrificing speed in gaming applications for greater accuracy and performance in such business software, but this by itself can't explain the cost difference. They may have more memory, and often of the ECC type, but that still can't explain a nearly sixfold difference.

Would hardware validation explain the difference? I suspect it goes like this: among the GPU chips that test as usable, 30% go into a high-end consumer card, and 68% go into a slightly cheaper consumer card; the other 2% go through even deeper validation, and the few that pass get put into a workstation card. Could this be the case, and is this why they're so expensive?


Source: (StackOverflow)

Why do lots of games have DirectX 9 and 11 options, but NOT DX10?

I don't really know much about DirectX other than that it is responsible for enabling better graphics options in games, for example tessellation and ambient occlusion in DX11.

But my question is: why do some games (most recent games I've played, at least) have the option of choosing DX9 (default) or DX11 (with advanced options, and obviously with compatible video cards), but NO option for DX10?

Is DX10 a version that never got released? Was it defective? Why don't those games show an option to use DX10 alongside DX9 and DX11?

Are there ANY games that show all three options, or do they just 'jump' from DX9 directly to DX11? Why?
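For context, Direct3D 11 can drive DX10-class hardware through "feature levels", so a single DX11 code path already covers DX10 GPUs. A rough sketch of how an engine might probe for this (assuming the Windows SDK; link against d3d11.lib):

    // Ask Direct3D 11 which feature level the installed GPU supports.
    #include <d3d11.h>
    #include <cstdio>

    int main() {
        const D3D_FEATURE_LEVEL requested[] = {
            D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1,
            D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3,
        };
        D3D_FEATURE_LEVEL got = D3D_FEATURE_LEVEL_9_1;
        ID3D11Device* device = nullptr;
        HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            requested, sizeof(requested) / sizeof(requested[0]),
            D3D11_SDK_VERSION, &device, &got, nullptr);
        if (SUCCEEDED(hr)) {
            // e.g. 0xb000 for 11_0, 0xa000 for 10_0, 0x9300 for 9_3
            std::printf("Highest supported feature level: 0x%x\n", (unsigned)got);
            device->Release();
        }
        return 0;
    }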

Thanks.


Source: (StackOverflow)

How can I test my GPU memory/RAM? [duplicate]

I run MemTest86 a lot at work on customers' machines, and it's great for troubleshooting memory issues. My question is: how can I test whether a GPU is starting to go bad?

I know of programs like 3DMark that push the graphics card to its limits, but what about the video memory? Is it worth testing? Is there a stress tool actually able to catch issues in the video card's memory, perhaps using CUDA/OpenCL?
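For the CUDA/OpenCL angle, a very rough sketch of a write-then-read-back pattern test over device memory could look like this (assuming an NVIDIA card and the CUDA toolkit; the chunk size and pattern are arbitrary, and this is far less thorough than a purpose-built tester such as MemtestG80):

    // Pattern test over a chunk of video memory using the CUDA runtime API.
    // Compile with: nvcc vramtest.cu -o vramtest   (the file name is just an example)
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const size_t chunkBytes = 256ull * 1024 * 1024;          // test 256 MB at a time
        std::vector<unsigned int> pattern(chunkBytes / 4, 0xA5A5A5A5u);
        std::vector<unsigned int> readback(chunkBytes / 4, 0);

        void* dptr = nullptr;
        if (cudaMalloc(&dptr, chunkBytes) != cudaSuccess) {
            std::printf("allocation failed\n");
            return 1;
        }
        // Write a known pattern to VRAM, read it back, and compare.
        cudaMemcpy(dptr, pattern.data(), chunkBytes, cudaMemcpyHostToDevice);
        cudaMemcpy(readback.data(), dptr, chunkBytes, cudaMemcpyDeviceToHost);

        size_t errors = 0;
        for (size_t i = 0; i < readback.size(); ++i)
            if (readback[i] != pattern[i]) ++errors;

        std::printf("%zu mismatched words\n", errors);
        cudaFree(dptr);
        return errors ? 1 : 0;
    }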


Source: (StackOverflow)

What exactly is VGA, and what is the difference between it and a video card?

Operating-system development tutorials describe putting data on the screen by writing directly to VGA, EGA, or Super VGA memory, but what I don't get is this: what is the real difference between writing to a fixed address for display output and writing to the video card directly, whether onboard or removable? I just want a basic clarification of my confusion here.

And since it's not such a simple case, with all the variables in cards, connection interfaces, buses, architectures, systems-on-a-chip, embedded systems, etc., I find it hard to understand the idea behind this 100%. Would the fixed addresses differ between a high-end GPU and a low-end onboard one? Why or why not?

One of my goals in programming is to write a kernel and build an operating system, a far-fetched dream indeed. Failing to understand the terminology not only hinders me in some areas, but makes me seem foolish on the subject of hardware.

EXTRA: Some of the current answers speak of using the processor's maximum addressable memory, specifically in 16-bit mode. The problem is some of these other issues that arise:

1. What about the card's own memory? That would not need system RAM for the screen data itself.

2. What about higher-bit modes? And can't you avoid the BIOS in real mode (x86) and still address memory through AL?

3. How does the concept of writing to a fixed address still apply on a GPU with a multitude of registers and performance at or above that of the actual microprocessor?
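For reference, the "fixed address" those tutorials mean is the legacy VGA text buffer, a compatibility window that onboard and discrete cards alike expose at the same physical address; a driver talking to the card "directly" instead maps the card's own framebuffer and registers through its PCI BARs. A minimal freestanding sketch of the fixed-address approach (kernel-style code; it assumes the CPU is in a mode where physical address 0xB8000 is accessible, e.g. real mode or an identity-mapped kernel):

    // Write a string into the VGA text-mode buffer at physical address 0xB8000.
    // Each character cell is a 16-bit word: low byte = ASCII, high byte = attribute.
    #include <cstddef>
    #include <cstdint>

    void vga_print(const char* msg) {
        volatile std::uint16_t* vga = reinterpret_cast<std::uint16_t*>(0xB8000);
        const std::uint16_t attr = 0x0F00;               // white on black
        for (std::size_t i = 0; msg[i] != '\0'; ++i) {
            vga[i] = attr | static_cast<unsigned char>(msg[i]);
        }
    }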


Source: (StackOverflow)

Broken mouse cursor on main monitor, Windows 7 64 Bit, ATI Radeon HD 7870

UPDATE: Please check out the new answer I've posted to this problem. It might be that a solution to this frustrating problem exists now. Scroll down to see it.

Quite a while ago my graphics card died and I had to buy a new one. I decided on an ASUS Radeon HD 7870.

While I love the power of the graphics card and have no problems while playing games, I'm experiencing an annoying problem while just using Windows with my dual-monitor setup. Sometimes my mouse cursor gets corrupted on my main monitor and simply looks like this:

[screenshot: corrupted mouse cursor]

This seems to happen in random situations, and also sometimes when I move the mouse from one monitor to the other. I can always use a "workaround" to "fix" the problem: if I just move the mouse from one monitor to the other often enough, it becomes normal again at some point. But I don't want to do this all the time, so I'm searching for a solution.

I did a lot of Google research (try typing "ATI brok" into Google and it will already suggest plenty of searches for a broken cursor), but the results were mostly not helpful at all. Often they are "old" (from 2009 and before) and deal with mouse problems while playing games, which is not my problem. I'm missing up-to-date results from someone who perhaps has the same graphics card and can help me.

What I read a few times is that deactivating Windows Aero should "fix" the problem, but to be honest I enjoy Windows Aero a lot and would prefer a different solution (I don't want to sound arrogant). Similarly, some people say it helps to activate mouse trails, but the look and feel (the apparent lag) then bothers me even more. I also tried to prevent themes from changing the mouse cursor, but this didn't change anything.

Here, for example, is a big thread where people are talking about a similar (same?) problem. Some also state that deactivating Catalyst AI solved it for them, but I can't find this option in my up-to-date Catalyst Control Center anymore (maybe it's possible via a file somewhere in the CCC directory?).

Well, what's left to say is that I always keep my system up to date and have often installed new graphics card drivers (I even tried beta versions sometimes). But the problem never disappeared.

Can someone here help me, or does anyone have ideas or experience with the same problem? I would be glad to hear from you! I'm also curious whether this could mean my graphics card is broken? (Although somehow I find that hard to imagine.)

Thanks a lot for every thought you're sharing with me.

Edit: Today it has happened again with the new ATI drivers.

Edit 2: Please check out the new answer I've posted to this problem. It might be that a solution to this frustrating problem exists now. Scroll down to see it.


Source: (StackOverflow)

What is the use of a built-in graphics card on a "gaming" motherboard?

Many motherboards marketed as "gaming" have integrated Intel graphics. Examples are the ASUS B150I PRO GAMING/WIFI/AURA and the Gigabyte GA-Z170N-Gaming 5, but these are just a couple of many. Note the word "Gaming" in their respective names.

Now, I understand that if you want to build a gaming PC you would most likely opt for Nvidia or AMD. This is because integrated video doesn't stand a chance against higher-end Nvidia/AMD offerings. Correct me if I'm wrong.

I understand that putting integrated graphics onto a motherboard increases its cost, so there must be a reason why manufacturers do this. It looks to me like putting an integrated GPU on a gaming motherboard is the rule rather than the exception.

However, I cannot figure out what this integrated graphics is good for. Could you please explain what it can be used for (I'm guessing the intended use, but any other possible uses too), given that for a gaming PC one is most likely to use a dedicated GPU?

If you think any of my assumptions are wrong, please point that out; since the whole thing does not make a lot of sense to me, it is quite likely that my assumptions are wrong somewhere.


Source: (StackOverflow)

How do the CPU and GPU interact in displaying computer graphics?

Here you can see a screenshot of a small C++ program called Triangle.exe with a rotating triangle based on the OpenGL API.

[screenshot: Triangle.exe rendering a rotating triangle]

Admittedly a very basic example, but I think it's applicable to other graphics card operations.

I was just curious and wanted to know the whole process from double-clicking Triangle.exe under Windows XP until I can see the triangle rotating on the monitor. What happens, and how do the CPU (which first handles the .exe) and the GPU (which finally outputs the triangle to the screen) interact?

I guess the following hardware/software, among others, is primarily involved in displaying this rotating triangle:

Hardware

  • HDD
  • System Memory (RAM)
  • CPU
  • Video memory
  • GPU
  • LCD display

Software

  • Operating System
  • DirectX/OpenGL API
  • Nvidia Driver

Can anyone explain the process, maybe with some sort of flow chart for illustration?

It doesn't need to be a complex explanation that covers every single step (I guess that would go beyond the scope), but an explanation an intermediate IT guy can follow.

I'm pretty sure a lot of people who would even call themselves IT professionals could not describe this process correctly.
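For orientation, here is a guess at what the CPU-side source of a program like Triangle.exe might look like (assuming classic immediate-mode OpenGL with GLUT; the real Triangle.exe may differ). Everything below runs on the CPU, and the driver translates the gl* calls into command buffers that the GPU later executes against video memory:

    // Rotating triangle, immediate-mode OpenGL + GLUT (illustrative sketch).
    #include <GL/glut.h>

    static float angle = 0.0f;

    static void display() {
        glClear(GL_COLOR_BUFFER_BIT);
        glLoadIdentity();
        glRotatef(angle, 0.0f, 0.0f, 1.0f);   // CPU updates the transform...
        glBegin(GL_TRIANGLES);                // ...and hands vertices to the driver
        glColor3f(1, 0, 0); glVertex2f(-0.6f, -0.5f);
        glColor3f(0, 1, 0); glVertex2f( 0.6f, -0.5f);
        glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.6f);
        glEnd();
        glutSwapBuffers();                    // present the GPU's finished frame
    }

    static void idle() {
        angle += 0.5f;                        // animation state lives in system RAM
        glutPostRedisplay();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("Triangle");
        glutDisplayFunc(display);
        glutIdleFunc(idle);
        glutMainLoop();
        return 0;
    }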


Source: (StackOverflow)

Transatlantic ping faster than sending a pixel to the screen?

John Carmack tweeted,

I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?

And if this weren’t John Carmack, I’d file it under “the interwebs being silly”.

But this is John Carmack.

How can this be true?

To avoid discussions about what exactly is meant in the tweet, this is what I would like to get answered:

How long does it take, in the best case, for a single IP packet to be sent from a server in the US to somewhere in Europe, measured from the time that software triggers the packet to the point that it's received by software above driver level?

How long does it take, in the best case, for a pixel to be displayed on the screen, measured from the point where software above driver level changes that pixel's value?


Even assuming that the transatlantic connection is the finest fibre-optic cable that money can buy, and that John is sitting right next to his ISP, the data still has to be encoded into an IP packet, get from main memory across to his network card, travel from there through a cable in the wall into another building, probably hop across a few routers there (but let's assume it just needs a single relay), get photonized across the ocean, be converted back into an electrical impulse by a photosensor, and finally be interpreted by another network card. Let's stop there.

As for the pixel, this is a simple machine word that gets sent across the PCI express slot, written into a buffer, which is then flushed to the screen. Even accounting for the fact that “single pixels” probably result in the whole screen buffer being transmitted to the display, I don’t see how this can be slower: it’s not like the bits are transferred “one by one” – rather, they are consecutive electrical impulses which are transferred without latency between them (right?).
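To put rough numbers on both sides (every figure below is a ballpark assumption, not a measurement):

    // Back-of-the-envelope comparison of one-way packet time vs. pixel-to-photon time.
    #include <cstdio>

    int main() {
        // Packet: light in fibre travels at roughly 2/3 c, i.e. about 200 km per ms.
        const double km_newyork_to_london = 5600.0;   // great-circle estimate
        const double fibre_km_per_ms      = 200.0;
        const double one_way_packet_ms    = km_newyork_to_london / fibre_km_per_ms;  // ~28 ms

        // Pixel: a change made by software still waits for the next frame,
        // is scanned out over a frame, and passes through the monitor's own processing.
        const double wait_for_vsync_ms  = 16.7;   // up to one 60 Hz frame
        const double scanout_ms         = 16.7;   // one more frame to transmit to the panel
        const double display_lag_ms     = 20.0;   // internal processing + pixel response (varies widely)
        const double pixel_total_ms     = wait_for_vsync_ms + scanout_ms + display_lag_ms;  // ~53 ms

        std::printf("one-way packet  ~%.0f ms\n", one_way_packet_ms);
        std::printf("pixel-to-photon ~%.0f ms\n", pixel_total_ms);
        return 0;
    }

On those assumptions the pixel genuinely loses, and that is before adding the game's own render queue or a monitor with heavy post-processing.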


Source: (StackOverflow)

How to Use 3 Monitors

Right now my setup has a nice big 24" flatscreen in the center with a 19" flatscreen to the left. But I have a big gaping hole on the right.

I have a 3rd monitor to put there, but I'm not sure how to get the computer to recognize it. Do I need a graphics card with 3 ports? Can I span the monitors over non-SLI-linked graphics cards? Is it possible to plug my 3rd monitor into the onboard VGA port and have it work?


Source: (StackOverflow)

PC with 32-screen matrix freaks out

I've just finished building a new control room at work. It has 32 monitors, and the plan was to have a single computer powering it. The old room had a few computers with assorted screens and keyboards/mice everywhere, and we decided it was time to simplify things and have a single PC, since there's a single operator most of the time.

There's not an awful lot of demanding stuff running on the machine: some SCADA packages, IP camera viewing software, Office, etc.

The issue that I'm having isn't down to performance, at least I don't think so; the computer is of a fairly high spec. It's an HP Z840 with 2 Intel Xeon E5-2670s, 4 Nvidia NVS 810s, 256 GB of RAM and a 500 GB SSD. The operating system is Windows 10 Enterprise 64-bit. The screens are all HP Z24n.

My slots are used as follows:

  1. PCIe3x4 - None
  2. PCIe3x16 - NVS 810 1
  3. PCIe3x8 - None
  4. PCIe3x16 - NVS 810 2
  5. PCIe3x8 - NVS 810 3
  6. PCIe3x16 - NVS 810 4
  7. PCIe2x4 - None

I've realised after looking at the manual that I should have GPU 3 in slot 3. However, the behavior of the machine is strange. I connected all 32 at first and most came on with the Windows background and taskbar; about 10 had no background but had the taskbar. The mouse moved at a snail's pace and I was unable to position the screens in Nvidia Control Panel as it would crash/freeze. I unplugged the cables from GPUs 1 and 2 and managed to get 16 screens on from cards 3 and 4. When I got to screen 21, the 5th screen on GPU 2, the machine went crazy again. The mouse started to lag again, and some screens were showing as duplicates of each other.

I've had a look in Task Manager and I've not seen the CPU or RAM go any higher than 4%; when it locks up it's just Nvidia Control Panel that is not responding.

I'm thinking it must be some sort of bandwidth problem, but I'm not sure how to prove this or fix it.

Should I be able to get 32 1920x1200 screens out of this hardware?

Is this behavior normal? I will try moving NVS 810 3 to slot 3 and see what difference that makes; any other ideas would be appreciated.

The screens are arranged in an 8 by 4 matrix.

[photo: the PC and screen matrix]

update from 30/07/16

There had been questions about whether I had reached the maximum horizontal resolution limit for Windows, so I wanted to test this and prove it.

So I uninstalled the video card driver and removed one card, so I only had cards in slots 2, 4 and 6. I connected up 16 screens in an 8 by 2 matrix to the cards in slots 2 and 6, and it worked OK. The PC was still struggling when using Windows display settings and Nvidia Control Panel. After applying the video settings it took at least a minute to settle and allow me to accept the config. I stretched a window across the whole screen matrix.

[photo: 16 screens working OK]

I then tried to put a 17th screen on, and all hell broke loose again. As you can see below, I added the 17th screen in the middle of the two rows and applied the settings. The PC took ages to settle and allow me to accept again.

[screenshot: Nvidia Control Panel layout]

So at this point the newly added screen is duplicated off the bottom left, and Windows display settings is showing some freaky "6|17" instead of what the Nvidia Control Panel is showing.

[screenshot: 17th screen freakout]

I had a go at building the matrix up 4 by 4 and adding more in. Again I made it to 16 screens with no great shakes; still a little struggle waiting for it to settle and apply the config, but nothing major.

I connected them to the cards as follows:

NVS 810 1 - top 2 rows of 4
NVS 810 2 - bottom 2 rows of 4 (don't worry about the white screen, it was just an Explorer window)

[photo: 4 by 4 working OK]

I moved the right-side top four and connected 2 of them.

They worked 'OK', however they had black wallpapers, unlike the others. Also, when you did a left-click drag to select things, the selection wouldn't clear off, so I could draw blue boxes all over; I knew at this point something was up. For the heck of it I connected the next 2 and it threw all its toys out of the pram again. It merged/duplicated the top 2 middle screens.

[screenshot: 4x4 plus 4 with duplicated screens]

8/1/16

Ordering 6 x AMD FirePro W600; hopefully I will have them by the end of the week and will feed back!

8/4/16

Installed 3 x AMD FirePro W600 and hit the same wall at 16 screens. However, it was less flaky to set up compared to the Nvidia settings; the AMD display settings never crashed and allowed Windows display settings to control the screen layout.


Source: (StackOverflow)

Is it possible to connect an external GPU via Ethernet?

I have a laptop which has a working Ethernet port, but I always use WiFi. I am wondering if it is possible to run and use a graphics card (with an external power supply) connected to the Ethernet port (with some kind of PCI emulation to present the Ethernet-attached GPU as a PCI device).

A Cat6 cable can do 10 Gbps, which should be enough for a GPU to run and play games.

Could this be possible?
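For a sense of scale, a quick back-of-the-envelope comparison of the raw link rates (using the usual published per-lane figures; protocol overhead and latency are ignored):

    // Raw bandwidth: PCIe 3.0 links vs. 10 Gbit Ethernet.
    #include <cstdio>

    int main() {
        const double pcie3_per_lane_GBs = 0.985;                     // ~985 MB/s per PCIe 3.0 lane
        const double pcie3_x16_GBs      = 16 * pcie3_per_lane_GBs;   // ~15.8 GB/s
        const double pcie3_x1_GBs       = pcie3_per_lane_GBs;        // ~1.0 GB/s
        const double tenGbE_GBs         = 10.0 / 8.0;                // 10 Gbit/s = 1.25 GB/s raw

        std::printf("PCIe 3.0 x16: %4.1f GB/s\n", pcie3_x16_GBs);
        std::printf("PCIe 3.0 x1 : %4.1f GB/s\n", pcie3_x1_GBs);
        std::printf("10 GbE      : %4.2f GB/s (before Ethernet/IP overhead)\n", tenGbE_GBs);
        return 0;
    }

Raw throughput aside, a laptop's NIC has no way to present itself to the graphics driver as a PCIe link, which is presumably why real external-GPU setups use interfaces that actually carry PCIe, such as Thunderbolt or ExpressCard/mPCIe adapters.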


Source: (StackOverflow)