Discussion in 'Reviews & Articles Discussion' started by HardwareHeaven, Nov 21, 2008.
I wonder if it or something similar will appear in a near-future version of DirectX.
I guess, put in the sense that it does add to IQ, the framerate hit is acceptable; I guess it's too young a tech to really judge it fully. I was more shooting it down because that guy was using it in his arguments for why a GTX 260 is better to have than a 4870.
As far as I know, for a DX implementation of a PhysX-type deal, Windows 7 has something similar in the works in the form of WARP (Windows Advanced Rasterization Platform), which uses the processor to do some rendering. Processors with the SSE4.1 instruction set will be faster at that rendering (Penryn/Nehalem are the only Intel processors with it; as far as AMD goes, I don't think they will ever be SSE4.1 compliant, since they opted for their own SSE4a set).
WARP works with DX10 and DX10.1. As to how good it will be? I would say marginal at release, but it definitely has potential.
If you haven't heard about WARP, a good article on it can be found here...
Windows 7 WARP brings DX10 rendering to the CPU -- Gamers.com News
I wonder what ATI will use to counter PhysX, or are they even going to counter-attack at all?
Well, even though the guy was ranting nonsensically, I would rather have an OC'd GTX 260 over a 4870 myself, partially due to PhysX and partially due to the Forceware drivers.
I don't think this is going anywhere inside a gaming environment. CPU-based solutions like this just won't be powerful enough to drive in-game effects - the bandwidth just won't be high enough (in some cases less than a tenth of a standard graphics card's, which will lead to massive slowdowns even without a lot of particles etc.). This is more for Windows eye-candy feature sets such as Aero, for users whose graphics card lacks acceleration support.
The thing I don't understand about WARP is that they claim, for example, that if your graphics card fails you will still be able to load Windows - but how on earth is the monitor going to receive a signal if your graphics card is fried?
WARP seems like a good solution, I just don't know for what.
As for PhysX, I was never impressed with it, but it has potential.
Now that this thread has settled down some, I'm poking my head in here for a sec. On PhysX, I think it does have potential, but it is still new and has teething pains, much like Vista did initially.
A buddy of mine came up with the great idea of using my modded 9600GT as a dedicated PhysX processor alongside my new HD4870. Unfortunately, Vista only allows one display driver to be installed at a time, because ATI and NV both use WDDM drivers and not XPDM. So basically, for this to work I would need two NV cards using the exact same WDDM driver.
Another buddy of mine said someone found a workaround on Guru(ATI & NV), but I am not a member there, and given the facts I am skeptical... BUT... if anyone has some ideas, let me know.
Wouldn't it be cheaper, or at least less problematic, if you just bought a dedicated PhysX card?
I suppose it would be, but the 9600GT runs 780/2000/1050 stable with the cooler, and I already have it, lol. It was an idea, but your suggestion might be something to check out in the New Year... those cards are around 100 CAD, pretty cheap actually...
Thanks for the reply
I heard something a while ago about the possibility of adapting physx to run on any ATI card, but I don't know what came of it.
I too have a modded 9600GT lying around, but no SLI motherboard to hook it up alongside the 9800. Besides, I've yet to get into a game that makes any real use of it. I'm still hopeful though - PhysX (and CUDA for that matter) has great potential.
True, I was not expecting much out of WARP - discrete graphics will always be far ahead in terms of processing power - but it does give way to some other ideas on how to make games run better in terms of CPU loading and getting every bit out of your system.
I would also add that I use 7 beta as my main OS currently, and I have not really seen anything better for performance in gaming scenarios.
I do agree though about it being "eye candy"... it will probably take the same route as ReadyBoost, to be honest.
ATI is also losing a lot of money doing this... you need to read about it...
Got a link?
Hmm, I don't know the prices exactly in the UK, but isn't the HD4870 cheaper than the GTX 260 in most cases, especially an OC version of the GTX?
On the other hand, the HD4870 has DX 10.1, and you can easily combine it with another one on various Intel motherboards...
The UK market is fluctuating so much right now it's hard to nail down prices. However, the 512MB 4870 is very inexpensive now at around £190: HIS ATI Radeon HD 4870 512MB GDDR5 TV-Out/Dual DVI/HDMI (PCI-Express) - Retail
A GTX 260 MAXCORE can be bought for around £212 inc. VAT: Zotac GeForce GTX 260 "Maxcore" 896MB GDDR3 TV-Out/Dual DVI (PCI-Express) - Retail
and a 1GB 4870 is around £218: HIS ATI Radeon HD 4870 1024MB GDDR5 TV-Out/Dual DVI/HDMI (PCI-Express) - Retail
So it is pretty close with regards to pricing ...
Incidentally, I've been asked what DX10.1 "is" exactly, as I notice a lot of people are talking about it:
What are the changes? DX 10.1's goal is to offer the "complete" DX 10, giving developers better control over image quality and making mandatory some of the things that are optional in DX 10. For example:
- 32-bit floating-point filtering is optional in DX 10 (16-bit FP filtering is mandatory) but becomes mandatory in DX 10.1.
- In DX 10 the number of multisample anti-aliasing samples is optional; DX 10.1 makes 4x AA mandatory and requires two specific sample patterns. Graphics cards can offer more sample patterns, and developers can query them in their shaders.
- Graphics cards that are DX 10.1 compliant will have to offer programmable shader output sample masks and multisample AA depth readback.
- Game developers will be able to index into cube maps and perform bitwise copies from uncompressed textures to block-compressed texture formats.
Nor do I. Perhaps there were driver issues there as well, but if he couldn't figure it out, I know few who can...
On the 9600GT, I don't believe you actually need an SLI board - just one with two PCI-E x16 slots. As far as I have read, the idea is that the cards run independently of each other, with one dedicated to processing PhysX.
As has been mentioned several times in this thread, there is still quite a lack of applications using this, so I'm not getting too crazy with it... but those applications will come.
Foxconn P4M9007MB - google it and pity me!