#251
mynd
    Quote Originally Posted by mistercrow View Post
    So are you trying to dispute what Carmack is saying or are you agreeing with it?
I'm saying if Carmack said it, then it likely means the Xbox One does have hUMA or some form of HSA.
I wasn't sure it would.

    Quote Originally Posted by Sajuuk Khar View Post
I'm not sure I get your logic about it actually being the opposite: a programmer with knowledge of the hardware knowing where everything is going in memory vs. an automated program directing instructions to the CPU or GPU based on data types.
You can't get around stalls while copying the data though, and that's what they're trying to get rid of.
I just wonder if having a 1st-gen implementation of a new memory-managing system is particularly useful in a console environment. If it's implemented through hardware (which from some quick reading it seems it is), is there any chance the first gen will be perfect in its handling of job allocation? And is it possible to bypass it if individual devs develop their own faster methods? If it's software, though, whatever, it can be updated :P
Same thing happened last gen: MS was the first to use unified shaders. It was nowhere near as good as later implementations, but it was still more than good enough.
In relation to the PS4 and memory, the Killzone PDF (it seems to come up a lot :P) shows 3 memory areas: video, system, and shared. If neither the GPU nor the CPU can use the same partition of memory, which one is really in control of that 128MB shared area?
The 128MB has made me scratch my head; it's such a small amount. But I think the key here is that it's scratch data: it's effectively recreated every cycle, so it would be ideal stuff to pass through from the CPU. I have a theory that there is a 128MB cache between the CPU and GPU on the PS4. But it's only a theory.
    While this might seem off topic, most of the techniques discussed about the ps4 also relate to the X1.
    They are indeed.
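To picture that scratch-data theory, here is a minimal C sketch of a double-buffered pass-through region; the 128MB split, the names, and the layout are purely hypothetical, nothing Sony has documented:

#include <stdint.h>

#define SCRATCH_HALF_BYTES (64u * 1024u * 1024u)  /* two 64MB halves = 128MB */

typedef struct {
    uint8_t *half[2];  /* two halves of the hypothetical shared region */
    unsigned frame;    /* frame counter picks which half the CPU writes */
} scratch_ring;

/* CPU side: rebuild this frame's scratch data (skinning matrices,
   particle spawn lists, ...) into the half the GPU is NOT reading. */
static uint8_t *scratch_cpu_half(scratch_ring *r)
{
    return r->half[r->frame & 1u];
}

/* GPU side (conceptually): consume the previous frame's half. The data
   is dead after one frame, so nothing is ever copied back to system RAM. */
static const uint8_t *scratch_gpu_half(const scratch_ring *r)
{
    return r->half[(r->frame + 1u) & 1u];
}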

#252
Two4DaMoney
    Quote Originally Posted by Sufi View Post
    so was carmack talking out of his butt when he said that the AMD architecture was essentially the same on both consoles?
In his full speech (and not the cherry-picked bull$#@! from that OXM article), he made the comment that both consoles are essentially the same due to the architecture being made by AMD. It's apples to apples next gen.
    Last edited by Two4DaMoney; 08-06-2013 at 06:23.


#253
John Willaford
    Quote Originally Posted by mynd View Post
Sorry, I missed this post. I'd actually say the opposite: because it is a known-spec/closed-box system, you know every system can run it. You also have a limited number of clock cycles per second; games are only going to improve through efficiencies in code, not hardware. Therefore HSA and hUMA are key to draining every single bit of use you can; it's essentially leveraging the highly parallel structure of the GPU and using it as extra CPUs.
Yes, you could get away with reading and writing the data between them, but reading data from a pool that isn't your own creates stalls. That's why Sony put a bus in to direct data straight from the CPU to the GPU.

The same reasoning is behind hUMA, although it's far more flexible: you can simply pass an address to the GPU and the GPU reads the data from memory. In one case the data is piped through from the CPU; in the other, the GPU simply fetches it from memory.

Without the bus Sony put in, or without hUMA, you have to duplicate the data by copying it over from system memory into GPU memory (this is still true to some extent on the PS4).


176GB/s is the maximum speed the RAM goes at. That gets split between the 20GB/s bus and the 176GB/s bus.
If both were running flat tack (unlikely), then yes, it would be 156/20.


Because we have been told by the developers (see "The Crew porting PS4") of the issues involved. You can see it, but it stalls. Memory is basically tagged as either VRAM or system RAM.


We don't; all we can go off is the VGleaks diagrams, which indicate it has one bus feeding both.
What reference can I see that indicates the PS4 doesn't implement HSA/hUMA?
None of you have spotted the obvious:
CERNY, who designed it, said there's a 20GB/s backside bus from the GPU
to the memory. He never said there's a 20GB/s bus from CPU to memory.
Now, some ass comes along and says CPU instead of GPU.
I'm going to believe the system architect and not a developer, actually, excuse me, the system architect versus someone Microsoft can slide some money to in order to put out some geek-speak misdirection.


http://www.gamasutra.com/view/featur...rk_.php?page=2

A 20GB/s bus only from the Jaguars to the memory is preposterous because, well, that's slower than DDR3.
I'm thinking we have Jaguars with GDDR5 memory controllers, hUMA with GCN 2.0 cores, and a backside bus for low-bandwidth transfers/memory accesses that wouldn't need to flush the GPU caches and cause stalls.

How does that sound for a bit more logical?
OTHERWISE, again, please point me to a system dev kit manual where I can see the final diagram and see that such an odd move was made.

    Quote Originally Posted by Vulgotha View Post
    And what AMD CPU will they be using pray tell? That has 48 threads?

    By the way we should probably phone Intel and let them know that AMD was able to create a super chip capable of that many threads and stuff it into a machine running off an APU that has a TDP below 180 watts.
Agreed dude, totally, like, why doesn't SUN just kill the SPARC and use this chip!? HAHAHAH.
Anyone can timeslice on a CPU, that's not a thread!

    Quote Originally Posted by Foraeli View Post
    Yeah, the CPU has to be this low-power AMD Jaguar because of heat constraints, but the CPU is generally underused anyways, so it really doesn't matter too much. A Jaguar is more than capable of next gen physics and AI.
Absolutely. Right now only servers utilize 8 hardware threads well. My 12-thread i7s are usually idle on 2 or 4 threads, and they aren't just running a game. I can run a game in a window on my 32in IPS Vizio while keeping coms open with the world, lol, not that I game much anymore. (I know, I said COMs, I'm getting old; there's 38-year-old grey hair filling in my sideburns. I might turn into Christopher Walken, which I can't complain about.)

An 8-core Jaguar should test near an i5 Haswell in performance, particularly if it's got a GDDR5 bus. Cerny said the PS4 has a 20GB/s bus from the GPU, not the CPU. I'm going with Cerny. 20GB/s to the Jaguars would be 1/3 the speed of DDR3, which makes no logical sense.


    Edit: Please do not triple post. Thanks. ~ PBM
    Last edited by Brandon; 08-06-2013 at 07:27.

#254
Sajuuk Khar
    Quote Originally Posted by John Willaford View Post
What reference can I see that indicates the PS4 doesn't implement HSA/hUMA?
None of you have spotted the obvious:
CERNY, who designed it, said there's a 20GB/s backside bus from the GPU
to the memory. He never said there's a 20GB/s bus from CPU to memory.
Now, some ass comes along and says CPU instead of GPU.
I'm going to believe the system architect and not a developer, actually, excuse me, the system architect versus someone Microsoft can slide some money to in order to put out some geek-speak misdirection.


http://www.gamasutra.com/view/featur...rk_.php?page=2

A 20GB/s bus only from the Jaguars to the memory is preposterous because, well, that's slower than DDR3.
I'm thinking we have Jaguars with GDDR5 memory controllers, hUMA with GCN 2.0 cores, and a backside bus for low-bandwidth transfers/memory accesses that wouldn't need to flush the GPU caches and cause stalls.

How does that sound for a bit more logical?
OTHERWISE, again, please point me to a system dev kit manual where I can see the final diagram and see that such an odd move was made.
Actually yeah, fair point. Opening line on this point from Mr. Cerny: "First, we added another bus to the GPU that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches."

What do you make of that, Mynd?

Edit: Actually, reading more of what Cerny talked about, that 2nd bus is there for exactly this use: CPU <--> GPU communication through shared access to system memory.

    The info from "The Crew" porting team talked about how there were issues with the shared memory blocks and going from a DX11 PC enviroment, but what they talked about was mainly shaders need info from both sides. They didnt go into much detail on how they fixed it other than putting the data in the correct block. And this is probably a shared process devs are going to have to do when going from pc to the X1 and PS4.

What we might have with the PS4 is an almost complete HSA system where the GPU can see both its own VRAM and system RAM, while the CPU only has direct system RAM access...
    Last edited by Sajuuk Khar; 08-06-2013 at 06:37.

#255
John Willaford
    Quote Originally Posted by mynd View Post
I'm saying if Carmack said it, then it likely means the Xbox One does have hUMA or some form of HSA.
I wasn't sure it would.


You can't get around stalls while copying the data though, and that's what they're trying to get rid of.

Same thing happened last gen: MS was the first to use unified shaders. It was nowhere near as good as later implementations, but it was still more than good enough.


The 128MB has made me scratch my head; it's such a small amount. But I think the key here is that it's scratch data: it's effectively recreated every cycle, so it would be ideal stuff to pass through from the CPU. I have a theory that there is a 128MB cache between the CPU and GPU on the PS4. But it's only a theory.

They are indeed.
Eh, right now I program how I'm used to. While I get great performance, I do that to get my first-gen game out, and if it's running well and plays sweet, I accept that and start working on specialized code in gen-2 games to start optimizing my engine for each machine. There's nothing about these architectures that STOPS you from corralling off a bit of RAM and using it for XYZ etc. I wouldn't read much into that if it were coming from either console.

#256
John Willaford
Side note: does anyone remember the university project that was supposed to be a giant chip of CPU components that ran threads only? It didn't technically have any one full 'core', but things were arranged so that it could couple components efficiently into paths for threads to run on.
I'm not talking about Transmeta either.
I think it was sponsored by NEC.

#257
bat0nas
    OFFTOPIC

Everybody is blaming MS for not giving answers or for $#@!ing around about the specs of the X1. But nobody is blaming Sony for doing the same. Are we really sure we know everything (or the answers to the same questions)?

Probably not.
But still, Sony is OK and MS is not OK.

I was an X360 user and a PS3 hater. I pre-ordered a PS4 (at the beginning it sounded like a good idea). But now I haven't heard any good news from Sony for a while. Only good news and answers from MS. Makes me want to cancel the preorder and wait a few weeks after both consoles are released.

Unless we hear more from Sony as well before the launch.

#258
YoungMullah88
    Quote Originally Posted by bat0nas View Post
    OFFTOPIC

Everybody is blaming MS for not giving answers or for $#@!ing around about the specs of the X1. But nobody is blaming Sony for doing the same. Are we really sure we know everything (or the answers to the same questions)?

Probably not.
But still, Sony is OK and MS is not OK.

I was an X360 user and a PS3 hater. I pre-ordered a PS4 (at the beginning it sounded like a good idea). But now I haven't heard any good news from Sony for a while. Only good news and answers from MS. Makes me want to cancel the preorder and wait a few weeks after both consoles are released.

Unless we hear more from Sony as well before the launch.
    Oh lord not this again :what:

Guess someone should tell Cerny he's been talking out his ass since February


#259
John Willaford
    Quote Originally Posted by Sajuuk Khar View Post
Actually yeah, fair point. Opening line on this point from Mr. Cerny: "First, we added another bus to the GPU that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches."

What do you make of that, Mynd?

Edit: Actually, reading more of what Cerny talked about, that 2nd bus is there for exactly this use: CPU <--> GPU communication through shared access to system memory.

The info from "The Crew" porting team talked about how there were issues with the shared memory blocks when coming from a DX11 PC environment, but what they talked about was mainly shaders needing info from both sides. They didn't go into much detail on how they fixed it other than putting the data in the correct block. And this is probably a shared process devs are going to have to go through when going from PC to the X1 and PS4.

What we might have with the PS4 is an almost complete HSA system where the GPU can see both its own VRAM and system RAM, while the CPU only has direct system RAM access...
How about 20GB/s to avoid the GPU cache flushes and just grab updates on data worked on by the CPU that needs to be handed to compute, making the GPU more efficient and the system more synchronous?
I'm just looking for something more convincing than a CPU bus that's less than 1/3 the speed of DDR3; I can't buy that. So, HSA and hUMA, but IF the GCN cores would otherwise have to inefficiently flush their caches just to check on things and reload some data for compute, then a separate GPU bus to fetch compute data being worked on by CPU and GPU doesn't violate hUMA or HSA; it just adds an efficiency for that specific purpose.

#260
Sajuuk Khar
    Quote Originally Posted by John Willaford View Post
How about 20GB/s to avoid the GPU cache flushes and just grab updates on data worked on by the CPU that needs to be handed to compute, making the GPU more efficient and the system more synchronous?
I'm just looking for something more convincing than a CPU bus that's less than 1/3 the speed of DDR3; I can't buy that. So, HSA and hUMA, but IF the GCN cores would otherwise have to inefficiently flush their caches just to check on things and reload some data for compute, then a separate GPU bus to fetch compute data being worked on by CPU and GPU doesn't violate hUMA or HSA; it just adds an efficiency for that specific purpose.
    What DDR3 speed are you comparing 1/3 to?

#261
mynd
    Quote Originally Posted by Sajuuk Khar View Post
Actually yeah, fair point. Opening line on this point from Mr. Cerny: "First, we added another bus to the GPU that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches."

What do you make of that, Mynd?
Lordy, every time I went to reply to you, you wrote more, LOL.

Edit: Actually, reading more of what Cerny talked about, that 2nd bus is there for exactly this use: CPU <--> GPU communication through shared access to system memory.

The info from "The Crew" porting team talked about how there were issues with the shared memory blocks when coming from a DX11 PC environment, but what they talked about was mainly shaders needing info from both sides. They didn't go into much detail on how they fixed it other than putting the data in the correct block. And this is probably a shared process devs are going to have to go through when going from PC to the X1 and PS4.

What we might have with the PS4 is an almost complete HSA system where the GPU can see both its own VRAM and system RAM, while the CPU only has direct system RAM access...
    Shader constants would normally be in the GPU memory, so they must have been referencing them.

Really, the 20GB/s bus isn't in question; what probably is in question is that "super-onion" assumption.



This is the assumption. Super Onion would be the section that we really don't know much about, or whether it even exists.

VGleaks seem more sure of their data...



    This still suggests a bus running between the CPU and GPU.

So Cerny wasn't lying when he said he had a bus between the GPU and memory; he just omitted the fact that it stops by the CPU on the way through.
    Last edited by mynd; 08-06-2013 at 07:35.

#262
Sajuuk Khar
    Quote Originally Posted by mynd View Post
Lordy, every time I went to reply to you, you wrote more, LOL.


Shader constants would normally be in the GPU memory, so they must have been referencing them.

Really, the 20GB/s bus isn't in question; what probably is in question is that "super-onion" assumption.



This is the assumption. Super Onion would be the section that we really don't know much about, or whether it even exists.

VGleaks seem more sure of their data...



This still suggests a bus running between the CPU and GPU.
    Yeah sorry about editing :P

But Mynd, Cerny actually SAID there is a 20GB/s bus going from the GPU to the system memory. I don't know how you can get a more direct and correct piece of information than that.

    VGleaks seems more sure of their data than a direct quote from the system architect? Who gives examples of the issues we have all been talking about and then explains how the 2nd bus is there to remove/help reduce said problems.

Also, that 2nd layout predates the 8GB announcement, so I'm not sure we can be sure of the validity of the information. Actually, both images might.

#263
mynd
    Quote Originally Posted by John Willaford View Post
How about 20GB/s to avoid the GPU cache flushes and just grab updates on data worked on by the CPU that needs to be handed to compute, making the GPU more efficient and the system more synchronous?
That's exactly what it's there for.
I'm just looking for something more convincing than a CPU bus that's less than 1/3 the speed of DDR3; I can't buy that.
20GB/s is fine.
So, HSA and hUMA, but IF the GCN cores would otherwise have to inefficiently flush their caches just to check on things and reload some data for compute, then a separate GPU bus to fetch compute data being worked on by CPU and GPU doesn't violate hUMA or HSA; it just adds an efficiency for that specific purpose.
hUMA means you can simply pass an address pointer off to the GPU and you're guaranteed coherency. This is more of a workaround.
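To make the contrast concrete, here is a rough C sketch of the two models being discussed; share_by_copy, share_by_pointer, and the commented-out calls are hypothetical names for illustration, not a real console API:

#include <stddef.h>
#include <string.h>

/* Without coherent shared memory: stage a copy into GPU-visible memory,
   then kick the job. The memcpy (plus the cache flushes around it) is
   exactly the stall being described. */
void share_by_copy(void *gpu_staging, const void *cpu_data, size_t n)
{
    memcpy(gpu_staging, cpu_data, n);           /* duplicate the data    */
    /* flush_gpu_caches(); run_job(gpu_staging); -- hypothetical calls */
}

/* With hUMA-style coherency: hand the GPU the very pointer the CPU used.
   No duplicate, no explicit flush; hardware keeps the two views coherent. */
void share_by_pointer(const void *cpu_data, size_t n)
{
    (void)n;
    /* run_job(cpu_data); -- hypothetical: the GPU reads the CPU's
       allocation directly and is guaranteed to see the latest writes. */
    (void)cpu_data;
}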

#264
mynd
    Quote Originally Posted by Sajuuk Khar View Post
    Yeah sorry about editing :P

But Mynd, Cerny actually SAID there is a 20GB/s bus going from the GPU to the system memory. I don't know how you can get a more direct and correct piece of information than that.
Actually he didn't; he said it allowed them to.
He never said it's a GPU->memory-only bus; in fact, it hinders the whole concept of HSA if it is.
It's far more flexible and usable if the CPU can pass data directly to the GPU via that same bus.
    VGleaks seems more sure of their data than a direct quote from the system architect? Who gives examples of the issues we have all been talking about and then explains how the 2nd bus is there to remove/help reduce said problems.

Also, that 2nd layout predates the 8GB announcement, so I'm not sure we can be sure of the validity of the information. Actually, both images might.
Here is what he actually said:

"A typical PC GPU has two buses," said Cerny. "There’s a bus the GPU uses to access VRAM, and there is a second bus that goes over the PCI Express that the GPU uses to access system memory. But whichever bus is used, the internal caches of the GPU become a significant barrier to CPU/GPU communication -- any time the GPU wants to read information the CPU wrote, or the GPU wants to write information so that the CPU can see it, time-consuming flushes of the GPU internal caches are required."

    "First, we added another bus to the GPU that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches. As a result, if the data that's being passed back and forth between CPU and GPU is small, you don't have issues with synchronization between them anymore. And by small, I just mean small in next-gen terms. We can pass almost 20 gigabytes a second down that bus. That's not very small in today’s terms -- it’s larger than the PCIe on most PCs!
So that's confirmation that there are three buses.
Now if you choose to believe it can't snoop the CPU cache, that's up to you. But Onion already does this in the standard AMD APU setup.



It really isn't that exciting. This is the current APU setup.
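A small C sketch of the routing logic this implies, using only the bandwidth figures quoted in this thread; the function and the 1MB cutoff are invented purely for illustration:

#include <stddef.h>
#include <stdbool.h>

/* Figures from this thread: 176GB/s total GDDR5 bandwidth; if the
   cache-bypassing bus ran flat out at ~20GB/s, the wide path would
   have ~156GB/s left -- the "156/20" split mentioned earlier. */
enum { TOTAL_GBPS = 176, BYPASS_BUS_GBPS = 20 };

/* Hypothetical routing rule: bulk GPU-only data (textures, vertex
   buffers) goes down the wide cached path, while small buffers the CPU
   and GPU ping-pong every frame use the uncached bus, because reading
   those through the GPU caches would force the time-consuming L1/L2
   flushes Cerny describes. */
static bool use_bypass_bus(size_t bytes, bool ping_pongs_with_cpu)
{
    return ping_pongs_with_cpu && bytes < (size_t)(1u << 20); /* illustrative cutoff */
}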
    Last edited by mynd; 08-06-2013 at 08:02.

#265
Sajuuk Khar
    Quote Originally Posted by mynd View Post
Actually he didn't; he said it allowed them to.
    Allowed them to do what?

#266
mynd
    Quote Originally Posted by Sajuuk Khar View Post
    Allowed them to do what?
    See edit above.
Really, there isn't anything new or unknown about what Sony did; it's very much a modification of APUs out on the market now.

#267
Sajuuk Khar
    Quote Originally Posted by mynd View Post
    See edit above.
Really, there isn't anything new or unknown about what Sony did; it's very much a modification of APUs out on the market now.
    Yeah caught the edit after :P


Yeah, both systems are using modified APUs, we know that. It's what the modifications are that we're looking into.
Some of this new information starts to negate some of the assumptions you were making back in the first post I quoted, with the 2 slides from AMD showing the variations in memory allocations. This new info shows that the allocations are not as clear-cut as system/VRAM, and the PS4 is not a perfect example of the basic APU setup you alluded to. The ability of the GPU in the PS4 to read and write directly to system memory puts it above current APU technology. APU systems today still need to copy data between RAM allocations for work to be performed on it by either the CPU or GPU, as shown by these 2 articles from when AMD was doing the rounds...

    http://www.tomshardware.com/news/AMD...APU,22324.html

    http://www.pcper.com/reviews/Process...UMA-HSA-Action

They both talk about the fact that data has to be copied back and forth, slowing the system down. They then discuss how HSA unlocks the GPU allocation to the CPU, allowing full access. This is what Cerny did differently: giving the GPU full access while still locking the CPU out of direct VRAM access. So you are correct in saying that the PS4 is probably not considered an HSA/hUMA system, but there are features that almost bring it to that point. On that chart I would put the PS4 (maybe even the X1) between the 2nd and 3rd examples. The GPUs, at this point, both have full access to all RAM areas, but the CPUs are restricted.
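For comparison, this is roughly what the copy vs. zero-copy choice looks like in ordinary OpenCL on a PC APU today; a sketch with error handling omitted, where CL_MEM_USE_HOST_PTR is the standard flag and make_shared_buffer is just an illustrative wrapper:

#include <CL/cl.h>

/* zero_copy = 0: the classic discrete-GPU flow -- allocate device
   memory, then explicitly copy into it later (clEnqueueWriteBuffer).
   That copy is the back-and-forth the articles above complain about.
   zero_copy = 1: CL_MEM_USE_HOST_PTR asks the runtime to use the host
   allocation in place; on an APU with one physical pool this can skip
   the staging copy entirely. */
cl_mem make_shared_buffer(cl_context ctx, float *host_data, size_t count,
                          int zero_copy)
{
    cl_mem_flags flags = CL_MEM_READ_WRITE |
                         (zero_copy ? CL_MEM_USE_HOST_PTR : 0);
    return clCreateBuffer(ctx, flags,
                          count * sizeof(float),
                          zero_copy ? host_data : NULL, NULL);
}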

#268
mynd
    Quote Originally Posted by Sajuuk Khar View Post
    Yeah caught the edit after :P


Yeah, both systems are using modified APUs, we know that. It's what the modifications are that we're looking into.
Some of this new information starts to negate some of the assumptions you were making back in the first post I quoted, with the 2 slides from AMD showing the variations in memory allocations. This new info shows that the allocations are not as clear-cut as system/VRAM, and the PS4 is not a perfect example of the basic APU setup you alluded to.
    Are you referring to this AMD slide?


No, it's still right there; remember, this is from just one unified bus.
The ability of the GPU in the PS4 to read and write directly to system memory puts it above current APU technology. APU systems today still need to copy data between RAM allocations for work to be performed on it by either the CPU or GPU, as shown by these 2 articles from when AMD was doing the rounds...
Incorrect, this tech has been out for some time. Kabini even takes it a step further, but Llano started it off. And unless we hear something different, the PS4 is a mix of Llano tech merged with more modern CPU and GPU cores.

    http://www.tomshardware.com/news/AMD...APU,22324.html

    http://www.pcper.com/reviews/Process...UMA-HSA-Action

They both talk about the fact that data has to be copied back and forth, slowing the system down. They then discuss how HSA unlocks the GPU allocation to the CPU, allowing full access. This is what Cerny did differently: giving the GPU full access while still locking the CPU out of direct VRAM access. So you are correct in saying that the PS4 is probably not considered an HSA/hUMA system, but there are features that almost bring it to that point. On that chart I would put the PS4 (maybe even the X1) between the 2nd and 3rd examples. The GPUs, at this point, both have full access to all RAM areas, but the CPUs are restricted.
He didn't do anything differently; as I say, this stuff has been in Llano, Trinity, and now Kabini.

Fusion's overall goals...
    http://semiaccurate.com/2011/06/20/a...re-and-fusion/

Llano
    http://semiaccurate.com/2011/06/20/a...-architecture/


    Trinity
    http://semiaccurate.com/2012/05/28/t...n-and-a-queue/
    http://semiaccurate.com/2012/05/25/t...-of-its-parts/

    Kabini
    http://semiaccurate.com/2013/05/22/w...ore-look-like/


It's all different steps along the same path. But the PS4 is definitely HSA-capable.
It's just not hUMA-capable. But then, the whole concept of hUMA is to make HSA easier, so it's not a prerequisite for being able to do HSA.
    Last edited by mynd; 08-06-2013 at 09:13.

#269
Sajuuk Khar
    Quote Originally Posted by mynd View Post
Are you referring to this AMD slide?


No, it's still right there; remember, this is from just one unified bus.


Incorrect, this tech has been out for some time. Kabini even takes it a step further, but Llano started it off. And unless we hear something different, the PS4 is a mix of Llano tech merged with more modern CPU and GPU cores.


He didn't do anything differently; as I say, this stuff has been in Llano, Trinity, and now Kabini.

Fusion's overall goals...
http://semiaccurate.com/2011/06/20/a...re-and-fusion/

Llano
http://semiaccurate.com/2011/06/20/a...-architecture/


Trinity
http://semiaccurate.com/2012/05/28/t...n-and-a-queue/
http://semiaccurate.com/2012/05/25/t...-of-its-parts/

Kabini
http://semiaccurate.com/2013/05/22/w...ore-look-like/


It's all different steps along the same path. But the PS4 is definitely HSA-capable.
It's just not hUMA-capable. But then, the whole concept of hUMA is to make HSA easier, so it's not a prerequisite for being able to do HSA.
    Man that was a lot of reading :P

OK... so even back with Llano they had removed or lowered the need to copy data back and forth (though the article did mention the zero-copy function wasn't available for security reasons; you'd think that would be sorted now), and they have just been progressively removing as many blocks as possible.

To say the PS4 is simply Llano tech with an updated CPU and GPU is a bit of a negative point of view to have. Why do you simply assume that they would not use Trinity or Kabini tech? Both would have been in development when Sony and Microsoft went to AMD. I'd say they would be using a minimum base of Trinity, maybe for initial systems testing, and then have gone with Kabini.

Reading these articles has shown me how well the 2 systems are going to be able to compete with PCs for a fair amount of time.

I think it's just that we need more information. The bus Cerny talks about could be something new, or it could just be an upgrade of something already there. From what I read, though, it seems a bit of a moot point whether it goes to the memory or is a direct connection between the CPU and GPU; there are functions built into the memory controller that can do a fake "move" instruction allowing the CPU or GPU to see the data, and that was introduced with Llano. The systems might be one gen away from being fully hUMA or HSA compliant.
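One way to picture that fake "move" in C: swap an owner tag instead of copying bytes. Purely illustrative; the names and the atomic tag are assumptions, not AMD's actual mechanism:

#include <stdatomic.h>

enum owner { OWNER_CPU, OWNER_GPU };

typedef struct {
    void       *data;    /* the bytes never move and are never copied */
    _Atomic int owner;   /* the only thing the "move" changes         */
} shared_block;

/* "Moving" the block to the GPU is just publishing a new owner tag;
   since no bytes are touched, the move costs the same for 1KB or 1GB. */
static void move_to_gpu(shared_block *b)
{
    atomic_store_explicit(&b->owner, OWNER_GPU, memory_order_release);
}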

#270
Omar
    mynd, this is all he said:

    “It’s almost amazing how close they are in capabilities, how common they are,” Carmack said. “And that the capabilities that they give are essentially the same.”

    side-note: he's not talking about power btw.

#271
Vulgotha
    Quote Originally Posted by mynd View Post
    We will see, nothing wrong with Jaguars, as long as you take full advantage of all the cores available to you.
Really though? Given both consoles are using them, it isn't like it will 'affect' console development... but when PC titles (like PS2) get thrown into the mix...

It's the only aspect of the next-gen consoles I find concerning. It'd be one thing if they cranked the frequency considerably above 1.6GHz, but that's not at all likely.

Best-case scenario, before the OS removes 2 cores from both machines, it has i5-like performance (2x Jaguar cores, per Carmack's "consoles get double CPU performance compared to PCs"). Not exactly awe-inspiring stuff.

    To be fair, I don't know how Xenon or Cell compared to the "good" desktop processors of their day in most real-world tasks.


#272
chrisw26308
Vulgotha, can't GPGPU pick up the slack if needed?


#273
Sajuuk Khar
    Quote Originally Posted by Vulgotha View Post
Really though? Given both consoles are using them, it isn't like it will 'affect' console development... but when PC titles (like PS2) get thrown into the mix...

It's the only aspect of the next-gen consoles I find concerning. It'd be one thing if they cranked the frequency considerably above 1.6GHz, but that's not at all likely.

Best-case scenario, before the OS removes 2 cores from both machines, it has i5-like performance (2x Jaguar cores, per Carmack's "consoles get double CPU performance compared to PCs"). Not exactly awe-inspiring stuff.

    To be fair, I don't know how Xenon or Cell compared to the "good" desktop processors of their day in most real-world tasks.
It's probably safe to say that the Xenon and Cell were actually pretty powerful for their day, in their own fields. At the end of 2005, Intel hadn't even released their Core architecture yet. A tri-core CPU with 2 threads per core was not normal. And the SPEs gave the Cell a very unique feature set.

The thing to remember is that this was all before GPGPU was even physically possible on a graphics card; the 360 was the first piece of hardware with a GPU that had the pipeline setup to start handling that kind of work. The software wasn't there.

With GPUs now able to handle a lot more of the heavy stuff like physics and particles, the CPUs can go back to specialising in what they do best.
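A tiny C example of why that works: a particle update is the textbook GPGPU-friendly loop, since every element is independent:

typedef struct { float x, y, z, vx, vy, vz; } particle;

/* No iteration reads another's result. On the CPU this runs serially
   (or split across 8 Jaguar cores); expressed as a compute kernel,
   each iteration becomes its own GPU thread, which is why this kind
   of work offloads so well. */
void update_particles(particle *p, int count, float dt)
{
    for (int i = 0; i < count; i++) {
        p[i].vy -= 9.81f * dt;   /* gravity */
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}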

#274
chrisw26308
    Quote Originally Posted by chrisw26308 View Post
Vulgotha, can't GPGPU pick up the slack if needed?

So is that a yes, Sajuuk?


#275
Itachi
    Quote Originally Posted by chrisw26308 View Post
So is that a yes, Sajuuk?

Yeah, the GPU will certainly offload some processes from the CPU (the reverse of this gen, where the SPUs in Cell offloaded some GPU work), but I have a question:

My basic knowledge of GPUs is that even though they are parallel processors, they work slower than a CPU, which works one job at a time. So does that mean the GPU will be slower at the CPU jobs that it offloads?
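The distinction the question turns on, sketched in C: a GPU loses on work that is one long dependency chain, and wins when the work splits into many independent pieces:

/* A dependency chain: step i needs step i-1, so extra threads don't
   help, and a GPU's slower per-thread speed makes it a poor fit. */
float serial_chain(const float *in, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc = acc * 0.5f + in[i];   /* each step depends on the last */
    return acc;
}

/* Independent per-element work: thousands of GPU threads can run these
   iterations at once, so throughput wins even though any single GPU
   thread is slower than a CPU core. */
void parallel_map(float *out, const float *in, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = in[i] * in[i];     /* no cross-iteration dependency */
}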
