Dimitris Tolios

Members
  • Posts: 1287
  • Joined
1 Follower

Personal Information

  • Country
    Greece

Reputation: 10

  1. Yeah, I would guess as much. Maya, 3DS and other D3D graphics engines alike are not that scalable. Once you hit a certain performance level, it flattens out.
  2. I don't think you can do an s2011 build with your budget. But you probably can do an i7-7700K build, which is the fastest 4C CPU atm, and as fast if not faster than many hex-cores if the latter are not overclocked. The 7700K's stock clock, which turbo-boosts all the way to 4.5GHz, will give you a notable speed boost in Sketchup, which is heavily CPU-limited and only works on one thread. More threads kick in only after SU has prepped the scene for V-Ray, at which point the V-Ray engine takes over using all cores + render nodes if available.
And this is your best bang for your buck: buy a 7700K, a compatible s1151 mobo, RAM (16GB DDR4 2133 will do fine; 32GB won't be faster, but might be needed for more complex scenes), a cheap but decent-quality PSU (no need for more than 300W really) and a basic case. Or a fancy case, if you want to move your workstation into it and keep the current machine in the old one.
Now, your "current" i7-6700 with its built-in GPU can be configured to run as a render node along with your 7700K, and the speed of the two machines combined will be more than double what you have now. The 7700K by itself will be faster, but not THAT MUCH faster - nowhere near as fast as both machines together. If you have an internet router with more than one ethernet port, you don't need anything more than a LAN cable to have the 2 machines "see" each other for V-Ray node purposes. You can install Windows on an old HDD or small SSD for the node; the speed of the media won't matter. You could even make a bootable USB stick for it (could use https://rufus.akeo.ie/ ) and run the V-Ray Spawner from that.
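Not from the original post, but a quick sanity check before digging into V-Ray settings: verify the node machine is reachable over the LAN at all. The host address is made up, and the port is an assumption (20204 is a common V-Ray distributed-rendering default, but check your version's docs):

```python
import socket

def can_reach(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical node IP; 20204 is an assumed V-Ray DR port):
# print(can_reach("192.168.1.50", 20204))
```

If this returns False, fix the cabling/firewall first; no V-Ray setting will help until the machines can see each other.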
  3. For viewport? I would expect the differences to be minimal in the viewport once you get to GTX 1070 performance, which I think is the best bang for the buck (ok, maybe the 1060 is that, and the 1050 is a tad better value, etc...high-end products are not great value products!). The 1080 I would personally not touch...if I wanted something faster, I would go straight to the 1080Ti. If I were "tempted" to get a Titan X "for speed", I'd rather go for 2x 1080Ti, etc.
  4. The i7-7700 will blow the old X5670 pair out of the water, with double the single-threaded performance. The Quadro 4000 is also a bit archaic at this point, and although ArchiCAD is OpenGL, I would pick a modern GTX over the Q4000 easily.
  5. This is a Westmere-architecture server, as in 7 years old. The combined compute output of two of those CPUs would bring you roughly to the multi-threaded performance of a single 6700K or 7700K, while the Xeon pair will consume double the power. I would consider this an "OK" (not great) buy if it were less than $400.
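To put rough numbers on that claim, here is a back-of-the-envelope aggregate-throughput sketch. The all-core clocks are approximate, and the ~2x per-clock (IPC) advantage of the modern core over Westmere is an illustrative assumption, not a measured figure:

```python
# Rough aggregate throughput: cores x clock x assumed IPC factor.
def aggregate(cores, ghz, ipc_factor=1.0):
    return cores * ghz * ipc_factor

dual_x5670 = aggregate(12, 2.93)                 # 2x 6C Westmere Xeons
i7_7700k = aggregate(4, 4.4, ipc_factor=2.0)     # 4C Kaby Lake, ~2x IPC (assumed)

print(f"2x X5670: {dual_x5670:.1f} effective GHz-cores")
print(f"7700K:    {i7_7700k:.1f} effective GHz-cores")
```

Under that assumption the two land at roughly the same number, which is why the 7-year-old pair only makes sense at a bargain price.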
  6. * The vast majority of renderings are still performed by the CPU. Unless you are running GPU-specific versions of rendering engines, the GPU contributes nothing to rendering speed.
* RAM doesn't contribute to rendering speed unless you are pushing really high resolutions / really complex scenes. If you don't come close to utilizing more than 16GB of RAM, you could never tell the difference in rendering speed between a 16GB and a 32GB system. What more RAM allows you to do, though, is multitask: having more than one RAM-hungry app running at the same time, cumulatively asking for more than 16GB.
* A decent GPU is good to have to accelerate your viewport and allow you to work with complex scenes. Even if it doesn't contribute to your rendering speed per se, that doesn't mean you can manipulate and navigate complex scenes without a bare minimum of GPU grunt. The on-board buffer, i.e. the "2GB or 4GB" of a card, is rarely a real tell of its performance. The on-board IGP in a 6th or 7th gen i7 is not horrible, but I would look into a $100~150 card as a good start...GTX 1050 or Radeon 460 at the low end, GTX 1060 or Radeon 470 at the upper end of that budget.
Edit: I realize that the $ mentioned above might be far from what people outside the US pay, after offers and specials etc., for some models of these cards, but I hope readers get the idea regardless.
  7. The architecture is new, so many drivers / programs lack any sort of optimization for it, but the performance is definitely there from day one. It looks like a very solid platform to base affordable render nodes on (1700/1700X). 4x DIMM slots are not a real problem with DDR4, which makes 16GB UDIMM sticks readily available - unless of course you have to have more than 64GB.
  8. Go into Customize > Preferences, choose the General tab, and check "Use Large Toolbar Buttons".
The font size is harder to fix if you are not happy with it. You pretty much have to force 3DS to use a modified version of the Tahoma.ttf default font it is set to use. To modify the font to default to a 125-150% scaling (whatever you did for Windows and you like), use FontForge as described in this tutorial: https://forum.xda-developers.com/showthread.php?t=990853
Once you're done with the font, place it in C:\Windows\Fonts as "YOUR_NEW_FONT.ttf" or whatever name you want for it. Next, in the Windows registry, navigate to [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts] and replace "Tahoma (TrueType)"="tahoma.ttf" with "Tahoma (TrueType)"="YOUR_NEW_FONT.ttf".
Note that this change will be universally applicable to all user interfaces using the Tahoma font by default. And it is not guaranteed to scale perfectly without "breaking" your UI in 3DS, but...it is something.
  9. There are easy-to-follow guides online for overclocking, sometimes step by step. Asus makes very popular mobos and their BIOS interface is pretty much interchangeable between models, so chances are you will figure it out if you want to try it. Mild overclocks pose no threat to the CPU, given proper cooling. More to the point, what "hurts" the CPU is excessive over-voltage, not overheating. CPUs have thermal monitors that will limit voltage or power the CPU down if excessive temperatures are reached, and that threshold is set to 100°C, which means Intel themselves are pretty confident in the durability of the CPU. Many users go "nuts" over temps exceeding 70-75°C, but they are simply over-reacting - in my opinion, that is.
$500/600 nodes: these configurations are exercises in frugality: how can I get a "render speed multiplier" with the least $ spent, while remaining realistic and reliable? As disclaimed in the blog itself, it is there to provoke & educate you a bit, so you can make your own decisions. Yes, those $600 nodes are great value for the $, but remember that after you exceed the number of nodes your V-Ray license allows, you have to spend extra just to license additional nodes. So there is a balance to be kept, based on how many licenses you have and how you want to structure your office: 3 workstations with a WS license each and a couple of nodes dedicated to each, without additional licensing? 3 workstations with a WS license each, plus floating node licenses so that each WS can use more than 2 nodes (and itself) at any given moment while rendering? Etc.
Does overclocking lower the life expectancy of the CPU? Well, to an extent, yes, it does. But the % of overclock and, more importantly, the % of over-voltage make that deterioration vary HUGELY.
At the end of the day, if I can tap into 15-20 or even 30% more performance for the cost of $50-100 or so (typically better cooling and/or perhaps a better motherboard), risking that my CPU will become unstable at the overclocked speed 6-7 or more years down the road...well, I call that a fair exchange, as I would consider a CPU "expired" by that time - at least in a demanding professional or enthusiast environment. Meanwhile, I was enjoying performance benefits unavailable in off-the-shelf products. Before I got my current 6700K, I had a 3930K clocked at 4.8GHz which I used for a few years (overclocked from day 1, I think). That was a 50% overclock over base speed. It took years for Intel to release a CPU that was out-of-the-box faster than what I had in early 2012...
I would consider 4.6~4.8GHz a mild/safe overclock for the 7700K. Note that this is a very fast-clocked CPU out of the box, so 4.8GHz is "just" 15% more than the default base speed of 4.2GHz and ~7% over the 4.5GHz turbo, but it is not bad for a nearly "free" upgrade. You could go for 5+GHz, but that perhaps starts pushing your luck a bit - not because it hurts the CPU, but because you are more exposed to the "CPU lottery": not all CPUs overclock the same, so you have to be lucky, aka get a golden ticket/chip, to push the clock a lot without too much Vcore increase. If you have the time to invest, by all means try going for more; I just think 4.8GHz is the "achievable for pretty much all 7700K chips with minimal effort" goal.
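The overclock percentages quoted above can be checked with a couple of lines:

```python
# Percentage gain of an overclock over a reference clock (base or turbo).
def oc_gain(ref_ghz, oc_ghz):
    return (oc_ghz / ref_ghz - 1) * 100

print(f"7700K 4.8GHz over 4.2GHz base : {oc_gain(4.2, 4.8):.0f}%")
print(f"7700K 4.8GHz over 4.5GHz turbo: {oc_gain(4.5, 4.8):.1f}%")
print(f"3930K 4.8GHz over 3.2GHz base : {oc_gain(3.2, 4.8):.0f}%")
```

Which is where the ~15% / ~7% figures for the 7700K and the 50% figure for the 3930K come from.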
  10. CPU cooler: the CPU cooler might be overkill if you don't care about overclocking. And yes, it is uber-anal to have your CPU "over-cooled"; no, you won't really prolong its life etc...I like CLCs - I have one in one of my rigs (i7-6700K @ 4.6GHz in a Cougar QBX) and have used them even in office systems I've built (i7-4790K, but pushed to 4.5GHz) - but both those scenarios involve overclocked CPUs, and in the QBX's case I could not really fit an air-cooler that I would like, as it is too small. In a big midi-case like the Phanteks Eclipse P400 there are no space issues, and a CLC just adds complexity and failure points without any real benefits. I would opt for a simpler air-cooling solution, and perhaps save some $ in the process:
Noctua NH-U12S
Cooler Master Hyper 212 EVO
CRYORIG H7
All of the above will cool the 7700K just fine, and even allow for a little overclocking.
HDD: I would personally also take a look at the HDD requirement...you already have a decent 500GB-class SSD in each. Do you need a 3TB HDD too? Maybe you should re-invest the monies saved here and there in a decent NAS or a small file server, to keep your assets / models centralized and also better protected with at least a RAID-1, if not RAID-5, HDD array.
  11. 1. The rendering process is still initiated on the WS afaik (e.g. the model being prepped for V-Ray), but the WS (aka "local machine") doesn't have to participate in the actual rendering; it can be completely or nearly completely (very small CPU % used) off-loaded to the nodes once started.
2. There is nothing special about a "node" that stops it from being a WS too. You just set the V-Ray Spawner to launch / wait for orders as a background service. Many offices have all their WSs rigged this way, and if they know certain people are away from their desks for the day, or for as long as the render will last, they take over those CPU resources. EDIT: the rendering process can be set to "low priority" for the OS. This actually allows simple tasks to be performed simultaneously with the rendering process in the background: e.g. casual browsing, YouTube, 2D CAD drafting etc. can happen without issues or obvious delays for the user sat in front of the "node".
3. Moves from quad to hex to octa cores etc. don't say much by themselves. It is the balance between the generation of the core architecture, the maximum clock on single-threaded tasks, and the maximum sustained clock when all cores are working that makes up the performance experience. The 7700K quad core, for example, has massive IPC and clock advantages over older hex-cores, making it much faster out of the box for modeling (which mostly uses only one core/thread), but also for rendering tasks. The 6900K would catch up eventually once the rendering process was started, but complex modeling / forming / sculpting and also rendering initiation (all single core/thread heavy tasks) will be much faster with the "humble" quad 7700K vs. slower-clocked and older-architecture 8C and even 10C machines. Thus fast-clocked quad and hex cores are the best WS-oriented CPUs (turbo clock * 1~2 cores = what counts), while multi-core CPUs with "ok" clocks but massive clock aggregates (base clock * # of cores = what counts) are the best render-node CPUs.
A useful over-simplification would be: if you are actively clicking through your workflow, the 7700K will be better. Most if not all 3DS modeling, most if not all PS, most LR, all AI, all Sketchup, all Revit work etc. will prefer the 7700K. If you are passively waiting doing nothing - not for seconds, but for minutes / hours - say rendering, video transcoding, exporting lots of final images from LR, then the multi-core CPUs with a GOOD aggregate (not those $300 low-clock Xeons, but the $1000-3000+ ones and the $1000+ extreme i7s) will do better.
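The rule of thumb in point 3 can be sketched in a few lines. The scores are purely illustrative (IPC differences between generations are deliberately ignored), and the clock figures are the stock base/turbo specs of the two CPUs named above:

```python
# Interactive feel tracks the best 1-2 core turbo clock;
# render throughput tracks base clock x core count.
def interactive_score(turbo_ghz):
    return turbo_ghz

def render_score(base_ghz, cores):
    return base_ghz * cores

cpus = {
    "i7-7700K (4C, 4.2/4.5GHz)": (4.2, 4.5, 4),
    "i7-6900K (8C, 3.2/3.7GHz)": (3.2, 3.7, 8),
}
for name, (base, turbo, cores) in cpus.items():
    print(f"{name}: interactive={interactive_score(turbo):.1f}, "
          f"render aggregate={render_score(base, cores):.1f}")
```

The 7700K wins the interactive number, the 6900K wins the aggregate, which is exactly the WS-vs-node split described above.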
  12. Even 1-2x "true" PCIe 3.0 lanes' worth of bandwidth per card is enough to access 100% of the card's performance. The 16x "thingy" is a myth. It was for 2.0, and it is for 3.0. Bottlenecks exist; PCIe is just too far down the list. By the time cards come close to pushing the limits of the 3.0 standard in real-time multi-GPU configurations (so far gaming has been more taxing than GPGPU, at least in PCIe bandwidth demand), you will start seeing PCIe 4.0 platforms, which are already documented and ready. But people in forums and FB groups and whatnot love to act as if they, as players, are bigger than the game, and have figured out stuff better than the engineers who set the operational envelopes and alternative upgrade paths a decade (or so) in advance.
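The per-lane numbers behind this are easy to derive from the spec's raw transfer rate and line encoding (5 GT/s with 8b/10b encoding for PCIe 2.0, 8 GT/s with 128b/130b for 3.0, 16 GT/s with 128b/130b for 4.0):

```python
# Usable bandwidth per PCIe lane: transfer rate x encoding efficiency / 8 bits.
def lane_gbps(gt_per_s, enc_payload, enc_total):
    return gt_per_s * (enc_payload / enc_total) / 8

for gen, rate, payload, total in [("2.0", 5, 8, 10),
                                  ("3.0", 8, 128, 130),
                                  ("4.0", 16, 128, 130)]:
    per_lane = lane_gbps(rate, payload, total)
    print(f"PCIe {gen}: {per_lane:.3f} GB/s per lane, "
          f"x8 = {8 * per_lane:.1f} GB/s, x16 = {16 * per_lane:.1f} GB/s")
```

A 3.0 x8 slot already moves ~7.9 GB/s each way, which is why dropping from x16 to x8 rarely shows up in GPU benchmarks.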
  13. Depending on the keyboard manufacturer and model, the macro keys can be assigned on the fly - ballpark:
1) click the "Assign macro" special key or key-combination
2) press the G/macro key you want to assign to - it usually flashes or lets you know it is in "record mode"
3) press the key-combination / shortcut
2 & 3 might be the other way around, i.e. you press the shortcut first, then the macro key you want to assign it to. For sure you have more control with the supplied software, but the latter is often not 100% bug-free and might also take an irrational amount of space on your HDD. When SSDs were insanely more expensive and we were struggling to fit as much as we could in less than 100GB of SSD (or before that, 50GB), it was a huge pain to deal with 500+ MB of ... keyboard control center... really? Most of these gaming keyboards already have profiles for popular games, and the supplied software searches your registry and pre-loads profiles for what is already installed, to be activated when the respective .exe is loaded. For 3DS that is probably not the case, so you might have to research the existing keyboard shortcuts, or create your own keyboard shortcuts and then assign them to macro keys.
  14. I would call them equal. In RL there is no tangible difference in viewport performance at this level of GTX cards, at least for 3DS & the rest of the D3D programs...the cards cannot be 100% utilized by the graphics engine and/or a single thread of CPU anyway. It's the nature of the programs, not the hardware, that is lacking. On the 10xx side, the benefit is that it pulls 100W less at full tilt, although I doubt - again - that you will ever see anywhere near 100% utilization in the 3DS viewport on either card. Those 100Ws are really important if you plan on cramming many of them in for GPGPU rendering/compute. I agree that $200 more for a 1070 is not a wise investment over a 980Ti for your application.
  15. GTX 1070 here in the US have been selling around the $390 mark lately... I have no idea how the market is where you are at. I would negotiate for a cheaper price on the 980Ti, or make a move on a used or new 1070. But, that's me.