Disk Write Cache Feature and Limited RAM Usage with Windows 10 Pro - 256GB DDR4


I have a Windows 10 Professional workstation that I use to simulate material flow in impressions. The FEA software I use creates a large 50-100GB database file for each simulation I run. Copying these files to spinning media for storage does not seem to take advantage of the amount of RAM this system has: the transfer starts off quick for a second or two, then drops to the two RAID 0 7200RPM disks' native transfer speed of 300MB/s (171-342 seconds per file). The files are coming from a software RAID 0 of two 600GB partitions on two 1.2TB Intel 750 PCIe NVMe SSDs, so read performance is not the issue. The system is on a 2200VA UPS with an extended battery module, and is backed up nightly to our storage server, so data loss is not a concern.



What I am wondering is:



Whether I can tweak Windows 10's cache settings to read the entirety of a 50-100GB file into RAM at the 4GB/s (12-25 seconds) the two Intel 750s are capable of, and then write it to disk transparently in the background. I am under the impression that the built-in Windows disk-caching feature is capable of this, but some default cache-size setting is limiting the cache to what looks like ~5GB (hence the small burst of speed at the start). I don't think this blip comes from the miserly 128MB cache on the destination drives, as "Modified" physical memory usage goes up by ~5GB in that first second or so of the transfer. That 5GB can be seen being written to disk after the transfer dialog box disappears, and RAM usage decreases in step with the write speed of the two 7200RPM drives in RAID 0. Once the transfer completes, disk activity drops to zero and RAM utilization returns to normal. This tells me that disk caching is at least working, just capped at about 5GB. (A small sketch of how I watch the modified page list follows.)
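For reference, this is roughly how I watch the modified page list from a console instead of eyeballing Task Manager. A minimal C sketch using the PDH performance counter "\Memory\Modified Page List Bytes"; the one-second polling interval is just my choice. Compile and link against pdh.lib.

// modwatch.c - poll the Modified Page List size once a second.
// Minimal sketch; press Ctrl+C to stop. Link with pdh.lib.
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;
    // Use the English counter name so this works on localized systems.
    if (PdhAddEnglishCounterW(query, L"\\Memory\\Modified Page List Bytes",
                              0, &counter) != ERROR_SUCCESS)
        return 1;

    for (;;) {
        PdhCollectQueryData(query);
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_LARGE,
                                        NULL, &value) == ERROR_SUCCESS)
            printf("Modified: %.2f GB\n",
                   value.largeValue / (1024.0 * 1024.0 * 1024.0));
        Sleep(1000);
    }
}

Running this during a transfer is what shows the climb to ~5GB and the slow drain at HDD speed afterwards.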



The system would be fine dedicating 50-100GB of its available RAM to this transfer: the simulations typically use up to maybe 80GB of RAM, and only reach that amount in their last stages.



I have a Dell Precision T7910 workstation with the following specs:



2x Xeon E5-2687W v4 (dual socket)
256GB LRDIMM ECC DDR4-2400
Quadro M4000
2x Intel 750 1.2TB PCIe NVMe; one is the boot drive, and a 600GB partition on each forms the RAID 0
2x WD Gold 8TB in software RAID 0 on SAS 12Gb/s ports (128MB cache each)
Eaton 5PX 2200 IRT 240V UPS
Windows 10 Pro 1703 (the system predates Windows 10 Pro for Workstations)


What I have tried:



Checked/Enabled: "Enable write caching on the device." - On each Disk
Checked/Enabled: "Turn off Windows write-cache buffer flushing on the device." - On each Disk
Made sure Superfetch service is running (for whatever good that does).
Moved away from built-in hardware RAID, as there is NO cache anyway.
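For what it's worth, the only related knob I know of in the registry is the legacy LargeSystemCache value. This C sketch just reads it; I am not claiming it raises the ~5GB cap on Windows 10 - that is exactly what I am unsure about.

// cachereg.c - read the legacy LargeSystemCache registry value.
// Sketch only: this value historically biased Windows toward a larger
// file cache; its effect on modern write-back limits is unclear.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD value = 0, size = sizeof(value);
    LSTATUS rc = RegGetValueW(
        HKEY_LOCAL_MACHINE,
        L"SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
        L"LargeSystemCache",
        RRF_RT_REG_DWORD, NULL, &value, &size);

    if (rc == ERROR_SUCCESS)
        printf("LargeSystemCache = %lu\n", value);
    else
        printf("RegGetValueW failed: %ld\n", rc);
    return 0;
}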


I have researched other topics with similar issues, and came across an older thread mentioning a "CacheSet" tool:



How to increase the disk cache of Windows 7



Would this be applicable to my use case, or should I keep looking?



Is my understanding of how disk caching works on Windows correct, or does it operate differently than I anticipated? I am just looking for write caching to main memory, using maybe up to 100GB of RAM, nothing else.



Thank you for your help! Any suggestions are welcome.



EDIT:
Running that cacheset.exe tool as administrator reports a "Peak size" of 663,732 KB (~648MB), which seems too small. I am just not sure I want to commit to changing this setting and potentially destabilizing an in-production system. The limit I keep running into is right around 5GB. (From what I can tell, CacheSet drives the documented file-cache working-set APIs; a sketch follows.)
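As far as I can tell, CacheSet is a thin front end over kernel32's GetSystemFileCacheSize / SetSystemFileCacheSize. A minimal sketch of the query side, which should reproduce CacheSet's readout (actually raising the limits with SetSystemFileCacheSize additionally requires SeIncreaseQuotaPrivilege):

// cachesize.c - query the system file cache working-set limits,
// roughly what CacheSet displays. Query only; sketch, untested here.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T minSize, maxSize;
    DWORD flags;

    if (!GetSystemFileCacheSize(&minSize, &maxSize, &flags)) {
        printf("GetSystemFileCacheSize failed: %lu\n", GetLastError());
        return 1;
    }
    printf("min = %zu MB, max = %zu MB, flags = 0x%lx\n",
           (size_t)(minSize >> 20), (size_t)(maxSize >> 20), flags);
    // Flags such as FILE_CACHE_MAX_HARD_ENABLE indicate a hard limit
    // is in force; 0 generally means no explicit limit is set.
    return 0;
}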



DOUBLE EDIT:
I revised the number of GB that actually appear to be cached. The key was watching "Modified Physical Memory" and seeing the ~5GB cap at the start of the transfer. I am still looking to increase this to something like 100GB.

Thank you again!
Tags: windows-10, hard-drive, memory, ssd, cache






asked Nov 27 '17 at 9:02, last edited Nov 27 '17 at 10:29 – GHTurbines

  • This looks... fun. If you don't end up getting an answer, and don't mind keeping an eye on the question for me, I might be willing to put a bounty on this. – Journeyman Geek, Nov 27 '17 at 12:23

  • I don't have an 'answer' as I can't test, but I wonder if some of the settings (the client ones) in msdn.microsoft.com/en-us/library/windows/hardware/… would help. – djsmiley2k, Nov 27 '17 at 13:03

  • You can use third-party tools like PrimoCache if the built-in caching isn't enough for you. I'd suggest that at >100GB of RAM you're well into specialised territory. – Bob, Nov 27 '17 at 13:40

  • Thank you, Bob. PrimoCache looks like it does what I need, but it seems like this could be fixed by changing a silly limitation hard-coded somewhere in the registry (thank you djsmiley2k, looking now). I'm not sure I would trust inserting another third-party layer to "intercept I/O" without testing it thoroughly on a non-production system first. I will see if we can test it on a lesser 1P Precision with another Intel 750 and spinning rust; this T7910 is our dedicated simulation box. Thank you again! – GHTurbines, Nov 27 '17 at 18:44

  • I will keep an eye on this question for a while, Journeyman Geek. Thank you! As it stands, as long as we stay ahead of transferring off the SSD before it fills (transferring during simulations), we don't have to wait for a transfer to complete before starting a new simulation. 1TB of SSD is used up in less than an hour when we are busy or working on a complex impression. – GHTurbines, Nov 27 '17 at 18:46
1 Answer
A larger cache won't help, not with any standard file copy app.



No matter how large the Windows file cache is, no sane file copy tool will close the input files for the copy task (allowing you to delete them) before all the data has been written to the destination. That is true even if the entire input data happens to have been read into the cache.



The reason is that data in the cache is not safe - it can disappear at any time. Any RAM used by the cache that has not been modified since it was read from disk is considered discardable by the memory manager. That is, if something else needs the RAM, pages can be grabbed from the cache and "repurposed", i.e. given to that something else, at any moment. Of course they can - after all, there isn't supposed to be any data anywhere that exists only in the cache. (Unless it has been modified since being read from disk. In that case it is automatically queued for writeback, which will happen within about four seconds; it can't be repurposed until the writeback is complete.)



So with a standard copy program, you're going to have to wait for the spinning disks to write the data before you can delete the source files - regardless of how much is buffered in a cache.



Note that a copy program (or any other app) cannot even find out whether something is in the cache; there is no interface for that. Even if there were, the information would be stale the moment it was retrieved, even before the app looked at it. The file cache is supposed to work automatically and transparently to apps, and part of that transparency is that there are very few controls on it.



Think of it this way: You need a safe intermediate copy of your results - something functionally equivalent to another copy of your source files in a different directory, maybe even a separate drive letter, before you can safely delete the originals. The Windows file cache will never give you that. Even if Windows does decide to slurp the original files in their entirety into the file cache (which is very unlikely), the file cache does not give you that.



(You may be wondering "so what good is it anyway?" The main goal of the file cache is to make repeated access to numerous small files (and file system metadata) faster. And it does that quite well.)



SuperFetch



TL;DR: SuperFetch doesn't give you that either.



You are correct in your expressed doubt about SuperFetch. While SuperFetch is effective at what it tries to do, it won't help this case. What SuperFetch does is to keep track of files that are frequently accessed, say on every boot, and try to read them into RAM in advance of need.



This distinction is important if you want to understand Windows caching in general. The Windows file cache (which is what I've described in the previous section) is reactive, meaning that it never caches anything until a program has actually tried to read it. It has been in the Windows NT family since its first release (NT 3.1).



SuperFetch is a separate mechanism, originally added with Vista. It is proactive, trying to pre-fetch things that have been observed to have been accessed often in the past.



SuperFetch manages its RAM separately from the Windows file cache. SuperFetch uses "lower-priority" pages on the Windows standby page list - and, funny thing, it just leaves them on that list, so they remain part of "available" RAM. (So, all other things being equal, you won't notice a difference in "Available" RAM with or without SuperFetch enabled, but you will notice a difference in the amount of reported "Cache".) The RAM assigned to the file cache, by contrast, is in a working set and therefore takes a little longer to repurpose if that need arises. Because of this design, the SuperFetch cache is even more quickly "discardable" than the Windows file cache, so it does not give you a safe temporary copy of anything, any more than the file cache does.



So what can you do?



To solve your problem, I would look at dedicated hardware. Maybe get a single not-so-expensive-or-fast SSD of, say, 120 GB, copy the data from your SSD array to that, then copy it from there to the hard drives. This would, alas, mean that you can't start writing to the hard drives until all of your data has been copied to the staging drive, so the whole process will take longer than what you're doing now. But the source data will be freed sooner.



A simpler idea would be to get more hard drives and put them in a larger stripe set to increase write throughput.



A dedicated copy program, one that knows the structure of your data, might help. (A minimal sketch of a staged, unbuffered copy is below.)
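For illustration, such a staged copy could use unbuffered I/O for the large sequential hops, so the transfer doesn't churn the file cache at all. A minimal sketch with placeholder paths, using CopyFileEx's COPY_FILE_NO_BUFFERING flag (which is intended for files too large to cache):

// stagecopy.c - copy one huge file SSD -> staging SSD -> HDD, bypassing
// the file cache. Sketch only; the drive letters and paths are made up.
#include <windows.h>
#include <stdio.h>

static BOOL copy_unbuffered(LPCWSTR src, LPCWSTR dst)
{
    // COPY_FILE_NO_BUFFERING: unbuffered I/O, recommended for very
    // large files that would otherwise flood the cache.
    return CopyFileExW(src, dst, NULL, NULL, NULL, COPY_FILE_NO_BUFFERING);
}

int main(void)
{
    if (!copy_unbuffered(L"D:\\sim\\result.db", L"T:\\staging\\result.db") ||
        !copy_unbuffered(L"T:\\staging\\result.db", L"E:\\archive\\result.db")) {
        printf("copy failed: %lu\n", GetLastError());
        return 1;
    }
    // The source on D: can be deleted as soon as the first hop returns,
    // which is the whole point of the staging drive.
    return 0;
}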



By the way, I hope your FEA software is using mapped file access when creating the data set - it's far faster than traditional read/write calls. (A minimal sketch follows.)
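For illustration only - writing a result buffer through a file mapping looks roughly like this. The file name, the 1 GiB size, and the fill pattern are placeholders, not anything from the OP's FEA package:

// mapwrite.c - create a file and write to it through a memory mapping.
#include <windows.h>
#include <string.h>

int main(void)
{
    const ULONGLONG size = 1ULL << 30;   // 1 GiB output file (placeholder)

    HANDLE file = CreateFileW(L"result.bin", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE map = CreateFileMappingW(file, NULL, PAGE_READWRITE,
                                    (DWORD)(size >> 32), (DWORD)size, NULL);
    if (!map) return 1;

    unsigned char *view = MapViewOfFile(map, FILE_MAP_WRITE, 0, 0, (SIZE_T)size);
    if (!view) return 1;

    memset(view, 0xAB, (size_t)size);    // stand-in for the solver's output

    FlushViewOfFile(view, (SIZE_T)size); // queue dirty pages for writeback
    UnmapViewOfFile(view);
    CloseHandle(map);
    CloseHandle(file);
    return 0;
}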






answered Feb 24 '18 at 21:08, last edited Mar 9 '18 at 15:02 – Jamie Hanrahan
  • Well, technically, I think he doesn't really want it to be cached; he wants it to be buffered. That's safe, of course, but still won't provide any advantage whatsoever. – Daniel B, Feb 27 '18 at 10:11

  • You are correct, but my strong impression is that he was asking about getting the Windows file cache to do the job. Hence my discourse about why it isn't suitable. – Jamie Hanrahan, Feb 27 '18 at 12:02










