Windows (ReFS, NTFS) file preallocation hint
Assume I have multiple processes writing large files (20 GB+). Each process writes its own file; assume that a process writes X MB at a time, then does some processing, then writes the next X MB, and so on.



This write pattern causes the files to become heavily fragmented, because blocks for the different files end up interleaved on the disk rather than being allocated consecutively.



Of course it is easy to work around this issue by using SetEndOfFile to "preallocate" the file when it is opened and then set the correct size before it is closed. But then an application accessing these files remotely, which is able to parse these in-progress files, obviously sees zeroes at the end of the file and takes much longer to parse it.
I do not have control over this reading application, so I can't optimize it to take trailing zeros into account.
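
For reference, a minimal sketch of that workaround (the helper name and sizes are placeholders, not part of the original program; the handle is assumed to be opened with GENERIC_WRITE):

#include <windows.h>

// Sketch of the SetEndOfFile workaround: reserve space when the file is
// opened, then trim to the real size before closing. The reserved tail
// reads as zeros in the meantime, which is exactly the problem described.
static void SetFileSize(HANDLE h, LONGLONG bytes)
{
    LARGE_INTEGER li, pos;
    li.QuadPart = bytes;
    SetFilePointerEx(h, li, &pos, FILE_BEGIN);
    SetEndOfFile(h);
}

// Usage: SetFileSize(h, reserveBytes) right after CreateFile,
//        SetFileSize(h, actualBytes) right before CloseHandle.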



Another dirty fix would be to run defragmentation more often, run Sysinternals' contig utility, or even implement a custom "defragmenter" that would process my files and consolidate their blocks.



Another, more drastic, solution would be to implement a minifilter driver that reports a "fake" file size.



But obviously both solutions listed above are far from optimal. So I would like to know: is there a way to provide a file size hint to the filesystem so that it "reserves" consecutive space on the drive, but still reports the right file size to applications?



Writing larger chunks at a time also helps with fragmentation, of course, but it still does not solve the issue.



EDIT:



Since the usefulness of SetEndOfFile in my case seems to be disputed, I made a small test:



#include <windows.h>
#include <cstdio>
#include <iostream>

int main()
{
    LARGE_INTEGER size;
    LARGE_INTEGER a;
    char buf = 'A';
    DWORD written = 0;
    DWORD tstart;

    std::cout << "creating file\n";
    tstart = GetTickCount();
    HANDLE f = CreateFileA("e:\\test.dat", GENERIC_ALL, FILE_SHARE_READ,
                           NULL, CREATE_ALWAYS, 0, NULL);

    // Extend the file to 100 MB by moving the pointer and setting EOF.
    size.QuadPart = 100000000LL;
    SetFilePointerEx(f, size, &a, FILE_BEGIN);
    SetEndOfFile(f);
    printf("file extended, elapsed: %d\n", GetTickCount() - tstart);
    getchar();

    printf("writing 'A' at the end\n");
    tstart = GetTickCount();
    // Write one byte just before EOF; NTFS must first zero-fill everything
    // between the valid data length and the write offset.
    SetFilePointer(f, -1, NULL, FILE_END);
    WriteFile(f, &buf, 1, &written, NULL);
    printf("written: %d bytes, elapsed: %d\n", written, GetTickCount() - tstart);

    CloseHandle(f);
    return 0;
}


While the application was paused waiting for a keypress after SetEndOfFile, I examined the on-disk NTFS structures:
[screenshot: NTFS structures before the write]



The image shows that NTFS has indeed allocated clusters for my file. However, the unnamed DATA attribute has a StreamDataSize of 0.



Sysinternals DiskView also confirms that clusters were allocated:
[screenshot: DiskView]



After pressing Enter to let the test continue (and after waiting quite some time, since the file was created on a slow USB stick), the StreamDataSize field was updated:
[screenshot: NTFS structures after the write]



Since I wrote 1 byte at the end, NTFS really did have to zero everything before it, so SetEndOfFile does indeed help with the issue that I am "fretting" about.



I would appreciate it very much if answers/comments also provided an official reference to back up the claims being made.



Oh, and the test application outputs this in my case:



creating file
file extended, elapsed: 0

writing 'A' at the end
written: 1 bytes, elapsed: 21735


Also, for the sake of completeness, here is how the DATA attribute looks when setting FileAllocationInfo (note that I created a new file for this picture):
[screenshot: DATA attribute after setting FileAllocationInfo]

windows ntfs hint refs pre-allocation

asked Nov 16 '18 at 8:51, edited Nov 16 '18 at 20:10 – Jaka

  • I am really curious why my question received a downvote; could the downvoter please explain the reasons so I can improve my question?

    – Jaka, Nov 16 '18 at 9:04

  • That SetEndOfFile trick does nothing anyway; it merely updates the directory entry but does not actually allocate any clusters. That you could not see this yourself is a pretty good hint that you are fretting over an irrelevant problem.

    – Hans Passant, Nov 16 '18 at 9:10
1 Answer

Windows file systems maintain two public sizes for file data, which are reported in the FileStandardInformation:





  • AllocationSize - a file's allocation size in bytes, which is typically a multiple of the sector or cluster size.


  • EndOfFile - a file's absolute end of file position as a byte offset from the start of the file, which must be less than or equal to the allocation size.


Setting an end of file that exceeds the current allocation size implicitly extends the allocation. Setting an allocation size that's less than the current end of file implicitly truncates the end of file.
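
For instance, both public sizes can be read from user mode with GetFileInformationByHandleEx; a minimal sketch:

#include <windows.h>
#include <cstdio>

// Print the two public sizes for an open file handle.
static void PrintSizes(HANDLE h)
{
    FILE_STANDARD_INFO info;
    if (GetFileInformationByHandleEx(h, FileStandardInfo, &info, sizeof(info)))
        printf("AllocationSize=%lld EndOfFile=%lld\n",
               info.AllocationSize.QuadPart, info.EndOfFile.QuadPart);
}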



Starting with Windows Vista, we can manually extend the allocation size without modifying the end of file via SetFileInformationByHandle: FileAllocationInfo. You can use Sysinternals DiskView to verify that this allocates clusters for the file. When the file is closed, the allocation gets truncated to the current end of file.
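
A minimal sketch of this approach, assuming a placeholder path and a 20 GB reservation (error handling trimmed):

#include <windows.h>
#include <cstdio>

int main()
{
    // Hypothetical output path; adjust as needed.
    HANDLE h = CreateFileA("e:\\out.dat", GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ, NULL, CREATE_ALWAYS, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    // Reserve ~20 GB of clusters up front. This extends AllocationSize
    // only; EndOfFile stays 0, so readers still see the real file size.
    FILE_ALLOCATION_INFO alloc = {};
    alloc.AllocationSize.QuadPart = 20LL * 1024 * 1024 * 1024;
    if (!SetFileInformationByHandle(h, FileAllocationInfo,
                                    &alloc, sizeof(alloc)))
        printf("SetFileInformationByHandle failed: %lu\n", GetLastError());

    // ... write the file in X MB chunks as usual ...

    CloseHandle(h); // allocation beyond EndOfFile is released on close
    return 0;
}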



If you don't mind using the NT API directly, you can also call NtSetInformationFile: FileAllocationInformation. Or even set the allocation size at creation via NtCreateFile.
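
A rough sketch of the NT route; note that NtSetInformationFile is not declared in winternl.h and FILE_ALLOCATION_INFORMATION lives in the kernel headers, so both are declared manually here (the value 19 for FileAllocationInformation comes from the kernel's FILE_INFORMATION_CLASS):

#include <windows.h>
#include <winternl.h>
#include <cstdio>

// Mirrors the kernel-mode FILE_ALLOCATION_INFORMATION (ntifs.h).
typedef struct _FILE_ALLOCATION_INFORMATION_X {
    LARGE_INTEGER AllocationSize;
} FILE_ALLOCATION_INFORMATION_X;

// Exported by ntdll but not declared in winternl.h; resolve dynamically.
typedef NTSTATUS (NTAPI *NtSetInformationFile_t)(
    HANDLE, PIO_STATUS_BLOCK, PVOID, ULONG, FILE_INFORMATION_CLASS);

int main()
{
    NtSetInformationFile_t pNtSetInformationFile =
        (NtSetInformationFile_t)GetProcAddress(
            GetModuleHandleA("ntdll.dll"), "NtSetInformationFile");
    if (!pNtSetInformationFile)
        return 1;

    HANDLE h = CreateFileA("e:\\out.dat", GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ, NULL, CREATE_ALWAYS, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    FILE_ALLOCATION_INFORMATION_X info;
    info.AllocationSize.QuadPart = 20LL * 1024 * 1024 * 1024; // 20 GB
    IO_STATUS_BLOCK iosb;
    // FileAllocationInformation = 19 in the kernel enumeration.
    NTSTATUS status = pNtSetInformationFile(h, &iosb, &info, sizeof(info),
                                            (FILE_INFORMATION_CLASS)19);
    printf("NtSetInformationFile: 0x%08lx\n", (unsigned long)status);
    CloseHandle(h);
    return 0;
}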





FYI, there's also an internal ValidDataLength size, which must be less than or equal to the end of file. As a file grows, the clusters on disk are lazily initialized. Reading beyond the valid region returns zeros. Writing beyond the valid region extends it by initializing all clusters up to the write offset with zeros. This is typically where we might observe a performance cost when extending a file with random writes. We can set the FileValidDataLengthInformation to get around this (e.g. SetFileValidData), but it exposes uninitialized disk data and thus requires SeManageVolumePrivilege. An application that utilizes this feature should take care to open the file exclusively and ensure the file is secure in case the application or system crashes.
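
And a sketch of the SetFileValidData route, including the privilege adjustment it needs; this is illustrative only, and the caveat above about exposing stale on-disk data applies:

#include <windows.h>
#pragma comment(lib, "advapi32.lib")

// Enable SeManageVolumePrivilege on the process token;
// SetFileValidData fails without it.
static BOOL EnableManageVolumePrivilege(void)
{
    HANDLE token;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token))
        return FALSE;
    TOKEN_PRIVILEGES tp = {};
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    LookupPrivilegeValueA(NULL, "SeManageVolumePrivilege",
                          &tp.Privileges[0].Luid);
    BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL) &&
              GetLastError() == ERROR_SUCCESS;
    CloseHandle(token);
    return ok;
}

int main()
{
    if (!EnableManageVolumePrivilege())
        return 1;

    // Open exclusively, as recommended above; path and size are placeholders.
    HANDLE h = CreateFileA("e:\\out.dat", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, CREATE_ALWAYS, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    LARGE_INTEGER size, pos;
    size.QuadPart = 20LL * 1024 * 1024 * 1024; // 20 GB
    SetFilePointerEx(h, size, &pos, FILE_BEGIN);
    SetEndOfFile(h); // extend EOF (and allocation)

    // Skip the lazy zero-fill: whatever was previously in those clusters
    // becomes readable, so treat the file as sensitive until overwritten.
    SetFileValidData(h, size.QuadPart);

    CloseHandle(h);
    return 0;
}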

answered Nov 16 '18 at 20:01, edited Nov 16 '18 at 20:07 – eryksun

  • Yes, exactly: calling SetFileValidData will just set the StreamDataSize (and AttributeSize) to whatever is passed as ValidDataLength, without zeroing the clusters, so the new file may contain sensitive information. It seems that AllocationSize maps to the AttributeSize field of the DATA attribute, and EndOfFile maps to the StreamDataSize field.

    – Jaka, Nov 16 '18 at 20:14