How to share ZFS pool to LXD container inside VM?

I’m looking for guidance in setting up my home server. I have an HP ProLiant Gen8 MicroServer G1610T upgraded to 12 GB RAM.
I have been using it, and plan to continue using it, for some critical applications AND as a homelab to test and play with new OSes, technologies and solutions.



Background



In short, my plan is to isolate my production environment from the testing ground, so I won’t mess up services that should be up and running most of the time. Also, I am occasionally away from the physical box, so if I screw up the network configuration (which I’ve done…) or the host won’t boot, I’m FUBAR. For that reason, I decided to have one VM for all critical applications and other VMs for testing, according to my needs.



What I want




  1. Separate testing environments from the services I don’t want to break

  2. Have remote access to administer the machine and install/reinstall OSes

  3. Use snapshots to get point-in-time backups that I can revert to if something goes wrong
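
For goal 3, the LVM side of this could look like the sketch below. The volume group and logical volume names (`vg0`, `vm-root`) are hypothetical placeholders, not from my actual setup:

```shell
# Take a point-in-time snapshot of the VM's logical volume before a
# risky change; 10G is reserved for copy-on-write deltas (adjust to taste)
lvcreate --snapshot --size 10G --name vm-pre-change /dev/vg0/vm-root

# If things go wrong, merge the snapshot back to roll the LV to that
# point in time (takes effect when the LV is next activated)
lvconvert --merge /dev/vg0/vm-pre-change

# If all went well instead, drop the snapshot to free the space:
# lvremove /dev/vg0/vm-pre-change
```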


Here is a diagram that shows my plan:



Plan of my setup



The problem



I am in the middle of constructing my system and I have run into some fundamental issues that I have to reconsider before moving on. The most important of them is how to share storage between the host, the VM and the LXD containers.



To sum up my draft setup:




  1. HOST OS: Ubuntu 18.04 with KVM, on LVM on the /dev/sda SSD, plus a ZFS RAIDZ1 pool built from /dev/sd[b-e] (4x HDD)

  2. VM: Ubuntu 18.04 as a raw image on LVM on the same /dev/sda - from what I’ve read, raw images tend to be slightly faster than qcow2, and I can still take live snapshots thanks to LVM

  3. LXD containers INSIDE the VM
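
On the host side, the storage layout above could be sketched like this (device names match the list; the pool name `tank` is the one used for the NFS dataset later in the question):

```shell
# RAIDZ1 pool across the four data HDDs (the SSD at /dev/sda stays
# on LVM for the host OS and the VM images)
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# A dataset for the Nextcloud data that will later be NFS-exported
zfs create tank/nextcloud
```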


Now, I want to make the best use of the ZFS storage that the HOST OS takes care of. However, the pool cannot be managed by either the VM or LXD because of the virtualization layer. So my solution is:




  1. Share a ZVOL block device with the VM, where it shows up as /dev/vdb

  2. Use that disk as the block device backing a ZFS storage pool for LXD (so I can use LXD’s snapshot functionality)

  3. NFS-mount tank/nextcloud from the HOST OS into one of the LXD containers to provide extra storage
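
A hedged sketch of those three steps, assuming the LXD 3.x tooling that ships with 18.04; the VM name `vm-prod`, the ZVOL size and the libvirt subnet are illustrative assumptions, not from my setup:

```shell
# 1. On the host: carve a ZVOL out of the pool and hand it to the VM,
#    where it appears as the virtio disk /dev/vdb
zfs create -V 200G tank/lxd-disk
virsh attach-disk vm-prod /dev/zvol/tank/lxd-disk vdb --targetbus virtio --persistent

# 2. Inside the VM: let LXD create its own ZFS pool directly on that disk
lxd init --auto --storage-backend zfs --storage-create-device /dev/vdb

# 3. On the host: export the extra-storage dataset over NFS to the
#    libvirt subnet (the container then mounts host:/tank/nextcloud)
zfs set sharenfs='rw=@192.168.122.0/24' tank/nextcloud
```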


I guess at this point I realized I am overcomplicating this and going against the KISS rule. I know I could just store all LXD containers directly on the host, but my reasoning behind the initial solution is:




  1. Having everything inside one VM makes it more portable and easier to back up

  2. In case of a system restore, it's easier to restore a VM than the HOST

  3. Adding an extra layer of virtualization makes it more secure? Or is that just a false sense of security?


Questions



I am open to suggestions and would certainly like to hear an opinion from someone more seasoned than me. In particular, I have the following questions:




  1. Is there a serious I/O performance hit in creating a ZFS pool for LXD on top of a ZVOL block device? I believe there is; the question is whether it's bad enough that I shouldn't do it at all.

  2. Is an NFS mount from the HOST to an LXD container and/or the VM a good idea for exposing storage to an application like Nextcloud? Are there better alternatives?
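
For question 1, the honest answer is only measurable on the actual hardware; one way to quantify the overhead is to run the same fio job on the host against the raw pool and inside the VM against the ZVOL-backed LXD pool, then compare. The mount paths here are illustrative:

```shell
# Random 4k read/write mix for 60s; run once at /tank/bench on the
# host, then at the equivalent path inside the VM, and diff the IOPS
fio --name=pool-bench --directory=/tank/bench \
    --rw=randrw --bs=4k --size=1G --numjobs=4 \
    --runtime=60 --time_based --group_reporting
```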










  • This question is a bit broad, but to answer your specific questions: (1) Performance hit shouldn't be noticeable unless you have something that requires super-high-performance storage running inside your VM. If so, using NFS to access files on the host instead should help. (2) Using NFS seems fine to me. Normally I think you'd keep files on a separate filer so that you don't have to rely on the NFS share being present on loopback to run your container / VM; this would make your app more portable because it won't create a dependency on running on your specific machine / VM configuration.
    – Dan
    Nov 12 at 21:03















performance local-storage virtual-machine zfs lxd






asked Nov 10 at 23:48









mDfRg
