How to share a ZFS pool with an LXD container inside a VM?
I’m looking for guidance in setting up my home server. I have an HP ProLiant Gen8 MicroServer G1610T upgraded to 12 GB of RAM.
I have been using it, and plan to continue using it, for some critical applications AND as a homelab to test and play with new OSes, technologies, and solutions.
Background
In short, my plan is to isolate my production environment from the testing ground, so I won’t mess with services that should be up and running most of the time. Also, I am occasionally away from the physical box, so if I screw up the network configuration (which I’ve done…) or my host won’t boot, I’m f.u.b.a.r. For that reason, I decided to have one VM for all critical applications and other VMs for testing, according to my needs.
What I want
- Separate testing environments from services I don’t want to break
- Have remote access to administer the machine and install / reinstall OSes
- Use snapshots as point-in-time backups that I can revert to if something goes wrong (see the sketch after this list)
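As a minimal sketch of the snapshot idea, assuming a ZFS pool named tank with one dataset per service (the names here are hypothetical):

    # take a cheap, point-in-time snapshot of a dataset
    zfs snapshot tank/nextcloud@pre-upgrade
    # list existing snapshots
    zfs list -t snapshot
    # revert the dataset to the snapshot if something goes wrong
    zfs rollback tank/nextcloud@pre-upgrade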
Here is a diagram that shows my plan:
[Diagram: plan of my setup]
The problem
I am in the middle of building the system, and I have run into some fundamental issues that I have to reconsider before moving on. The most important of them is how to share storage between the host, the VM, and the LXD containers.
To sum up my draft setup:
- HOST OS: Ubuntu 18.04 with KVM, on LVM on /dev/sda (SSD), plus a ZFS RAIDZ1 pool made from /dev/sd[b-e] (4x HDD)
- VM: Ubuntu 18.04 as a raw image on LVM on the same /dev/sda; from what I’ve read, raw images tend to be slightly faster than qcow2, and I can still take live snapshots thanks to LVM (see the sketch after this list)
- LXD containers INSIDE the VM
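A sketch of the LVM live-snapshot step mentioned above, assuming the VM’s raw image lives on a logical volume vg0/prod-vm (names and sizes are hypothetical):

    # create a copy-on-write snapshot of the running VM's volume;
    # for a consistent image, consider freezing the guest filesystem first
    lvcreate --snapshot --size 10G --name prod-vm-snap /dev/vg0/prod-vm
    # later, roll the volume back by merging the snapshot
    # (the merge is deferred until the VM is shut down and the LV is closed)
    lvconvert --merge /dev/vg0/prod-vm-snap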
Now, I want to make the best use of the ZFS storage that the HOST OS takes care of. However, the pool cannot be managed by either the VM or LXD because of the virtualization layer. So my solution is (command sketches follow below):
- Expose a ZVOL block device and attach it to the VM as /dev/vdb
- Use that disk as the block device backing LXD’s ZFS storage (so I can use the snapshot functionality within LXD)
- NFS-mount tank/nextcloud from the HOST OS into one of the LXD containers to get more storage
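Here is a sketch of the ZVOL route; tank, lxd-vol, and prod-vm are hypothetical names:

    # on the HOST: carve a 200 GB ZVOL out of the RAIDZ1 pool
    zfs create -V 200G tank/lxd-vol
    # attach it to the VM as a virtio disk (shows up as /dev/vdb in the guest)
    virsh attach-disk prod-vm /dev/zvol/tank/lxd-vol vdb --targetbus virtio --persistent

    # inside the VM: hand the whole disk to LXD as a ZFS storage pool
    lxc storage create local zfs source=/dev/vdb
    # new containers can then be placed on that pool
    lxc launch ubuntu:18.04 test -s local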
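And a sketch of the NFS part, assuming the host’s bridge address is 192.168.122.1 (libvirt’s default NAT network; the addresses and mount paths are assumptions):

    # on the HOST: share the dataset over NFS using ZFS's built-in sharenfs
    # (an /etc/exports entry plus exportfs -ra works just as well)
    zfs set sharenfs="rw=@192.168.122.0/24" tank/nextcloud

    # in the VM or container: mount the export
    mount -t nfs 192.168.122.1:/tank/nextcloud /mnt/nextcloud
    # note: NFS mounts inside an unprivileged LXD container need extra
    # configuration; mounting in the VM and passing a disk device into the
    # container is a common workaround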
I guess at this point I have realized I am overcomplicating this and going against the KISS rule. I know I could just store all LXD containers directly on the host, but my reasoning for the initial solution is:
- Having everything inside one VM makes it more portable and easier to back up
- In case of a system restore, it's easier to restore a VM than the HOST
- Adding an additional layer of virtualization makes it more secure? Or is that just a false sense of security?
Questions
I am open to suggestions and would certainly like to hear an opinion from someone more seasoned than me. In particular, I have the following questions:
- Is there a serious I/O performance hit in creating a ZFS pool for LXD from a ZVOL block device? I believe there is; the question is whether it's so bad that I shouldn't do it at all.
- Is an NFS mount from the HOST to an LXD container and/or VM a good idea for exposing storage to an application like Nextcloud? Are there better alternatives?
performance local-storage virtual-machine zfs lxd
asked Nov 10 at 23:48
mDfRg
This question is a bit broad, but to answer your specific questions: (1) Performance hit shouldn't be noticeable unless you have something that requires super-high-performance storage running inside your VM. If so, using NFS to access files on the host instead should help. (2) Using NFS seems fine to me. Normally I think you'd keep files on a separate filer so that you don't have to rely on the NFS share being present on loopback to run your container / VM; this would make your app more portable because it won't create a dependency on running on your specific machine / VM configuration.
– Dan
Nov 12 at 21:03