IT solutions for special requirements

Tag: zpool

zfs-auto-snapshot vs sanoid: which one is better? : zfs

Source: zfs-auto-snapshot vs sanoid: which one is better? : zfs

Yet another interesting question in connection with sanoid.

It's probably time to finally take a closer look at it.

What is DogeOS, actually?

http://www.dogeos.net/

Given the uncertain future of OmniOS, DogeOS might be an option worth testing.

DogeOS is a distribution based on SmartOS and the FiFo project. It aims to be the ultimate cloud OS for the data center.

  • All the industry-proven features of SmartOS: ZFS, DTrace, KVM, Zones and Crossbow.
  • Ready-to-use management console from FiFo.
  • Nearly 100% resource utilization of the hardware.
  • No installation time for a Resource Node (a.k.a. chunter node).
  • Guided, fast (< 10 min) provisioning of the first FiFo (management) zone, which works even without Internet access.

DogeOS, like Project FiFo and SmartOS, is licensed under the CDDL. It is free to use.

Deleting with ZFS

Why are deletions slow?

  • Deleting a file requires several steps. The file metadata must be marked as 'deleted', and eventually the space must be reclaimed so it can be reused. ZFS is a 'log-structured filesystem' which performs best if you only ever create things and never delete them. The log structure means that if you delete something, there is a gap in the log, so other data must be rearranged (defragmented) to fill the gap. This is invisible to the user but generally slow.
  • The changes must be made in such a way that if power were to fail partway through, the filesystem remains consistent. Often, this means waiting until the disk confirms that data really is on the media; for an SSD, that can take a long time (hundreds of milliseconds). The net effect of this is that there is a lot more bookkeeping (i.e. disk I/O operations).
  • All of the changes are small. Instead of reading, writing and erasing whole flash blocks (or cylinders for a magnetic disk) you need to modify a little bit of one. To do this, the hardware must read in a whole block or cylinder, modify it in memory, then write it out to the media again. This takes a long time.

What can be done?

zfs create -o compression=on -o exec=on -o setuid=off zroot/tmp

chmod 1777 /zroot/tmp

zfs set mountpoint=/tmp zroot/tmp

copy the files to /zroot/tmp

zfs destroy zroot/tmp

source: http://serverfault.com/questions/801074/delete-10m-files-from-zfs-effectively

ZFS send and receive with nc

ZFS send & receive is a great feature and a good solution for remote backup. How do you receive and then(!) send ZFS snapshots?

Here is my code snippet; note that the listening side (nc -l, the sending server here) must be running before the receiving side connects:

root@local:~# nc remote.dyndns.org 22553 | zfs receive -vd vol1
root@remoteserver:~# zfs send vol2/services/datastore_l@1 | nc -l -p 22553

Works fine with OmniOS and OpenIndiana.
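One thing to keep in mind is that nc transfers the stream unencrypted. A common alternative, sketched below with the same snapshot name and a placeholder hostname (remote.example), is to tunnel the pipeline over ssh, which encrypts the data and removes the need to coordinate a listener:

```shell
# Sketch, run on the sending server: same transfer, tunneled over ssh
# instead of a raw nc socket. remote.example is a placeholder for the
# receiving host; vol1 is the receiving pool as in the nc example above.
zfs send vol2/services/datastore_l@1 | ssh root@remote.example zfs receive -vd vol1
```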

OmniOS is a burning flame

OmniOS is a great operating system: illumos-based (like NexentaStor 😉 ) and free.

ZFS and KVM built in, and easy to use. But how should I use it?

In my case, OmniOS is the SDS OS for my ESXi environment 😉

vmware datastore nfs esx

Adjusting the NFS settings

After some problems with ZFS pool performance, I changed a few NFS settings … and the pool immediately no longer feels so slow:

root@Napp:~# sharectl get nfs
servers=512
lockd_listen_backlog=256
lockd_servers=256
lockd_retransmit_timeout=5
grace_period=90
server_versmin=2
server_versmax=3
client_versmin=2
client_versmax=3
server_delegation=on
nfsmapid_domain=
max_connections=-1
protocol=ALL
listen_backlog=32
device=
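Individual values from the listing above can be changed with sharectl as well. A minimal sketch (the thread count of 1024 is only an illustration, not a recommendation — tune it for your own workload):

```shell
# Illustrative: raise the NFS server thread count, then verify it.
# 1024 is an example value; pick one that fits your workload.
sharectl set -p servers=1024 nfs
sharectl get -p servers nfs
```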

Powered by WordPress & theme by Anders Norén