I currently use FreeNAS 8 with two RAID 5 eSATA arrays attached to the server. The server has 8 GB of RAM and two single-core Intel Xeon processors. Both arrays have five 2.0 TB drives, each giving roughly 7.5 TB of space in RAID 5. I am not using MPIO or any load balancing on the NICs, just a single Intel 10/100/1000 gigabit server NIC. My performance has been substantially different from what others have experienced.

You could channel bond two NICs to get almost double the network throughput and some redundancy (see the lagg sketch below). But what you are probably seeing is not translation overhead; it is a performance hit caused by a different access pattern.

Sequential writes to a ZFS volume produce a nearly sequential data stream to the underlying physical disks. Sequential writes to a VMFS datastore on top of a ZFS volume produce a stream that is "pierced" by metadata updates for the VMFS filesystem structure and by frequent sync/cache-flush requests for that metadata. Sequential writes to a virtual disk from within a guest add yet more "piercing" from the guest's own filesystem metadata.

The cure usually prescribed in this situation is to enable a write cache that ignores cache-flush requests. That alleviates the random-write and sync penalties and improves the performance you see in your VM guests. Keep in mind, however, that your data integrity is at risk if the cache cannot persist across power outages or sudden reboots.

You can easily test whether you are hitting your disks' limits by running something like `iostat -xd 5` on the FreeNAS box and looking at the queue sizes and utilization statistics of the underlying physical devices. Running `esxtop` in disk-device mode should also give you a clue about what is going on by showing disk utilization from the ESX side.
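For the channel-bonding suggestion: on FreeBSD (which FreeNAS is built on), two NICs can be aggregated with a lagg interface. A minimal sketch, assuming hypothetical interface names `em0`/`em1` and a placeholder address; on FreeNAS you would normally configure this through the web GUI, and the switch ports must be set up for LACP as well:

```sh
# Aggregate two Intel NICs into one logical link using LACP.
# em0, em1 and the IP address are placeholders for this sketch.
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1 \
    192.168.1.10 netmask 255.255.255.0
```

Note that LACP balances per flow, so a single iSCSI session still rides one link; MPIO is the usual way to spread one initiator's traffic across both ports.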
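For the "write cache that ignores cache-flush requests" part: on ZFS versions that support the `sync` dataset property, the closest knob is `sync=disabled`. This is a sketch rather than a recommendation; `tank/vmstore` is a placeholder for the zvol backing the VMFS datastore, and this is exactly the integrity trade-off described above:

```sh
# Acknowledge sync writes and flush requests immediately, before data
# reaches stable storage. Up to one transaction group (several seconds
# of writes) can be lost on power failure or a sudden reboot.
zfs set sync=disabled tank/vmstore

# Revert to the safe default, which honors flush requests:
zfs set sync=standard tank/vmstore
```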
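To check whether the physical disks are the bottleneck, run the suggested `iostat` on the FreeNAS box (exact column names vary a little between FreeBSD releases):

```sh
# Extended per-device statistics, refreshed every 5 seconds.
# Persistently queued requests together with %b (busy) near 100 on the
# array members mean the spindles themselves are the limit.
iostat -xd 5
```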
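And from the ESX side, using interactive `esxtop` keystrokes:

```sh
# Run on the ESX(i) host; press 'u' for the disk-device view
# ('d' shows adapters). High DAVG/cmd (device latency in ms) and a
# non-zero QUED column point at the storage target as the bottleneck.
esxtop
```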