Xen
oni__ino, 2015-06-25 18:06:16

How do I reclaim free space on SR storage in XenServer 6.2?

I apologize for the thousand-and-first question about reclaiming free space after deleting virtual machines and snapshots. I have read what was posted here, read the Citrix forums, googled, and struggled with the problem for two days. There are some results, but not the ones I expected.
Given: XenServer 6.2, a 3 TB LVM storage repository.
[screenshot: the SR's Size, Usage and Virtual allocation values]
Which brings me straight to a question: Size and Usage are clear enough, but what is Virtual allocation? I read somewhere that it is space reserved for snapshots, but I can no longer find where I saw that and I am not sure it is correct.
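(For reference, the same three figures can also be read from the CLI; a rough sketch using the standard xe SR fields, with the SR UUID taken from the SMlog output further down:)

xe sr-list name-label="Local storage" params=uuid
xe sr-param-list uuid=04ff2444-48bf-33f5-2afd-fa33abaf5228 | grep -E 'physical-size|physical-utilisation|virtual-allocation'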
I removed the unneeded virtual machines and moved some of them to other disks, then deleted the unneeded disks through the client.
[screenshot: the storage view in the client after the cleanup]
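(The deletions were done through the client; a hypothetical CLI equivalent, with placeholder UUIDs, would look roughly like this:)

xe vbd-list vdi-uuid=<vdi-uuid>    # check the disk is no longer attached to any VM
xe vdi-destroy uuid=<vdi-uuid>     # destroy the virtual disk itself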
No free space came back, so the investigation began.
I removed the leftover LVM volumes that apparently remained after the machines were deleted; I found them via lvscan (they were flagged inactive):
inactive '/dev/VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228/VHD-94d0fa64-f1e2-4b7f-b4d0-37aaee985ba7' [8.00 MB] inherit
inactive '/dev/VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228/VHD-8df726ff-3193-47cf-9551-0b855726bcb1' [50.11 GB] inherit
There are a lot of 8 MB volumes in the list; I have no idea where they came from, I certainly never created them on purpose.
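(Roughly what was done, as a sketch only; the LV path comes from the lvscan output above, and removing VHD LVs by hand on an LVM SR is risky, so this is not a recommendation:)

lvscan | grep inactive
lvremove /dev/VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228/VHD-94d0fa64-f1e2-4b7f-b4d0-37aaee985ba7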
As a result about 200 GB came back, according to pvs and, after a rescan, the client as well (the screenshots were taken after those 200 GB had been freed; before that the SR was 99% full).
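(A sketch of how the free space in the volume group can be checked from dom0; the VG name is the one shown by lvscan above:)

pvs
vgs --units g VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228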
The VM disks individually add up to roughly 2 TB in total.
I thought the remaining space might be taken up by snapshots, so I ran
xe vdi-list is-a-snapshot=true | grep name-label
It turned out that only two machines have a snapshot (one each), and their disks are only 10-20 GB, so they cannot possibly account for the rest of the space.
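(A slightly more detailed variant of the same check, as a sketch; uuid, name-label, virtual-size and physical-utilisation are standard VDI fields:)

xe vdi-list is-a-snapshot=true params=uuid,name-label,virtual-size,physical-utilisation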
In total about 500 GB is unaccounted for. Moving the data to another SR is not an option.
More interesting information can be obtained with the command
vhd-util scan -f -m "VHD-*" -l "VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228" -p
[screenshot: output of the vhd-util scan command]
Does anyone know? Any thoughts on where to look for the lost space would be welcome. Thanks for the discussion.
UPD: following Argenon's recommendation I rescanned the storage (I had already done this before as well), and along the way fixed the error with the leftover image storage as described in the article stan.borbat.com/fix-mark-the-vdi-hidden-failed

Here is the output of /var/log/SMlog:

LVMCache created for VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228
['/usr/sbin/vgs', 'VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228']
pread SUCCESS
lock: acquired /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/sr
LVMCache: will initialize now
LVMCache: refreshing
['/usr/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228']
pread SUCCESS
lock: released /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/sr
Entering _checkMetadataVolume
lock: acquired /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/sr
sr_scan {'sr_uuid': '04ff2444-48bf-33f5-2afd-fa33abaf5228', 'subtask_of': 'DummyRef:|373be5b4-4dd9-3f7f-8764-6cac6200ff9f|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:dd9f2f35-800c-87db-c664-2be0e241cba5', 'session_ref': 'OpaqueRef:83dd2423-4108-4f9a-5d49-f33328d36b1f', 'device_config': {'device': '/dev/disk/by-id/scsi-3600050e0fd1a9d007f0c000042630000-part3', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:9d66840e-31e4-1dd4-b54a-4d0dd432597c'}
LVHDSR.scan for 04ff2444-48bf-33f5-2afd-fa33abaf5228
LVMCache: refreshing
['/usr/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228']
pread SUCCESS
['/usr/bin/vhd-util', 'scan', '-f', '-c', '-m', 'VHD-*', '-l', 'VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228']
pread SUCCESS
Scan found hidden leaf (d30b7834-51ca-4acc-b8b8-cb70a943db56), ignoring
Scan found hidden leaf (94d0fa64-f1e2-4b7f-b4d0-37aaee985ba7), ignoring
Scan found hidden leaf (90001c72-1731-4fe9-a759-556c60f3637a), ignoring
Scan found hidden leaf (607f624d-bf1d-43a7-ac2c-16b1ddc05af5), ignoring
Scan found hidden leaf (bf71042a-9cf7-4477-bd52-e69277d7c4ee), ignoring
Scan found hidden leaf (e33943c3-c16d-4726-9453-0b58ac371881), ignoring
Scan found hidden leaf (1212e821-6d41-4c30-bbd6-cf806acb91d4), ignoring
Scan found hidden leaf (c460f026-b4db-4364-844b-063f56f8a264), ignoring
Scan found hidden leaf (01d6e166-14c7-45a6-96ad-60c0540927f8), ignoring
Scan found hidden leaf (6d27f42c-b40c-4a7b-bfaf-c6cccb07a014), ignoring
Scan found hidden leaf (59051d07-64ad-4c5e-bc32-abbf7ed0c303), ignoring
Scan found hidden leaf (4776d94f-be7e-4d47-a2f6-c494f74ebe41), ignoring
Scan found hidden leaf (3df74c4d-0705-4424-a059-88acf3e74a94), ignoring
Scan found hidden leaf (a223e0b2-c531-4545-9102-2e13ac2ea991), ignoring
Scan found hidden leaf (6ae07193-0c38-45f9-9d5d-9a44b25d959b), ignoring
Scan found hidden leaf (08c7b714-4780-48ee-b1f6-c1d2389627d4), ignoring
Scan found hidden leaf (e6383e07-7a6d-4589-9ae5-e282e1928468), ignoring
Scan found hidden leaf (41935794-8dc2-448e-a2ed-5c4f884167ce), ignoring
Scan found hidden leaf (cdeb48c6-104f-4895-8650-afd96419ce5a), ignoring
['/usr/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228']
pread SUCCESS
lock: tried lock /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/running, acquired: True (exists: True)
lock: released /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/running
Kicking GC
=== SR 04ff2444-48bf-33f5-2afd-fa33abaf5228: gc ===
Will finish as PID [28221]
New PID [28220]
lock: closed /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/running
lock: released /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/sr
lock: closed /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/sr
LVMCache created for VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228
lock: tried lock /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/sr, acquired: True (exists: True)
LVMCache: refreshing
['/usr/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228']
pread SUCCESS
['/usr/bin/vhd-util', 'scan', '-f', '-c', '-m', 'VHD-*', '-l', 'VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228']
pread SUCCESS
lock: released /var/lock/sm/04ff2444-48bf-33f5-2afd-fa33abaf5228/sr
SR 04ff ('Local storage') (67 VDIs in 10 VHD trees):
*f20cefc4[VHD](8.000G//652.000M|ao)
b412f222[VHD](8.000G//8.023G|ao)
*a90062be[VHD](2.102G//76.000M|ao)
aeb6a2c8[VHD](2.102G//2.113G|ao)
*828b5174[VHD](5.000G//2.871G|ao)
b967c4d1[VHD](32.000G//32.070G|ao)
8de89d16[VHD](250.000G//250.496G|ao)
*921b15f4[VHD](50.000G//13.094G|ao)
*607f624d[VHD](8.000G//8.023G|n)
0b59bc4c[VHD](80.000G//80.164G|ao)
*87b9cf75[VHD](100.000G//73.727G|ao)
1edbcad3[VHD](100.000G//100.203G|ao)
f739ad6a[VHD](100.000G//100.203G|ao)
*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
***********************
* E X C E P T I O N *
***********************
gc: EXCEPTION util.SMException, Parent VDI 1fd828c0-250e-4975-807f-17412952fe3f of 1212e821-6d41-4c30-bbd6-cf806acb91d4 not found
File "/opt/xensource/sm/cleanup.py", line 2559, in gc
_gc(None, srUuid, dryRun)
File "/opt/xensource/sm/cleanup.py", line 2459, in _gc
_gcLoop(sr, dryRun)
File "/opt/xensource/sm/cleanup.py", line 2413, in _gcLoop
sr.scanLocked()
File "/opt/xensource/sm/cleanup.py", line 1303, in scanLocked
self.scan(force)
File "/opt/xensource/sm/cleanup.py", line 2136, in scan
self._buildTree(force)
File "/opt/xensource/sm/cleanup.py", line 1823, in _buildTree
raise util.SMException("Parent VDI %s of %s not " \
*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
* * * * * SR 04ff2444-48bf-33f5-2afd-fa33abaf5228: ERROR
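(The exception above seems to be what stops the garbage collector: the VHD chain of 1212e821-6d41-4c30-bbd6-cf806acb91d4 points at a parent VDI that no longer exists. As a sketch only, assuming the corresponding LV has been activated so vhd-util can read its header, the parent pointer could be checked like this:)

vhd-util query -p -n /dev/VG_XenStorage-04ff2444-48bf-33f5-2afd-fa33abaf5228/VHD-1212e821-6d41-4c30-bbd6-cf806acb91d4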

1 answer
Argenon, 2015-06-25
@oni__ino

Size is the total size of the storage, Virtual Allocation is how much space has been allocated to virtual disks, and Usage is how much is actually used.
Run tail -f /var/log/SMlog in a console and rescan the storage; if you don't mind, post the output here.
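For example (a sketch, with the SR UUID from your question):

tail -f /var/log/SMlog                                  # in one console, watch the storage manager log
xe sr-scan uuid=04ff2444-48bf-33f5-2afd-fa33abaf5228    # in another, trigger the rescan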
