Q & A doc

Questions from 24 July 2013 - Shared Storage Pools - Nigel Griffiths
Q1: Is IBM i the same as VIOS?
A: No. IBM i is a VM just like AIX.
VIO is a part of PowerVM.
Q2: Is the presentation available to download?
A: The presentation is available on the Wiki - www.tinyurl.com/ibmaixvug
Q3: What's the prime advantage of a Shared Storage Pool over NPIV disks?
A: Reduced admin time: you only sort out the LUNs and zones once, when you
create the pool; then you can allocate disk space from the VIOS or HMC.
Q4: It did work on a p5 595 machine with VIOS level 2.2.1.4
A: Thanks for the information but it is NOT SUPPORTED ON POWER5
Q5: Will it be fixed that 100% storage pool usage makes the Storage Pool
impossible to manage, so that it needs to be recreated?
A: Not at all. Just add a new LUN to the Shared Storage Pool and carry on. Also
worth looking at removing unused LUs and snapshots, as they take up space.
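As a sketch of that "just add a LUN" step, run as padmin on any VIOS in the cluster; the cluster name "galaxy", pool name "atlantic" and disk name here are illustrative, not from the presentation:

```shell
# Add a fresh LUN (hdisk5) to the shared storage pool and carry on.
# Cluster/pool/disk names are hypothetical examples.
chsp -add -clustername galaxy -sp atlantic hdisk5

# Confirm the pool grew:
lssp -clustername galaxy
```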
Q6: What would be some practical use cases for this, in today's data centre where
storage has already become very strong?
A: If your SAN team can't react to urgent disk space demands in, say, less than an
hour. Your disks don't do Thin Provisioning or snapshots. You want to reduce System
Admin time. You don't have a strong dedicated SAN team, i.e. you don't have a large
estate or large team. You want some HA capability without HACMP or PowerHA.
Q7: No SCSI reservation means no PowerHA support?
A: Not really. PowerHA on an SSP is supported as far as I know - also you don't have
to recover the disks because they are already available on the target VIOS. Just
assign the LU to an LPAR - you would not want PowerHA to fiddle with SSP LUNs,
and as far as I know PowerHA does not support LUs, but I could be out of date on
that.
A2 (JDA) – I checked with our PowerHA expert and found that PowerHA is supported
with SSP. Here is a link to the flash announcing the support:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10806
Q8: What version of AIX backs VIOS 2.2.2.2 - is it AIX 7100?
A: Why do you care, and why ask that here? My VIOS 2.2.2.2 reports 6100-08-01-1245.
Q9: After you clean the hdisk, do you need to rerun the cluster create command to
have them join?
A: Eh? Of course - if the disks were not acceptable for the SSP then they are rejected
and ignored; once you fix them up, run the command again to add them.
Q10: What is advantage of SP over NPIV - why to take additional load at VIOS?
A: Reduced admin time: you only sort out the LUNs and zones once, when you
create the pool; then you can allocate disk space from the VIOS or HMC.
Q11: What is the link for the enterprise systems 2013 as shown initially?
A:
http://www-304.ibm.com/jct03001c/services/learning/ites.wss/zz/en?pageType=page
&c=S438082P94220Z18
Q12: Is LPM possible if the other frame's VIOS is not part of the SSP cluster? It seems
not - with NPIV, is it possible?
A: Not if the VIOS is not part of the SSP, but fixing that, or using NPIV, both require
SAN and zoning work. With SSP you do that work once only; for NPIV you have to do it
every time.
Q13: On dual VIOS, should we always set the no_reserve flag on SSP's disks ?
A: All SSP LUNs are set to no_reserve as they are ALL online to ALL VIOS in the
SSP – the command to add the disk sets the no_reserve for you in the latest SSP
release. So stop worrying about it.
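If you would rather check than worry, a quick padmin sketch (the device name is a hypothetical example) shows the reservation policy on an SSP LUN:

```shell
# Run as padmin on a VIOS; hdisk4 stands in for one of your SSP LUNs.
# Expect the reserve_policy attribute to come back as no_reserve.
lsdev -dev hdisk4 -attr reserve_policy
```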
Q14: Can the migration from a "bad" LUN to a "good" one be done dynamically?
A: If by “bad” you mean the LUN is off line and not accessible then no – SSP when
you ask it to replace a LUN has to move the LUN blocks. If the blocks that are
actually allocated can be read once then the LUN can be replaced.
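A hedged sketch of that replacement, assuming the SSP phase 3 chsp -replace syntax (verify against your VIOS documentation) and illustrative disk names:

```shell
# Run as padmin: moves the allocated blocks from the old LUN to the new one
# while client LPARs stay online. Cluster/pool/disk names are hypothetical.
chsp -replace -clustername galaxy -sp atlantic -oldpv hdisk3 -newpv hdisk9
```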
Q15: Does SSP Support Tiered Disk or multiple storage pools for different disk tiers?
A: Not in this release nor planned for this year … currently … but plans change.
There are features for later this year that would give you what I think you want
multiple pools for.
Q16: How's this different from Storage level multipathing?
A: SSP can operate with multipathing - multiple paths from a VIOS, or over
dual VIOS. If you mean SVC-style mirroring then it is similar but does not need an
SVC.
Q17: Can the LUN be replaced hot? If so what is the I/O impact on the VIO client
using storage from the SSP?
A: The VIOS will copy the used blocks from one disk to the other. This is just regular
I/O, so the CPU impact is the same as normal disk I/O.
Q18: Does SSP work with LPM?
A: Yes – this was covered later in the Webinar, to another machine with VIOS in the
same SSP.
Q19: How about mirroring? Is it possible to set up a pool that is SAN-mirrored (2
SAN boxes)? If not, could I set up 2 pools and mirror at the AIX level?
A: The SSP just sees a LUN; it is unaware of lower-level fiddling. There is only one
pool in each SSP. Theoretically your AIX could "talk" to two VIOSes that are in
different SSPs, but that is a complex environment and SSP is about being simple
(less man-power) and quick. Wait till Q4 2013.
Q20: Does storage pool support LPM?
A: So, everybody should have caught that, LPM works as long as the target VM is
associated with a VIOS in the same SSP as the source.
Q21: Is performance using LPM with SSPs disks faster than LPM with vSCSI/vFC ?
A: Probably not much difference, as the LPM time is mostly copying memory from one
VIOS to the other and is not really based on the disks (they are simply
reconnected).
Q22: The same can be done with NPIV in a much easier way :-)
A: I guess you are a SAN expert - I like connecting, say, a dozen disks to the SSP,
then very simple allocation of space. I would rather run the mkbdsp command 200
times than carve out a LUN and zone it 200 times.
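Running mkbdsp 200 times is also scriptable, which is part of the appeal. A minimal sketch, assuming a cluster "galaxy", pool "atlantic" and vhost adapters already defined (all names are hypothetical):

```shell
# Run as padmin: create a 16 GB thin-provisioned LU per client and map
# each one to its vhost adapter. All names here are illustrative.
for i in 0 1 2 3
do
  mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_lpar$i -vadapter vhost$i
done
```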
Q23: I saw a bullet in a previous slide that you cannot have swap on the SSP disks
so how can you do this?
A: That was the Active Memory Sharing disk space, not regular OS swapping.
Q24: I believe the experience with SSP must be smoother than NPIV LPM, as it has
less strict zoning requirements and the target VIOS is already updated on disk structures
A: I agree 100% - the disks are already online so there can be no zoning issues
during the LPM.
Q25: Can we do migratepv on rootvg disk?
A: Absolutely, YES, but don't forget the bosboot and bootlist commands afterwards -
in fact migratepv reminds you.
Q26: Should the dedicated LAN be on separate adapters or just separate ports?
A: That is up to you - SSP just needs access. It makes some sense, if you have many
adapters, to keep things simpler by separating your SSP and non-SSP I/O.
Q27: We run multi VIO servers with test/dev & prod on each. We also have
everything zoned so any VIO Client can move anywhere along with storage. How
would this benefit me? Sounds like adding another layer of virtualization with no
ROI
A: On the zoning: many sites do not zone everywhere, for fear of accidentally
connecting a LUN which is already in use elsewhere. Hence all the comments on
zoning. If you have a simple-to-operate way to rapidly fulfil new disk requirements,
full marks. SSP offers features you may not need.
Q28: Does SSP support concurrent access to the same virtual disk from different
client LPARs, potentially distributed over multiple VIOS / servers on different physical
machines?
A: Yes and No. You could configure this but you could NOT use the disk for regular
file system access and multiple machines doing this will corrupt the disk in
microseconds.
Q29: Follow up to the above question: Assumes of course application handles the
concurrent access.
A: This is blue sky research and at your own risk. Get in touch with the developers if
you have a clear reason to do this (the questioner is an IBMer).
Q30: Is the feature called "Remote Restart" available and ready for mainstream
adoption? I recall some early opinions on it were not very favourable due to complex
requirements.
A: Yes, it's available now. No idea on adoption, as it's only been out a few months.
This is "off topic".
Q31: Since we can only have a single storage pool per VIOS, is there any way to
group types of disks in a storage pool and provision them out to the clients separately
(ie Tier1, Tier 2, replicated, non-replicated, etc)?
A: Not with SSP phase 3. Note the 3.
Q32: Can we use Flashcopy to keep an alternate boot disk ready when we need it?
A: Eh? This is not a Flashcopy topic. If you mean using the SSP snapshot before
an upgrade, then yes, you can roll back to the snapshot in seconds to recover a
failed upgrade. The stopping and starting of the LPAR takes seconds; the rollback is
sub-second.
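That snapshot-before-upgrade flow can be sketched with the padmin snapshot command; cluster, pool, LU and snapshot names are hypothetical examples:

```shell
# Take a snapshot of the client's LU before the upgrade:
snapshot -create pre_upgrade -clustername galaxy -spname atlantic -lu vdisk_lpar1

# If the upgrade fails, stop the LPAR and roll back:
snapshot -rollback pre_upgrade -clustername galaxy -spname atlantic -lu vdisk_lpar1

# List existing snapshots in the pool:
snapshot -list -clustername galaxy -spname atlantic
```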
Q33: Will there be a TSM that we could install on one VIO to allow backing up full
images of LPARs, like TSM4VE on a vCenter ?
A: Please contact your TSM IBM representative for TSM feature support. I would
expect so, but demand from customers really helps.
Q34: Would Nigel recommend putting the rootvg on the same storage pool as the
other data?
A: Absolutely YES – SSP is there to make your life simple.
Q35: How efficient is the management of space in the storage pool?
A: I don’t know a number for this but it is allocating 64 MB chunks, it does thin
provisioning.
Q36: How do you balance different SAN policies - like FC, SATA and low-speed SCSI - which is easily managed at the SAN level compared to SSP3?
A: SSP is very simple to operate, fast (due to SAN) and a far better use of disk space
than half used or less internal local disks scattered across many machines. Don’t
forget LPM is impossible with local disks.
Q37: A statement was made earlier that you cannot expand a LUN in the SSP. Not
very efficient in disk utilization from a storage perspective.
A: You currently can't expand a LUN in the SSP - just give it another LUN and SSP
will optimise the I/O to use it effectively. I don't understand the second part of the
question at all, but I guess it would require making a LUN and adding it to the SSP
zone. You should expect adding LUNs to the SSP to be a rare event, as you have
"pooled" the unused space rather than having most LUNs under-used on specific
LPARs.
Q38: Any monitor tools or perf logs special for SSP?
A: You have lssp from any VIOS of the cluster, and a full pool or high
over-provisioning raises an alert. If you have ideas about what you would like to see,
please email Nigel Griffiths or Joe Armstrong.
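The pool-usage alerting mentioned above can be sketched with the padmin alert command; the 75% threshold and the cluster/pool names are assumptions for illustration:

```shell
# Warn when the pool passes 75% used (names and threshold are illustrative):
alert -set -clustername galaxy -spname atlantic -type threshold -value 75

# Check the current alert setting:
alert -list -clustername galaxy -spname atlantic
```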
Q39: Do the storage LUNs in the pool need to be the same DISK/SAN type? Can
they be mixed Disk Types?
A: They are just LUNs to the SSP - they can be on different disk subsystems from
different disk vendors. Any LUNs supported by the VIOS will work OK, but I can't say
that is a good idea. KISS.
Q40: Can the size of the created LUN's be resized dynamically in the storage pool?
Can the size of the pool be changed dynamically?
A: You can't expand or shrink the LUNs in the pool, but you can add LUNs to make
the SSP pool bigger. Later we hope to have a pool-shrink feature that removes a
LUN, but heck, why would you do that?
Q41: Any way to tell how LU is distributed across the pool?
A: Yes absolutely – it is evenly distributed across all the LUNs in the SSP.
Q42: I have noticed in the later versions of SSP that the SSP cluster disappears with
no way to recover. Have you heard of this issue? Has the issue been fixed?
A: No, I have not had any problems with my 3-year-old SSP nor heard of a
"disappear" problem - please work with IBM Support and DON'T JUST GO POKING
ABOUT. I did have SSP refuse to do things that I later worked out were user errors,
due to me having the cluster down on that node, i.e. it stopped me doing harm.
Q43: Is there a list of supported 3rd-party SANs (EMC, 3PAR, etc.)?
A: All disk subsystems that are currently supported for use with the VIOS are
supported for SSP - check your VIOS documentation or ask your IBM rep.
Q44: Is there any way to use VLAN tagging over the cluster network?
A: Check the Readme files for the fine details. The VIOSes need 100% reliable
communication pathways between themselves - I would assume if that is working
then SSP will work. Nigel's network is very simple.
Q45: PowerHA for i is not supported with SSPs
A: Thanks Sue Baker for the clarification.
Q46: Can you, within an SSP, create multiple tiers? We use EMC VMAX here and
we have Gold, Silver and Bronze pools. So if you create a vdisk, can you set which
hdisks it gets allocated on?
A: Nope. If you have Gold, Silver and Bronze VIO servers then you could use
different SSPs - that would work.
Q47: What do you have against Informix?
A: Nothing - Informix was my first RDBMS, so I have a soft spot for it!
Q48: How should the vSCSI be tuned on LPAR clients?
A: Good question – up the queue depth and go home early.
Q49: Can you turn off Thin provisioning on the server since most arrays provide thin
provisioning?
A: Just use -thick when you create the LU.
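A sketch of the thick-provisioned variant; cluster, pool, LU and vhost names are hypothetical:

```shell
# Thin is the default; -thick pre-allocates all the LU's blocks at creation time,
# so the pool cannot later run out of space under this LU. Names are illustrative.
mkbdsp -clustername galaxy -sp atlantic 20G -bd vdisk_db1 -vadapter vhost2 -thick
```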
Q50: Also, any more deep documents, like how the Storage pool is implemented, like
storage server or client architecture?
A: Nope – we don’t want you to worry.
Q51: Hello, Can you please address storage replication to remote sites for DR ...
thank you?
A: The plan is for this feature to be handled by SSP - in the meantime you could
mirror with SVC or similar. Don't forget we don't have a client-LPAR-to-LUN
relationship here - recovering to a remote site would mean stopping ALL the local
SSP clients and downing the SSP VIOS access before starting again at the remote
end.
Q52: If you use global mirror and flash copy, how would that work in SSP?
A: No idea - you need to talk to the Global Mirror and Flashcopy vendors or a specialist.
See above Answer about remote sites for DR.
Q53: And how to tune it for best performance?
A: Good news - No tuning is available for SSP. I guess you should think about the
regular SAN performance for the SSP LUNs.
Q54: When should I use dedicated fibre channel vs vSCSI for the clients?
A: That is up to you and depends on your needs. Personally, the reduced admin
means I am moving practically everything to SSP. If you have extreme disk I/O
LPARs like vital RDBMS then why not keep them on their current setup and probably
NPIV – that assumes they are LPM ready!
Q55: Does SSP work better with FCoE (Power 7 Blades)? We tried it back when on
2.2.1.4 and had to revert back to traditional usage due to memory leaks and VIOS
crashing.
A: I have not heard of any such problems. I am surprised this was not resolved
another way than stopping SSP, or perhaps the VIOSes were undersized. Sorry, but I
don't know further details. Do you have test machines to retry this combination in your
environment?
Q56: Does the VIO cluster remain up when the repository disk is down? - - and - - Q: If
you lose the repository disk, is there an easy way to restore it?
A: Yes, the VIO cluster remains up when the repository disk is down. However, no
changes to the environment can be made until you hand the SSP an alternative LUN
to use for the repository.
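Handing the SSP an alternative repository LUN can be sketched with chrepos; the syntax below is from memory of VIOS 2.2.2 and the disk names are hypothetical, so verify against your VIOS documentation first:

```shell
# Run as padmin: replace the failed repository disk hdisk2 with hdisk6.
chrepos -n -r +hdisk6,-hdisk2
```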
Q57: Have you up to date information about official support of SSP by OEM vendors.
Example: is SSP now officially supported by EMC for their storage devices with
PowerPath and EMC Thin provisioning?
A: SSP is supported by IBM on all disks that are supported by the VIOS. IBM does
not speak on the support levels or limitation of other vendors.
Q58: Can we use NPIV for SSP?
A: No, at the present time SSP storage must be presented to client LPARs via
vSCSI. Actually, that will NEVER happen: the point of NPIV is that SAN packets are
only routed by the VIOS to the right client LPAR, so the NPIV LUN is not online in the
VIOS.
Q59: Is this correct? No NPIV in SSP
A: From a client LPAR perspective, all access to SSP volumes must be vSCSI. The
VIO server itself must have a physical fibre connection to the SAN.
Q60: How do you perform a backup of a large amount of data, which in a classical
setting is performed by splitting a mirror and performing a Flashcopy of the split
disk?
A: This will not work for SSP because the data of a LPAR is spread across all SSP
LUNs. It is on the future feature list! No commitment implied.
Q61: Are there any performance tools specific to SSP? ie. How the pool is
performing.
A: Just regular VIOS commands and lssp.
Q62: Are there any benefits in clustering VIO servers if you *aren't* using SSPs?
A: That would NOT be supported – don’t do it.
Q63: Are there VIO sizing guidelines? Would creating separate VIO servers for SSP
make any sense?
A: See page 5 of the presentation materials. Basic guidance is start with no less
than 1 entitled processor and 4GB of memory. Use VIOS Advisor to determine
need for additional processors or memory. Nigel does NOT recommend yet more
VIOSes being created; customers have made life harder by having too many!
Q64: What error will a client VM get if writing and pool is full from over commit?
A: It gets an I/O failure from the write() system call.
Q65: Does SSP3 do any de-duplication?
A: Nope. Possible future feature. No commitment implied. The LU cloning feature,
when using Systems Director/VMControl and SSP, would reduce duplication
considerably.
Q66: Do shared storage pools make VIO updates more difficult?
A: SSP now supports rolling updates. In the past, guidance was given to shutdown
or LPM the client LPARs while upgrading the VIO server. You should plan to update
all the VIOS of an SSP together – like over a few days if you can’t do them the same
day. Only when all VIOS are updated do the new features become active.
Q67: SSP with NPIV hosts - are the SSP LUs mapped via the cluster VIOSes while
NPIV hosts carve their disks there, and not directly from the SAN, I guess?
A: If a VIO server is serving up both NPIV and SSP to client LPARs, the storage
LUNs will be different. You cannot create an NPIV connection to a LUN that is part
of the SSP.
Q68: Instead of adding an additional virtual disk to an LPAR, could you instead
increase the size of an existing root disk?
A: Nope - not in the current SSP release.
Q69: Are Shared Storage Pools used with VSCSI? How do you allocate space to a
particular vhost
A: Yes, vSCSI is used to connect the storage to a specific client LPAR via the vhost
adapter. Tying the GBs to the vhost is a parameter on the mkbdsp command. See
the mkbdsp slide.
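The mkbdsp slide is not reproduced in this document, but the shape of the command is sketched below, with hypothetical cluster, pool, LU and vhost names:

```shell
# Run as padmin: create a 16 GB LU and map it to client adapter vhost0
# in one step. All names here are illustrative examples.
mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_lpar1 -vadapter vhost0

# List the LUs in the pool to confirm:
lssp -clustername galaxy -sp atlantic -bd
```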
Q70: What other options are available; thin ...?
A: Or Thick with mkbdsp –thick option.
Q71: When adding disk to LPAR (i.e. gold6c) via VIOS command line, does the
command line have to be replicated on all VIOS serving the lpar (i.e. dual VIOS
configuration)?
A: In the demo Nigel showed a pull down that listed the VIOS. In his case only one
was listed, but he said both could be checked and it would take care of both with the
one GUI click.
Q72: Yes, but what if done via the command line as he demonstrated?
A: The command-line version of mkbdsp needs to be run on each VIO server that is
part of the cluster and will serve disk to the client. On VIOS #2-16, the command
would be modified to state only the vSCSI device name and the vadapter - no size
should be specified.
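A sketch of the two halves of that dual-VIOS mapping (all names are hypothetical): the LU already exists after the first command, so the second gives only the LU name and the vadapter.

```shell
# On VIOS #1: create the LU with a size and map it to the client.
mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_lpar1 -vadapter vhost0

# On VIOS #2..#16: map the existing LU - note there is no size argument.
mkbdsp -clustername galaxy -sp atlantic -bd vdisk_lpar1 -vadapter vhost0
```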
Q73: If LPAR has 1 VIOS can you LPM to server with dual VIOS? or can you only go
1 - 1 and 2 -2 VIOS setups?
A: With SSP, the requirement is to LPM from one VIO to another within the VIO
cluster for the SSP. If it is single VIO on server1 and dual VIO on server2, only one
of the two VIO servers will be used. This is no different to LPM with vSCSI LUNs or
NPIV.
Q74: Is there anything in Futures for more than one shared pool in a cluster?
A: No comment on unannounced features.
Q75: How about mirroring? It's possible to setup the pool that is SAN mirrored (2
SAN boxes)?
A: You are not thinking clusters. You would have to stop all the client LPARs of the
SSP and stop the VIOSes accessing the LUN before you could run on the other copy.
This might not work for you, or might be good. Either way, it's not an SSP function!
Q76: Can a VIO join the cluster if it does NOT have visibility to ALL SSP disks in
use?
A: Nope. Every LU is spread across all LUNs in the pool so you would not get far.
Q77: Does shared storage pools play nice with Veritas?
A: If Veritas is supported on the VIOS then yes. You should be asking Veritas not
IBM for their support statement, IBM can’t make statements on their behalf.
Q78: Since data is chunked in 64 MB pieces across all the disks in the pool, is there
a performance issue with how and when you add additional storage to the pool?
A: It is chunked at 64 MB (this is the smallest size allocated) but you can do disk I/O
at all the regular sizes as normal. I don't see a performance hit here.
Q79: Will Nigel be in Orlando Technical University in November?
A: Yes, that is the plan and talking about SSP3 and 4 with Linda Fanders sharing a
double length session.
Q80: Before doing the SSP, do you suggest we have to do VIOS clustering first?
A: NO, NO and NO. The VIOS padmin user runs the "cluster -create" command, which
creates the SSP cluster - as you might expect. You should NOT be doing any cluster
work on the VIOS beforehand; in fact, I am horrified by the question. If you play with
clustering on the VIOS in any non-SSP way, I think you have no support from IBM
Support - fiddling on the VIOS is not allowed.
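The one-command cluster creation referred to above can be sketched as follows; the cluster, pool, disk and host names are hypothetical examples:

```shell
# Run once as padmin: hdisk2 becomes the repository disk, hdisk3 and hdisk4
# become the shared storage pool. All names are illustrative.
cluster -create -clustername galaxy -repopvs hdisk2 \
        -spname atlantic -sppvs hdisk3 hdisk4 \
        -hostname vios1.example.com

# Add further VIOSes to the cluster afterwards:
cluster -addnode -clustername galaxy -hostname vios2.example.com
```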