The extended or stretched Oracle (RAC) cluster – part 2 (examples)

for part 1, click here…

In this blog post I will walk through a few extended (stretched) Oracle cluster cases, using examples with different storage, controller, and failgroup configurations, and show where your data block copies are left when a failure happens.

Configuration

Storage and disks

Let's assume you have the following storage units and disks defined:

storage 1                     storage 2
+----------------+   +----------------+
|                |   |                |
| ctrl1          |   | ctrl3          |
|  disk1 + disk2 |   |  disk5 + disk6 |
|  diskA         |   |  diskC         |
|                |   |                |
| ctrl2          |   | ctrl4          |
|  disk3 + disk4 |   |  disk7 + disk8 |
|  diskB         |   |  diskD         |
|                |   |                |
+----------------+   +----------------+

ctrl = controller

Diskgroups

Your diskgroups are defined as follows:

+DATA=disk1+disk2+disk3+disk4+disk5+disk6+disk7+disk8
+RECO=diskA+diskB+diskC+diskD

Failgroups

Next are three cases of failgroup designs and the disks placed in them:

case 1) Failgroups based on storage locations:
FAILGROUP1=disk1+disk2+disk3+disk4+diskA+diskB
FAILGROUP2=disk5+disk6+disk7+disk8+diskC+diskD
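
Translated to ASM DDL, case 1 for the +DATA diskgroup could look something like this. A sketch only: the disk paths are made up, real paths depend on your setup.

```sql
-- Case 1: one failgroup per storage unit (normal redundancy)
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP failgroup1 DISK
    '/dev/storage1/disk1', '/dev/storage1/disk2',
    '/dev/storage1/disk3', '/dev/storage1/disk4'
  FAILGROUP failgroup2 DISK
    '/dev/storage2/disk5', '/dev/storage2/disk6',
    '/dev/storage2/disk7', '/dev/storage2/disk8';
```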

case 2) Failgroups based on controllers:
FAILGROUP1=disk1+disk2+diskA
FAILGROUP2=disk3+disk4+diskB
FAILGROUP3=disk5+disk6+diskC
FAILGROUP4=disk7+disk8+diskD
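
Case 2 as a DDL sketch (again with hypothetical disk paths), grouping the disks behind each controller into their own failgroup:

```sql
-- Case 2: one failgroup per controller (normal redundancy)
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP failgroup1 DISK '/dev/storage1/disk1', '/dev/storage1/disk2'
  FAILGROUP failgroup2 DISK '/dev/storage1/disk3', '/dev/storage1/disk4'
  FAILGROUP failgroup3 DISK '/dev/storage2/disk5', '/dev/storage2/disk6'
  FAILGROUP failgroup4 DISK '/dev/storage2/disk7', '/dev/storage2/disk8';
```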

case 3) Failgroups based on disks (default when you do not specify a failgroup):
FAILGROUP1=disk1
FAILGROUP2=disk2
FAILGROUP3=disk3
FAILGROUP4=disk4
FAILGROUP5=disk5
FAILGROUP6=disk6
FAILGROUP7=disk7
FAILGROUP8=disk8
FAILGROUPA=diskA
FAILGROUPB=diskB
FAILGROUPC=diskC
FAILGROUPD=diskD
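
Case 3 is simply what you get when you omit the FAILGROUP clause altogether (hypothetical paths again):

```sql
-- Case 3: no FAILGROUP clause; ASM puts every disk in its own failgroup
CREATE DISKGROUP data NORMAL REDUNDANCY DISK
  '/dev/storage1/disk1', '/dev/storage1/disk2',
  '/dev/storage1/disk3', '/dev/storage1/disk4',
  '/dev/storage2/disk5', '/dev/storage2/disk6',
  '/dev/storage2/disk7', '/dev/storage2/disk8';
```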

Failures :(

Normal redundancy

With NORMAL redundancy, blocks on disk1 in +DATA and FAILGROUP1 will be mirrored to:
-> disks in the same +DATA diskgroup
-> one copy in a failgroup other than FAILGROUP1

This leaves the mirror copy on one of these disks in each case:
1) disk5 or disk6 or disk7 or disk8
2) disk3 or disk4 or disk5 or disk6 or disk7 or disk8
3) disk2 or disk3 or disk4 or disk5 or disk6 or disk7 or disk8
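
To check which failgroup each disk actually ended up in, you can query the ASM dynamic views:

```sql
-- Show the failgroup placement of every disk per diskgroup
SELECT dg.name AS diskgroup, d.name AS disk, d.failgroup, d.path
  FROM v$asm_diskgroup dg
  JOIN v$asm_disk d ON d.group_number = dg.group_number
 ORDER BY dg.name, d.failgroup, d.name;
```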

After disk1 failure, you can have copies in:
1) disk5 or disk6 or disk7 or disk8
2) disk3 or disk4 or disk5 or disk6 or disk7 or disk8
3) disk2 or disk3 or disk4 or disk5 or disk6 or disk7 or disk8
–> All cases are fine, copies can be found on one of the other disks.

After ctrl1 failure, you can have copies in:
1) disk5 or disk6 or disk7 or disk8
2) disk3 or disk4 or disk5 or disk6 or disk7 or disk8
3) disk2 or disk3 or disk4 or disk5 or disk6 or disk7 or disk8
–> In case 3, this can be an issue, as disk2 is gone as well. If your copy was on that disk, that data is lost.

After storage1 failure, you can have copies in:
1) disk5 or disk6 or disk7 or disk8
2) disk3 or disk4 or disk5 or disk6 or disk7 or disk8
3) disk2 or disk3 or disk4 or disk5 or disk6 or disk7 or disk8
–> In cases 2 and 3, this is an issue, as disk2, disk3 and disk4 are gone as well. If your copy was on one of these disks, that data is lost.

If you have chosen normal redundancy and have more than two failgroups, you must be 100% sure a complete storage unit cannot disappear through a single point of failure. Using more than one controller per storage unit and more than two failgroups will not protect you against storage unit failure.

High redundancy

With HIGH redundancy, one can have three storage units (and use three failgroups), but if you have two storage locations and two controllers per storage unit, you can also survive a SPOF if you use four failgroups with HIGH redundancy! Let's see:

With HIGH redundancy, blocks on disk1 in +DATA and FAILGROUP1 will be mirrored to:
-> disks in the same +DATA diskgroup
-> TWO copies other than FAILGROUP1, each one in a different failgroup

If you only have two failgroups, it's not possible to store the third copy, so case 1 can be left out. Let's assume one copy of the blocks is already on disk3.

This leaves the second mirror copy on one of these disks in each case:
1) —
2) disk5 or disk6 or disk7 or disk8
3) disk2 or disk4 or disk5 or disk6 or disk7 or disk8
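
Case 2 with HIGH redundancy as a DDL sketch (disk paths are hypothetical). With two failgroups per storage unit, the three copies land in three different failgroups, so at most two copies can share one storage unit:

```sql
-- Case 2 with HIGH redundancy: one failgroup per controller
CREATE DISKGROUP data HIGH REDUNDANCY
  FAILGROUP failgroup1 DISK '/dev/storage1/disk1', '/dev/storage1/disk2'
  FAILGROUP failgroup2 DISK '/dev/storage1/disk3', '/dev/storage1/disk4'
  FAILGROUP failgroup3 DISK '/dev/storage2/disk5', '/dev/storage2/disk6'
  FAILGROUP failgroup4 DISK '/dev/storage2/disk7', '/dev/storage2/disk8';
```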

After disk1 failure, you can have copies in:
1) —
2) disk3 + (disk5 or disk6 or disk7 or disk8)
3) disk3 + (disk2 or disk4 or disk5 or disk6 or disk7 or disk8)
–> All cases are fine, copies can be found on one of the other disks.

After ctrl1 failure, you can have copies in:
1) —
2) disk3 + (disk5 or disk6 or disk7 or disk8)
3) disk3 + (disk2 or disk4 or disk5 or disk6 or disk7 or disk8)
–> All cases are fine, copies can be found on one of the other disks, at least disk3.

After storage1 failure, you can have copies in:
1) —
2) disk3 + (disk5 or disk6 or disk7 or disk8)
3) disk3 + (disk2 or disk4 or disk5 or disk6 or disk7 or disk8)
–> In case 2, disk3 is gone, but you have all storage2 disks to back you up!
–> In case 3, this can be an issue, as disk2 and disk4 are gone as well. If your copy was on one of these disks, that data is lost.

Conclusion

A failgroup must consist of a set of disks that can all fail at once without causing trouble. Depending on your storage configuration and redundancy level, you must choose them wisely. Make sure the copies of your blocks do not end up in the same storage unit, or at least not within your most acceptable single point of failure. If you accept a storage unit as a SPOF, the controllers are next in line.

Using one failgroup per disk is a bad idea in an extended (stretched) cluster: you are not protected against storage unit failure in a normal or high redundancy configuration, as the scenarios above show.

Normal redundancy

If you have one copy of your data, make sure this copy is not on the same storage unit. This can only be achieved with two failgroups, one for each storage unit. Unfortunately, you cannot divide two storage locations with two controllers each into four failgroups: if a storage unit goes down, the copy may be in a failgroup on that same storage unit. With normal redundancy, create one failgroup per storage unit; you cannot utilize the separate controllers (see storage1 failure, case 2).

High redundancy

If you have two copies, make sure these copies are not on the same storage unit either. That's why you can have a maximum of two failgroups on one storage location; the third copy must be 'forced' onto the other one. With a maximum of two controllers per storage unit, the easiest rule is: with triple mirroring, create one failgroup per controller and put the disks behind that controller together in a failgroup.

External redundancy (non extended cluster)

You must rely on storage mirroring. All disks can have a failgroup of their own.
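
In that case the DDL is trivial, since ASM does no mirroring itself (paths hypothetical):

```sql
-- External redundancy: ASM does not mirror; the storage array does
CREATE DISKGROUP data EXTERNAL REDUNDANCY DISK
  '/dev/storage1/disk1', '/dev/storage1/disk2';
```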

Extra: Disk group type

New in Oracle ASM 12c is the ability to set the content type of a diskgroup. This can be 'data', 'recovery' or 'system'. From Oracle's ASM 12c new features document:

“The benefit is that the contents of Disk Groups with different content type settings are distributed across the available disks differently. This decreases the likelihood that a double-failure will result in data loss across normal redundancy Disk Groups with different content type settings. Likewise, a triple-failure is less likely to result in data loss for high redundancy disk groups with different content type settings.”
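
Setting the content type is a simple attribute change, along these lines (diskgroup names follow the examples above; it requires a sufficiently high compatible.asm setting):

```sql
-- Give +DATA and +RECO different content types
ALTER DISKGROUP data SET ATTRIBUTE 'content.type' = 'data';
ALTER DISKGROUP reco SET ATTRIBUTE 'content.type' = 'recovery';
```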

Extra: Fail groups for each disk group

Yes, it’s possible to split disk groups with fail groups of their own:

STORAGE1=disk1+disk2+disk3+disk4+diskA+diskB
STORAGE2=disk5+disk6+disk7+disk8+diskC+diskD

+DATA=disk1+disk2+disk3+disk4+disk5+disk6+disk7+disk8
+RECO=diskA+diskB+diskC+diskD

FG_DATA_1=disk1+disk2+disk3+disk4
FG_DATA_2=disk5+disk6+disk7+disk8
FG_RECO_1=diskA+diskB
FG_RECO_2=diskC+diskD
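
As a DDL sketch (hypothetical paths), that layout would be created like this:

```sql
-- +DATA and +RECO, each with one failgroup per storage unit
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg_data_1 DISK
    '/dev/storage1/disk1', '/dev/storage1/disk2',
    '/dev/storage1/disk3', '/dev/storage1/disk4'
  FAILGROUP fg_data_2 DISK
    '/dev/storage2/disk5', '/dev/storage2/disk6',
    '/dev/storage2/disk7', '/dev/storage2/disk8';

CREATE DISKGROUP reco NORMAL REDUNDANCY
  FAILGROUP fg_reco_1 DISK '/dev/storage1/diskA', '/dev/storage1/diskB'
  FAILGROUP fg_reco_2 DISK '/dev/storage2/diskC', '/dev/storage2/diskD';
```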

But because Oracle already keeps block copies within the same diskgroup anyway, this mainly adds extra administration…

