[Why this post: GATHER_SYSTEM_STATS does not gather MREADTIM information from Direct Path Reads]
Oracle can be tuned in many places. One of these is the point where Oracle chooses between reading an index and doing a full table scan.
In this blog I’m not going into depth about all of this, but one of the ‘parameters’ here is setting the MREADTIM system statistic to a ‘real life’ value. This value tells Oracle how fast reading multiple blocks from disk is, including all the overhead in between. How many blocks ‘multiple’ means is defined by the multiblock read count (MBRC) setting. Together with SREADTIM, IOSEEKTIM and MBRC, this influences the execution path Oracle will choose.
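As a sketch (not verified against your workload; the interval and the 10 ms value are placeholders, not recommendations), system statistics can be gathered over an interval with DBMS_STATS, and MREADTIM can be set manually when gathering leaves it empty:

```sql
-- Sketch: gather workload system statistics over a 60-minute interval
-- (MREADTIM may stay empty when only direct path reads occur).
exec DBMS_STATS.GATHER_SYSTEM_STATS('INTERVAL', interval => 60);

-- Set MREADTIM manually; 10 ms is a placeholder, measure your own value.
exec DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 10);

-- Check the current system statistics.
select sname, pname, pval1 from sys.aux_stats$;
```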
Short answer, for balancing to the SCAN listeners from a single client, yes it does (a little).
When you look at a connection string with only a single SCAN ‘host’ in it, it seems logical that the LOAD_BALANCE option is unnecessary, but the Oracle client will replace (expand) this with an ADDRESS_LIST containing the IP addresses it gets from the DNS server. It seems the order of these addresses cannot be assumed to be random. The DNS client can cache the result until the TTL expires, and/or the DNS server might return the addresses in the order configured instead of in a round-robin fashion (round-robin DNS). Nothing guarantees they will be returned randomly. It might look random when you do an nslookup of the SCAN address, but tracing the Oracle client shows it is not.
“There is no standard procedure for deciding which address will be used by the requesting application, a few resolvers attempt to re-order the list to give priority to numerically “closer” networks. Some desktop clients do try alternate addresses after a connection timeout of 30–45 seconds.”
Furthermore (in 11.2), the LOAD_BALANCE option is only on by default in the DESCRIPTION_LIST, not the ADDRESS_LIST: Local Naming Parameters (tnsnames.ora).
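To make client-side randomization explicit rather than relying on DNS order, LOAD_BALANCE=ON can be set in the entry itself. A minimal sketch of a tnsnames.ora entry (the SCAN hostname and service name below are made up for illustration):

```
MYDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = myscan.example.com)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )
```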
Well, I have configured some 30 ‘Data Guard’ setups by now, but I never came across this warning; it seems to be new in 12c:
DGMGRL> validate database cdb1dgsara
Database Role: Physical standby database
Primary Database: cdb1dgkara
Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)
Future Log File Groups Configuration:
Thread #  Online Redo Log Groups  Standby Redo Log Groups  Status
1         3                       2                        Insufficient SRLs
Warning: standby redo logs not configured for thread 1 on cdb1dgsara
Hang on, standby redo logs not configured? I have 4 groups! Continue reading
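The warning goes away once enough standby redo logs exist for the right thread. A hedged sketch (group numbers and size are placeholders; match your online redo log size and use one SRL group more than ORL groups per thread):

```sql
-- Add standby redo log groups for thread 1; size must match the online logs.
alter database add standby logfile thread 1 group 10 size 200M;
alter database add standby logfile thread 1 group 11 size 200M;

-- Verify the thread assignment of the standby redo logs.
select group#, thread#, bytes from v$standby_log;
```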
During the installation of Oracle 12c (12.1) I encountered the following error:
Error in invoking target 'irman ioracle' of makefile
See '/u01/app/oraInventory/logs/installActions2015(...).log' for details.
Inside the logfile the following error is encountered:
INFO: collect2: ld terminated with signal 9 [Killed]
According to metalink doc 2040972.1 this is due to insufficient memory being available (in a VM environment). Continue reading
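Before retrying the link step, a quick way to see how much memory the VM has is to read /proc/meminfo (a sketch; what counts as ‘enough’ depends on your release and is not stated in the note):

```shell
# Print total memory in MB (MemTotal in /proc/meminfo is reported in kB).
awk '/^MemTotal:/ {printf "%d\n", $2/1024}' /proc/meminfo
```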
Linux and Windows…
Quick Reference To Patch Numbers For Database PSU, SPU(CPU) And Bundle Patches [ID 1454618.1]
This document is replaced by Note 2118136.2:
Download Reference for Oracle Database/GI PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases [ID 2118136.2]
Oracle Database, Networking and Grid Agent Patches for Microsoft Platforms [ID 161549.1]
Connecting with the Oracle Instant Client 11g to Amazon Cloud Web Services (amazonaws.com) can result in the following error:
ORA-28547: connection to server failed, probable Oracle Net admin error
I found that while pinging the host it could not be resolved, yet connecting with SQL Developer was possible! Strange…
I downloaded the Oracle Instant Client 12c and that works. It seems the Oracle Instant Client 11g is not able to connect to Amazon Web Services…
It seems that for Oracle VM (<=3.3.1 *) and Oracle Linux (<= 5.10/6.6 *), both the installation ISOs and the installed OSes are not capable of booting when UEFI is used on the bare-metal hardware. I have now seen two configurations where this happened: one using a USB HDD drive capable of presenting an ISO to boot from as CD/DVD (Zalman ZM-VE300), and one using HP iLO4 (HTTP and local ISO) ‘remote’ booting. Continue reading
When one is looking for the OpenSSL fix 1.0.1g for Oracle (Red Hat) Linux 6, the fixed package version is ‘1.0.1e-16.el6_5.7’. I think this is a bit misleading, because OpenSSL 1.0.1e is subject to the bug (CVE-2014-0160). But from the Red Hat site and Oracle MetaLink (MOS Note 1663998.1): “Version openssl-1.0.1e-16.el6_5.7 included a fix backported from openssl-1.0.1g“.
Some simple OS tests can produce a false positive on Heartbleed tests, because they may only look for a version string other than 1.0.1g.
To update to the ‘latest’ OpenSSL version, enable the [OL6_latest] repository and run ‘yum update openssl’:
Setting up Update Process
--> Running transaction check
---> Package openssl.x86_64 0:1.0.1e-15.el6 will be updated
---> Package openssl.x86_64 0:1.0.1e-16.el6_5.7 will be an update
--> Finished Dependency Resolution
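Note that the fixed package still carries the 1.0.1e version string; it is the errata release (-16.el6_5.7) that contains the backported 1.0.1g fix. A sketch, using the two versions from the transaction above, of how coreutils’ version sort shows which build is newer:

```shell
# sort -V orders RPM-style version-release strings; the fixed build sorts last.
printf '1.0.1e-15.el6\n1.0.1e-16.el6_5.7\n' | sort -V | tail -n 1
# → 1.0.1e-16.el6_5.7
```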
Testing for processes using OpenSSL
One can test whether processes are using OpenSSL (this is not a Heartbleed vulnerability test) by issuing one of the following two commands:
$ lsof | awk 'NR==1 || $0~/libssl.so.1.0.1e/'
$ grep libssl.so.1.0.1 /proc/*/maps |cut -d/ -f3 |sort -u |xargs -r -- ps uf
OpenSSL Security Bug – Heartbleed / CVE-2014-0160
Document written at April the 18th, 2014…
Happy blee, uh, testing and patching!
Oracle Direct NFS (dNFS for short) is an NFS Client functionality integrated directly in the Oracle database software, optimizing the I/O (multi)path to your NFS storage without the overhead of the OS client/kernel software.
In this blog post I’ll describe network considerations, configurations and problems I have encountered during set-ups I have done.
dNFS uses two kinds of NFS mounts, the OS mount of NFS (also referred to as kernel NFS of kNFS) and, of course, Oracle’s database NFS mount, Direct NFS or dNFS.
According to [Direct NFS: FAQ (Doc ID 954425.1)] and [How to configure DNFS to use multiple IPs (Doc ID 1552831.1)], a kNFS mount is needed, although Oracle claims it will also work on platforms that don’t natively support NFS, e.g. Windows… [Oracle Database 11g Direct NFS Client White Paper] (I don’t know how yet…).
Because dNFS implements multipath I/O internally, there is no need to bond the interfaces to storage via active-backup or link aggregation. However, it’s good practice to bond the OS kNFS connection:
1 - eth0 -\
- bond0 - OS / kNFS
2 - eth1 -/
3 - eth2 --------- - dNFS path 1
4 - eth3 --------- - dNFS path 2
The schematic above follows [How to configure DNFS to use multiple IPs (Doc ID 1552831.1)]:
“A good solution could be to use bonded NICs (…) to perform the mount and then use unbonded NICs via dNFS for the performance critical path.” Continue reading
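Matching the schematic above, an oranfstab sketch could look like this (the server name, IP addresses and paths are made up for illustration; the local/path pairs map to the two unbonded dNFS interfaces):

```
server: mynfsfiler
local: 192.168.1.10
path: 192.168.1.20
local: 192.168.2.10
path: 192.168.2.20
export: /export/oradata mount: /u02/oradata
```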
Updating Oracle Linux 6.3 to 6.4, or installing 6.4 from scratch, gives a corrupt (blank) VNC remote console when launching the console from Oracle VM Manager:
As discussed in https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U4-en.html#idp513536 and Oracle Support note ‘Corrupted VNC console in PVM guests running Oracle Linux 6.4 on Oracle VM’ (Doc ID 1537278.1), this issue is addressed in ‘X Window System Does Not Run in a PVHVM guest’.
Uninstalling the xorg-x11-drv-cirrus guest driver solves the issue
If you uninstall the xorg-x11-drv-cirrus driver from the guest OS, it will solve this issue.
# rpm -ev --nodeps xorg-x11-drv-cirrus
Reboot the guest OS after uninstalling.