April 4, 2018

Enterprise Manager and the disadvantages of WLS' Dynamic Monitoring Service (lots of metricdump files)

Had an interesting problem this morning with a customer's Windows Server running Enterprise Manager 12c: drive C: - where the EM software is installed - was full. By the way: IMHO, Oracle software should never ever be installed on C: ...

The Problem:

The reason for the full disk was WebLogic's Dynamic Monitoring Service (or a weak housekeeping tool ;-)).
WebLogic has a feature called DMS, which collects domain metrics and saves them to files in the directory DOMAIN_HOME/servers/<ServerName>/logs/metrics for each managed server. Each file is about 600 to 800k in size, which is not that much - unless You have thousands of those files. The filenames are, by the way, something like metricdump*. By default, the files are created in a 3-hour cycle.

The Solution:

For this customer's system, I decided to stop the Dynamic Monitoring Service for the AdminServer as well as for the EMGC_OMS1 managed server. To achieve this, search for the file dms_config.xml (DOMAIN_HOME/config/fmwconfig/servers/<ServerName>) and change the value of 'enabled' to 'false':

  <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="false"/>

Restart EM to activate the change. 
Another option could be to change the value of intervalSeconds to a higher value - for example, once a day (86'400 secs) or twice a day (43'200 secs). Or, in case You have a properly working housekeeping tool running each day, configure it to delete all files older than 'whatever-you-might-think'.
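If You prefer housekeeping over disabling DMS, the cleanup could be scripted roughly like this. The path layout follows the post; the function name, the DOMAIN_HOME value in the example call and the 7-day retention are my own assumptions - adapt them to Your domain:

```shell
# Hedged housekeeping sketch: delete DMS metricdump files older than a
# retention window in every managed server's logs/metrics directory.
cleanup_metricdumps() {
  domain_home=$1
  retention_days=${2:-7}
  find "$domain_home"/servers/*/logs/metrics \
       -name 'metricdump*' -mtime +"$retention_days" -print -delete
}

# Example call (point it at Your real DOMAIN_HOME):
# cleanup_metricdumps /u01/app/oracle/gc_inst/user_projects/domains/GCDomain 7
```

Scheduled daily (cron, Windows Task Scheduler, or Your housekeeping tool), this keeps the metrics directory from filling up the disk.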


March 16, 2018

ODA X6-2M – Expanding Storage can be unsuccessful when impatient

In 'A Brief History of Time' (RIP Mr. Hawking), Chapter 2 is named 'Space and Time' – and this title fits perfectly with a problem I was recently faced with when expanding disk capacity in an ODA X6-2M.

Long story told short

An ODA X6-2M, equipped with the standard disk configuration, ran out of space, and my customer decided to add two NVMes. So he ordered two NVMes – and, by the way, we waited two weeks (!) for delivery.

Next step - mounting the disks. According to the documentation, expanding disk storage is quite an easy job:
  • Put the disks in
  • Set the disks in online state (odaadmcli power disk on pd_02 and odaadmcli power disk on pd_03)
  • Expand storage (odaadmcli expand storage)

In my environment, this resulted in a complete disaster:

  • The disks were already online when trying a 'power disk on',
  • 'expand storage' claimed that the disks already had an ASM signature and aborted,
  • 'odaadmcli show disk' showed the disks, but as UNKNOWN and with the /dev/nvme* names of the existing NVMes, whilst the old ones had, all of a sudden, new names … (!)
  • The command to power the disks off said that the disks were not online, but a subsequent removal of the disks resulted in a system crash …

In short: horror!! I opened a Service Request, tried this, tried that – and finally, after three weeks of fighting with Oracle Support, I started a last try: put the disks in, went to the coffee bar, and returned about an hour later. I started to set the disks to online state (result: the disks are already online) and issued an 'odaadmcli expand storage' – expecting the same result as with every try before … but, surprisingly, the command returned no error, ASM started rebalancing, and everything went fine :-)

Solution / Chapter 2 ... 'Space and Time'

You'll get the disk space when You take Your time ...

Put the disks in, wait at least 15 minutes, power on the disks and issue the 'expand storage' command. Oracle Support's explanation of this behavior was (in my words): the oakd process needs some time to prepare the disks so that expand storage works properly …
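The working sequence can be sketched as a small shell helper. The odaadmcli calls and the 15-minute settle time come from the post; the function name and the OAK_SETTLE_SECONDS override are my own additions:

```shell
# Sketch of the sequence that finally worked on the ODA X6-2M.
expand_oda_storage() {
  # $@ : names of the newly inserted physical disks, e.g. pd_02 pd_03
  echo "Disks inserted - giving oakd time to prepare them ..."
  sleep "${OAK_SETTLE_SECONDS:-900}"   # wait at least 15 minutes
  for disk in "$@"; do
    odaadmcli power disk on "$disk"    # 'already online' is fine here
  done
  odaadmcli expand storage             # ASM rebalancing starts afterwards
}

# Example call:
# expand_oda_storage pd_02 pd_03
```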


ODA Xx – Space Waste When Using ACFS (instead of ASM) For Databases

Disclaimer: I will not discuss the necessity of an ACFS file system to store database files, nor compare ASM and ACFS or their benefits. What to use / what fits best in Your environment is up to You, folks … ;-)

Pretty sure this is a well-known problem for lots of people – but for all others:

When creating a database using ACFS as the storage option, a file system of 100GB is created. This 100GB is not a configurable value, by the way. The result: when running lots of small databases, using ACFS wastes a lot of disk space, as every database's file system occupies 100GB.
Oracle Support's answer to that is: "In the future releases you may see the customized option to define the size of ACFS while creating the database". Nevertheless: it is possible to reduce that 100GB file system to an appropriate size. Here's how:

Examples are based on OAK, used db release:

Create the database:

[oracle@eoda03 ~]$ odacli create-database -m -n RCRACFS1 --no-cdb --dbstorage ACFS -dh cce28d1b-01cc-4917-8237-38683d34f53e

… results in (from a FS perspective)

Filesystem                   Size  Used Avail Use% Mounted on
/dev/asm/datrcracfs1-262     100G  2.5G   98G   3% /u02/app/oracle/oradata/RCRACFS1

Reduce the filesystem size to 10GB:

[oracle@eoda03 ~]$ acfsutil size -90G /u02/app/oracle/oradata/RCRACFS1
acfsutil size: new file system size: 10737418240 (10240MB)

Check the filesystem size

[oracle@oda03 ~]$ df -h /u02/app/oracle/oradata/RCRACFS1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/asm/datrcracfs1-262      20G  2.5G   18G  13% /u02/app/oracle/oradata/RCRACFS1

20G?? 100 minus 90 is 20? Possibly yes (you never know ;-)) – on the other hand, and as a better explanation: 20GB could be the minimum size of an ACFS file system.
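If You have to shrink several of those freshly created ACFS filesystems, the step above could be scripted roughly like this. The function name and sizes are illustrative, and note the observation above: acfsutil apparently enforces a minimum (20GB here), so a smaller target will land on that minimum:

```shell
# Hypothetical helper: shrink an ACFS database filesystem to a target size
# in GB, by computing the delta from the current 'df' size (GNU df assumed).
shrink_acfs() {
  mountpoint=$1
  target_gb=$2
  # current size in GB, e.g. '100G' -> 100
  current_gb=$(df -BG --output=size "$mountpoint" | tail -1 | tr -dc '0-9')
  if [ "$current_gb" -gt "$target_gb" ]; then
    acfsutil size -"$((current_gb - target_gb))"G "$mountpoint"
  fi
}

# Example call, matching the resize shown above:
# shrink_acfs /u02/app/oracle/oradata/RCRACFS1 10
```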


Yes! It is that easy :-)


March 5, 2018

EXPDP using an External Password Store - facing a new 'release' of a well known performance issue

After a while without blogging, here is a new post about a very well-known performance issue:

export (exp as well as expdp) is usually slower when using a TCP-based TNS alias instead of setting ORACLE_SID. Remember? ;-)

In a little more detail:
An "expdp system/manager@db directory= ..." takes (usually) more time than an
"export ORACLE_SID=DB; expdp system/manager directory= ..."

But what if You have to use an external password store - a wallet - to avoid clearly readable passwords either in a file or on the command line? One part of the whole procedure is to define the TNS alias in tnsnames.ora - and most of us define a TCP-based alias. This is - IMHO - not the best way of connecting to a local database - IPC or BEQ are way better for that.

So, I solved an expdp performance issue by using a Bequeath-based TNS alias:

Created a new wallet:
mkstore -wrl /u01/app/oracle/wallet -create

Created an alias to connect to the DB:
mkstore -wrl /u01/app/oracle/wallet -createCredential db_system_beq.world system manager
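One step worth mentioning here: the client only picks up the wallet if sqlnet.ora points at it. A typical configuration (the directory matches the wallet created above; adapt it to Your environment):

```
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/wallet)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE
```

SQLNET.WALLET_OVERRIDE = TRUE tells the client to use the wallet credentials for the '/@alias' syntax used below.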

Added a BEQ connect descriptor to my tnsnames.ora:

db_system_beq.world =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = BEQ)
      (PROGRAM = oracle)
      (ARGV0 = oracleDB)
      (ARGS = '(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DB)
    )
  )


Set the environment for my db and issued an expdp:
export ORACLE_SID=DB; expdp /@db_system_beq.world directory= ...


Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Mon Mar 5 15:04:15 2018 elapsed 0 00:00:43 

Result when using a TCP based TNS alias:

Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Mon Mar 5 14:58:03 2018 elapsed 0 00:01:31 

Try it - and post Your results as a comment.


September 26, 2017

News Regarding Post "bind csid (#) does not match session csid (#)"

Problem "bind csid (#) does not match session csid (#)"

After further investigation and analysis, here is some more news regarding the problem with "bind csid (#) does not match session csid (#)".

I wrote: "Reason was - in my case - that an agent, which had connected to a freshly created clone of a production database a little too early, was stuck with the character set that was current at the time of the first connect."

Additional information: 

I found out that the agent used the 'sys' account of the database as the monitoring user. That means after the cloning process had finished and the database was starting, the agent connected to the database using this account. Because sys (or any sysdba) is able to connect to a database that is not in an open state ... You know what I mean: the agent connected much too early, got US7ASCII as the character set, and finally ran into the "bind csid does not match ..." error.


There are more options to fix this problem than mentioned in the first post on this topic:

  • restarting the agent will solve the problem
  • if the database is the result of a recurring cloning process, do not use 'sys' as the monitoring user, but 'dbsnmp'
  • generally: do not use 'sys' as a monitoring user, except for Data Guard
  • after a system restart, start the databases first and the agent afterwards - if You have to use 'sys' as the monitoring user for whatever reason
  • monitor the trace directory to avoid tracefiles filling up the filesystem


August 16, 2017

SQLcl - the SQL*Plus plus a plus ...

"euphoric: intensely happy or confident", @thatjeffsmith, https://youtu.be/ecgijS76jvw ...

Just watched the video from @thatjeffsmith about SQLcl - and it is, indeed, worth the time. After watching the video, I'm pretty sure that SQLcl will replace SQL*Plus within months (if some more people 'spread the word').

Just a few keywords to make you curious:

  • SQLcl is available as a 'standalone' download, but it is also already part of the latest SQL Developer 17.2.0
  • SQLcl needs a JRE to run
  • To start SQLcl use 'sql' - not 'sqlcl'
  • SQLcl includes almost all features SQL*Plus has (simply run Your SQL scripts in SQLcl instead of SQL*Plus)
  • Once started, type help and You'll see all the new commands available. For example:
    • alias - define aliases for often used SQL-Commands
    • cd - change directory when within SQLcl
    • ctas - simply type 'ctas <table> <new-table>' and You'll get the complete syntax for a 'Create Table As Select'. The ctas is loaded into the SQLcl buffer, so You can simply issue a '/' and the code is executed. OK - You may have to edit the result because of ORA-02264 errors (constraint already exists), but that's easily done by typing ed to start the editor
    • SQLcl eases typing of statements: it can autocomplete column and table names. Example: type 'select * from hr.empl<tab>' and it autocompletes to hr.employees. If more than one possibility is available, it lists them all ...
    • DDL - a command to extract the ddl for a given object
    • info - a 'verbose' desc which displays way more information about an object
    • info+ - yes, info+! An even more informational view of an object. Interestingly, neither info nor info+ is shown when issuing the help command. But 'help information' will show how to use them.
    • history - shows the last 100 commands You issued in SQLcl (customizable to any value). Even more: You can use history <index> (where index is the number of a SQL statement in the history) to load a statement into the buffer - execute it by typing /, edit it (ed) or change values using the good old 'c/<old value>/<new value>'

And there is a lot more - repeat, for example, which repeats a statement n times, sleeping t seconds between each execution. Watch the video and then try SQLcl!

"euphoric: intensely happy or confident" - Yes, I am :-)


July 31, 2017

Copyright © Robert Crames' Oracle Database and Middleware Blog | Powered by Blogger