



Introduction:
I/O Resource Manager (IORM) is an integral software component of the Oracle Exadata Storage Server and runs within Cell Services (cellsrv) on each cell. Each cell disk maintains an I/O queue for each consumer group and for each database.
Exadata Storage Server IORM allows you to govern I/O resource usage among different types of consumers:
- User Types
- Applications
- Databases (Interdatabase / Intradatabase)
- Workload Types
I/O Resource Manager Plans:
Intradatabase Resource Plan – Created and managed with Database Resource Manager (DBRM) inside the database; when you activate a plan with DBRM, it is automatically sent to each storage server cell (see the sketch below).
Interdatabase Resource Plan – Defined and managed directly on each storage cell with CellCLI; it allocates I/O resources among multiple databases.
The category plan, interdatabase plan, and intradatabase plan are used together by Oracle Exadata to allocate I/O resources on each Exadata storage cell.
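For illustration, a minimal intradatabase plan can be built with the DBMS_RESOURCE_MANAGER package. This is a sketch only; the plan and consumer group names (MYDB_PLAN, OLTP_GROUP) are hypothetical:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'MYDB_PLAN',                      -- hypothetical plan name
    comment => 'Sample intradatabase plan');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OLTP_GROUP',              -- hypothetical group name
    comment        => 'Interactive sessions');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'MYDB_PLAN',
    group_or_subplan => 'OLTP_GROUP',
    comment          => 'High-priority work',
    mgmt_p1          => 70);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'MYDB_PLAN',
    group_or_subplan => 'OTHER_GROUPS',          -- mandatory built-in group
    comment          => 'Everything else',
    mgmt_p1          => 30);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Once a plan like this is activated in the database, cellsrv receives it automatically and can enforce it as the intradatabase IORM plan.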
IORM Architecture:
- IORM manages Oracle Exadata I/O resources on a per-cell basis.
- It schedules incoming I/O requests according to the configured resource plans.
- The goal of IORM is to fully utilize the available disk resources in Oracle Exadata.
- IORM intervenes only when more than one consumer group or database is actively issuing I/O.
- IORM controls and manages the I/O queues of physical disks only; it does not manage flash-based grid disks or requests serviced by Oracle Exadata Smart Flash Cache.
IORM Objective:
- Off: Default setting; IORM does not arbitrate I/O resources.
- Low_latency: Useful for OLTP workloads; limits the number of concurrent I/O requests.
- High_throughput: Useful for DSS and data warehouse workloads.
- Balanced: Useful for mixed workloads.
- Auto: IORM decides the best objective based on the active plans and workloads.
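The objective is set per cell with CellCLI; a minimal sketch:

CellCLI> alter iormplan objective = auto
IORMPLAN successfully altered
CellCLI> list iormplan attributes objective

In the demonstration below, the objective attribute is left at its default; the cells report it as basic.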
Enabling I/O Resource Management – Intradatabase Plan
- Enable manually – set the database’s RESOURCE_MANAGER_PLAN initialization parameter.
- Enable automatically – create a Scheduler window and attach a resource plan to it (a sketch of both approaches follows this list).
- Activate the IORM plan on each Oracle Exadata cell.
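A minimal sketch of the two database-side approaches (the plan name MYDB_PLAN and the window settings are hypothetical):

SQL> alter system set resource_manager_plan = 'MYDB_PLAN' scope=both;

BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW(
    window_name     => 'NIGHT_WINDOW',            -- hypothetical window name
    resource_plan   => 'MYDB_PLAN',
    repeat_interval => 'FREQ=DAILY;BYHOUR=22',
    duration        => NUMTODSINTERVAL(8, 'HOUR'),
    comments        => 'Switches the resource plan automatically every night');
END;
/

When the window opens, the Scheduler sets RESOURCE_MANAGER_PLAN to MYDB_PLAN automatically and restores the previous plan when the window closes.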
Enabling I/O Resource Management – Multiple Databases
- Enable IORM for multiple databases by configuring an interdatabase IORM plan.
- Use CellCLI to define and activate the plan on each Oracle Exadata cell (a dcli sketch follows this list).
- Configure the same resource plan on every Oracle Exadata cell.
- Only one IORM plan can be active at a time on each Exadata cell.
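Because the same plan must exist on every cell, the dcli utility can push the CellCLI commands to all cells in one step. A sketch, assuming SSH equivalence is configured and a cell_group text file lists one cell hostname per line:

[oracle@dbnode ~]$ dcli -g cell_group -l celladmin "cellcli -e alter iormplan active"
[oracle@dbnode ~]$ dcli -g cell_group -l celladmin "cellcli -e list iormplan"

dcli prefixes each line of output with the name of the cell it came from, which makes it easy to verify that every cell is running the same plan.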
Demonstration of IORM:
This demonstration uses two separate databases (Database 1: orcl and Database 2: iormdb) that share the resources of two Exadata cells (cell1 and cell2).
Connect to the orcl database:
[oracle@dbnode ~]$ export ORACLE_SID=orcl
[oracle@dbnode ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@dbnode bin]$ sqlplus /nolog
SQL*Plus: Release 11.2.0.3.0 Production on Fri Oct 18 09:52:03 2013
Copyright (c) 1982, 2011, Oracle. All rights reserved.
SQL> conn sys/oracle@orcl as sysdba
Connected.
SQL> set timing on
Connect to the iormdb database:
[oracle@dbnode ~]$ export ORACLE_SID=iormdb
[oracle@dbnode ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@dbnode bin]$ sqlplus /nolog
SQL*Plus: Release 11.2.0.3.0 Production on Fri Oct 18 09:52:38 2013
Copyright (c) 1982, 2011, Oracle. All rights reserved.
SQL> conn sys/oracle@iormdb as sysdba
Connected.
SQL> set timing on
Creating tablespaces simultaneously in both databases (orcl and iormdb):
In orcl database:
SQL> create bigfile tablespace test datafile '+DATA' size 1G;
Tablespace created.
Elapsed: 00:17:09.37
In iormdb database:
SQL> create bigfile tablespace test datafile '+DATA' size 1G;
Tablespace created.
Elapsed: 00:17:14.73
While the tablespaces are being created in the orcl and iormdb databases, the cells are periodically queried to display the metric showing large-write throughput per cell disk (CD_IO_BY_W_LG_SEC).
Check the metrics – Exadata Storage Server – Cell1:
CellCLI> list metriccurrent CD_IO_BY_W_LG_SEC where metricobjectname like 'CD.*'
CD_IO_BY_W_LG_SEC CD_DISK01_cell1 0.000 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK02_cell1 0.000 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK03_cell1 0.426 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK04_cell1 0.921 MB/sec
Check the metrics – Exadata Storage Server – Cell2:
CellCLI> list metriccurrent CD_IO_BY_W_LG_SEC where metricobjectname like 'CD.*'
CD_IO_BY_W_LG_SEC CD_DISK01_cell2 0.000 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK02_cell2 0.000 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK03_cell2 0.467 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK04_cell2 1.002 MB/sec
Note: Eventually the disks in both Exadata cells reach their saturation point. The two tablespaces were created successfully in the orcl and iormdb databases, and with no I/O Resource Management (IORM) plan in place the execution times are almost identical (17:09 vs. 17:14).
Implementing IORM on cell1 and cell2:
The interdatabase plan below prioritizes orcl with a 90% allocation at level 1, while all other databases (the built-in other directive) share the allocation at level 2.
Exadata Storage Server – Cell1:
CellCLI> alter iormplan dbplan=((name=orcl, level=1, allocation=90),
(name=other, level=2, allocation=10))
IORMPLAN successfully altered
CellCLI> alter iormplan active
IORMPLAN successfully altered
CellCLI> list iormplan detail
name: cell1_IORMPLAN
catPlan:
dbPlan: name=orcl,level=1,allocation=90
name=other,level=2,allocation=10
objective: basic
status: active
Exadata Storage Server – Cell2:
CellCLI> alter iormplan dbplan=((name=orcl, level=1, allocation=90),
(name=other, level=2, allocation=10))
IORMPLAN successfully altered
CellCLI> alter iormplan active
IORMPLAN successfully altered
CellCLI> list iormplan detail
name: cell2_IORMPLAN
catPlan:
dbPlan: name=orcl,level=1,allocation=90
name=other,level=2,allocation=10
objective: basic
status: active
Check the status of IORM plans in Exadata Storage Cells:
[celladmin@cell1 ~]$ cellcli -e list iormplan
cell1_IORMPLAN active
[celladmin@cell2 ~]$ cellcli -e list iormplan
cell2_IORMPLAN active
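While the plans are active, IORM's effect can also be observed per database; a sketch using the database wait metric for large I/Os (DB_IO_WT_LG; exact metric names can vary between Exadata software versions):

CellCLI> list metriccurrent DB_IO_WT_LG

A growing wait time for iormdb relative to orcl would confirm that the cells are throttling the non-priority database.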
Creating tablespaces simultaneously in both databases (orcl and iormdb) after enabling IORM:
In orcl database:
SQL> create bigfile tablespace test1 datafile '+DATA' size 1G;
Tablespace created.
Elapsed: 00:13:07.35
In iormdb database:
SQL> create bigfile tablespace test1 datafile '+DATA' size 1G;
Tablespace created.
Elapsed: 00:16:14.42
Check the metrics – Exadata Storage Server – Cell1:
CellCLI> list metriccurrent CD_IO_BY_W_LG_SEC where metricobjectname like 'CD.*'
CD_IO_BY_W_LG_SEC CD_DISK01_cell1 0.002 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK02_cell1 0.000 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK03_cell1 0.200 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK04_cell1 0.802 MB/sec
Check the metrics – Exadata Storage Server – Cell2:
CellCLI> list metriccurrent CD_IO_BY_W_LG_SEC where metricobjectname like 'CD.*'
CD_IO_BY_W_LG_SEC CD_DISK01_cell2 0.002 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK02_cell2 0.001 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK03_cell2 0.050 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK04_cell2 0.126 MB/sec
CellCLI> list metriccurrent CD_IO_BY_W_LG_SEC where metricobjectname like 'CD.*'
CD_IO_BY_W_LG_SEC CD_DISK01_cell2 0.009 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK02_cell2 0.000 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK03_cell2 0.294 MB/sec
CD_IO_BY_W_LG_SEC CD_DISK04_cell2 0.485 MB/sec
Note: With the interdatabase IORM plan in place, the tablespace creation completes noticeably faster in the priority database than it did without an IORM plan (13:07 vs. 17:09).
The tablespace creation also finishes in the non-priority database, but its execution time (16:14) is slower than that of the priority database. Although orcl had priority, it could not completely saturate the I/O on its own, so the non-priority database was still able to perform a reasonable amount of work.
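If the throttling is no longer required, the interdatabase plan can be deactivated on each cell (or on all cells at once with dcli, as sketched earlier):

CellCLI> alter iormplan inactive
IORMPLAN successfully altered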



