Tuesday, February 28, 2012

Noetix: Improving Performance of the GMF_SLA_Cost_Subledger Template at R12

Looking at the source code behind this view it quickly becomes apparent that, with nine UNIONs, each with a full GROUP BY, you are going to get performance problems when accessing this view unless you have a very finely tuned database. This is especially true if you are querying it using something like the Production_Batch_Number (which is only populated by query position 9).

Probably the quickest (and easiest) way to improve performance is to cut out any of the UNIONs that you aren't actually using. The easiest way to identify these is to edit the source code of the view and;
  • Add a column called "block_number" to each query block containing the number of that block,
  • Remove the GROUP BY clauses (this just speeds up what we are about to do),
  • Wrap the modified view SQL in the query "SELECT DISTINCT block_number FROM ( ... )" (see the sketch below),
  • Execute the query, and note the time it takes.
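To make the shape of this concrete, here is a minimal runnable sketch of the wrapped SQL, using two dummy query blocks against DUAL in place of the real view source;

SELECT DISTINCT block_number
  FROM (SELECT 1 AS block_number FROM dual  -- query block 1 (GROUP BY removed)
        UNION ALL
        SELECT 2 AS block_number FROM dual  -- query block 2
        -- ...and so on, one branch per UNION in the view
       );

Any block number missing from the output identifies a UNION branch that is contributing no rows.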
On our system (with about 12 years of GL data) the query took 5:29 (five and a half minutes) and returned the following results;
Result of SQL
As you can see, query blocks 3 and 5 contribute zero records to the result and, as we're looking at every record each query block returns, we can safely remove them (of course, if at a later date we change the way our e-Business Suite is configured we might need to revisit this, but we'd fully test reporting before making that sort of change, wouldn't we?!).

So now we have two query blocks, 3 and 5, that we think we can remove.

As a sanity check I commented out blocks 3 and 5 in the SQL we had modified with the block numbers and re-ran it to see if we got the same results (and whether performance improved). Sure enough the results were the same and the query returned in 1:26 (down from 5:29). Some of this improvement will be due to the database having cached results from the previous run, but it's still an encouraging sign.

The next step is to identify the Noetix query blocks that we can "comment out". To do this we need to look in the N_View_Query_Templates table and see how the view is currently configured;

select *
  from n_view_query_templates n
 where n.view_label = 'GMF_SLA_Cost_Subledger';

If you look at the VIEW_COMMENT column there is some useful information (from Noetix) about what each union is used for. For blocks 3 and 5 the comments are "RMA Transactions" and "Drop Shipment Transactions", which makes sense as we use neither.

Now, to remove the queries from the view, we need to create a simple script we can call from WNOETXU2.sql, such as;

@utlspon gmf_sla_cost_subledger_xu2
 
UPDATE N_View_Query_Templates N
   SET N.product_version  = '9',
       N.last_updated_by  = 'A Pellew',
       N.last_update_date = TO_DATE('28-FEB-2012', 'DD-MON-YYYY')
 WHERE 1 = 1
   AND N.view_label = 'GMF_SLA_Cost_Subledger'
   AND N.query_position IN (3, 5)
;
 
COMMIT;
 
@utlspoff
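Once this has run (it will be picked up the next time WNOETXU2.sql executes as part of a regeneration) a quick sanity check is to re-query the same table and confirm the two query positions now carry the new product version;

select n.query_position, n.product_version
  from n_view_query_templates n
 where n.view_label = 'GMF_SLA_Cost_Subledger';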

Maybe future versions of Noetix will do this sort of performance tuning automatically.

Monday, February 20, 2012

SSRS: Changing US-format Date/Time Pickers (SharePoint Integrated Mode)

If you have, like us, configured your servers and clients for a non-US locale and then been surprised to see that, when running SQL Server Reporting Services (SSRS), all the date/time pickers still seem to be stuck in US date/time format, the fix is actually incredibly easy.

Basically the issue is that SharePoint uses neither the locale of the server nor the locale of the user; instead it uses the SharePoint site's regional settings to determine the format the date/time pickers will use.

In order to fix the issue you need to be able to administer the site in order to update the Regional Settings.

Go to your SharePoint website;
"Site Actions" on a SharePoint site
Using the "Site Actions" drop down select "Site Settings";
Site Settings in SharePoint
Under the "Site Administration" section select "Regional settings";
Site Settings > Regional Settings in SharePoint
Change the locale (which will probably be set to "English (United States)") to the correct locale.

Date/Time pickers will now be in UK-format (or whatever format you have selected);
SSRS Date/Time Pickers Showing UK-format
NOTE: If you set your personal user settings to a different region, the settings on the site appear to take precedence over your personal settings.

Friday, February 17, 2012

Oracle PL/SQL: Querying Inventory Quantities in R12.

As you have probably gathered from my other posts, the company I work for uses NoetixViews for reporting against Oracle e-Business Suite. However it looks like, at Release 12, Oracle has started providing some reasonably useful APIs to get inventory quantities.

Take a look at the INV_Quantity_Tree_Pub package (under the APPS user) in your database. The "query_quantities" API has the following parameters (name, datatype, direction, and a "Y" where a default is provided);

P_API_VERSION_NUMBER    NUMBER    IN   
P_INIT_MSG_LST    VARCHAR2    IN    Y
X_RETURN_STATUS    VARCHAR2    OUT   
X_MSG_COUNT    NUMBER    OUT   
X_MSG_DATA    VARCHAR2    OUT   
P_ORGANIZATION_ID    NUMBER    IN   
P_INVENTORY_ITEM_ID    NUMBER    IN   
P_TREE_MODE    NUMBER    IN   
P_IS_REVISION_CONTROL    BOOLEAN    IN   
P_IS_LOT_CONTROL    BOOLEAN    IN   
P_IS_SERIAL_CONTROL    BOOLEAN    IN   
P_DEMAND_SOURCE_TYPE_ID    NUMBER    IN    Y
P_DEMAND_SOURCE_HEADER_ID    NUMBER    IN    Y
P_DEMAND_SOURCE_LINE_ID    NUMBER    IN    Y
P_DEMAND_SOURCE_NAME    VARCHAR2    IN    Y
P_LOT_EXPIRATION_DATE    DATE    IN    Y
P_REVISION    VARCHAR2    IN   
P_LOT_NUMBER    VARCHAR2    IN   
P_SUBINVENTORY_CODE    VARCHAR2    IN   
P_LOCATOR_ID    NUMBER    IN   
P_ONHAND_SOURCE    NUMBER    IN    Y
X_QOH    NUMBER    OUT   
X_RQOH    NUMBER    OUT   
X_QR    NUMBER    OUT   
X_QS    NUMBER    OUT   
X_ATT    NUMBER    OUT   
X_ATR    NUMBER    OUT   
P_TRANSFER_SUBINVENTORY_CODE    VARCHAR2    IN    Y
P_COST_GROUP_ID    NUMBER    IN    Y
P_LPN_ID    NUMBER    IN    Y
P_TRANSFER_LOCATOR_ID    NUMBER    IN    Y

I said they have "provided a reasonably useful API", not that they had provided an "easy to use API"! Clearly if you want to get anything useful out of this API you need to build a wrapper (particularly if you want to be able to query quantities in SQL rather than PL/SQL, which I'm assuming you do).

As an example, I'm going to suggest you create a wrapper function called "get_onhand_quantity". It should be pretty clear how to alter this to return some of the other quantities associated with the item. Here is the SQL:

CREATE OR REPLACE FUNCTION get_onhand_quantity(p_organization_id   IN NUMBER,
                                               p_inventory_item_id IN NUMBER,
                                               p_subinventory_code IN VARCHAR2,
                                               p_locator_id        IN NUMBER,
                                               p_lot_number        IN VARCHAR2)
  RETURN NUMBER IS
  -- The various quantity values returned
  v_QuantityOnhand              NUMBER;
  v_QuantityReservableOnhand    NUMBER;
  v_QuantityReserved            NUMBER;
  v_QuantitySuggested           NUMBER;
  v_QuantityAvailableToTransact NUMBER;
  v_QuantityAvailableToReserve  NUMBER;

  -- Monitor the return status of the API
  v_ReturnStatus       VARCHAR2(1);
  v_ReturnMessageCount NUMBER;
  v_ReturnMessageData  VARCHAR2(256);
begin
  INV_QUANTITY_TREE_PUB.query_quantities(p_api_version_number         => 1,
                                         p_init_msg_lst               => fnd_api.g_false,
                                         x_return_status              => v_ReturnStatus,
                                         x_msg_count                  => v_ReturnMessageCount,
                                         x_msg_data                   => v_ReturnMessageData,
                                         p_organization_id            => p_organization_id,
                                         p_inventory_item_id          => p_inventory_item_id,
                                         p_tree_mode                  => apps.INV_Quantity_Tree_PUB.g_transaction_mode,
                                         p_is_revision_control        => FALSE,
                                         p_is_lot_control             => TRUE,
                                         p_is_serial_control          => FALSE,
                                         p_demand_source_type_id      => -9999,
                                         p_demand_source_header_id    => -9999,
                                         p_demand_source_line_id      => -9999,
                                         p_demand_source_name         => NULL,
                                         p_lot_expiration_date        => NULL,
                                         p_revision                   => NULL,
                                         p_lot_number                 => p_lot_number,
                                         p_subinventory_code          => p_subinventory_code,
                                         p_locator_id                 => p_locator_id,
                                         p_onhand_source              => inv_quantity_tree_pvt.g_all_subs,
                                         x_qoh                        => v_QuantityOnhand,
                                         x_rqoh                       => v_QuantityReservableOnhand,
                                         x_qr                         => v_QuantityReserved,
                                         x_qs                         => v_QuantitySuggested,
                                         x_att                        => v_QuantityAvailableToTransact,
                                         x_atr                        => v_QuantityAvailableToReserve,
                                         p_transfer_subinventory_code => NULL,
                                         p_cost_group_id              => NULL,
                                         p_lpn_id                     => NULL,
                                         p_transfer_locator_id        => NULL);

  -- NOTE: v_ReturnStatus is not checked here for brevity; production code should
  -- confirm it equals fnd_api.g_ret_sts_success before trusting the quantities.
  RETURN(v_QuantityOnhand);
END get_onhand_quantity;

The script is available as a Google Document here (which might be slightly easier to see).

Now a quick test script to demonstrate how this works;

select ii.Onhand_Quantity
  from noetix_sys.INVG0_item_onhand_by_lot II
 where ii.ORGANIZATION_CODE = '01'
   and ii.Onhand_Quantity > 0
   and ii.Inventory_Item_ID = 5097
   and ii.lot_number = '155498RET'
   and rownum = 1;

select get_onhand_quantity(302, 5097, 'RETAIN', 37976, '155498RET') Onhand_Quantity
  from dual;

The first step (at the top) is to use the Noetix view to find an item with an onhand quantity greater than zero so that its details can be plugged into the Get_Onhand_Quantity function (it was also slightly comforting to see that both queries returned the same value).

What's it like for speed? That's an interesting question; it's substantially faster (as you'd expect, since it's not going through a view), but as the timings for the first and second queries are 0.093 and 0.031 seconds respectively the difference isn't really significant in the wider scheme of things.

It will be interesting to see if a future version of Noetix uses the API (I'd certainly recommend it!).

Wednesday, February 15, 2012

Open-Source Replacement For Closed-Source Software

It's not often that I'm forwarded something interesting to read in a technical email at work, but this time was a definite exception. The following URL cropped up in a conversation around Drupal;

http://www.opensource-it.com/open_source_alternatives_proprietary_enterprise_software

It's basically a website which allows you to search for a piece of closed-source software, for example Microsoft Project, and see that the open-source equivalents are Open Workbench and OpenProj.

Slightly off-topic I know, but I found this website so incredibly useful I thought I'd pass it on!

OpenSource-IT.com Website

SSRS: Scheduling/Subscribing To An Existing Report

This is a fairly simple blog post with details of how to schedule an existing report so that it will run automatically and send you the output. The process below describes Reporting Services 2008 R2 running in SharePoint Integrated mode; standalone mode is similar but not identical.

It's worth noting that, in order to schedule a report, it's vitally important that the critical parameters for the report (i.e. the reporting period) have default values which automatically roll forward as the months pass; this way you only need to schedule the report once rather than having to re-schedule it each period (if you get the distinction). Your report writer/system administrator will be able to help you if you need parameter default values changing.
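As an illustration of what "rolling forward" means (this is a standard SSRS expression rather than anything from our reports), a reporting-period parameter defaulting to the first day of the current month could use;

=DateSerial(Year(Today()), Month(Today()), 1)

Because the expression is evaluated each time the subscription fires, the default moves forward automatically as the months pass.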

The first step is to go to the report URL;
SSRS "Specify Parameter Values"-default Report Web page

At the top-left is an "Actions" menu, from this drop down select "Subscribe";
Subscription Properties Web Page
On this page you can select how the subscription should be delivered (i.e. email, windows file share, Sharepoint document library) and the frequency that the report should run on and the delivery preferences (i.e. Excel file, web archive, etc).

Generally most of these you will want to leave as default except the "Delivery Event";
Subscription Options: Delivery Event

As you can see you have in essence three options;
  • When a report snapshot is created: This option is only relevant if you are using report snapshots. If you aren't sure (or don't know what one is) then this isn't the option to pick. Usually snapshots will be managed by your system administrator.
  • On a shared schedule: Your administrator can set up "shared schedules" on the server which allow multiple reports to be scheduled within a window and, in the event of scheduled downtime for instance, that window to be moved around. Attaching your report to a shared schedule puts the execution of your report in the hands of the system administrator - this makes things a lot easier for the system administrator, but if your report *must* run at a set time then this is not the option to pick.
  • On a custom schedule: You are most likely to choose this option; it gives you complete free rein over when your report executes and how often.
If you select "On a custom schedule" and then click "Configure" you will be presented with the following screen;
Subscription Options: Defining a Custom Schedule
Changing the frequency at the top allows you to adjust the schedule. If you need a more complex schedule (for example run every Tuesday and on the First of the month) then you will need to configure multiple subscriptions in order to handle your request - there is no limit to the number of subscriptions you can create so this shouldn't be a problem.

Once you've defined your schedule you now need to pick the parameters for your report;
Report Subscription Parameters
As a general rule, unless there is a specific reason not to, I'd recommend you stick with "Use Report Default Value".

When all your parameters have a value click "OK" at the bottom of the web page, you will then be presented with the "Manage Subscriptions" page for this report showing your new subscription;
Manage Subscriptions Page for Report
If you wish to add other subscriptions (i.e. you have a complex schedule which needs them) then you can click "Add Subscription", otherwise click "Close".

NOTE: If you ever want to un-subscribe from a report find the report in the document library, hover over it for a few seconds until the drop down menu appears, and then select "Manage Subscriptions".

Tuesday, February 7, 2012

SSRS: Poor Support For Troubleshooting Of Dataset Errors

With apologies this post is slightly more of a rant than an actual solution to a problem!

We have just completed our migration from Business Objects (6.5 - ancient) to SSRS for reporting against our business's most important system: Oracle e-Business Suite. As you can imagine this has been a major piece of work (300+ reports) and a major amount of time.

It's actually a tribute to SSRS that we have managed this in about eight months with a team of five people: one junior, two consultants, and one full-time member of staff (me), plus another dropping in/out as needed.

What hasn't helped is Microsoft's attitude to error messages. Take this one for example;
Oracle (ORA-01427) Error Message in Report Builder 3
As you can see from the text it's a pretty simple error: a sub-query in DataSet1 is returning more than one row. Easy to fix (or at least debug), you'd think? No. Let's take a look at the datasets associated with the report;
Report Builder 3 Datasets
Now the first thing you'll notice is that there are a lot of Datasets associated with this report - it's a complex report bringing together data from multiple sources (a strength of SSRS).

The second thing you'll notice, when you look back at the error, is that *none* of the datasets are called "DataSet1".

Well that's easy, you'll be thinking: they're clearly numbered from the top down, so just look in the top one (or two - numbering might have started at zero!) and there you'll find your error.

Except no: neither of the top two datasets (PARAMETER_DEFAULTS or dsINVInventoryClasses) has a sub-query in its SQL, so it's impossible for them to be generating the ORA-01427 error being reported.

So now what? Well, I need to go into each of the datasets manually and test them with the parameters the user is using until I find the error (and, due to the vagueness of the reported error, I have to open *all* of the datasets to make sure the error isn't occurring in multiple datasets).

Now you'll also notice, if you look closely at the image above, that all of the datasets are shared datasets (i.e. they are stored separately on the reporting server). In order to open each of these I first need to double-click the dataset to see where it is stored and what it's called;
Report Builder 3: Finding the Name/Location of a Shared Dataset
This gives me the location and name (NPL Parameter Default Values.rsd) of the first dataset.

Now to open it. Report Builder 3 doesn't allow you to have multiple items open in the same application, so in order to look at a shared dataset I need to open a new instance of the application (which is annoying, but not really a big issue; think of it as a minor annoyance on the annoyance scale). So I open a new copy of Report Builder 3;
Report  Builder 3: Getting Started Wizard
That's handy isn't it? I've worked on this report fairly recently so surely Report Builder 3 will have the datasets I've been working on in my Recent files list? Surely? No. Of course not. The recent list contains only the reports I've been working on - not the shared datasets (which, if you think about it, are actually the things that are substantially more likely to change than the reports). In fact if you look closely at the "Getting Started" dialog you'll see that while you can create new datasets you have no way of opening existing ones, as the "Open" option is for reports only as well.

So I click on the "Open" option and am presented with the root directory of the server I'm connected to;
Report Builder 3: Open Dialog

This is literally the one place on the server (if you're running in SharePoint Integrated mode - which we are) where there *can't* possibly be any reports or datasets (because it's impossible to save anything there). It's like Word defaulting to the "My Computer" folder every time you open a document - sure it works, sure you can get to where you need to be, but it's "slightly harder than it needs to be" - a slogan I think should be adopted for Report Builder 3 in general!

Now I go into the "Data Sets" folder and, after a few seconds' wait, I'm presented with a blank dialog. Looking at the bottom you'll see that the "Items of type" field is set specifically to "Reports (*.rdl)", so datasets don't appear; if you want to see them you need to change it to "Datasets (*.rsd)" - and then all the reports disappear. Microsoft: can we *please* have an "All Supported Files (*.rdl|*.rsd)" option?

So now we open the dataset we are interested in (NPL Parameter Default Values.rsd), run our test, and move on to the next one. I'm going to skip a couple now - ones that work and that I'm able to test in Report Builder - and skip on to the next annoyance.

For those of you keeping track this is the dataset called "OE Item Type Demand Planning - Item Orders.rsd" (or dsOEItemTypeDemandPlanning_ItemOrders in the dataset list above).

I'm picking on this one because the report I'm testing allows you to enter multiple values into a parameter;
Report Builder 3: Editing SQL
This is configured as (looking at Parameters under Set Options);
Report Builder 3: Parameters
Now when you try and run the report you get the standard "Define Query Parameters" dialog;
Report Builder 3: Define Query Parameters dialog
Guess what? There is no ability to enter multiple values for a single parameter, so if (as in this specific case) your test conditions include a parameter having multiple values then you need to copy/paste the SQL from your dataset into a tool which does support them (I use PL/SQL Developer) and run the SQL there.
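To illustrate what that copy/paste involves (the table, column, and parameter names here are invented for the example, not taken from the real report), a dataset query like;

select ol.item_type, sum(ol.quantity)
  from order_lines ol
 where ol.item_type in (:Item_Types)
 group by ol.item_type;

has to have the multi-value parameter expanded into a literal IN list by hand before it will run in another tool;

select ol.item_type, sum(ol.quantity)
  from order_lines ol
 where ol.item_type in ('STANDARD', 'KIT')
 group by ol.item_type;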

Needless to say tracking down the specific cause of the error in this specific case took *hours* but it's not the amount of time taken that annoyed me enough to make me document it here; it's the sheer frustration of the process that needs to be followed and the need to use non-Microsoft tools (PL/SQL Developer) in order to do some testing.

So here are my recommendations (for Microsoft);
  • The obvious one: provide enough detail in the error messages for the user to know where the problem is. Did this really need to be said?!
  • I should be able to open shared datasets by right/double-clicking them. I'm happy for it to spawn a new process if necessary, but having me do it manually is just a time-wasting step I don't need.
  • Datasets should receive the same treatment as reports in the Report Builder 3 "Recent" listing.
  • The default open file-type should include both reports and datasets.
  • If I'm using SharePoint Integrated mode it should remember the document library I'm working in (maybe a preference?) and the open dialog should take me right there.
  • If Reporting Services supports multi-value parameters then so should Report Builder 3.
Oh, and just in case you're wondering, I did go to the Connect website and search for "SQL Server Reporting Services" (no results), then "reporting services" (also no results), and finally "SQL Server" (results, but no obvious way of reporting a Reporting Services issue).

Am I really alone in finding this just a little frustrating?!

Footnote: *Please* refrain from telling me how much easier everything is in Visual Studio. We have around 50 non-IT users in the business who write reports; we are quite happy to give them Report Builder but would draw the line at Visual Studio and, in our experience, you either use Visual Studio or Report Builder - mixing them is just a recipe for Visual Studio users overwriting Report Builder users' changes.

Friday, February 3, 2012

Configuring GMail Access (with Push Notification) on an iPhone/iPad/iPod

The first thing you're probably thinking is "why not just use the standard Gmail account type?", and it's a perfectly valid question. I recommend using Exchange because it gives you Push Email - this means that when you get email it "instantly" appears on your device rather than your device checking for email at regular intervals (called polling). Depending on your volume of email, Push Email could save you a considerable amount of battery power (and time).

To enable Gmail with Push Email on an iPhone follow these steps.

Open the "Settings" application go to "Mail, Contacts, Calendars" and click "Add account...";
iPad: "Add Account" Under Settings
As you can see you have a list of options, of which the one you're probably using the most at the moment is Gmail. Click on the top one, "Microsoft Exchange";
iPad: Add An Exchange Account
Enter your details here;
iPad: Add An Exchange Account (completed)
Click on the blue "Next" button at the top right of the entry dialog. There will now be a few seconds' pause while an attempt is made to automatically identify the server you're trying to connect to. Unfortunately this doesn't seem to work, so after a few seconds you are presented with the same information again, this time with a "Server" entry box;
iPad: Add an Exchange Account With Server
Enter the server "m.google.com" and then click "Next" again.

Your account will now be verified by Google and, providing you have entered everything correctly, you will then be presented with options for what to sync with your device;
iPad: Select Items To Sync
You can select any/all of the options (or even none).

When you're done click "Save" at the top right and you'll now have Push Email on your device.

Thursday, February 2, 2012

Oracle PL/SQL: Using DBMS_LDAP To Query Active Directory

It's always useful to be able to retrieve details from Active Directory when working within an Oracle database (I'll do a separate post later on how we use LDAP records to update the employee email addresses stored in Oracle).

Oracle have created a package called DBMS_LDAP. I have to say when I first found it I thought it was new, but it has apparently existed in one form or another since Oracle 9i. It is fairly self-explanatory to use; the main (useful) function is SEARCH_S, which does the querying. The function takes the following parameters;
SYS.DBMS_LDAP.SEARCH_S Parameter List
As you can see, calling this isn't going to be simple given the complexity of the parameters (RAW, TABLE OF ...), but the DBMS_LDAP package has a number of helpful definitions that make it less daunting than it may initially appear. Here is a simple example;

DECLARE
  v_SearchUsername         VARCHAR2(40) := '@@';
  v_LDAPSession            DBMS_LDAP.SESSION;
  v_LDAPAttributeQueryList DBMS_LDAP.STRING_COLLECTION;
  v_LDAPQueryResults       DBMS_LDAP.MESSAGE;
  v_LDAPEntry              DBMS_LDAP.MESSAGE;
  v_BerElement             DBMS_LDAP.BER_ELEMENT;

  v_FunctionReturnValue    PLS_INTEGER;
BEGIN
  :Result := '';
  -- Connect and authenticate against the LDAP (Active Directory) server
  v_LDAPSession := DBMS_LDAP.INIT('@LDAP server@', '@port@');
  v_FunctionReturnValue := DBMS_LDAP.SIMPLE_BIND_S(v_LDAPSession,
                                                   '@domain@\@user@',
                                                   '@password@');
  -- We're only interested in the "mail" attribute
  v_LDAPAttributeQueryList(1) := 'mail';
  v_FunctionReturnValue := DBMS_LDAP.SEARCH_S(
    ld       => v_LDAPSession,
    base     => '@base location@', -- "DC=xx,DC=yy,DC=zz"
    scope    => DBMS_LDAP.SCOPE_SUBTREE,
    filter   => 'samaccountname=' || v_SearchUsername,
    attrs    => v_LDAPAttributeQueryList,
    attronly => 0,
    res      => v_LDAPQueryResults);
  v_FunctionReturnValue := DBMS_LDAP.COUNT_ENTRIES(v_LDAPSession,
                                                   v_LDAPQueryResults);
  -- Pull the first value of the first attribute of the first entry returned
  v_LDAPEntry := DBMS_LDAP.FIRST_ENTRY(v_LDAPSession, v_LDAPQueryResults);
  IF v_LDAPEntry IS NOT NULL THEN
    :Result := DBMS_LDAP.GET_VALUES(
                 v_LDAPSession,
                 v_LDAPEntry,
                 DBMS_LDAP.FIRST_ATTRIBUTE(v_LDAPSession,
                                           v_LDAPEntry,
                                           v_BerElement)) (0);
  END IF;
  v_FunctionReturnValue := DBMS_LDAP.UNBIND_S(v_LDAPSession);
END;


You need to do a little updating to make this work for your configuration (entering the server, an Active Directory user account with permission to do the query, your base location, etc.) but on our system this runs in a tiny fraction of a second.

NOTE: This piece of code only returns the first record found. You will encounter problems if the user name exists in multiple domains, but that issue hasn't arisen for us and I guess most companies will probably be OK.

The v_BerElement is required by the FIRST_ATTRIBUTE call to allow you to iterate through the attributes, but as we're just interested in the first value returned it is declared, used, and never referenced again in the code above.

One final comment: if you are running 11g you are likely to get;

"ORA-24247: network access denied by access control list (ACL)"

when you attempt to run the code (unless your user has already been granted network access). You need to update the access control list to grant access to the connected user. The solution is readily available on Google (but I might create a post over the next few days as I need to do it myself!).
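For reference, a minimal sketch of that grant using the 11g DBMS_NETWORK_ACL_ADMIN package looks something like this (the ACL file name, principal, and host below are placeholders you'd replace with your own values, and it needs to be run as a suitably privileged user such as SYSTEM);

BEGIN
  -- Create an ACL granting the "connect" privilege to the database user
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'ldap_access.xml',
    description => 'Allow LDAP lookups from the database',
    principal   => 'SCOTT', -- the user running DBMS_LDAP
    is_grant    => TRUE,
    privilege   => 'connect');
  -- Attach the ACL to the LDAP server and port
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'ldap_access.xml',
    host       => 'your.ldap.server',
    lower_port => 389,
    upper_port => 389);
  COMMIT;
END;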

SSRS: Far Future Date Issues For Date/Time Parameters

It's probably easiest just to show an example of the problem (the quick solution is at the end).

Create a new report following the instructions here only use the following SQL;

SELECT TO_DATE(:Data_Value)
FROM DUAL

This assumes you are connecting to an Oracle database. It's possible this is purely an Oracle issue, but it's pretty unlikely.

Once the report is completed run it with the parameter '01-JAN-2070';

2070 Parameter Passed Through
In the result you can see we have "1/1/2070 12:00:00 AM" which is exactly what you'd expect - the text value we passed through has been successfully converted to a date on the database server.

Now go into the properties of the Data_Value parameter and change the data type from Text to Date/Time;
Report Parameter Properties: Data type
Re-run the report and enter the same test date;
2070 Parameter Converted to 1970
You'll notice that rather than displaying 2070 the year has been altered in the output to 1970.

In fact if you choose any date after 01-JAN-2050 it will automatically revert to its 1900s equivalent (31-DEC-2049 is OK). Interestingly, that 1950/2049 window is exactly the pivot used by Oracle's RR two-digit-year format (00-49 become 20xx, 50-99 become 19xx), which suggests the parameter value is going through a two-digit-year conversion somewhere along the way.
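You can see the same pivot directly in Oracle (this just demonstrates the RR rule, not the SSRS code path, which we can't see);

SELECT TO_CHAR(TO_DATE('01-JAN-70', 'DD-MON-RR'), 'DD-MON-YYYY') "Year 70",
       TO_CHAR(TO_DATE('01-JAN-49', 'DD-MON-RR'), 'DD-MON-YYYY') "Year 49"
  FROM dual;

This returns 01-JAN-1970 and 01-JAN-2049 - the same behaviour the report is showing.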

Now you might be thinking that this is something specific to the way the machine I'm running on is configured, so as a quick test I've opened Outlook and created an appointment for 2070, which seems to work fine;
Microsoft Outlook: 2070 Appointment
I've then done a search and confirmed the appointment has been saved for 2070 (and not 1970).

To further confirm it's an SSRS issue go back to the report and change the report title to display the parameter value;
Report Builder 3: Adding @Data_Value as the Report Title
Now run the report (with our 01-JAN-2070) parameter value;
Report Builder 3: Date Confusion
So from the look of it the Date/Time picker *isn't* the problem (otherwise the displayed parameter value would be wrong too). That's actually good news as it means we can continue to use the (nice) Date/Time picker and fix the value being passed to the query in code.

Go into the Query Parameters and change;

=Parameters!Data_Value.Value

To;

=Format(CDate(Parameters!Data_Value.Value), "dd-MMM-yyyy")

And now re-run the report;
Report Builder 3: Correct Date Values Displayed
This seems to indicate that the problem (for Microsoft) lies in how entered values are assigned to query parameters. Either way, we have our fix.


Wednesday, February 1, 2012

PL/SQL: When Were Your Tables Last Analysed?

As I'm sure any DBA will tell you, one sure-fire way to get terrible-performing queries is to not update the statistical information Oracle uses to work out how to efficiently execute your query. Luckily Oracle keeps track of when the statistics were last updated for each column in the DBA_Tab_Columns view.

Here is a simple query that will give you a list of tables that have not been analysed in the last week;

SELECT dtc.OWNER, dtc.TABLE_NAME, dtc.LAST_ANALYZED
  FROM dba_tab_columns dtc
 WHERE dtc.OWNER NOT IN ('SYS', 'SYSTEM')
   AND dtc.COLUMN_ID = 1
   AND dtc.TABLE_NAME NOT LIKE 'BIN$%'
   AND dtc.TABLE_NAME NOT LIKE '%$%'
   AND dtc.OWNER NOT LIKE '%$%'
   AND dtc.LAST_ANALYZED IS NOT NULL
   AND dtc.LAST_ANALYZED < TRUNC(SYSDATE) - 7
 ORDER BY dtc.LAST_ANALYZED DESC;

This gives you a list of tables that haven't been analysed recently but, if you're looking at a large complex system like Oracle e-Business Suite (which I am), this isn't terribly useful as it's quite possible for a table to not be analysed simply because it's not in use.

I've written this simple script that produces (via DBMS_OUTPUT) a list of the tables which contain records which haven't been analysed in the last week;

DECLARE
  v_RowCount  NUMBER;
  v_Startdate DATE;
  v_Enddate   DATE;
BEGIN
  dbms_output.put_line('"Table Owner","Table Name","Row Count","Last Analysed Date","Time To Query (Days)"');
  FOR v_Table IN (SELECT dtc.OWNER, dtc.TABLE_NAME, dtc.LAST_ANALYZED
                    FROM dba_tab_columns dtc
                   WHERE dtc.OWNER NOT IN ('SYS', 'SYSTEM')
                     AND dtc.COLUMN_ID = 1
                     AND dtc.TABLE_NAME NOT LIKE 'BIN$%'
                     AND dtc.TABLE_NAME NOT LIKE '%$%'
                     AND dtc.OWNER NOT LIKE '%$%'
                     AND dtc.LAST_ANALYZED IS NOT NULL
                     AND dtc.LAST_ANALYZED < TRUNC(SYSDATE) - 7
                   ORDER BY dtc.LAST_ANALYZED DESC) LOOP
    v_Startdate := SYSDATE;
    BEGIN
      EXECUTE IMMEDIATE 'select count(*) from ' || v_Table.Owner || '.' ||
                        v_Table.Table_Name
        INTO v_RowCount;
    EXCEPTION
      WHEN OTHERS THEN
        v_RowCount := -1;
    END;
    v_Enddate := SYSDATE;
    IF v_RowCount > 0 THEN
      dbms_output.put_line(v_Table.Owner || ',' || v_Table.Table_Name || ',' ||
                           to_char(v_RowCount) || ',' ||
                           to_char(v_Table.Last_Analyzed, 'DD-MON-YYYY') || ',' ||
                           -- date arithmetic returns days, hence the column heading
                           to_char((v_EndDate - v_StartDate),
                                   '0.9999999999'));
    END IF;
  END LOOP;
END;
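Once you've identified a stale table that is in use, gathering fresh statistics is a one-liner with the standard DBMS_STATS package (the owner and table name below are placeholders; on e-Business Suite you may prefer Oracle's FND_STATS wrapper instead);

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');
END;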

SQL Server: Fixing Integer Division Issues

This was an interesting one I wasn't expecting to stumble upon this morning.

At the moment I'm writing some cost-monitoring SQL that looks at the execution logs on a Reporting Services instance and works out, at a rate per hour, how much time and money the business is spending just waiting for reports to execute (fetching the data, processing the data, or rendering it) - the idea being that when we add processors/memory to a box it's nice to be able to show users the direct saving to them.

We also have a target of "6 seconds" so we can also measure how much we're missing our target by (and how much that's costing).

Now in order to do this I was trying to use the SQL;

SELECT 
  DATEADD(dd,0, datediff(dd,0, [TimeStart])) "Day",
  SUM([TimeDataRetrieval])/1000/60/60 "Time Spent Retrieving Data",
  SUM([TimeProcessing])/1000/60/60 "Time Spent Processing",
  SUM([TimeRendering])/1000/60/60 "Time Spent Rendering",
  (SUM([TimeDataRetrieval]) + SUM([TimeProcessing]) + SUM([TimeRendering]))/1000/60/60 "Total Time Spent",
  (6000 * COUNT(DISTINCT [ExecutionId]))/1000/60/60 "Six Second Watermark",
  (SUM([TimeDataRetrieval]) + SUM([TimeProcessing]) + SUM([TimeRendering]))/1000/60/60 * @Hourly_Rate "Total Cost",
  (6000 * COUNT(DISTINCT [ExecutionId]))/1000/60/60 * @Hourly_Rate "Six Second Cost"
  FROM [ReportServer].[dbo].[ExecutionLog2]
 GROUP BY DATEADD(dd,0, datediff(dd,0, [TimeStart]))
 ORDER BY DATEADD(dd,0, datediff(dd,0, [TimeStart])) DESC

As you can see it's pretty simple: it relies on the ExecutionLog2 view (included in Reporting Services as standard), applies a bit of daily grouping, and then works out some times and costs by dividing the times from the view (in milliseconds) by 1000, 60, and 60 to give hours.

The surprising thing (to me anyway!) was when you ran it;
SQL Execution: Integer-only Results?

Everything seemed to be coming back as nice round numbers. Clearly that's not going to be the case with execution times, so what was going on?

After a bit of digging I turned up this useful page;


http://msdn.microsoft.com/en-us/library/aa276874%28v=sql.80%29.aspx

Looking at the section titled "Result Types" (about half-way down) you see the text;

"Returns the data type of the argument with the higher precedence. For more information about data type precedence, see Data Type Precedence."

The effect of this is that if you divide an integer by an integer you can only get an integer back. Apparently, to the designers of SQL Server, one divided by two is zero. In what way did they think this would be useful?!
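You can demonstrate this in one line (runnable against any SQL Server database);

SELECT 1/2                AS int_division,     -- 0 (both operands are integers)
       1/2.0              AS decimal_division, -- 0.500000 (decimal takes precedence)
       CAST(1 AS FLOAT)/2 AS float_division;   -- 0.5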

Looking back at the SQL you can fix it either explicitly or implicitly. I'll provide both below and you can choose which one to go for.


First the explicit fix (using CAST(xxx AS Float));

SELECT 
  DATEADD(dd,0, datediff(dd,0, [TimeStart])) "Day",
  CAST(SUM([TimeDataRetrieval]) As Float)/1000/60/60 "Time Spent Retrieving Data",
  CAST(SUM([TimeProcessing]) As Float)/1000/60/60 "Time Spent Processing",
  CAST(SUM([TimeRendering]) As Float)/1000/60/60 "Time Spent Rendering",
  CAST((SUM([TimeDataRetrieval]) + SUM([TimeProcessing]) + SUM([TimeRendering])) As Float)/1000/60/60 "Total Time Spent",
  CAST((6000 * COUNT(DISTINCT [ExecutionId])) As Float)/1000/60/60 "Six Second Watermark",
  CAST((SUM([TimeDataRetrieval]) + SUM([TimeProcessing]) + SUM([TimeRendering])) As Float)/1000/60/60 * @Hourly_Rate "Total Cost",
  CAST((6000 * COUNT(DISTINCT [ExecutionId])) As Float)/1000/60/60 * @Hourly_Rate "Six Second Cost"
  FROM [ReportServer].[dbo].[ExecutionLog2]
 GROUP BY DATEADD(dd,0, datediff(dd,0, [TimeStart]))
 ORDER BY DATEADD(dd,0, datediff(dd,0, [TimeStart])) DESC

The result;
SQL Server: Floating Point Values Displayed
Now the implicit fix;

SELECT 
  DATEADD(dd,0, datediff(dd,0, [TimeStart])) "Day",
  SUM([TimeDataRetrieval])/3600000.0 "Time Spent Retrieving Data",
  SUM([TimeProcessing])/3600000.0 "Time Spent Processing",
  SUM([TimeRendering])/3600000.0 "Time Spent Rendering",
  (SUM([TimeDataRetrieval]) + SUM([TimeProcessing]) + SUM([TimeRendering]))/3600000.0 "Total Time Spent",
  (6000 * COUNT(DISTINCT [ExecutionId]))/3600000.0 "Six Second Watermark",
  (SUM([TimeDataRetrieval]) + SUM([TimeProcessing]) + SUM([TimeRendering]))/3600000.0 * @Hourly_Rate "Total Cost",
  (6000 * COUNT(DISTINCT [ExecutionId]))/3600000.0 * @Hourly_Rate "Six Second Cost"
  FROM [ReportServer].[dbo].[ExecutionLog2]
 GROUP BY DATEADD(dd,0, datediff(dd,0, [TimeStart]))
 ORDER BY DATEADD(dd,0, datediff(dd,0, [TimeStart])) DESC

The result;
SQL Server: Floating Point Values Displayed

The advantage of the implicit fix is speed of writing, but that speed comes at a cost of clarity: if I spotted a ".0" that didn't seem to be doing anything useful in a piece of SQL I was supporting I'd probably remove it, and in a couple of years' time you'll have forgotten why it was there.

Both work though, so I'll leave it up to you to decide which to use.

NOTE: You'll notice that the two "result" images above are slightly different. My best guess is that this is down to result types rather than integer division: CAST(... AS Float) produces a FLOAT, while dividing by the 3600000.0 literal produces a DECIMAL, and the two types round and display differently. Not quite sure what else to make of this!
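A quick illustration of that type difference;

SELECT CAST(1 AS FLOAT)/3 AS float_result,   -- 0.333333333333333
       1/3.0              AS decimal_result; -- 0.333333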