Tuesday, April 30, 2013

EasyPush: Creating Member Groups To Control iPad Experience

This blog covers how to set up and configure Member Groups to control the distribution of content within the EasyPush iPad application.

The first step is to log in to the administrative console (back-end);

EasyPush: Umbraco Admin Interface
Click on the "Members" section in the bottom left;

EasyPush: Members Section
The bit we're interested in is the node titled "Member Groups". If you click the triangle you'll see the groups already configured on the server. To create a new group right-click the node and select "Create";

EasyPush: Creating a New Members Group
Enter the name you're after and then click "Create".

The new group you've created will then be listed at the bottom of the nodes in the navigator and automatically selected (so you'll see the properties);

EasyPush: Member Group Properties
You'll notice that the only property is the name.

Now you need to configure Members to be part of your new group.

Click the triangle to expand the "Members" node in the Navigation View and then search for a user to add to your new group;

EasyPush: Members

Click on the "Properties" tab (1);

EasyPush: Member Properties (With Group Membership)
Select the group you want to add to the member in the list on the left (2) and then click the double-arrow to move it across (3).

Click "Save" to save the changes.

You now need to look at the other documents I've produced as part of my work with EasyPush (see Easypush Documentation Summary) and create some content that can only be deployed to the new group you've created, to test it.

Thursday, April 18, 2013

MobileIron: Removing a Deployed Application From iPads

This is a quick guide to removing an application you are already deploying via MobileIron. This is useful if you need to remove an application urgently but don't want to delete it, or if, like us, you are using Active Directory groups to control deployment of the application but can't wait for AD replication to catch up before removing it.

Log into MobileIron, click on the APPS & CONFIGS tab, then "App Distribution", select "iOS" in the Platform drop-down, and then click on the "App Name" column so that the applications are sorted in name order;
MobileIron: Deployed Applications
This gives you the list of all the applications (both in-house and recommended applications from the iTunes store). It's probably easier to do a search for the application you're looking to remove. I'm going to remove the "Accellion" application you can see above. Search for the exact application you want to remove: the last thing you want to do is remove only the *current* version of an application and roll all your users back to the previous version.

When you've done the search you will be presented with all the versions of the application you have in the system;

MobileIron: App Distribution - Multiple Versions
Select all the applications (click on the tick on the left), and then in the "Actions" menu select "Remove from Label";

MobileIron: Remove From Label dialog
The top item is labelled as "Partial" because the label is applied to one of the applications I've selected but not the other.

Select all the labels (check box at the top left) and then click "Remove".

You should now start to see the "Devices Installed" count decreasing as each device checks in and MobileIron does its stuff.

Tuesday, April 16, 2013

Lot Genealogy, Part 3: Debugging Lots and Batches

As you might have noticed, I've just updated the LOT_GENEALOGY source code for this project to a new version following the discovery of a bug where the product of a batch is two different item numbers. This is a rare occurrence in our system - which is why it went undetected for so long - but it might not be in yours.

Rather than dwell on that, I thought I'd talk about exactly the process that led to us discovering and fixing the error.

The first report came from the users: a lot which they knew had two item numbers was only showing one. Other lots didn't seem to be affected. This was then traced back to a single batch, 73591. Running this SQL gave us the transactions that affected the cache for that batch. To publish this data I'm obfuscating it (see here);

SELECT HIDEDATA.Hide('LGG.LN', LGG.LOT_NUMBER) Lot_Number,
       HIDEDATA.Hide('LGG.ID', LGG.Item_Description) Item_Description
  FROM LOT_GENEALOGY_GETTRANSACTIONS LGG
 WHERE 1 = 1
   AND LGG.Batch_No = '73591';

The result of this is the following transactions;

LG Transactions For A Single Batch
As you can see, the first thing we have is three "WIP Completion" transactions. Each of these represents the completion of the batch and the generation of an output - in this case Item Numbers 073081 and 07440. However, without knowing what information was put *into* the batch in the first place it's not possible to make use of this information in the genealogy - it has to wait until the "WIP Issue" transactions are processed (each of which represents an ingredient).

The next stage for debugging the cache was to (on a test system!) change the LOT_GENEALOGY_GETTRANSACTIONS view so that rather than looking at *everything* it only looks at the records for a single batch - this is simply done by adding the following where clause to the view SQL;

-- Restrict transactions to a single batch (for debugging)
AND GBH.Batch_No = '73591'

Now we've restricted the input records to just those affecting that batch, we can simply re-build the whole cache - it will take longer to do the delete than the insert. The script to do it is;

  -- Call the procedure

Once that has been completed the entire content of the cache is the single batch we're tracking. The SQL to show it is;

SELECT HIDEDATA.Hide('LGG.LN', LGW.Master_Lot_Number) Master_Lot_Number,
       HIDEDATA.Hide('LGG.LN', LGW.Ingred_Lot_Number) Ingred_Lot_Number,
       HIDEDATA.Hide('LGG.ID', LGW.Ingred_Item_Description) Ingred_Item_Description,
       HIDEDATA.Hide('LGG.ID', LGW.Product_Item_Description) Product_Item_Description
  FROM NOETIX_SYS.LOT_GENEALOGY_WHEREUSED LGW
 WHERE 1 = 1;

The result of this is;

Lot Genealogy Cache: Result for a Single Batch
As you can see, the only product for Ingredient Lot LGG.LN 37 is item number 073002. If we look at the transactions earlier we can see that it should be reporting Item Number 07440 as well - it isn't, which means something is going wrong in the "WIP Issue" part of the cache processing.

If we look at the source code (available here - via Google Drive) you'll see that the final part of the WIP Issue is creating the records in LOT_GENEALOGY_BATCH_PRODUCT so the next stage to check is to see if these records are being created correctly. Here is the SQL;

SELECT LGBP.batch_number,
       HIDEDATA.Hide('LGG.ID', LGBP.Product_Item_Description) Product_Item_Description
  FROM NOETIX_SYS.LOT_GENEALOGY_BATCH_PRODUCT LGBP
 WHERE 1 = 1
   AND LGBP.Batch_Number = '73591';

This gives us the result;

Lot Genealogy Batch Products
This shows us that the correct batch products are being recorded - this is important as we now have both ends of the transaction: the correct transactions are going in and the correct products are coming out. However, we also know that the cache isn't being updated correctly, therefore the culprits must be the two pieces of SQL that are actually doing the inserts.

If you look at the first one;

INSERT INTO NOETIX_SYS.Lot_Genealogy_WhereUsed
    SELECT v_Transaction.Lot_Number, -- Master_Lot_Number
           v_Transaction.Lot_Number, -- Ingred_Lot_Number
           v_Transaction.Item$item, -- INGRED_ITEM$ITEM
           v_Transaction.Item_Description, -- INGRED_ITEM_DESCRIPTION
           v_Transaction.Item_Type_Code, -- INGRED_ITEM_TYPE_CODE
           v_Transaction.Batch_Number, -- Batch_Number
           v_Product.Product_Lot_Number -- Product_Lot_Number
      FROM DUAL
     WHERE NOT EXISTS
     (SELECT 1
              FROM NOETIX_SYS.Lot_Genealogy_WhereUsed LGI
             WHERE LGI.MASTER_LOT_NUMBER = v_Transaction.Lot_Number
               AND LGI.Ingred_LOT_NUMBER = v_Transaction.Lot_Number
               AND LGI.Batch_Number = v_Transaction.Batch_Number
               AND (v_Product.Product_Lot_Number IS NULL AND LGI.PRODUCT_LOT_NUMBER IS NULL OR
                   v_Product.Product_Lot_Number = LGI.PRODUCT_LOT_NUMBER));

You can see that this is actually just a select from DUAL so the fact that it's only working on the first pass (i.e. for the first product) means that the offending part of the code must be the NOT EXISTS.

Looking at the WHERE clause in the sub-select reveals that it's not using the Product_Item_Number field. Not using this field means that after it's inserted the first product it mistakes the second one for a duplicate and skips over it.
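The effect of that missing key column is easy to demonstrate outside the database. This is a hypothetical Python sketch (an illustration only, not the package code; the row and field names are made up) of a "NOT EXISTS"-style duplicate check whose key omits the product item number:

```python
# Hypothetical sketch of the bug: a duplicate check whose key omits the
# product item number treats a batch's second product as a duplicate of
# the first and silently skips it.

def insert_products(transactions, key_fields):
    cache = []
    for txn in transactions:
        key = tuple(txn[f] for f in key_fields)
        # The "NOT EXISTS" check: skip if a row with the same key is cached
        if any(tuple(row[f] for f in key_fields) == key for row in cache):
            continue
        cache.append(txn)
    return cache

# One batch producing two different item numbers (as in the blog post)
batch = [
    {"lot": "37", "batch": "73591", "product_lot": "166449", "item": "073002"},
    {"lot": "37", "batch": "73591", "product_lot": "166449", "item": "07440"},
]

# Buggy key (no item number): only the first product survives
buggy = insert_products(batch, ("lot", "batch", "product_lot"))
# Fixed key (item number included): both products survive
fixed = insert_products(batch, ("lot", "batch", "product_lot", "item"))
print(len(buggy), len(fixed))  # 1 2
```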

Altering the final AND statement to;

AND (v_Product.Product_Lot_Number IS NULL AND LGI.PRODUCT_LOT_NUMBER IS NULL OR (v_Product.Product_Lot_Number = LGI.PRODUCT_LOT_NUMBER AND v_Product.Product_Item_Number = LGI.PRODUCT_ITEM_NUMBER))

for BOTH pieces of SQL (the one updating all the existing records as well as this one, which is creating a new master record) addresses the issue. If you rebuild the cache from scratch (using the script above) and re-run the SQL to get the content of the cache you will now see;

Lot Genealogy Cache - Complete
The first thing you'll notice is that there are a lot more records. The new records are for Item Number 07440, which is the one we were missing earlier.

If you do a source comparison between the new and old versions of the package you'll notice that this wasn't the only change I made - I've added a great deal more logging to the WIP Issue transaction so it's possible to see what is going on and what "SQL%ROWCOUNT = 0" actually means! I also added a warning to WIP Completion if no records are updated.

I hope this is of some interest ... Debugging an application, even one you've written yourself, is a skill that's very difficult to transfer. Hopefully there is enough information here for people to at least make a start for future issues (which will, by their very nature, be completely different I'm sure!).

PL/SQL: Safely Publishing Data For Analysis/ Obfuscating Data

The purpose of this blog post is to publish a way of obfuscating data so that you can share data which can be analysed to spot trends but which does not include confidential information - while still allowing you to track back from the published information to the original data if external analysis raises issues which require investigation.

We have used this source code to allow us to provide information to our Internal Audit function which includes user and vendor information while keeping that information anonymous.

The first step is to configure the database to store the information. This is fairly simple: you just need to create a single sequence to provide the unique identifiers and a table to store the mappings between the original information and the identifiers. By allowing the identifiers to be grouped (using Data Types) you reduce the possibility of someone using a piece of "known data" to work out other pieces of data.
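Before the SQL, here is the idea in miniature: a hypothetical Python sketch (not the PL/SQL package itself - the class and method names are invented for illustration) of a sequence-backed mapping where each (data type, value) pair gets a stable identifier the first time it is seen:

```python
# Hypothetical sketch of the Hide/Unhide idea: each (data_type, value)
# pair is assigned the next sequence number the first time it is seen,
# and the same identifier is returned on every subsequent call, so the
# obfuscated output stays consistent and can be traced back if needed.

class HideData:
    def __init__(self):
        self._seq = 0         # stands in for the HIDEDATA_SEQ sequence
        self._store = {}      # (data_type, value) -> id
        self._reverse = {}    # (data_type, id) -> value

    def hide(self, data_type, value):
        key = (data_type, value)
        if key not in self._store:       # first sighting: allocate an id
            self._seq += 1
            self._store[key] = self._seq
            self._reverse[(data_type, self._seq)] = value
        return f"{data_type} {self._store[key]}"

    def unhide(self, data_type, hidden):
        ident = int(hidden.rsplit(" ", 1)[1])
        return self._reverse[(data_type, ident)]

h = HideData()
print(h.hide("LGG.LN", "0490/0002"))   # LGG.LN 1
print(h.hide("LGG.LN", "0490/0002"))   # same value, same id: LGG.LN 1
print(h.unhide("LGG.LN", "LGG.LN 1"))  # 0490/0002
```

Grouping by data type is what stops a "known" lot number from revealing anything about the hidden vendor or user values.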

Here's the SQL to create the sequence;

create sequence HIDEDATA_SEQ
start with 1;

And here's the SQL to create the table, and indexes for speed, to store the mappings;

-- Create table
create table HIDDENDATASTORE
(
  data_type varchar2(20) not null,
  id        number not null,
  value     varchar2(255) not null
);
-- Create/Recreate indexes
create unique index HIDDENDATASTORE_PK on HIDDENDATASTORE (data_type, id);
create unique index HIDDENDATASTORE_IDX1 on HIDDENDATASTORE (data_type, value);

Once these have been created you can create the package with the two functions (to Hide and Unhide - with apologies to the English language for that second one!).

The source code for the package is available here (via Google Drive).

Once you've created the package here's a quick test script;

SELECT hidedata.Hide('TEST',
                     'the quick brown fox jumped over the slow lazy dog') Hide,
       hidedata.UnHide('TEST',
                       hidedata.Hide('TEST',
                                     'the quick brown fox jumped over the slow lazy dog')) UnHide
  FROM dual;

The result in my database, in which I've been doing some testing (exactly 23 times to be precise!) is;

Test Results Hiding/Unhiding Data

Friday, April 12, 2013

MobileIron: Problems Deploying iPad Applications To Active Directory Groups

Background: We have multiple AD domains but we configure and deploy the iPad from a single domain. Let's call it DM001. In Domain DM001 we don't have any end-users, they are in other domains each specific to the country of their users - for example UK001 for UK users, US001 for US users, etc.

MobileIron is connected to the DM001 domain, in which we have created a Group in Active Directory called "MobileIron_SW_Easypush". This group has a scope of "Domain local" and a Group type of "Security".

Into this group we have placed a group from each of the other domains, for example UK001\MobileIron_UK001_SW_EasyPush from UK users, US001\MobileIron_US001_SW_EasyPush for US users, etc.

This allows local IT groups in each Country to manage a local group in their AD Domain to add/remove their users rather than everything having to be done centrally (or with other people able to change AD groups in the admin domain).

The Problem: Users in all but one country were picking up the software - no problem - while users in Switzerland weren't seeing this specific application in the Apps@Work folder, but were able to download other applications they had been assigned permission to via other groups.

The problem was a little complicated by replication delays between the AD servers - but after waiting a few hours this could be ruled out.

The problem was eventually traced to this;

Active Directory: Problem Group Properties
AD groups which had a group scope of "Universal" were working, those with a scope of anything else were not.

Once the correct Group Scope was selected (and following an appropriate wait for replication) the problem was fixed.

This took a few hours of effort to work out; hopefully it will save someone else some time!

Thursday, April 11, 2013

Quick Guide: Creating a Document Thumbnail on Windows 7

This is intended to be a really simple guide to creating 200x??? pixel thumbnails for an existing document. I'm currently using these small thumbnails as part of the EasyPush application as an alternative to "template" thumbnails (i.e. one for all Word documents). It looks much nicer if everything has its own thumbnail.

The first step is to open the document you wish to create the thumbnail for. I've picked a PDF I just happened to have sitting around (Noetix Search Administrator guide). It's possible to use Word, PowerPoint, etc - just make sure you're viewing a "Print Preview" to ensure that you aren't getting any additional lines on the screen you're not interested in;

Adobe PDF Reader
You'll notice that I've highlighted the enlarge/reduce buttons - click the reduce button until you can see the entire document on one screen. When you've managed to do that start the "Snipping Tool" (provided as part of Windows) and grab a capture of just the document image;

Full-size Document Image
You'll notice that as the document is predominantly white I've included the border around the edge from Adobe Acrobat - this makes it stand out a bit better if you're using a white background.

If you look at the size of this image you'll notice that it's quite large: 539x666 pixels. This is far too big for a thumbnail, so you need to re-size it to make it smaller.

Save the image (from the "Snipping Tool") into a directory and then right-click it and choose "Edit". This *should* open it in Microsoft Paint;

Microsoft Paint: Screen Capture Image
I've pretty much decided that the maximum size of my thumbnails will be 200 pixels in any direction - this is generally a good size and fits most places where you'd need to use a thumbnail. To alter the size of the image in Microsoft Paint click on the "Resize" button at the top left;

Microsoft Paint: Resize Button
This will display the "Resize and Skew" dialog;

Microsoft Paint: Resize and Skew Dialog
The first thing to do is click on "Pixels" at the top and then change whichever is the larger of the Horizontal or Vertical numbers to 200 - as long as you have "Maintain aspect ratio" checked the smaller will be automatically adjusted.

Click "OK".

Then save the image and you have your new thumbnail - see below;

Document Thumbnail - Final Version
The final re-sized size is 161x200.
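If you have a lot of documents to do, the same "longest side to 200 pixels" arithmetic is easy to script. Here's a small Python sketch of the calculation (Paint presumably does something very similar when "Maintain aspect ratio" is checked):

```python
# Scale an image's dimensions so its longest side becomes max_dim,
# keeping the aspect ratio and truncating to whole pixels.

def thumbnail_size(width, height, max_dim=200):
    scale = max_dim / max(width, height)
    return (int(width * scale), int(height * scale))

# The 539x666 capture from the post comes out as 161x200
print(thumbnail_size(539, 666))  # (161, 200)
```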

BizTalk 2009, Part 1: Simple File Moving Application (My First Application)

So as part of my job I'm picking up some work with BizTalk 2009 server. I have used SSIS, and have been told by many people that BizTalk 2009 is the "big brother" of SSIS.

This is my first BizTalk application and I'm blogging while I'm creating it. I've received no training, and saw the Administration Console for the first time about 20 minutes ago. To quote Jeremy Clarkson: "How hard can it be?"
To start with I've right-clicked the "Applications" group in the Admin Console and selected "New";

Biztalk 2009: New Application Properties
I've entered the name "Simple File Copy" and a brief description. Once the new application has been created, expand it in the Navigation view and you can see;

Biztalk 2009: Expanded Navigation View
This is going to be a simple application; it isn't going to involve Orchestrations, Policies, Schemas, etc. In fact it's only going to involve creating a "Receive Port" for the folder we want to drop files into and a "Send Port" for the folder we want to write the files out to.

So the first thing I'm going to do is to create a "Receive Port", right-clicking on the "Receive Ports" item in the Navigation View and selecting "New" and then "One-way Receive Port...";

Biztalk 2009: Receive Port Properties
I've named this Receive Port "rpSimpleFileCopy". I'm not going to use authentication as it's just going to be accessing a folder on the local machine. Next, the "Receive Locations" section on the left;

Biztalk 2009: Receive Locations
Click on the "New ..." button;

Biztalk 2009: Receive Location Properties

Now I'm going to set up a simple file location for the input: select "FILE" in the type drop-down and then click on "Configure" to set the details;

Biztalk 2009: FILE Transport Properties

As you can see I've just browsed to a folder I created earlier (AP_TESTRECEIVE) and selected it, I've also changed the "File mask" from *.xml to *.* (as I'm interested in everything).

Click "OK" (I don't want to do anything with Authentication and Batching - just accepting the defaults)
Click "OK" - we're back on the Receive Location Properties dialog and I'm happy to accept the default schedule.

Biztalk 2009: Receive Port Properties - Receive Locations

Click "OK" to save the Receive Port. This took a few seconds to save the settings.

We're now back in the Admin console. If you click on the "Applications" node you'll see the list of installed applications, with our new one currently showing as "Stopped";

Biztalk 2009: Installed Applications in the Admin Console

Click on "Send Ports" in the navigation view and select "New" and then select "Static One-way Send Port ...";

Biztalk 2009: Send Port Properties

I've changed the name to "spOutputPort", now select "FILE" and then click "Configure";

Biztalk 2009: FILE Transport Properties

I've set the output folder to a folder on the local machine and left everything else as default.

Click "OK" - we're now back at the main "Send Port Properties" dialog. Select "Filters";

Biztalk 2009: Send Port Properties - Filters

Now what we're going to do is tie our Send Port to our existing Receive Port. To do this just select "BTS.ReceivePortName", "==" as the operator, and enter "rpSimpleFileCopy" as the value.

Everything else we're not going to change - click "OK".

That's it ... What we need to do now is activate the Receive and Send ports we've created and the application will start working.

Click on "Receive Locations";

Biztalk 2009: Receive Locations (Disabled)

Right-click the single line and select "Enable";

Biztalk 2009: Receive Locations (Enabled)

Now click on "Send Ports" in the Navigation View;

Biztalk 2009: Send Ports (Disabled)

Right-click the single line and select "Start";

Biztalk 2009: Send Ports (Enabled)

And that's it: drop a file into C:\AP_TESTRECEIVE and a few seconds later the file will disappear; if you check the C:\AP_TESTOUTPUT folder an XML file - named with a GUID - will have appeared.

As a final check here is a list of the Applications on the server;

Biztalk 2009: Applications - Simple File Copy (Started)
Getting to this stage took about an hour, maybe two. 

NOTE: Because we changed the file mask to "*.*" any file dropped will be transferred; I've tried both text and XML files and everything seems to work fine.
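For anyone wondering how those two ports found each other: BizTalk routes messages publish/subscribe style - the Receive Port publishes each message stamped with context properties (such as BTS.ReceivePortName), and a Send Port's filter is its subscription. A hypothetical Python sketch of that routing idea (illustration only, not BizTalk code):

```python
# Hypothetical sketch of BizTalk-style publish/subscribe routing:
# receive ports stamp messages with context properties, and send ports
# subscribe via filters on those properties.

def publish(message, properties, send_ports):
    """Deliver a message to every send port whose filter matches."""
    for port in send_ports:
        prop, value = port["filter"]
        if properties.get(prop) == value:
            port["delivered"].append(message)

sp = {"name": "spOutputPort",
      "filter": ("BTS.ReceivePortName", "rpSimpleFileCopy"),
      "delivered": []}

# A message from our receive port matches the filter and is delivered
publish("invoice.txt", {"BTS.ReceivePortName": "rpSimpleFileCopy"}, [sp])
# A message from some other port does not
publish("other.txt", {"BTS.ReceivePortName": "rpSomeOtherPort"}, [sp])
print(sp["delivered"])  # ['invoice.txt']
```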

Tuesday, April 9, 2013

Lot Genealogy (Part 2): Automating Testing of a Genealogy

If you are going to roll out any caching of your lot genealogy data, the one thing you have to get right is testing - especially if, like us, you are working in a highly regulated environment.

To this end I've created a new table called LOT_GENEALOGY_WHEREUSED_TESTS;

Lot_Genealogy_WhereUsed_Tests Table Description
The purpose of this new table is to hold copies of the lot genealogy data from Lot_Genealogy_WhereUsed. By holding these "snapshots" we can check, after each rebuild of the cache, that the data that was there - and that we had previously validated as correct - is still there.

To create tests the following SQL will insert an existing Lot Genealogy into the testing table;

INSERT INTO LOT_GENEALOGY_WHEREUSED_TESTS
  SELECT 'ATR002', -- test_Ref
         'Product lot (166449) consists of two items (038003 and 038001)', -- test_description
         LGW.* -- data from Lot Genealogy
    FROM LOT_GENEALOGY_WHEREUSED LGW
   WHERE 1 = 1
     AND LGW.MASTER_LOT_NUMBER = '0490/0002';

This SQL is taking the lot genealogy for Master Lot Number 0490/0002 and copying it into the testing table, adding a test reference (ATR002) and a test description so we know what the test is supposed to be checking for.

That, as they say, was the easy bit. Now we need to create some SQL that is capable of running a test and returning a result. Our reporting tool (in case you can't tell from my other blog posts!) is SQL Server Reporting Services (SSRS), hence I'm going to split the SQL into two pieces: one to give me a list of all the tests and the other to run an individual test - SSRS will allow me to embed the latter as a sub-report in a report driven by the former.

List of Tests SQL

This was by far the easiest of the two;

SELECT DISTINCT LGWT.Test_Ref,
                LGWT.Test_Ref || ' (Lot ' || LGWT.MASTER_LOT_NUMBER || ')' LABEL
  FROM LOT_GENEALOGY_WHEREUSED_TESTS LGWT
 ORDER BY 1, 2

This returns something similar to;

List of Tests Generated Using SQL
I've added the bit of additional detail (for the Label) so that, in addition to being able to use this in the main report to get a list of tests, I can also use it in the sub-report as the source for a picker on the "Test_Ref" parameter.

Run A Test SQL

This is slightly larger, but here's the code and then I'll try and explain it;

SELECT :Test_Ref "Test Ref",
       (SELECT LGWT1.TEST_DESCRIPTION
          FROM LOT_GENEALOGY_WHEREUSED_TESTS LGWT1
         WHERE LGWT1.TEST_REF = :Test_Ref
           AND ROWNUM = 1) "Test Description",
       TR.Test_Row_Count "Test Row Count",
       TR.Cache_Row_Count "Cache Row Count",
       TR.Union_Row_Count "Union Row Count",
       CASE
         WHEN TR.Test_Row_Count = TR.Cache_Row_Count AND
              TR.Cache_Row_Count = TR.Union_Row_Count THEN
          'PASS'
         ELSE
          'FAIL'
       END "Test Result"
  FROM (SELECT (SELECT COUNT(*)
                  FROM LOT_GENEALOGY_WHEREUSED_TESTS LGWT1
                 WHERE LGWT1.TEST_REF = :Test_Ref) Test_Row_Count,
               (SELECT COUNT(*)
                  FROM LOT_GENEALOGY_WHEREUSED LGW
                 WHERE LGW.MASTER_LOT_NUMBER =
                       (SELECT MASTER_LOT_NUMBER
                          FROM LOT_GENEALOGY_WHEREUSED_TESTS
                         WHERE TEST_REF = :Test_Ref
                           AND ROWNUM = 1)) Cache_Row_Count,
               (SELECT COUNT(*)
                  FROM (SELECT MASTER_LOT_NUMBER,
                               INGRED_LOT_NUMBER,
                               BATCH_NUMBER,
                               PRODUCT_LOT_NUMBER
                          FROM LOT_GENEALOGY_WHEREUSED_TESTS LGWT1
                         WHERE LGWT1.TEST_REF = :Test_Ref
                        UNION
                        SELECT MASTER_LOT_NUMBER,
                               INGRED_LOT_NUMBER,
                               BATCH_NUMBER,
                               PRODUCT_LOT_NUMBER
                          FROM LOT_GENEALOGY_WHEREUSED LGW
                         WHERE LGW.MASTER_LOT_NUMBER =
                               (SELECT MASTER_LOT_NUMBER
                                  FROM LOT_GENEALOGY_WHEREUSED_TESTS
                                 WHERE TEST_REF = :Test_Ref
                                   AND ROWNUM = 1))) Union_Row_Count
          FROM DUAL) TR

As you can see, it takes a single parameter: the Test Reference Number. It works by counting the number of records in the cache, counting the number of records in the test table, and then selecting all the records in the test table and, via a straight UNION, all the records in the cache. Because of the way UNIONs work (stripping out duplicates) the COUNT of the number of records returned by the UNION should be the same as the number of records in each of the other two queries. If all three counts are the same the TEST_RESULT is 'PASS', otherwise it's 'FAIL'.
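The same pass/fail logic is easy to see in miniature. Here's a Python sketch with made-up rows (the lot and item numbers are just the ones used as examples in this post):

```python
# Sketch of the UNION-count test: if the snapshot rows and the cache
# rows are identical sets, the union is no bigger than either side, so
# all three counts agree and the test passes.

def run_test(test_rows, cache_rows):
    test_count = len(test_rows)
    cache_count = len(cache_rows)
    union_count = len(test_rows | cache_rows)  # UNION strips duplicates
    return "PASS" if test_count == cache_count == union_count else "FAIL"

# Snapshot taken when the genealogy was known to be correct
snapshot = {("0490/0002", "038003"), ("0490/0002", "038001")}

print(run_test(snapshot, snapshot))                   # PASS
print(run_test(snapshot, {("0490/0002", "038003")}))  # FAIL - a row vanished
```

Any row that disappears from (or appears in) the cache changes one of the three counts, which is all the test needs to detect a regression.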

NOTE: I'm sure this could be done a lot more efficiently, but to be honest, given the relative sizes of the tables, I don't think you'll be sitting around too long for a result. On our system it takes less than 0.02 of a second. Your mileage will vary, but probably not by much!

Now that I've got the SQL, I've set up a simple SSRS report to display the result of a single test back to me;

SSRS Report Showing A Single Test Result
You'll also notice that the test result includes a listing of all the records included in the test. This is a simple SELECT * ... WHERE TEST_REF = ... so I'm not going to give you the SQL for it.

The master report looks like this (in Report Builder 3);

SSRS Master Reporting Showing All Tests
I've included text boxes (on each page, as part of the footer) to record who the Tester and Checker are, and the Date. For our internal quality purposes we need this level of detail; you might not, but what's the harm?

When executed the report appears like this;

SSRS Testing Report - Final Result
Hopefully you will find this useful.