Bank File Conversion Overview

BAI and BTRS File Conversion – Overview

A division of Automotive Systems Training – A Microsoft Partner

  • Application: The Bank File Conversion App for Excel 2013 has been approved as of September 30, 2014. You can get your copy here. The application is available in English, French, German, Spanish and Japanese.
  • Purpose: This application converts BAI (Version 2) and BTRS (Version 3) files to both XML and CSV. It supports the latest version of the BAI file specification, which is called BTRS (Bank Transaction Reporting System). The application is designed to allow users to quickly view and analyze bank data in their Business Intelligence systems. It not only sets the data in Excel tables but also provides validated XML for inclusion in SQL Server tables.
  • Security: This application uses SSL encryption and, since BAI (BTRS) files contain sensitive information, bank routing and account number information is obscured immediately. It is also recommended that you delete this information from any BAI files saved locally or in the cloud. If you open the BAI file in Notepad, the lines that start with ’02,’ have sensitive information in the next two fields, and the lines that start with ’03,’ have the account number in the next field. All three of these fields are obscured by the application, and you may also obscure them yourself prior to conversion. If you do, you may want to leave the first 4 digits of the account number intact, like the application does, because many files contain data on multiple accounts.
  • Overview: The application is designed to provide your bank data in multiple formats to meet your Business Intelligence needs.
    • The application will set the BAI or BTRS data into Excel tables on the worksheets you choose based on your selection in the record selector drop down box. We recommend that you open a new sheet and then select a record type to have the application create the table in that sheet. You can then rename the sheet to reflect the records contained in the table.
    • When you check the Create XML checkbox, both the XML and XSD (schema) text will be available on their respective tabs in the task pane for saving to your computer or copying to the clipboard in the case of smaller files.
    • The XML file produced by the conversion is accompanied by a fully deterministic schema to allow XQuery analysis down to the text node level.
    • The CSV text blocks are created when you check the Create CSV checkbox. These text blocks are available on the CSV tab for you to save or copy and paste wherever needed.
    • Follow the steps outlined on the Home tab of the application and, if you make a mistake, just skip to Step 7 and click (twice) the Convert Another File button. This will reset the application and you can try again.
  • Uses: This application is designed to be flexible and accurate, and to let you analyze bank data using the functionality of Excel, especially PowerPivot.
    • This application does not invoke the PowerPivot functionality directly; however, you can select which tables of the BAI file data to create (each record type has its own table), which then allows you to add each table to the PowerPivot data model. This function is called using the Add Table to Data Model selection on the PowerPivot tool bar. The benefits of PowerPivot for your analysis are that it supports creating table relationships and implements the very powerful DAX formula specification.
  • User Community: This site is our Help Desk and User Forum. Please join our community of users.
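
The masking described under Security above can be sketched in a few lines of code. This is purely an illustration of the idea, not the app's implementation; `mask_bai_line` is a hypothetical helper, and the sample lines are invented.

```python
# Illustrative sketch only: mask the sensitive fields of BAI '02,' and
# '03,' lines, keeping the first 4 digits of the account number as the
# app does (many files contain data on multiple accounts).
def mask_bai_line(line: str) -> str:
    fields = line.strip().split(",")
    if fields[0] == "02":
        # The two fields after the record code are sensitive.
        for i in (1, 2):
            if i < len(fields):
                fields[i] = "*" * len(fields[i])
    elif fields[0] == "03":
        # Account number: keep the first 4 digits, mask the rest.
        if len(fields) > 1:
            acct = fields[1]
            fields[1] = acct[:4] + "*" * max(len(acct) - 4, 0)
    return ",".join(fields)

print(mask_bai_line("02,FSATU014,121000248,1/"))
print(mask_bai_line("03,1234567890,USD/"))
```

Running this over a locally saved copy of a BAI file before archiving it would leave the structure intact while removing the routing and account details.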


Posted in Apps for Excel, Bank File Conversion

Dealership Internal Reporting Systems

I have officially opened Automotive Systems Consulting (ASC), where we will be doing all manner of consulting assignments. Initially we will focus on a much-needed system for the automobile dealership: internal management reporting.

Current dealership management systems offer rather antiquated, cumbersome and static management reporting schemes that take too much preparation time and lack even the most basic security.

Working with all dealership management systems, we will design your efficient dealership internal reporting system with ease and care, protecting your data while providing information (and incentive) to help your team perform better.

Call John Donnelly now for your no-charge consultation at 909-833-0115.

Posted in Management Reporting

Analyzing Bank File Conversions XML Data Files using Power Query and PowerPivot

In a previous post, I showed how to use the Bank File Conversions App for Excel 2013 (BFC) to:

  • convert BAI Version 2 and BTRS files,
  • set the data in tables in Excel 2013,
  • add them to the Excel Data Model, and
  • work with them in PowerPivot.

Also, last week I demonstrated how to use SQL Server to store BFC XML output in XML data type columns. Storing the data in this way allows one to execute ad hoc queries (using XQuery) on the XML stored in the columns.

Bank File Conversions App for Excel offers several output options for BAI or BTRS data to facilitate analysis and storage. It is important to allow the storage of the data in a format that is easily queried. This goal is met by setting the conversion to output 1) tables, which can then be added to the Excel Data Model, and 2) XML files. Although CSV files are not so queryable (unless you are practiced with PowerShell), CSV is an essential form of input for many data visualization and analysis programs.

Today I want to talk about another option: using the XML file output from BFC directly as a data source for PowerPivot. In other words, if one does not need to access multiple XML files at once, there is really no need to use SQL Server to store the XML files. If your analysis is limited to one file at a time, say month by month or quarter by quarter, you can keep your XML files in a folder and use Power Query to ETL (extract, transform and load) the data into the Excel Data Model for use with PowerPivot.

This is not to say you can’t ETL more than one XML file at a time with Power Query, but doing so might not be preferable to using SQL Server. Nor is it to say that you can’t simply ETL each file into the Excel Data Model and store it there, adding new files to the data model as they come along and keeping them all indefinitely in an Excel Workbook.

Eventually I will explore some more of these options but today I want to focus on the extraordinary opportunity Power Query affords us in the self-service BI realm. Power Query is the premier ETL tool for tabular data. Using Microsoft‘s newly developed “M” language, Power Query can extract from most data sources, transform the data in most any way and load it into the Excel Data Model.
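
Conceptually, the extract-and-flatten step that Power Query performs on the XML can be sketched outside Excel as well. Below is a rough, illustrative Python analogue; the element and attribute names follow the BFC output described in this post, but the sample XML itself is invented.

```python
# Illustrative sketch: flatten BFC-style XML (Record/Fields/Field) into
# tabular rows of (record type, sequence, currency, field title, value).
import xml.etree.ElementTree as ET

xml_text = """<File>
  <Record type="16S" sequence="12" currency="USD">
    <Fields>
      <Field sequence="1" title="RecordCode">16</Field>
      <Field sequence="2" title="TypeCode">165</Field>
    </Fields>
  </Record>
</File>"""

rows = []
root = ET.fromstring(xml_text)
for record in root.iter("Record"):
    for field in record.iter("Field"):
        rows.append((record.get("type"), record.get("sequence"),
                     record.get("currency"), field.get("title"), field.text))

print(rows)
```

Each row here corresponds to one cell of the expanded table we will build in the Query Editor below: one row per Field, carrying its parent Record's attributes along with it.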

There are a few references for what we are talking about today. In particular, here are some books you can read.

In order to extract, transform and load XML data from a file, you need to click on the Power Query tab and, in the Get External Data section, select From File, then slide down and select From XML, like so:

From File Data Source Selection

Once you click From XML, a File dialog box will open and ask you to browse for and select an XML file. When you do, a Query Editor window will open where you can then begin the transform process, if there is any. Today I want to extract all deposit transactions on our account where distributed availability was recorded. That is, I want to review any deposit for which we were not given immediate credit in its entirety. Banks used to call this float and most still do, but the specifications now use the word availability rather than float.

BFC calls any transaction record (type 16), where there is distributed date availability, a 16S record. And if there is a distributed time availability transaction BFC calls it a 16VT (variable time) record. In my line of work deposit transactions with timed availability are extremely rare so in this post we will limit our analysis to 16S records.

So let’s select From XML and browse to our file; when we open it, the Power Query Query Editor opens for us to begin our transform. In order to get tabular data out of the XML file and into the Excel Data Model, we need to have Power Query do some transforms. Here’s what you should see:

Power Query's Tabular Representation of the Root Node

We see a tabular representation of the root node of the XML file. The root node (<File>) is not shown but its attributes are; as well as its first child node type, <Record>. There is only one type of child node in the BFC XML output which is the <Record> node. Another way to put this is that the <Record> node has no siblings. If there were additional child node types below the root (<File>) node we would see those as additional columns (one for each unique type) in the tabular representation. For reference here is the beginning of the XML file so you can see what PowerQuery is picking up:

Bank File Conversion XML Output

Now, to step down through the node types (Record, Fields, Totals, Total, Addenda, Batches, Tags, etc.) and view Power Query’s tabular representation of each node type, you will need to click on the word Table (that the arrow is pointing to). Each instance of that node type will appear as a row in that table, and there will be columns for each sibling node type, as mentioned above. In plain English, you will see the following:

Power Query's Tabular Representation of <Record> node contents

which is all four (4) child node types of <Record>, namely <Fields>, <Totals>, <Batches> and <Addenda>, as columns, and every instance, subject to size limits (in this example 428 instances), of the <Record> node type as rows. You will also see each <Record> instance’s attribute values in the <Record> instance rows.

If you click on the Table link in any one of the <Record> instance rows in any child node column, you will see a representation of all the instances of that node. So if we click on the first Table in the <Fields> column we will see a table that is all the instances of <Fields> for that particular <Record> node. We see one row because, per the schema, <Record> can only have one <Fields> child node. Also, we see only one column because <Fields> has only one possible type of child node and that is <Field>.

The First <Record> Node's <Fields> Table

Now we have only one choice left and that is to see the Table that is the representation of this particular <Fields> node so we click on the Table link:

The <Field> Node Tabular Representation

So now we see the table representing the furthest descendant of the root node, following the <Fields> axis off the root (<File>) node. We could have followed the <Totals>, <Batches> or <Addenda> nodes in a similar fashion.

This exercise was to show you the significance of the Table structure and how it represents an XML file. Selecting a table in an instance of a node restricts you to that node’s data only, and we can’t get at the text node values in this way.

Another thing to notice in the Query Editor is that each click of a Table link was recorded as an “Applied Step” in the Query Settings pane off to the right. This is a listing of the steps you are executing to transform the data. Power Query saves those steps and incorporates them into the final query, which you then save for future use. This is extremely powerful because the steps are written in a new language developed specially for this purpose by Microsoft called “M”. The language includes some wonderful functions, and as your skills develop you can write entire ETL sequences without using the Query Editor.

In order to load the data we need into Power Pivot, we need to transform the XML file into a table of all the 16S (transactions with distributed availability) records including attribute values and all text node values. We couldn’t do this in the above exercise because we were basically following an axis by clicking on successive Table links. Now we will be expanding nodes in the following exercise to create this table to load into Power Pivot.

Starting at the same place as before (right after clicking on the From XML selection), we select the expand button next to the <Record> column header. I have drawn a red arrow pointing at the button and highlighted it in yellow.

The Expand Button

The Expand button essentially creates a table that has a row for each <Record> node  with a column for each child node or attribute you select to include. In this case when we click on the Expand button we see the following options presented:

<Record> Expand Button Selections

Before we address checking or unchecking elements we want to see in our expansion, we want to delete all the <File> node attributes (all the other columns other than <Record>) because we don’t need those in our result table. We just right-click on the column header and select Remove and we now see:

Selecting <Record> Elements to Include in Expansion

In our case we are going to focus on <Record> node attributes and <Field> node attributes and text values. We will uncheck Totals, Batches and Addenda because we won’t be needing any data from those nodes. We do need the <Record> node attribute columns so we will keep them. Click OK and you should see:

Selecting <Record> Node Expansion Elements

So now we have the first level of expansion of the <File> tabular representation which gives us a row for each <Record> node instance and a column for each attribute and child node type of the <Record> node instance. In this case there is only one child node available (<Fields>) because we unchecked the other possible child nodes to expand. We also now see that we have the <Record> attributes (record type, sequence number and currency code) that we want in our table.

First Expansion – <Record> Node Instances in Tabular Format

We still don’t see <Field> text nodes so we need to expand the <Fields> node. So click on the Expand button in the Record.Fields column header like so:

Record.Fields Expansion

There are no attributes on the <Fields> node in our XML file and the only possible child node type is <Field> so we need not uncheck anything. We simply click OK and we see:

Record.Fields.Field Instances with Record Attributes

In order to get the text node element values we need to expand the <Field> node instances one more time. Click on the Expand button on the Record.Fields.Field header to see the elements to include in the expansion as follows:

Record.Fields.Field Expansion Selection Choices

Notice the text node for each <Field> element is labeled Element:Text which we definitely want and we want both attributes as columns in our table as well. Click OK and we see:

Record.Fields.Field.Element:Text Instances Expansion With Attributes

We are almost there, but as you can see from the record type attribute (Record.Attribute:type), we have all record types in our table, and we are really only interested in distributed availability transaction (16S) records. In this case we simply filter the column using the filter down-arrow in the Record.Attribute:type column header like so:

Filtering the Record Type Attribute Column

We uncheck all selections except 16S and click OK and we get the table we originally wanted as follows:

Data Table with 16S Records

We should name the query something other than Query1. In the box that currently shows Query1, type the name BFC 16S Record Query. I don’t like the column names, however, nor the order of the columns. So I will click on the column headers of a few columns, drag them to new positions in the table and rename them (right-click, Rename) as follows:

  1. RecordType (Record.Attribute:type)
  2. RecordSeqNo (Record.Attribute:sequence)
  3. RecordCurrency (Record.Attribute:currency)
  4. FieldTitle (Record.Fields.Field.Attribute:title)
  5. FieldValue (Record.Fields.Field.Element:Text)

I have also deleted the Record.Fields.Field.Attribute:sequence element as it is not necessary for our analysis and it will interfere with the next step.

We have one more very important transformation that demonstrates the power of Power Query (living up to its name). The FieldTitle column needs to be pivoted. That is, each FieldTitle needs its own column, with the FieldValues in the appropriate column. Otherwise, PowerPivot Pivot Table analysis would be very difficult, calling for the creation of numerous Calculated Columns and/or Fields. So we highlight the column we want to pivot (FieldTitle) and click Transform –> Pivot Column (in the Any Column area) and we will see:

Pivot Column Dialog Box

The default selections will not match the image above, so make these changes: 1) the values column needs to be FieldValue (this says to put these values under the appropriately titled column corresponding to the FieldTitle value) and 2) in the advanced options, select Don't Aggregate because we want the detail, not counts of the detail. Click OK and it will transform the table to look like this:

Data Table with FieldTitles Pivoted
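
The pivot step is easy to reason about in miniature. Here is a toy, illustrative sketch (invented sample rows, not the app's data) of the "don't aggregate" pivot: each distinct FieldTitle becomes a column, keyed by record sequence number.

```python
# Toy sketch of the Pivot Column transform with no aggregation:
# (sequence, FieldTitle, FieldValue) rows become one dict per record,
# with FieldTitle values as keys.
rows = [
    ("12", "TypeCode", "165"),
    ("12", "Amount", "1000.00"),
    ("27", "TypeCode", "165"),
    ("27", "Amount", "250.00"),
]

pivoted = {}
for seq, title, value in rows:
    pivoted.setdefault(seq, {})[title] = value

print(pivoted["12"])  # one column per FieldTitle for record 12
```

This is why "Don't Aggregate" matters: we want the detail values laid out side by side, not counts of how many values share a title.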

Our table is done now and, in order to load the table into the Excel Data Model, we need to click on File –> Close and Load to and you will see the following:

Close and Load Dialog Box

The default options are different from the ones shown so be careful. I typically don’t want the table set in Excel but rather in the Excel Data Model so I checked Only Create Connection and Add this data to the Data Model. If you only want to work with the data as a table in Excel you can just accept the default options selected. When you click the Load button you will see Power Query load the data from the XML File into the Excel Data Model like so:

Power Query Loading the Table into the Excel Data Model

When it is done, we will see our new query (BFC 16S Record Query) listed under Workbook Queries, below the query called Batches that was added to the workbook previously, like so:

Power Query Loading Complete

To show that the data table is in fact loaded into the Excel Data Model click on POWERPIVOT –> Manage like so:

Select Manage to Switch to Excel Data Model

And you will see the 16S Record table Power Query added to the Excel Data Model as follows:

16S Record Table Added to Excel Data Model

In order to analyze the data, click on the Pivot Table button in the center of the ribbon in the above image and Excel will prompt you for a location for the table like so:

Create Pivot Table Dialog Box

Click OK and the sheet should look like this:

Pivot Table Created

I will close the Worksheet Queries pane and build a quick Pivot Table to show the data like so:

Sample Pivot Table Using Data Table

The Query Editor that we used in Power Query is a user interface to help construct the ETL steps in the M language. For those who would like to simply paste the code into the Advanced Editor (in the View tab) here it is:

let
Source = Xml.Tables(File.Contents("C:\Users\John\Desktop\BAIConversion\
   Output Files\BTRS MAX XML Only - XML File from BTRS-Quad Sample - 8.5 MB.xml")),
#"Removed Columns" = Table.RemoveColumns(Source,{"http://www.w3.org/2001/
   XMLSchema-instance", "Attribute:id", "Attribute:version",
   "Attribute:last_modified"}),
#"Expand Record" = Table.ExpandTableColumn(#"Removed Columns", "Record",
   {"Fields", "Attribute:type", "Attribute:sequence", "Attribute:currency"},
   {"Record.Fields", "Record.Attribute:type", "Record.Attribute:sequence",
   "Record.Attribute:currency"}),
#"Expand Record.Fields" = Table.ExpandTableColumn(#"Expand Record",
   "Record.Fields", {"Field"}, {"Record.Fields.Field"}),
#"Expand Record.Fields.Field" = Table.ExpandTableColumn(#"Expand Record.Fields",
   "Record.Fields.Field", {"Element:Text", "Attribute:sequence",
   "Attribute:title"}, {"Record.Fields.Field.Element:Text",
   "Record.Fields.Field.Attribute:sequence",
   "Record.Fields.Field.Attribute:title"}),
#"Filtered Rows" = Table.SelectRows(#"Expand Record.Fields.Field",
   each ([#"Record.Attribute:type"] = "16S")),
#"Renamed Columns" = Table.RenameColumns(#"Filtered Rows",
   {{"Record.Fields.Field.Element:Text", "FieldValue"},
   {"Record.Fields.Field.Attribute:sequence", "FieldSeqNo"},
   {"Record.Fields.Field.Attribute:title", "FieldTitle"},
   {"Record.Attribute:type", "RecordType"},
   {"Record.Attribute:sequence", "RecordSeqNo"},
   {"Record.Attribute:currency", "RecordCurrency"}}),
#"Reordered Columns" = Table.ReorderColumns(#"Renamed Columns",{"RecordType",
   "RecordSeqNo", "RecordCurrency", "FieldTitle", "FieldSeqNo", "FieldValue"}),
#"Removed Columns1" = Table.RemoveColumns(#"Reordered Columns",{"FieldSeqNo"}),
#"Pivoted Column" = Table.Pivot(#"Removed Columns1",
   List.Distinct(#"Removed Columns1"[FieldTitle]), "FieldTitle", "FieldValue")
in
#"Pivoted Column"

You will need to change the file path but if you use the XML Output from the conversion of a BTRS file with similar content, everything should be the same.

This was a very long and involved post. If you see any errors, inefficiencies or inconsistencies please comment.

Posted in Bank File Conversion, Working With the XML Data

Storing Bank File Conversion XML Output in SQL Server

The purpose of this post is to give users (or prospective users) an idea of the benefits of storing their bank data in SQL Server (2005 or greater) as an XML data type; in particular, converting BAI and BTRS files from the bank format using the app we developed, the Bank File Conversion (BFC) App for Excel 2013, which can be found here. There is a Free Trial offer to allow you to thoroughly test the app for your purposes.

The BFC App for Excel 2013 securely converts your BAI or BTRS file into either Excel tables, CSV files or XML files. The XML files conform to a fully deterministic schema that is included with the App on the XSD tab. You can cut and paste it anywhere needed but the T-SQL to setup the schema collection is shown below. The schema can be seen in your browser by clicking here.

I run the BFC App for Excel once a month for each bank account I am responsible for and I use PowerPivot to analyze the data for float issues, unusual transactions and to run through some Pivot Charts. I also compare transaction code statistics to previous periods. I also save the monthly conversion output as an XML file in my Bank File Conversion database for querying should auditors or others require data extracts.

First we need to create a database, which I called BankFileConversions. Next we need to create a schema collection. The schema collection is necessary so SQL Server knows how the XML is to be structured, and it is important for XQuery to work properly for data retrieval. If you store XML data in a column of your table as an XML datatype and would like to use XQuery to query the data down to the text node level (the values in the XML node text, not just XML fragments), you must have a fully deterministic schema added to the database’s schema collection. The database is added:

BankFileConversion database setup for tables to hold the XML datatype and other data.

The T-SQL query to create the database as seen above and add the schema to the schema collection is as follows:

CREATE DATABASE BankFileConversions;
GO
USE BankFileConversions;
GO
CREATE XML SCHEMA COLLECTION BankFileConversionsSchemaCollection AS N'

... this portion of the schema omitted ....

';
GO

After running the above statements, we will want to add the table using T-SQL as follows:

CREATE TABLE [BankFileConversions].[dbo].[BankFile](ID int primary key,
BAI xml(BankFileConversionsSchemaCollection), BankMonth nvarchar(10),
BankYear smallint, RowAddedDate smalldatetime) 

As you can see, we created the table with an ID column and a column labeled BAI where we need to specify the XML data type and schema collection. We also included a column for the month the bank data covers, the year and the date this row was added to the table (which can be automated). Now we execute the T-SQL to insert a row into the table as follows:

INSERT INTO [BankFileConversions].[dbo].[BankFile] values(1,'
 <ns:BaiFile xmlns:ns="http://www.bankfileconversions.com">
 <ns:Record type="01" seq="1" currency="">
 <ns:Fields>
 <ns:Field seq="1" title="RecordCode">01</ns:Field>
 <ns:Field seq="2" title="SenderID">121000248</ns:Field>
 <ns:Field seq="3" title="ReceiverID">FSATU014</ns:Field>
 <ns:Field seq="4" title="FileCreationDate">06/02/14</ns:Field>
 <ns:Field seq="5" title="FIleCreationTime">1559</ns:Field>
 <ns:Field seq="6" title="FileIDNum">01</ns:Field>
 <ns:Field seq="7" title="PhysRecLngth">080</ns:Field>
 <ns:Field seq="8" title="BlockSize"></ns:Field>
 <ns:Field seq="9" title="VersionNum">2</ns:Field>
 </ns:Fields>
 </ns:Record>
 <ns:Record type="02" seq="2" currency="">
 <ns:Fields>
 <ns:Field seq="1" title="RecordCode">02</ns:Field>
... this portion of the XML omitted ....

 <ns:Record type="49" seq="10565" currency="USD">
 <ns:Fields>
 <ns:Field seq="1" title="RecordCode">49</ns:Field>
 <ns:Field seq="2" title="AcctCtrlTotal">3964875.54</ns:Field>
 <ns:Field seq="3" title="NumOfRecords">106</ns:Field>
 </ns:Fields>
 </ns:Record>
 <ns:Record type="98" seq="10566" currency="USD">
 <ns:Fields>
 <ns:Field seq="1" title="RecordCode">98</ns:Field>
 <ns:Field seq="2" title="GroupCtrlTotal">9464569.79</ns:Field>
 <ns:Field seq="3" title="NumOfAccts">3</ns:Field>
 <ns:Field seq="4" title="NumOfRecs">462</ns:Field>
 </ns:Fields>
 </ns:Record>
 <ns:Record type="99" seq="10567" currency="">
 <ns:Fields>
 <ns:Field seq="1" title="RecordCode">99</ns:Field>
 <ns:Field seq="2" title="FileCtrlTotal">24198852765</ns:Field>
 <ns:Field seq="3" title="NumOfGroups">42</ns:Field>
 <ns:Field seq="4" title="NumOfRecs">10707</ns:Field>
 </ns:Fields>
 </ns:Record>
 </ns:BaiFile>', 'May', '2014', '06/02/2014 09:16:00')

Now that we have the database created, the schema loaded, and the table created and loaded, we can run a few queries. To do that we will not be querying rows of data in a SQL Server table but rather XML data contained in XML datatype columns. That is, we will be searching XML data structures saved in columns in rows of a table. Our table, dbo.BankFile, happens to have only one row, representing May 2014 bank data.

XQuery is rather involved, but you can get up to speed relatively quickly with the help of Jason Strate’s blog series here, some great books listed below and some other resources you can find by searching on “XQuery.” XQuery, to me, is a language that allows you to run XPath queries on the SQL Server XML datatype. If you are not familiar with XPath, I would read up on it at W3Schools here.

Let’s just dive right in and take a look at a simple query that returns all of the Total elements for the month of May and year 2014:

WITH xmlnamespaces('http://www.bankfileconversions.com' as ns)
SELECT [BAI].query('/ns:BaiFile/ns:Record/ns:Totals/ns:Total')
FROM [BankFileConversions].[dbo].[BankFile] WHERE BankMonth='May'
and BankYear='2014'

The results of the query can be seen here in SQL Server Management Studio:

Results returned from XQuery retrieving all Total elements

The result shows in a row as an XML fragment, highlighted to indicate a link. When you click the resulting XML fragment link, a new tab opens to display the XML like so:

XML Fragment Result of Totals XQuery

The XML fragment is approximately 3,000 lines, so it is not all shown here. A typical medium-sized business with multiple accounts will have as many lines. It is important to know that the query returned XML nodes from the column queried. This is not typically useful for those of us who use Excel. We need tabular data to add to our PowerPivot data model.

If, however, you were simply running an ad hoc query to view in XML Notepad, for example, this would be fine. Since the query result is a set of nodes, though, you will need to create a root node (in this case named sample) to wrap the whole thing for XML Notepad to recognize it as valid XML (otherwise you will only see the first node, because that node by itself is recognized as valid XML).
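
The wrapping trick can be shown in a small sketch (Python here, purely for illustration; the fragments are invented): a sequence of sibling nodes is not a valid document on its own, but wrapping it in a single root element makes it one.

```python
# Illustrative sketch: wrap a node-set query result in a root element
# so the whole result parses as one valid XML document.
import xml.etree.ElementTree as ET

fragments = ['<Total seq="1">100</Total>', '<Total seq="2">200</Total>']
wrapped = "<sample>" + "".join(fragments) + "</sample>"

root = ET.fromstring(wrapped)  # parses cleanly because of the root node
print(root.tag, len(root.findall("Total")))
```

Without the wrapper, a strict parser would stop after the first `<Total>` element, which is exactly the behavior described above for XML Notepad.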

Totals Query Results in XML Notepad

So the question then is: how do we get tabular data that we can add to our PowerPivot data model? We shred the XML using the CROSS APPLY command. This is complicated stuff where normally only DBAs tread. I suggest, though, that one can become reasonably proficient through experimenting and reading one of these very excellent books:

So the CROSS APPLY command has to do with making a table out of XML data, and I won’t try to explain it any further because I am not qualified. However, rather than retrieve one large XML fragment, the following XQuery statement retrieves a separate row for each Total node, though still as a fragment:

WITH xmlnamespaces('http://www.bankfileconversions.com' as ns)
SELECT c.query('.')
FROM BankFileConversions.dbo.BankFile
CROSS APPLY BAI.nodes('/ns:BaiFile/ns:Record/ns:Totals/ns:Total') as t(c)

Notice that the SELECT statement no longer contains an XPath expression; the CROSS APPLY statement now has one. To keep it simple, I see it this way: the CROSS APPLY creates a set of nodes from the BAI column using the XPath query passed in. Then the query method called on each node in that set (c) says that c should just return itself (using the dot “.” operator). So I get the set of Total nodes, with each node returned separately, as follows:

Totals Query using CROSS APPLY

Now if I click on one of these XML fragments in the rows of the result set, I get a single Total node like so:

Single Total Node

Even though we have shredded the requested node set into individual nodes, the question still remains how do we get tabular data?

In order to do this we need to tell the SELECT statement to retrieve individual values from the XML. We want to see the Field text values in columns, with a row for each Total node in the set requested. I’m really only interested in the repeating type code (RepTypeCode), code description (CodeDesc) and repeating amount (RepAmount) fields, so the value() methods will request only those text node values. I changed the XPath query in the CROSS APPLY command so I have the full set of type-03 Record nodes to work with. This gives me access to each Record‘s seq attribute value to include as an ID for the Record the Totals are from.

WITH xmlnamespaces('http://www.bankfileconversions.com' as ns) 
SELECT c.value('(@seq)[1]','INT') AS RecSeq,
 c.value('(ns:Totals/ns:Total/@seq)[1]','INT') AS Total1Seq,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="CodeDesc"]/text())[1]','nvarchar(max)')
     AS CodeDesc1,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="RepAmount"]/text())[1]','decimal(10,2)')
     AS Amount1,
 c.value('(ns:Totals/ns:Total/@seq)[2]','INT') AS Total2Seq,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="CodeDesc"]/text())[2]','nvarchar(max)')
     AS CodeDesc2,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="RepAmount"]/text())[2]','decimal(10,2)')
     AS Amount2,
 c.value('(ns:Totals/ns:Total/@seq)[3]','INT') AS Total3Seq,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="CodeDesc"]/text())[3]','nvarchar(max)')
     AS CodeDesc3,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="RepAmount"]/text())[3]','decimal(10,2)')
     AS Amount3,
 c.value('(ns:Totals/ns:Total/@seq)[4]','INT') AS Total4Seq,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="CodeDesc"]/text())[4]','nvarchar(max)')
     AS CodeDesc4,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="RepAmount"]/text())[4]','decimal(10,2)')
     AS Amount4,
 c.value('(ns:Totals/ns:Total/@seq)[5]','INT') AS Total5Seq,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="CodeDesc"]/text())[5]','nvarchar(max)')
     AS CodeDesc5,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="RepAmount"]/text())[5]','decimal(10,2)')
     AS Amount5,
 c.value('(ns:Totals/ns:Total/@seq)[6]','INT') AS Total6Seq,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="CodeDesc"]/text())[6]','nvarchar(max)')
     AS CodeDesc6,
 c.value('(ns:Totals/ns:Total/ns:Fields/
   ns:Field[@title="RepAmount"]/text())[6]','decimal(10,2)')
     AS Amount6
FROM BankFileConversions.dbo.BankFile
CROSS APPLY BAI.nodes('/ns:BaiFile/ns:Record[@type="03"]') as t(c)

The result set will contain the field text values requested for the first six Total children of each Totals node in the set created by CROSS APPLY. The value() method has a specific syntax: an XPath statement that identifies a scalar value, plus the SQL data type of that value. The SELECT statement cycles through the set of nodes that are Records and their descendants (as specified in the CROSS APPLY XPath query). The XPath queries in the value() methods then start the search path at the Record node level and move through to the Field descendant with the requested attribute value. In order to get an individual value (a scalar) we need to specify an item number, as in [1], [2], etc. A [1] tells the XQuery to get the first value in the set returned by the XPath query in the value() method. So the results we get are as follows:

Tabular results from Totals XQuery

There are 84 rows in all, which matches the number of 03 records in the BAI file. None of the 03 records has more than three Total nodes, which is why we see NULL in the columns related to a 4th, 5th or 6th Total.
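For readers who want to prototype the same shredding outside SQL Server, here is a rough Python equivalent using the standard library’s ElementTree. The XML fragment below is invented to mimic the element names used in the XPath expressions above (BaiFile, Record, Totals, Total, Fields, Field); it is not actual app output, and the field values are made up:

```python
import xml.etree.ElementTree as ET

NS = {"ns": "http://www.bankfileconversions.com"}

# Invented sample mimicking the structure navigated by the XQuery above.
xml = """<BaiFile xmlns="http://www.bankfileconversions.com">
  <Record type="03" seq="5">
    <Totals>
      <Total seq="1">
        <Fields>
          <Field title="CodeDesc">Closing Ledger</Field>
          <Field title="RepAmount">1500.25</Field>
        </Fields>
      </Total>
      <Total seq="2">
        <Fields>
          <Field title="CodeDesc">Closing Available</Field>
          <Field title="RepAmount">1400.00</Field>
        </Fields>
      </Total>
    </Totals>
  </Record>
</BaiFile>"""

rows = []
root = ET.fromstring(xml)
for rec in root.findall("ns:Record[@type='03']", NS):   # like the CROSS APPLY XPath
    for tot in rec.findall("ns:Totals/ns:Total", NS):   # one output row per Total
        rows.append({
            "RecSeq": int(rec.get("seq")),
            "TotalSeq": int(tot.get("seq")),
            "CodeDesc": tot.findtext(
                "ns:Fields/ns:Field[@title='CodeDesc']", namespaces=NS),
            "Amount": float(tot.findtext(
                "ns:Fields/ns:Field[@title='RepAmount']", namespaces=NS)),
        })

for row in rows:
    print(row)
```

Note this sketch emits one row per Total rather than one wide row per Record with six Total columns, which is often the more convenient shape for a data model anyway.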

We covered a lot of material in this post, so please post your questions or comments in the comments section. In the near future I will post a whole list of XQuery statements that will work for you. As your understanding grows, you can modify and test new approaches using the XQuery statements I supply.

Stay tuned.

Posted in Apps for Excel, Bank File Conversion, Working With the XML Data | Comments Off on Storing Bank File Conversion XML Output in SQL Server

Using Bank File Conversion App for Excel with PowerPivot

Bank File Conversion App for Excel 2013 (BFC) is designed with PowerPivot in mind. In fact, the three outputs provided by the conversion (Excel Tables, XML and CSV) are readily imported into PowerPivot for further analysis.

This post uses a sample workbook that you can access here. Once the workbook is opened in Excel Online, click File–> Save As–> Download Copy.

Why PowerPivot? Because it unleashes incredible capabilities in Excel. Three come to mind immediately: 1) PowerPivot has no table size restrictions (Excel’s maximum is about a million rows); 2) in PowerPivot one can join tables imported from a multitude of sources; and 3) in PowerPivot a large number of data sources can add data tables to the data model.

Posted in Apps for Excel, Bank File Conversion | Leave a comment

Auto Dealership Group Used Vehicle Department Objectives

Last year I rolled out what I considered a simple and effective objective for the Used Vehicle Departments in a large dealership group. I was addressing a room full of dealership General Managers (GMs) and Used Vehicle Department Managers (UVMs); about 60 people. My aim was to let our dealership teams know what was expected of them with regards to gross profit.

The crux of the presentation was to set an objective that each car should contribute 10% of its cost to the dealership’s gross profit within 45 days. Stated conversationally, our inventory age and gross-per-unit expectation was:

“If you are going to take the full 45 days to sell the car, I want to see a 10% (of cost) gross profit ($1,500 on a $15,000 car) in the till. If you turn it in 23 days, I’m satisfied with a 5% gross profit. If you are going to drag it out to 70 days, I want 15%.”

The constraint was lot space, not the target percentage. I wanted them to know that if they wanted to sell volume, it would be fine if they had a lower GP% target. That is, if your average turn is 20 days, $750 gross profit on a $15,000 unit is fine.
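The expectation above is just straight proportionality, which can be sketched in a couple of lines (hypothetical function names; the 23-day and 70-day figures in the quote are rounded from this rule):

```python
def target_gross_pct(days_to_sell: float) -> float:
    """Linear rule from the presentation: 10% of cost at 45 days,
    scaling proportionally with days in stock (roughly 5% at 23 days,
    roughly 15% at 70 days)."""
    return 10.0 * days_to_sell / 45.0

def target_gross_dollars(cost: float, days_to_sell: float) -> float:
    """Dollar gross profit expected for a unit sold at the given age."""
    return cost * target_gross_pct(days_to_sell) / 100.0

print(target_gross_dollars(15000, 45))  # → 1500.0
```

At a 20-day average turn this yields about 4.4% of cost, consistent with accepting roughly $750 gross on a $15,000 unit.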

I had more than one GM tell me it was complicated so I drew him this:

My objective seemed unusual because most believed that, in practice, after a certain age, a vehicle’s expected gross profit drops (or at least that is what most of them experienced).

It’s true, though, that when a car doesn’t sell in 45 days we feel we most likely screwed up somewhere in the process of stocking and selling the car. But maybe not. If we didn’t screw up, then why didn’t the car sell? It could be any number of factors, such as:

  • The vehicle wasn’t in pristine shape. That is, something was wrong with the car that kept customers from offering us top dollar for it.
  • The salespersons involved in showing the vehicle didn’t use their best selling skills nor did they always follow the approved selling strategy.
  • The Desk Manager didn’t use their most aggressive negotiating position in working offers up to the 45 day mark.
  • The car wasn’t well represented on the internet. That is, it wasn’t described well or the photos weren’t good enough to get the car on the customer’s short list.

One factor I didn’t include in the list is that the car wasn’t “right for the lot”. In the days before internet marketing I would say that could be a big factor; not so today. If you have a new car franchise, then your lot is fine for most cars you want to stock and market on the internet. I say most because, even though your lot may be a good place to sell all vehicles, your staff may not be skilled enough to represent Lamborghinis and Ferraris.

So the same graph above may actually, in practice, look like this for some dealerships:

If a UVM experiences a relationship like this, it does not prevent him from meeting our expectations; it simply compels him to make sure all units are sold prior to the 45-day mark. If, for some reason, he keeps a car beyond 45 days, he knows he is expected to hit a home run on the eventual sale of the unit.

Posted in Auto Dealerships, Used Vehicle Director | 2 Comments

ADP (CDK Global) Expertise – Your Team, Making You More Money

This blog post appeared originally on our ADP (CDK Global) Expertise blog.

ADP (CDK Global) has recently announced a new data server product available to their customer base. The product is currently called Managed Data Access (MDA).

I was considering writing a series of blog posts about this exciting new offering on my main site here; but then I thought that this rather extensive opportunity needs a place of its own.

I am sure you will decide to take advantage of this offering, and when you do, ADP (CDK Global) will perform an initial full ETL (extract, transform and load) of your dealership’s data into a PostgreSQL relational database server. ADP will then keep the database current with at least daily differential updates.

It’s true that some dealership groups have been doing their own ETLs into a SQL Server database for many years utilizing the ADP DAP program. DAP (Data Access Program) basically offered a way for dealerships to access APIs (application programming interfaces) to pull data. The DAP APIs do not lend themselves to object-relational mapping, so only the simplest data extractions were possible without some programming effort. The groups doing these ETLs to SQL Server have needed some pretty sharp database admins on staff to build the relational data model. The effort was worth it, though, because these dealership groups could write queries and perform the joins necessary to get relevant, actionable data.

The power of the MDA product, though, is in the connectivity of powerful data analysis and presentation software, namely Microsoft Excel, to your ADP data. Beginning with Excel 2010 and continuing with Excel 2013, Microsoft has made Excel into a premier self-service data analysis (aka business intelligence) platform. It has done so with a few key free Excel add-ins:

    • Power Query – Makes adding data to Excel simple from just about anywhere.
    • PowerPivot – Makes joining data from disparate sources (or the same source) effortless for truly actionable data analysis.
    • PowerView – Makes data analysis beautiful and unbelievably easy with effortless slicing, dicing and drill-down capability.

With a little help from us in the initial setup, a dealership manager (whether it be the Dealer, the GM or a Department Manager or all of the above), will have access through Excel to a level of understanding of their operation heretofore unknown in the car business.

We will post here those projects that have an immediate, profound effect on the dealership’s or dealership group’s growth and/or profitability, based on sound principles, utilizing ADP (CDK Global) Data and Microsoft Excel. Simple projects that make a big difference.

In the meantime, give us a call if you have any questions.

Posted in Auto Dealerships | Leave a comment

Auto Dealerships Need Business Intelligence

Excel: Turn data into actionable information in ADP.

Business Intelligence is simply Microsoft Excel used in a new way: a way that takes a little help to set up but is extremely simple to use thereafter. The introduction of Microsoft’s PowerPivot for Excel 2010, and its subsequent enhancement in Excel 2013, has allowed General and Department Managers to improve their efficiency many times over.

Automotive Systems Training has coupled ADP’s (CDK Global) Managed Download Access product with Excel (and other data sources) to allow managers to quickly and effortlessly review their operating stats. This is incredibly important to those responsible for multiple dealerships as switching from store to store requires only a button click.

I recently blogged in greater detail on our ADP Expertise blog here.

Only those areas of the operation that are of use to the manager or executive are included in their personal Excel workbook, such as sales and gross profits by person responsible. No longer does the manager have to wade through printed ADP reports; they simply click buttons on an Excel worksheet such as this:

FI Example

This particular Excel worksheet belongs to a workbook for the CEO of a multiple-dealership group. The boxed area along the top is a selector for the dealership for which the CEO would like to review a Lien Analysis. When the CEO clicks on a different company (other than FVWH), the table in the center changes to the figures applicable to the new store in less than a second. The CEO can combine stores by selecting multiple buttons at once.

There are a number of buttons down the left side of the information table that allow the CEO to review the stats for isolated situations such as leases only, different months, etc.

This form of Business Intelligence requires only three things:

  • Microsoft Excel 2013
  • CDK Global (ADP)
  • Automotive Systems Training to get you started

Call now (909) 833-0115 for a free consultation on how this powerful tool will make you more efficient than you thought possible.

Posted in Management Reporting | Leave a comment

Bank File Conversion App for Excel 2013 Overview Video

A quick overview of the Bank File Conversion App for Excel 2013 allowing BAI V2 and BTRS bank files to be securely converted to Excel tables, CSV text blocks and validated XML.

Posted in Apps for Excel, Bank File Conversion | Leave a comment

New Bank File Conversion App for Excel – Securely Analyze Bank Transaction Data

Automotive Systems Training (AST) has some exciting news to share with all of you who interact with bank data. AST released an App for Excel to the Microsoft Store which was approved last week. You can purchase this app here at an introductory price.

This application is designed to give analysts (whether they be CPAs, CFOs or Business Intelligence Specialists) a secure, quick and effortless way to convert bank transaction files into usable Excel tables, CSV text blocks or validated XML. We have a support site where you can read through the knowledge base and post questions in our forums here.

We are big fans of Power Pivot and the data model, as well as Power Query and Power View. Once the BAI or BTRS data is converted into an Excel table, it is a snap to add it to the data model, pull down cash transactions from G/L detail and create any needed columns and relationships. From there you can use Pivot Tables and Charts to do all the analysis needed, including a plain vanilla bank reconciliation.

This app is what Microsoft calls a Task Pane app which sits on the right side of the currently active worksheet like so (you can see column T of the active sheet):

Bank File Conversion App for Excel

The application, once it is purchased, can be inserted into any workbook at any time and saved in the workbook if so desired. Microsoft has specific instructions here.

Inserting the App

In the above screenshot a new workbook is opened and the app is inserted by selecting Insert -> Store and then searching for Bank File Conversion and clicking on Buy. A more in depth post about purchasing and inserting the app can be found on the support site here.

Once you have purchased the app, you will only have to select Insert -> My Apps -> Bank File Conversion to insert the app into any workbook that is currently active.

Download the Bank File from your Bank

In order to convert your bank data, go to your bank’s site and download the month’s transactions in BAI2 or BTRS format. BTRS is the latest format and includes some very nice enhancements for batch transactions such as Lockbox and ACH Deposits. BTRS also supports structured (and unstructured) Fedwire addenda. Whichever your bank supports, download the transactions for the month and save the file in your chosen folder. Once you have the file saved, it is time to convert it and set the converted data in Excel tables.

The following are the steps outlined on the Home tab of the app, as you can see in the above screenshot. This post will walk you through the conversion steps; however, there is also a useful user guide on the support site that is highly recommended.

Step 1 – Set the File in the App

The app is set up in a step-by-step format. To get started, on the Home tab, go to Step 1 and browse for the file you just downloaded from your bank. After you have selected the file, the app quickly obscures any bank or account numbers in it and calculates a hash of the file contents. The user interface looks like the following as it obscures and calculates a hash:

Setting the file to be converted

Step 2 – Select the Conversion Output

Once the app has completed Step 1, move to Step 2 where you will check the boxes to have the app generate the converted output you require.

Step 2

The selection in Step 2 tells the app what types of output to create during the conversion process. In order to use the app to set tables in Excel, the Select CSV box must be checked. The real benefit of the Select CSV option is not that you get CSV text blocks for each record in the bank file, but rather that the app will be able to create tables and set selected record types in those tables using the drop-down selector (discussed in Step 5 below). If you do not check the Select CSV box, the drop-down selector will be inactive. Because most folks do bank reconciliations and analysis in Excel, it is good to always select CSV at a minimum.

Select XML, on the other hand, creates an XML file. If you want to store your bank data in XML files, the benefits of which are considerable, then check the Select XML box. In a future post we will show you how to store your bank data in XML data type columns using the schema that comes with the app. There are many benefits to using SQL Server (containing your bank data in XML columns) as a data source for your bank data analysis Excel workbook. A portion of the file is presented immediately below using XML Notepad:

BAI 03 Record

In addition, the relatively new Power Query accepts XML files as data sources out of the box, where neither Excel nor Power Pivot does. If you have a chance to use this app with Power Pivot, Power Query and Power View, you will notice that the sky is the limit! The Select XML option will allow you to download your bank data in an XML file and also download the schema for inserting it into SQL Server. By the way, we believe that SQL Server Express, PostgreSQL and MySQL can handle XML data types as well.

Step 3 & 4 – Convert the File

Once you have decided on the output, move on to Step 3 and simply click the Convert File button. Once clicked, a spinner and the word “converting” will appear in the Step 4 area to tell you that the conversion is underway. A progress bar will also appear at the very top of the app.

Step 4

When the conversion is done, the spinner will disappear and a message panel will slide up and tell you that the conversion was successful.

Success

Step 5 – Set Excel Tables

If you checked the Select CSV box in Step 2, after the conversion is complete, the drop-down box in Step 5 will be available for you to set the desired records into Excel tables. Clicking on the down arrow in the Records Type button area will display the drop-down which will look like this for BTRS files:

Step 5

The drop-down box will display a shorter list of record types for BAI files, which will not have the 16 Detail with addenda and 16 Detail with batch records. To set the selected record type into an Excel table in the active worksheet, make sure the active cell is where you want the upper-left corner of the table to appear. If there is not enough room to set the table, you will get an error like so:

Table Collision Error

As the error message says, it is easiest if you always set your tables on a new sheet starting in cell A1. Although it is not necessary to do so, it makes things cleaner. Once a table is set, it will look like the one immediately below, which is the table produced when selecting 16 Detail records:

16 Detail records

Step 6 – Save XML File and CSV Text Block Output

Once you have set all the tables you need, you can access your other output on the appropriate tabs: CSV to save all the CSV text blocks as a text file, and XML to save the XML file to your hard drive. If your file is small enough, you will have the option to copy the entire output to the clipboard or, in the case of CSV, copy each record type individually from text areas on the CSV tab.

Step 7 – Do It Again

If you have more files to convert, simply click the Convert Another File button and the app will reset itself, ready for a new conversion. Only the app is reset; Excel itself remains in the same state you had it in before. No workbooks are closed, no sheets are altered or cleared; nothing happens in Excel except that the inserted app has its file input box and output memory cleared. Make sure you have saved your output from the previous conversion before clicking the Convert Another File button; even if you forget, though, it is a simple matter to convert the file again.
Step 7

Options

That is the process, step by step. There are a few options that make things quicker and less tedious if you have many files to convert. On the Help tab, at the top, there are three options available:
Options

The first option, Turn off notifications (non-error) and warnings, allows you to eliminate two slide-up notifications: the Success notification that appears after a conversion is complete and the Warning message that appears when you click Convert Another File. That is all it does, but it streamlines the conversion process if you have many files to do at once, say, to create daily XML files for inserting into SQL Server.

The second option, Turn off the obscuring of bank and account numbers, will do just that. The app obscures the important private numbers in the file prior to allowing the server to convert it; this is to ensure the utmost in security. If your company or bank pre-processes the bank file and obscures these important numbers, obviating the need for this step, simply check this box. If you check this box, it will remain checked for as long as you have the Excel workbook open, and if you save your workbook, it will be checked when you open the workbook the next time. In other words, these settings persist as long as you save your workbook. If you don’t save your workbook with the app inserted and re-insert the app in a different workbook, the settings will not persist.

This second option should not be checked unless you pre-process your files. Checking the box will, however, speed up the app considerably on very large files. On normal-sized files for small to medium-sized businesses, the time required to obscure a file is negligible.
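As a rough illustration of what the obscuring step does, here is a sketch in Python. The helper name, mask character and sample line are hypothetical, not the app’s; the record layout follows the description above: “02” records carry bank/routing numbers in the next two fields, “03” records carry the account number in the next field, with the first four digits kept intact as the app does.

```python
def obscure_bai_line(line: str, keep: int = 4) -> str:
    """Mask sensitive fields in one BAI/BTRS record line (illustrative only)."""
    fields = line.rstrip("\n").split(",")
    if fields[0] == "02":                    # bank/routing info in next two fields
        for i in (1, 2):
            if i < len(fields):
                fields[i] = "X" * len(fields[i])
    elif fields[0] == "03":                  # account number in the next field
        if len(fields) > 1:
            acct = fields[1]
            fields[1] = acct[:keep] + "X" * max(0, len(acct) - keep)
    return ",".join(fields)

print(obscure_bai_line("03,123456789,USD,010,+4350000,,/"))
# → 03,1234XXXXX,USD,010,+4350000,,/
```

Keeping the leading digits matters because, as noted in the overview, many files contain data on multiple accounts and you still need to tell them apart.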

The third option, Turn off file hash check before conversion, disables an important process that makes sure the file you set in the input box has the exact structure that is converted. A standard feature of the app is to run the entire file through an algorithm that calculates a hash total. With large files, disabling the hash calculation will speed up the overall conversion process. However, the reason calculating a hash total is a standard feature is to protect against data corruption. If your internet connection is inconsistent, subject to interference or any other anomaly, it is recommended that this feature remain enabled.
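The idea of an integrity hash can be sketched in a few lines of Python. The app’s actual hash algorithm isn’t documented here, so SHA-256 is an illustrative stand-in, and the function name is hypothetical; reading in chunks means even very large bank files don’t need to fit in memory.

```python
import hashlib

def file_hash(path: str, algo: str = "sha256") -> str:
    """Hash a file's bytes in chunks; comparing hashes before and after
    upload detects corruption introduced by a flaky connection."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

If the hash computed locally matches the one computed on the converted content, you know the file arrived intact.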

Buzz

The Buzz tab gives you the opportunity to ask questions and make comments about the app. If you include an email address in the form on the Buzz tab we will get back to you.

Again please feel free to peruse the support site at www.bankfileconversions.com.

If you wish to purchase the app please do so at https://store.office.com

We know this app will save you time and increase your efficiency!

Posted in Apps for Excel, Bank File Conversion | Leave a comment