I ran into this recently and decided to share my “panic moment” in case it helps someone else. I had accidentally deleted a file in TFS, thinking I didn’t need it. Something like this:



And, out of habit, I checked in the changes and got myself into this mess:


Luckily I remembered the option to “show deleted items” in Visual Studio:



After enabling it, I switched back to VS, did a refresh and sure enough the deleted file was there:


Then it was just a matter of choosing the “Undelete” option from the right-click menu and committing the change:




TFS committed the “undelete” operation and the file showed up – Phew!



With the current influx of iPads, iPhones, and Galaxy S IIIs (in short, mobile devices) into our life and work, we have become accustomed to using an “app” for our daily activities. As such, a mobile interface or app for a desktop application is now the expected norm.

With that in mind, I have been working on a mobile interface to SCA and have finished the first set of requirements I had in mind. I opted for an HTML5/JavaScript framework, since that frees me from worrying about app submission and from learning the details of each native SDK (iOS/Android). Additionally, I can host the files on a web server, so all updates are in place and the user just refreshes the web app’s URL in the browser (Safari/Chrome) to get the latest changes.

The app makes use of existing SCA CmdLets like Get-SCAComputerInfo, Get-SCADbServerInfo, and Get-SCAPerfCounters, which execute on the web server and return data in JSON format that the app consumes. The app consists of two layers:

  1. Web/Data Layer – Done in ASP.NET MVC 3. This executes the SCA CmdLets and returns the data in JSON format.
  2. UI Layer – Done in the JavaScript framework Sencha Touch 2.2. This consumes the data and displays it.
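For illustration, the JSON handed from the Web/Data layer to the UI layer might look something like this (the field names here are made up for the sketch; the real properties come from the CmdLets’ output):

```json
{
  "serverName": "SCASRV01",
  "os": "Windows Server 2008 R2",
  "totalMemoryMB": 16384,
  "freeDiskGB": 120.5
}
```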

Something like this:


This is how the current screens look when run on an emulator:

  1. Start Screen – Lists all SCA servers registered on the IIS server
  2. Choose Info Screen – Allows choosing Server or Plant info


  3. OS Info Screen – Loaded after clicking the disclose link for a server record


  4. User Info Screen – Shows a grouped list of active users on the SCA server


  5. Performance Info Screen – Shows data on OS & MSSQL counters with visual indicators of the counter state – normal or critical. Tapping an entry shows a detailed description of the counter


  6. Input Plant Info – On selecting Plant Info  


  7. Plant Details – Symbols share, dates on when reports & catalog views were last regenerated, etc.


  8. IFC Information – State of the IFC service, % completion, etc.


  9. Database Information – Grouped by DbType, listing size, version and name-gen server information



  10. Check Name-Generator – This tab lets you test the name-generator associated with the current plant. If successful, you get a naming count back; otherwise you get an error message, as shown in the cases below:



There may be some modifications as we try to finish this. Depending on user feedback and scope, we may add additional features/screens, so your comments will be helpful here.

If you have worked with Smart-3D for some time, you know from experience that there are a bunch of files that need to be tracked for changes during the lifetime of a project. Obvious ones that come to mind are reference data or bulkload files, custom source code files (.vb/.cs/etc.), SQL scripts, etc.

Some people use custom in-house software to track changes, some use Excel spreadsheets to note this, and so on. The gist is – you need some kind of configuration management in place so that any changes made to critical files can be monitored and rolled back if needed. In this article, I’m going to show you how a free and robust open-source version control system, Git, can easily serve the typical needs of 3D administrators.


To start, head over to http://git-scm.com/download/win and download the installer. Run the exe to start the installation process and accept the following prompts:




Adding, Tracking and Committing Files:

In this workflow, I’m going to use a simple bulkload file, PipelineProp-Ex.xls, that adds a custom interface, IJUAPipelinePropEx, to CPPipelineSystem. I’ll use this file to go over the typical Git commands one would use. The file looks something like this:


The file resides under D:\SP3D\Test. So to make Git aware of this file, I need to:

A) Create a Git repository in that folder

B) Add the file to the Git repository or more correctly the staging area to begin tracking changes

C) Commit the file when I’m sure about the changes


Remember, each time you modify a file, you have to “add” (or “stage”) the file for Git to “commit” that version of the file. These three steps are what you will use during most Git operations.

Let’s look at these in detail:

  1. Right-click on the folder and select the “Git Bash Here” option. This should open the Git command shell as shown below



  2. The shell supports typical commands like ls -l to list files and clear to clear the screen
  3. Make the current directory a Git repository by using the command: git init
  4. Let’s look at the status now. So issue a git status
  5. At this stage, Git is telling me that there is a file that is untracked and that I need to add it to the staging area so that Git can track it.
  6. So I’ll add the file to the staging area using the command git add Pipe<TAB> (use the Tab key to fill in the rest of the filename). This pushes the file to a “staging” area. So if you change the file now and do a commit, the latest changes won’t go through, since Git requires that you “add” and then commit to get the “last” version. Let’s issue a git status to see what we have:
  7. I’m going to commit the clean xls file so that I have a point from which to start modifying the file. So I’ll issue the command git commit -m "Initial clean state" and then do a git status
  8. As you can see, Git is telling us that the directory is “clean”. In other words, there are no changes to track now.
  9. Let’s fix the identity issue that Git complained about earlier, using these commands (change to your own id and email): git config --global user.name "Sunit Joshi" followed by git config --global user.email sunit.joshi@intergraph.com
  10. Now let’s go ahead and make changes to the file. The change is simple – we add ‘A’ to each column in the CustomInterfaces sheet and then save the file.
  11. Issue a git status command, and sure enough, Git will report that there are changes now. Note that it’s asking us to again “add” the file to the staging area and then commit if needed.
  12. If at this stage I’m not sure about the change, I can undo all the changes completely by doing a git checkout PipelineProp-Ex.xls to roll back. So let’s try it:
  13. As you can see, the “A” that I added in the columns is gone. Also make sure to exit Excel, otherwise you will get a message about being unable to “unlink the file”, like I did. At this stage I’ll do a git status again (you can clear the screen using the clear command) and I have this now:
  14. Since I rolled back the changes, the file is not in the staging area and also not committed. Remember, in Git you need to first add the file to the staging area and then commit the changes. You can do this in two steps, or in one using git commit -a -m "Initial clean state" (which adds and commits)
  15. Follow this with a git status command and this is what you should see:
  16. I’ll go ahead and add the “A” back in the columns and save the file. Then I’ll do a git status followed by git commit -a -m "Changed for bulkload" and then again a git status to see the state
  17. I’ll now bulkload this file, which will cause the “A”s I added to be removed and a log file to be created by the bulkload process
  18. A bit later I can see that the bulkload process has finished, since the “A”s have been removed
  19. Let’s look at what Git says now. So do a git status. Sure enough, it reports that the xls file has been modified and also that a new .log file has been created.
  20. So at this stage, I can either commit the file or roll back. Since I’m OK with the change, I’ll commit the file and note the details in my commit message
  21. Since I want to commit only the PipelineProp-Ex.xls file, I’ll do it in two steps: git add PipelineProp-Ex.xls followed by git commit -m "After bulkload"
  22. If I wanted to save the log file too, I could do a git add SJ_CatV11.log followed by git commit -m "After bulkload log".
  23. If I don’t want Git to track the log file, I can create a .gitignore file in the current directory and put *.log in it, which will cause Git to ignore any .log files.
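The whole cycle above can be condensed into a short Git Bash session. Here is a sketch that uses a plain text file, data.txt, as a stand-in for the bulkload xls (the repository lives in a throwaway temp folder, and the identity is set per-repo rather than with --global):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q                               # step 3: make the folder a Git repository
git config user.name 'Sunit Joshi'        # step 9 (per-repo here, not --global)
git config user.email 'sunit.joshi@intergraph.com'
echo 'clean state' > data.txt             # stand-in for PipelineProp-Ex.xls
git add data.txt                          # step 6: stage the file
git commit -q -m 'Initial clean state'    # step 7: commit the staged version
echo 'changed for bulkload' > data.txt    # step 10: modify the file...
git checkout -- data.txt                  # step 12: ...and roll the change back
echo '*.log' > .gitignore                 # step 23: keep bulkload logs out of Git
git status --short                        # only .gitignore shows as untracked
```

After the rollback, data.txt is back to its committed “clean state” content, and git status reports only the new .gitignore as untracked.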
Checking Out File Versions:

So far we have looked at versioning files, so let’s see how we can check out a file from a point back in time – in other words, from an earlier commit.

Let’s say someone comes along and wants the Excel file that we bulkloaded successfully, with all the changes – how do we get them this file? This is where the commit history and the checkout process come into the picture. Let’s see how we can use these to solve this issue:

  1. First issue a git log to see the commit history
  2. The one we are interested in is the one with the message “Changed for bulkload”. The entry we need is the commit_id, the value in yellow, which is a SHA-1 checksum of the content stored in the Git repository. You can select the whole entry (though Git needs only the first few characters, enough to be unique) and then do a git checkout commit_id
  3. At this stage, it’s best to copy the file to a different folder, say C:\Temp, and then open it there to see if it has the earlier data. If you open it in the same location, you may run into an issue where, after you do a git checkout master (next step), Git says there are changes that you need to commit or stash (most likely due to Excel). In that case you may need to do a git reset --hard and then a git checkout master. If I open the Excel file now, I can see the changes at that point in time – nice and simple
  4. So at this stage, I can go ahead and email this file to the concerned analyst.
  5. If you do a git status now, you will see that it reports that we are not on any branch. To see our file in its latest state, we need to go back to the master branch.
  6. So just do a git checkout master and then check the file – it should be the version from the “last” commit. So always remember to go back to your master branch after a checkout.
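The checkout round-trip above can be sketched as a Git Bash session, again with data.txt standing in for the xls file. One caveat: newer Git installs may name the default branch main rather than master, so the sketch captures the branch name instead of hard-coding it:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name 'Sunit Joshi'
git config user.email 'sunit.joshi@intergraph.com'
branch=$(git symbolic-ref --short HEAD)   # 'master' in the post; may be 'main'
echo 'changed for bulkload' > data.txt
git add data.txt
git commit -q -m 'Changed for bulkload'
echo 'after bulkload' > data.txt
git commit -q -a -m 'After bulkload'
# Steps 1-2: find the commit_id of the "Changed for bulkload" commit
commit_id=$(git log --format='%h %s' | awk '/Changed for bulkload/ {print $1}')
git checkout -q "$commit_id"              # detached HEAD at the earlier commit
# Step 3: copy the old version somewhere safe before switching back
old_copy=$(mktemp)
cp data.txt "$old_copy"
# Steps 5-6: always return to your branch afterwards
git checkout -q "$branch"
```

After the final checkout, data.txt is back to the latest committed version, while the copy holds the earlier one.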

That should be enough to get you started with versioning your files using Git. I would encourage you to check out the official documentation at Git’s website, which goes into more detail and covers other advanced operations.

We just released an important update to SCA, 5.4.6, that adds additional functionality to the PowerShell CmdLets delivered with the tool, which will be very useful for Smart-3D Admins.

The additional functionality is available as new methods and switch parameters added to the existing SCA CmdLets Get-SCADbServerInfo and Get-SCAPlantInfo.

Let’s look at these new features in the context of the above functions. I’m assuming that you have already uninstalled the old version, installed the latest from E-Customer, and updated the PowerShell SCACommands from the menu in SCA, as shown below:

Setup SCACommands


New Functionality:

  BackupPlant(siteDb, plantName, backupFolder) – Returns an object with properties IsValid (bool) and Message (string)
  CreateUser() – Returns String data


This method allows you to back up a plant from the PowerShell prompt. It returns an object with a property, IsValid, that is set to true if the backup was successful, and a property, Message, that holds the backup operation’s results. The neat thing with this command is that you don’t need a specific SP3D client machine (actually, the SP3D client is not needed on the machine at all) to back up a plant. In other words, you can back up V9 to V11 plants using this method call.

For MSSQL, the backupFolder would be the path to a folder on the database server, while in the case of Oracle, you would specify a UNC path. SCA makes use of the PS Invoke-Command to create the bcf file in the remote folder.

In the example below, I back up an MSSQL plant to the H:\Temp folder on the database server, SP3DSMP7, from a PowerShell prompt on my local machine. I store the results in $result so that I can view the job outcome later. As you can see, $result.Message has the job details.
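A minimal sketch of that workflow follows. The site, plant, and folder names here are hypothetical, and I’m assuming that BackupPlant() is called on the object returned by Get-SCADbServerInfo (its signature is from the table above); this requires the SCACommands module, so it won’t run without SCA installed:

```powershell
Import-Module SCACommands
$db = Get-SCADbServerInfo SMP7SQL                        # SCA registration name
$result = $db.BackupPlant('MySite', 'MyPlant_MDB', 'H:\Temp')
if ($result.IsValid) {
    $result.Message                                      # backup job details
} else {
    Write-Warning $result.Message                        # e.g. an invalid plant name
}
```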



If you make a mistake and key in an invalid entry, like a plant name, the Message property will give you a notification about it. Below, I mistakenly keyed in “MDR_MDB” instead of “MDB”:




This method allows you to quickly create an Oracle user with the SP3D_PROJECT_ADMINISTRATORS role. When invoked with no arguments, it uses your current login credentials to create the user; when invoked with a specific DOMAIN\Username, it uses those values.



For Get-SCAPlantInfo, we added a new switch, -AllPlants, that when specified gets a list of all the plants on a server. The list contains the plant name, version, database name, size, and the site it belongs to.

This is a quick and useful way to enumerate plants on your database servers. Since Get-SCAPlantInfo supports pipeline input, you can pipe multiple server names to the command and it will fetch the plants on each server.

In the example below, we get the plant list from a single SCA server and pipe the output to the built-in grid, which supports sorting and filtering.


In this case, we list all the SCA servers, pipe that to the Get-SCAPlantInfo command, and finally pipe the results to the grid.


As you can see, support for the command-chaining pattern is what makes PowerShell really useful here. With traditional programming/scripting methods, one would have to store the collection, write a loop to traverse it, and so on – you get the point.
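As a sketch, the two pipelines above look like this. This assumes the SCACommands module is loaded and that the “built-in grid” is PowerShell’s Out-GridView; the server name is the one from my registrations:

```powershell
# Plants on a single registered server
Get-SCAPlantInfo SMP7SQL -AllPlants | Out-GridView

# Plants on every registered server: Get-SCAServer feeds server names
# down the pipeline, one at a time, to Get-SCAPlantInfo
Get-SCAServer | Get-SCAPlantInfo -AllPlants | Out-GridView
```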

As most Smart-3D admins know, regularly inspecting the Oracle alert log and the MSSQL database server log is an important part of daily admin tasks. Most critical errors related to failed database start-ups, initialization, failed login attempts, issues with capture/apply processes, etc. are written to these logs. Reading MSSQL logs is still simpler compared to Oracle, since in Oracle the alert log resides deep in a directory structure that is hard to remember. Further, the path changes between 10g (under C:\ORACLE\PRODUCT\10.2.0\ADMIN\SP3DSMP1\BDUMP) and 11g (under C:\ORACLE\diag\rdbms\sp3dsmp7\sp3dsmp7\trace) for each instance, making it even harder to keep track of.

The SCA CmdLet Get-SCADbServerInfo tries to alleviate this issue a bit by providing the ReadLog() method, which works for both Oracle (10g/11g) and MSSQL servers. The only input it needs is the SCA server name; it figures out the path for the instance under the SCA server context. Plus, you can run the command remotely from a client machine, and as long as you have permissions on the Oracle server, the command will return the contents of the log file without you having to RDP to the server machine and browse to the deeply buried file to open it – which I think is pretty powerful and useful. Also, since the output is returned by a PowerShell CmdLet (Get-SCADbServerInfo), you can pipe it to other PowerShell commands like where, select, etc. and transform the data to your liking.

So let’s go through the steps on how to use this command, using both Oracle and MSSQL server registrations. I’ll further break the filtering into steps, so you can see how I arrive at the end result:

Step-1: Finding your SCA Server Names:

The first step is to import the SCACommands module and then run the Get-SCAServer CmdLet to see your current server registrations. I’ll be using the ones highlighted below:


Step-2: Initializing Get-SCADbServerInfo:

Next, you initialize Get-SCADbServerInfo using an SCA server name as the input. So I would do the following to init the MSSQL server first:

  $db = Get-SCADbServerInfo SMP7SQL

Once you have done this, you can call the $db.ReadLog() method and pipe it to the Get-Member CmdLet to see the columns returned in the output. As you can see in the image below, “Message” is the field I’m interested in, and since it’s of type String, it can be filtered using the -like or -match PS operators.

As I’m interested in getting any “failed” messages from the first 10 lines (ReadLog() for MSSQL is sorted by DateTime in descending order), I would need to:

1. Get the first 10 lines: $db.ReadLog() | select -First 10

2. Filter for the “failed” string using the where operator. The current item in PS is denoted by $_, and $_.Message gets me the Message property: $db.ReadLog() | select -First 10 | where {$_.Message -like "*failed*"}

3. Format to fit the screen and wrap any long lines. So my final command line is: $db.ReadLog() | select -First 10 | where {$_.Message -like "*failed*"} | ft -AutoSize -Wrap

Reading MSSQL Log


Moving on to Oracle, I first init the object for Oracle using:

$db = Get-SCADbServerInfo SMP1_ORA

I can also look at the number of lines in the alert log by piping the output to the measure command. The output from ReadLog() for Oracle is individual lines of text, and Oracle keeps appending messages to the alert log, so the latest messages are towards the end of the log. In this case, I want to look at the last 2500 lines in the alert log and filter the ones that start with an Oracle error. As you know, any Oracle error has the string “ORA-” in it, so this is what I’ll use in my filter.

So my filter for Oracle is: $db.ReadLog() | select -Last 2500 | where {$_ -like "ORA-*"} | ft -AutoSize

Note how I use PS CmdLets to help me “filter” what I need – that’s where PS really shines:


Reading Oracle Alert Log

Sending Email Notification:

We will make this a bit more interesting and send an email of important messages using the built-in Send-MailMessage PS CmdLet. So this is what you will need to do:

1. Set up your SMTP server in a variable – I have set mine in $smtpServer in the PS window

2. Convert the output of the previous filters to a String using the Out-String PS CmdLet, since the -Body parameter for Send-MailMessage accepts that, and store it in a $messages variable

3. So my final filter string would be: $messages = $db.ReadLog() | where {$_.Message -like "starting*"} | ft -AutoSize | Out-String
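Putting the three steps together, a sketch might look like this. The SMTP server and addresses are placeholders, the SCA server name is from my registrations, and Send-MailMessage is PowerShell’s built-in mail CmdLet:

```powershell
$smtpServer = 'smtp.mycompany.com'                  # step 1: your SMTP server
$db = Get-SCADbServerInfo SMP7SQL
$messages = $db.ReadLog() |
    where {$_.Message -like "starting*"} |
    ft -AutoSize | Out-String                       # step 2: flatten to a String
Send-MailMessage -SmtpServer $smtpServer `
    -From 'sca@mycompany.com' -To 'admin@mycompany.com' `
    -Subject 'SCA: database server log messages' `
    -Body $messages                                 # step 3: send it
```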

Sending Email


Thus you can see that using SCA CmdLets in tandem with the PS built-in ones can help you come up with interesting ways to ease some of your admin pains. Hopefully this will pique your interest in exploring PS more on your own. Feel free to leave a comment if you have any questions.

Since a few people have asked, here’s the list of workflows that you need to execute to make sure that the PowerShell CmdLets work on all machines:

  1. Make sure the SCACommands folder, with all its files, resides under the C:\Users\<your_login_id>\Documents\WindowsPowerShell\Modules path
  2. Enable the execution policy by running Set-ExecutionPolicy RemoteSigned, and then accept the prompts
  3. Enable remoting by running Enable-PSRemoting, and accept the prompts
  4. Check to make sure the Remote Registry service is up and running on all machines
  5. Check to make sure the Windows Management Instrumentation service is up and running on all machines
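The last four items can be run or checked from an elevated PowerShell prompt; a quick sketch (RemoteRegistry and Winmgmt are the Windows service names for Remote Registry and Windows Management Instrumentation):

```powershell
# Policy and remoting (run elevated; accept the prompts)
Set-ExecutionPolicy RemoteSigned
Enable-PSRemoting

# Verify both services are running
Get-Service RemoteRegistry, Winmgmt | Select-Object Name, Status
```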

Hopefully these steps will allow you to invoke the SCA CmdLets successfully. Feel free to post a comment if you run into any issues.

Since we have CmdLets for other SCA tasks, implementing one for obtaining plant information was the next logical thing to do. So without further ado, here’s one for the Smart-3D plant: Get-SCAPlantInfo.

It takes as parameters the SCA server name, site database, and plant name, plus an optional switch, TestNameGenerator, that allows testing the name-generator for the model database. When specified, it invokes the name-generator component for the model database and returns the computed count in the ComputedName property.

Here’s the help screen for Get-SCAPlantInfo. Note that in PowerShell, you can look at help information for any command by using the Get-Help CmdLet:

Get-SCAPlantInfo Help

Here’s the output of running the command against an Oracle database in Smart-3D V11R1. You get information on the plant’s symbol share, the databases that constitute the plant, the last time reports and views were regenerated, the state of the catalog, the ComputedName value, the count of S3D objects, the users connected to the plant, etc.


Plant info with details on report, views & catalog status

I think this will be pretty useful; combined with Get-SCADbServerInfo, Get-SCAPerfCounters, and PowerShell’s scheduled-task and exporting (Excel/Html/etc.) capabilities, you can pretty much implement a dashboard that displays the information relevant to your needs.