
Visualizing S3D Data With SCA CmdLets & Graphviz

Visualizing data as a graph is a very effective way to “understand” the data. A graph can be thought of as a set of nodes connected by edges that link related data. In this blog we’ll look at how to automate graphs for a couple of S3D scenarios:

• Reporting plant versions

This graph relates S3D version with plants that are at the same version.

• Reporting IFC status for plants in a site

This shows IFC status for plants serviced by an IFC Server

A .doc file is provided below with a sample PowerShell script. Just download the file, rename it to .zip and extract the .ps1 file within.

A Taste of Graphviz

Graphviz is an open source graph visualization tool that helps you create such graphs, using its DOT language to describe the layout and structure.

So the above layout can be written in the DOT language as:

digraph graphname {
rankdir=TB
node [shape=box]
Server [label="Server-01"]
P1 [label="Plant-01"]
P2 [label="Plant-02"]

Server->P1 [label="V11"]
Server->P2 [label="V11 R1"]
}

In this case:

• digraph – Specifies a directed graph (the most common kind)
• rankdir – Specifies the layout direction: top-to-bottom (TB) or left-to-right (LR)
• label – Specifies the label for each node
• -> – Links a node with its related node

Calling Graphviz from PowerShell

For Windows, there’s a zip package that can be downloaded from the Graphviz site. We’ll be using the dot.exe from this package to construct graph bitmaps. So go ahead and download the zip and extract the exe to somewhere in your path. In my case, it’s under the current folder where I’m running the PowerShell ISE.

To run the above command in PowerShell, use this snippet:

@"
digraph graphname {
rankdir=TB;
node [shape=box]
Server [label="Server-01"]
P1 [label="Plant-01"]
P2 [label="Plant-02"]
Server->P1 [label="V11"]
Server->P2 [label="V11 R1"]
}
"@ | .\Graphviz\bin\dot.exe -Tpng -o .\Test.png



Highlight the lines above and right-click to choose the “Run Selection” option. This will execute the highlighted text and you should see a Test.png in your current folder. The -Tpng -o .\Test.png clause instructs dot.exe to output a “png” file and save it as Test.png.
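dot.exe can emit other formats besides PNG. For example, swapping -Tpng for -Tsvg produces a scalable SVG; the tiny graph below is just a placeholder to show the flag:

```powershell
# Same pipeline as above, but emitting an SVG instead of a PNG
@"
digraph g { a -> b }
"@ | .\Graphviz\bin\dot.exe -Tsvg -o .\Test.svg
```

SVG output is handy when the graph gets large, since it stays crisp at any zoom level.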

Using SCA CmdLets with Graphviz

Since SCA comes with a host of CmdLets that allow you to probe Site, Plant and IFC information, let’s see how we can use a couple of these in concert with Graphviz to output something useful for a S3D administrator.

A couple of scenarios one can think of are:

• Outputting a list of plants under a site with their versions
• Outputting IFC status for a list of plants

Visualizing S3D Versions:

Let’s look at the first one. SCA has a cmdlet, Get-SCAPlantInfo, which when invoked as:

Get-SCAPlantInfo <server> -AllPlants

will output a list of all plants on the server. Something like this:

So if we needed to output this information with version as a starting node and plant name as edges, it would look something like this in .dot language:

digraph S {
rankdir=TB;
"09.01.16.00" ->"S3D"
"09.01.16.00"->"S3D_Copy"
}

To automate the creation of this grammar, we can use a PowerShell script that acts as a DSL for the DOT language, as detailed in a blog by Doug Finke here (and explained in his book Windows PowerShell for Developers – a great read).
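For reference, the core of such a DSL can be sketched in a few lines of PowerShell. The function names New-Graph and Add-Edge match the ones used below, but this is a simplified guess at the idea, not Doug Finke’s actual code:

```powershell
# Minimal DOT DSL sketch: New-Graph emits the digraph wrapper,
# and each Add-Edge call emits one quoted edge line inside it
function New-Graph {
    param([string]$Name, [scriptblock]$Body)
    "digraph $Name {"
    & $Body
    "}"
}

function Add-Edge {
    param([string]$From, [string]$To)
    "    `"$From`" -> `"$To`""
}
```

Because both functions simply write DOT text to the pipeline, the output can be piped straight into dot.exe, exactly as in the raw example earlier.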

Using the PowerShell script we can wrap the calls to create a new graph and add edges within it:

New-Graph S {
Get-SCAPlantInfo Local_SQL -AllPlants | ForEach {Add-Edge $_.Version $_.Plant}
} | .\Graphviz\bin\dot.exe -Tpng -o .\Plants.png

The resultant Plants.png shows this (below). You can already see the usefulness in visualizing this data:

Visualizing IFC Status

Taking this a step further, Get-SCAPlantInfo has an -OnlyIFCStatus option that lets you output IFC information for a plant. Something like this:

If we want to run it against all the plants in a site and create a data graph, we would need to:

1. Gather all plants on the server using Get-SCAPlantInfo -AllPlants
2. Filter the ones that we’re interested in, maybe by site or name
3. Call Get-SCAPlantInfo with the plant and use -OnlyIFCStatus to get IFC data
4. Use Add-Edge to add the plant with its IFC data
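The steps above can be sketched roughly as follows. The site filter and the IFC property names (Site, Status) are assumptions on my part, so check the actual output of Get-SCAPlantInfo -OnlyIFCStatus on your system first:

```powershell
# Rough sketch of steps 1-4 (property names Site and Status are assumed)
New-Graph IFC {
    Get-SCAPlantInfo Local_SQL -AllPlants |
        Where-Object { $_.Site -eq 'MySite' } |               # step 2: hypothetical filter
        ForEach-Object {
            $ifc = Get-SCAPlantInfo $_.Plant -OnlyIFCStatus   # step 3
            Add-Edge $_.Plant $ifc.Status                     # step 4
        }
} | .\Graphviz\bin\dot.exe -Tpng -o .\IFC.png
```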

And we get the resultant graph:

That should be all. You can take these concepts and use them with other CmdLets like Get-SCADbServerInfo (db & user info), Get-SCAComputerInfo, etc. Hopefully this will encourage you to use Graphviz and SCA CmdLets to easily and efficiently visualize S3D data.

PowerShell Script


Running SCA on remote machines with PowerShell 3.0

I recently had a requirement to start SCA automatically on remote machines. This was needed for our stress test where we normally have a bunch of servers taking part in the test with different roles: MSSQL Server, IIS Server, File Server, etc.

Now SCA, version 5.7.2 onwards, comes with a console app, SCARunner, that allows you to run SCA from the command line. That is fine if you are starting SCA from a batch file interactively, but it still doesn’t solve the issue of running it remotely. Luckily PowerShell, with its remoting and WMI capabilities, is geared for just this kind of task, and helped me solve this problem.

In this blog, I’ll show you one way of accomplishing this task.

First Step

The first step is to install SCA on a remote server and then register the server within SCA. I chose to use the default hostname as the registration name since it makes it easier to use within the PowerShell script. In the image below, my machine name is SJOSHI.

Second Step

The next step is to test SCARunner on the remote machine. To do that, you need to open a DOS/Command window in the SCA installation folder, which is C:\Program Files (x86)\Intergraph\SCA

To start SCARunner you need to specify these options:

-s  – The registration name. So in my case it would be SJOSHI

-f  – The folder where you want to save the files. The scan xml and perfmon .blg file

So for example, if I needed to start SCA on my machine, I would do:

SCARunner -s SJOSHI -f D:\Temp\Low

When invoked, SCARunner first runs a scan of the registered server and saves the scan xml. Then it creates a perfmon collector with the name of the machine and starts it.

To stop this collector, I need to do:

SCARunner -stop

And this will stop and remove the collector from perfmon.

Third Step

To be able to invoke this from PowerShell remotely, I need to first enable remoting on the server. This is done by running the command Enable-PSRemoting -Force on the server from a PowerShell admin prompt.

Fourth Step

So at this stage we have the server configured (SCA & PowerShell) and we know the options to run SCARunner. The remaining part is to work out the PowerShell script, so let’s tackle that now.

PowerShell, with its WMI Cmdlets, allows you to invoke methods against WMI classes. In our case, we need to start a process, SCARunner, so the Win32_Process class with its Create method looks like an ideal candidate.

To invoke a WMI method, we can use the Invoke-CimMethod Cmdlet which takes the class name, method name and an arguments dictionary. So let’s see how you can use it.

Say I want to open a file, Test.txt, in my D:\Temp\Low folder using this class. I can do this on the command line by typing notepad D:\Temp\Low\Test.txt

And the file does open up. To do the same in PowerShell, you need to type:

Invoke-CimMethod -ClassName Win32_Process -MethodName Create -Arguments @{CommandLine="notepad D:\Temp\Low\Test.txt"}

And notepad does open up:

What’s with the -Arguments parameter? Well, the Get-Help on Invoke-CimMethod shows it to be a dictionary (key-value type).

So it takes a key and a value. The key name is CommandLine and the value is the path to the exe, with arguments if required. The key name is obtained from the Create method description as shown here

So we can get notepad to open via WMI. Now let’s try it against a remote machine. If you do a Get-Help Invoke-CimMethod in PowerShell, you will see that it takes a -ComputerName or -CimSession parameter.

So type the same command with an additional –ComputerName option followed by a machine name that you have for this test:

Invoke-CimMethod -ComputerName XYZ …rest of the command as before.

As you can see from the image below, the remote process is created and shown under Task Manager. Do note, remote processes don’t show a UI for security reasons.

So let’s now try the SCARunner application against a remote machine on which SCA is installed (and the server registered). Everything will be the same as before except the CommandLine, which should now include the path to SCARunner along with the start-up options:
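A sketch of that invocation, using the default SCA install path and the registration name from my earlier steps (adjust the host name and the -s/-f values for your setup):

```powershell
# Start SCARunner remotely via WMI; the exe path is the default SCA install location
$cmdLine = '"C:\Program Files (x86)\Intergraph\SCA\SCARunner.exe" -s SJOSHI -f D:\Temp\Low'
Invoke-CimMethod -ComputerName SJOSHI -ClassName Win32_Process `
                 -MethodName Create -Arguments @{ CommandLine = $cmdLine }
```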

And you can see that it does start successfully which is pretty cool!

To stop it, you can just use the -stop option in the $cmdLine variable:

$cmdLine = "C:\Program Files (x86)\Intergraph\SCA\SCARunner.exe -stop"

In my setup, I use a CSV file to import a list of servers using the Import-Csv cmdlet, and startup SCA on those machines.
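A minimal sketch of that loop, assuming the CSV has a Name column holding both the host name and the SCA registration name (the column name is my assumption):

```powershell
# Start SCA on every server listed in Servers.csv (column 'Name' is assumed)
Import-Csv .\Servers.csv | ForEach-Object {
    $cmd = "`"C:\Program Files (x86)\Intergraph\SCA\SCARunner.exe`" -s $($_.Name) -f D:\Temp\Low"
    Invoke-CimMethod -ComputerName $_.Name -ClassName Win32_Process `
                     -MethodName Create -Arguments @{ CommandLine = $cmd }
}
```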

Hope this motivates you to use PowerShell to automate remote-tasks which would otherwise be hard to accomplish in a simple and efficient way.


Git – Easy Version Control For Your Bulkload Files

If you have worked with Smart-3D for some time, you know from experience that there are a bunch of files which need to be tracked for changes during the lifetime of a project. Obvious ones that come to mind are: reference data or bulkload files, custom source code files (.vb/.cs/etc.), sql scripts, etc.

Some people use custom in-house software to track changes, some use Excel spreadsheets to note this, etc. The gist is – you need some kind of configuration management in place so that any changes made to critical files can be monitored and rolled back if needed. In this article, I’m going to show you how a free & robust open-source version control system, Git, can easily serve the typical needs of 3D administrators.

Setup:

To start with, head over to http://git-scm.com/download/win and download the install utility. Run the exe to start the installation process and accept the following prompts:

Adding, Tracking and Committing Files:

In this workflow, I’m going to be using a simple bulkload file, PipelineProp-Ex.xls, that adds a custom interface, IJUAPipelinePropEx, to CPPipelineSystem. I’ll be using this file to go over the typical Git commands that one would use. The file is something like this:

The file resides under D:\SP3D\Test. So to make Git aware of this file, I need to:

A) Create a Git repository in that folder

B) Add the file to the Git repository or more correctly the staging area to begin tracking changes

C) Commit the file when I’m sure about the changes

Remember, each time you modify a file, you have to “add” or “stage” the file for Git to “commit” that version of the file. The above 3 steps are what you will use during most Git operations.

Let’s look at these in detail:

1. Right-click on the folder and select the “Git Bash Here” option. This should open the Git command shell as shown below

2. The shell supports typical commands like ls -l to list files and clear to clear the screen
3. Make the current directory a Git repository by using the cmd: git init
4. Let’s look at the status now. So issue a git status
5. At this stage, Git is telling me that there is a file that is untracked and that I need to add it to the staging area so that Git can track it.
6. So I’ll add the file to the staging area by using the cmd: git add Pipe <TAB>. Use the tab key to fill in the rest of the filename. This pushes the file to a “staging” area. So if you change the file now and do a commit, the last changes won’t go through, since Git requires that you “add” and then commit to get the “last” version. Let’s issue a git status to see what we have:
7. I’m going to commit the clean xls file so that I have a point from where to start modifying the file. So I’ll issue the command git commit -m "Initial clean state" and then do a git status
8. As you can see, Git is telling us that the directory is “clean”. In other words there are no changes to track now.
9. Let’s fix the identity issue that Git complained about earlier by using these cmds (change to your id & email): git config --global user.name 'Sunit Joshi' followed by git config --global user.email sunit.joshi@intergraph.com
10. Now let’s go ahead and make changes to the file. The change is simple – we add ‘A’ to each column in CustomInterfaces sheet and then save the file.
11. Issue a git status cmd, and sure enough Git will report that there are changes now. Note that it’s asking us to again “add” the file to the staging area and then commit if needed.
12. If at this stage I’m not sure about the change, I can undo all the changes completely by doing a git checkout PipelineProp-Ex.xls to roll back. So let’s try it:
13. As you can see, the “A” that I added in the columns is gone. Also make sure to exit Excel, otherwise you will get the message about being unable to “unlink the file”, like I did. At this stage I’ll do a git status again (you can clear the screen using the clear cmd) and I have this now:
14. Since I rolled back the changes, the file is not in the staging area and also not committed. Remember, in Git you need to first add the file to the staging area and then commit the changes. You can do this in 2 steps, or in just one using: git commit -a -m "Initial clean state" (which adds and commits)
15. Follow this with a git status cmd and this is what you should see:
16. I’ll go ahead and add the “A” back in the columns and save the file. Then I’ll do a git status followed by git commit -a -m "Changed for bulkload" and then again a git status to see the state
17. I’ll now bulkload this file, which will cause the “A” added to be removed, and a log file created by the bulkload process
18. A bit later I can see that the bulkload process has finished since the “A” have been removed
19. Let’s look at what Git says now. So do a git status. Sure enough, we see that it reports that the xls file has been modified and also that a new file, .log, has been created.
20. So at this stage, I can either commit the file or roll back. Since I’m ok with the change, I’ll commit the file and note the details in my commit message
21. Since I want to commit only the PipelineProp-Ex.xls file, I’ll do it in 2 steps: git add PipelineProp-Ex.xls followed by git commit -m "After bulkload"
22. If I wanted to save the log file too, I could do a git add SJ_CatV11.log followed by git commit -m "After bulkload log".
23. If I don’t want Git to track the log file, I can create a .gitignore file in the current directory and put *.log in it, which will cause Git to ignore any .log files.
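For quick reference, the whole add/commit workflow above condenses to these commands (run from D:\SP3D\Test):

```powershell
# Condensed version of the numbered steps above
git init
git config --global user.name 'Sunit Joshi'
git config --global user.email sunit.joshi@intergraph.com
git add PipelineProp-Ex.xls
git commit -m "Initial clean state"
# ...edit the file, then stage and commit in one step:
git commit -a -m "Changed for bulkload"
# ...bulkload runs and modifies the file again:
git add PipelineProp-Ex.xls
git commit -m "After bulkload"
# optional: stop tracking bulkload logs
Set-Content .gitignore '*.log'
```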
Checking Out File Versions:

So far we have looked at versioning files, so let’s see how we can checkout a file from a point back in time or in other words from an earlier commit.

Let’s say someone comes along and wants the excel file that we bulkloaded successfully, with all the changes – how do we get them this file? This is where commit history and the checkout process come into the picture. Let’s see how we can use these to solve this issue:

1. First issue a git log to see the commit history
2. The one we are interested in is the one with the message “Changed for bulkload”. The entry we need is the commit_id, the value in yellow, which is an SHA-1 checksum of the file contents stored in the Git repository. You can select the whole entry (though Git needs only the first few characters) and then do a git checkout commit_id
3. At this stage, it’s best to copy the file to a different folder, say C:\Temp, and then open it there to see if it has the earlier data. If you open it in the same location, you may run into the issue where, after you do a git checkout master (next step), Git will say that there are changes that you need to commit or stash (most likely due to Excel). So at this stage you may need to do a git reset --hard and then a git checkout master. If I open the Excel file now, I can see the changes at that point in time – nice & simple
4. So at this stage, I can go ahead and email this file to the concerned analyst.
5. If you do a git status now, you will see that it reports that we are not on any branch. To see our file in its latest state, we need to go back to the master branch.
6. So just do a git checkout master and then check the file, and it should be the version from the “last” commit. So always remember to go back to your master branch after a checkout.
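The checkout round-trip above, condensed (abc1234 is a placeholder for the first few characters of your actual commit id):

```powershell
git log                 # note the id of the "Changed for bulkload" commit
git checkout abc1234    # detached state at that commit
# copy PipelineProp-Ex.xls somewhere safe, e.g. C:\Temp, then return to the latest state:
git checkout master
```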

That should be enough to get you started with versioning your files using Git. I would encourage you to check out the official documentation at Git’s website, which goes into more detail and covers other advanced operations.


PowerShell -SCA CmdLets for Smart-3D Admins

As you all may know, MS added an entire new command and scripting language, PowerShell, as part of the base OS starting with Windows 7. Although it has been available for XP as a downloadable component, it has really been made into, what you would call, a first-class citizen with Windows 7 – it’s included with the OS and exposes 236 CmdLets that cover a vast gamut of functions, from querying services to running remote jobs.

I have been playing with PowerShell for some time now and I have come to the conclusion that it’s a powerful tool in an S3D Admin’s toolkit. The scripting language is simple to start with and is targeted at admins rather than developers, although you can create really complex scripts with it.

Here’s a simple (because of the syntax) but useful example that queries different servers for services whose names start with “MSSQL”:

Get-Service -Name MSSQL* -ComputerName <server1>, <server2>

As you can see, in one line you are able to run a command, Get-Service, against multiple remote machines, and have the output returned in a nicely formatted table, sized to fit the console window. I think this pretty much conveys the power and depth of the PS language and supporting environment. Hopefully this should pique your interest in knowing more about PowerShell and seeing how it may best fit your work environment. I won’t go over much of the language & its syntax, since MS sites have loads of information on it.

What I will go over now are the CmdLets that have been added to SCA (V5.3), which I think will surely benefit 3D Admins. I’ll go over in detail how to use the CmdLets, including configuration and options.

Step 1 – Find your PowerShell Modules Path

The SCA CmdLets are included as a PowerShell module that needs to be installed in your PS module search path. The easiest way to find this is to query the $Env:PSModulePath variable as shown in the bitmap below:

Module Path

So you would create a folder called SCACommands underneath one of your module search paths and copy the downloaded files into that folder, as shown below:

SCACommands Folder

Step 2 – Confirm Module Availability

The next step is to make sure that the module is available to be loaded. You can do that by running the command Get-Module -ListAvailable, and it should list SCACommands as one of the modules:

ListModule SCACommands

Step 3 – Import SCACommands & List Functions

The next step would be to load the SCACommands module using Import-Module SCACommands. You can then check the functions exposed by the module by running Get-Command -Module SCACommands. If everything goes well, you should see the output below:

List Functions
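Steps 1 through 3 condense to the following; which folder you pick from $Env:PSModulePath is up to you:

```powershell
# Step 1: list the module search paths and pick one for the SCACommands folder
$Env:PSModulePath -split ';'

# (copy the downloaded files into <chosen path>\SCACommands by hand, then...)

# Step 2: confirm the module is visible
Get-Module -ListAvailable | Where-Object { $_.Name -eq 'SCACommands' }

# Step 3: import it and list its functions
Import-Module SCACommands
Get-Command -Module SCACommands
```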

Step 4 – Using Get-SCAServers

This CmdLet allows you to quickly list all the servers registered with SCA. The Name column is important here, since that is what the Get-SCADbServerInfo and Get-SCAPerfCounters CmdLets require as an input. Get-SCAComputerInfo requires a hostname instead, so that you don’t have to register all machines in SCA.

Step 5 – Using Get-SCAComputerInfo

This CmdLet lets you easily gather hardware and OS specs from a machine including logged on users. It requires a hostname as input and the machine does not have to be registered with SCA.

You can use a command line like $comp = Get-SCAComputerInfo localhost to store the information in a local variable called $comp. You can then output useful information like hard-disk specs, video-card info, etc. as shown above. To query more than a single machine, use pipeline input, as in: 'machine1', 'machine2' | Get-SCAComputerInfo
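Put together, that usage looks like this; the exact properties you drill into will depend on what the CmdLet actually returns on your system:

```powershell
# Query one machine and keep the result for later inspection
$comp = Get-SCAComputerInfo localhost
$comp | Format-List *            # see every property the CmdLet returns

# Query several machines via pipeline input
'machine1', 'machine2' | Get-SCAComputerInfo
```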

Step 6 – Using Get-SCADbServerInfo

This CmdLet allows you to gather information about a database server, including version info, memory usage, connected users, Oracle patch info, etc. It accepts pipeline input, so you can query multiple machines if required. Again, it’s best to store the output in a local variable so that you can view properties that are collections, like ConnectedUsers. If you need to view only users, you can use the switch -OnlyUsers, and that will cause the command to emit only the users logged on to the database server.
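A short sketch of that usage; Local_SQL here is a registration name as returned by Get-SCAServers in my setup:

```powershell
# Store the result so collection properties like ConnectedUsers can be expanded
$db = Get-SCADbServerInfo Local_SQL
$db.ConnectedUsers

# Or ask for just the logged-on users directly
Get-SCADbServerInfo Local_SQL -OnlyUsers
```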

Information on Oracle Patches is sorted by DateApplied field

Information on users connected to MSSQL database

Step 7 – Get-SCAPerfCounters

Important OS performance counters can be viewed using this CmdLet. It also accepts an optional switch parameter, -GetDetailed, that when specified gets information on expensive queries for MSSQL and hit ratios for Oracle.

Oracle

MSSQL

I hope that these CmdLets will be a useful addition to a 3D Admin’s toolkit. Best would be to download and play with them, and as always, let me know if you would like me to go into more detail on their usage. In the next series, I’ll go over CmdLets for monitoring 3D symbols, comparing shares and checking catalog state. Till then, happy scripting!
