Get useful output from Get-MigrationBatch

Whether you are doing an on-premises migration or a move to Office 365, when you issue a PowerShell command you expect useful output. So when you execute the following command:

Get-MigrationBatch

You see the following result:

[Screenshot: the default output of Get-MigrationBatch]

What the… useful? Where is the information about the failure rate and so on? The ECP shows us this information, so why doesn't the cmdlet? What did the developers of this cmdlet think (or not think) an administrator wants to know?

So let’s pimp this command by using simple PowerShell cmdlets.

Get-MigrationBatch | select Identity,Status,TotalCount,SyncedCount,FinalizedCount,FailedCount,NotificationEmails | Format-Table

And now the output is much more helpful than before:

[Screenshot: Get-MigrationBatch output with the selected columns]
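
If you only care about batches that still have failures, you can narrow this down further; here is a small variation on the command above, filtering on the FailedCount property we just selected:

Get-MigrationBatch | Where-Object { $_.FailedCount -gt 0 } | select Identity,Status,TotalCount,SyncedCount,FailedCount | Format-Table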

If you are a PowerShell beginner, you may ask yourself: “How can I find the exact names of the fields I am looking for?” Here is the answer: execute this command, search for the properties you need, and start building your own output:

(Get-MigrationBatch)[0] | select *

This shows you all attributes of the first migration batch (and they are the same for all other batches). If you have only one batch, you can simplify it a little:

Get-MigrationBatch | select *
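
Alternatively, Get-Member lists just the property names without dumping all the values:

Get-MigrationBatch | Get-Member -MemberType Properties | Select-Object Name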

If you plan to work more with migration batches, a good starting point is the TechNet help for Get-MigrationBatch.

Splitting up a big CSV file to multiple smaller CSV files using PowerShell

Sometimes you get large Excel or CSV lists to work with, but you want to split them up by a certain criterion, e.g. company. Your goal may be to create multiple Exchange Hybrid migration batches.

To guide you through this task, my example data looks like this:

Firstname;LastName;Country
Max;Mustermann;Germany
Maria;Musterfrau;Germany
Igor;Smirnow;Russia
John;Dow;USA
Felix;Frisch;Germany

As you can see, it doesn’t matter how the data is sorted.

To get started, you must specify some parameters that control how PowerShell will split the file:

$GroupField = "Country"
$Delimiter = ";"
$csv = "C:\tmp\MyBigCSVWithTonsOfData.csv"
$outfolder = "C:\tmp\CSV-Files\"

What do these parameters do? It’s simple:

  • $GroupField specifies which CSV column will be taken as the identifier for the split process.
  • $Delimiter specifies the delimiter you are using in your CSV.
  • $csv is the full path to the CSV file you want to split up into multiple smaller files.
  • $outfolder is the folder path where the generated CSV files will be stored.

After the preparation you must read the CSV and prepare the data to be split:

$all = Import-Csv $csv -Delimiter $Delimiter
$groupedCollection = $all | Group-Object $GroupField

You see I use the cmdlet “Group-Object”, which acts like the Excel function “Filter”. With this command you can group structured data; the result is multiple groups, each containing the individual entries. So if your CSV input file contains 5 entries, you group by country, and 3 of the people have “Germany”, 1 has “USA” and 1 has “Russia”, then the result of these 2 lines will be 3 groups:

  • One group with 3 entries – the people with country “Germany”
  • One group with 1 entry – the person with country “USA”
  • One group with 1 entry – the person with country “Russia”

[Screenshot: the three groups returned by Group-Object]

Having a deeper look at the return value, you can see that the variable $groupedCollection is an array. You can check this by executing this command:

$groupedCollection.GetType()
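
Each element of this array is a GroupInfo object with three properties we will use below: Name (the grouping value), Count (the number of rows) and Group (the rows themselves). A quick way to see this for our sample data:

$groupedCollection[0].GetType() #Microsoft.PowerShell.Commands.GroupInfo
$groupedCollection | select Name, Count #Germany 3, USA 1, Russia 1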

Working with an array is easy; you probably know it from other scripts you wrote in PowerShell. Just create a foreach loop and iterate through the items:

foreach($group in $groupedCollection)
{
   #print the group name and how many members it contains
   Write-Host ("Group '" + $group.Name + "' // " + $group.Count.ToString() + " Members")
   #write all rows of this group to their own CSV file, named after the group
   $group.Group | ConvertTo-Csv -NoTypeInformation -Delimiter "," | Out-File ($outfolder + $group.Name + ".csv")
}

And voila, the result is that you have 3 CSV files – one for each country:

[Screenshot: the three generated CSV files, one per country]

Now you can continue your work with the new, smaller CSV files. As you may have noticed, I’ve hard-coded the output delimiter to “,”. Of course, if you need another delimiter, feel free to adjust it to your needs.
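
As a side note, Export-Csv combines the convert and write steps and also accepts a -Delimiter parameter, so inside the loop this one-liner would be an equivalent alternative:

$group.Group | Export-Csv -Path ($outfolder + $group.Name + ".csv") -NoTypeInformation -Delimiter ","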

Finally, here is the whole tiny script so you can copy and paste it at once:

$GroupField = "Company"
$Delimiter = ";"
$csv = "S:\CSV-Files\MyBigCSVWithTonsOfData.csv"
$outfolder = "S:\CSV-Files\"

$all = Import-Csv $csv -Delimiter $Delimiter
$groupedCollection = $all | Group-Object $GroupField

foreach($group in $groupedCollection)
{
   #print the group name and how many members it contains
   Write-Host ("Group '" + $group.Name + "' // " + $group.Count.ToString() + " Members")
   #write all rows of this group to their own CSV file, named after the group
   $group.Group | ConvertTo-Csv -NoTypeInformation -Delimiter "," | Out-File ($outfolder + $group.Name + ".csv")
}
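
One caveat: the group name becomes part of the file name. A country column is harmless, but if your grouping column can contain characters that are invalid in file names (for example “/” in a department name), you may want to sanitize the name first. A minimal sketch for the output line inside the loop:

#replace characters that are invalid in Windows file names with an underscore
$invalidChars = [System.IO.Path]::GetInvalidFileNameChars() -join ''
$safeName = $group.Name -replace ("[{0}]" -f [regex]::Escape($invalidChars)), '_'
$group.Group | ConvertTo-Csv -NoTypeInformation -Delimiter "," | Out-File ($outfolder + $safeName + ".csv")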

Using PowerShell to connect to Lotus Notes COM object

Sometimes you have to query and/or change certain data in a Domino Directory. In general, there are 2 options to connect to Lotus Notes:

  1. via LDAP
  2. via the Notes COM Object

Querying via LDAP is a standardized and easy-to-understand way. But it has some disadvantages:

  • you cannot perform write operations
  • multi-value fields are hard to identify

So the alternative is using the COM object, which does not have the disadvantages mentioned above. The question now is: how do you do this in PowerShell?

First of all, you need an installed Lotus Notes client on the machine which will run the script. This Lotus Notes client must be installed in Single User Mode (not Multi-User!) to work with the examples below. A Single User install means that the Notes data folder is stored as a subfolder of the Notes installation directory, not in the user’s profile. Next, I highly recommend a rich editor. I personally prefer Sapien PowerShell Studio, but others like the integrated ISE or PowerGUI work as well.

Before you can query anything, you must make sure that the PowerShell session is running in 32-bit mode. There are several ways to do that, and I have seen each of them work or fail depending on the client operating system, .NET Framework versions, local permissions, … So try out what works best for you.

Option 1:

$PowerShellRunSpace = [System.Runtime.InteropServices.Marshal]::SizeOf([System.IntPtr]) #pointer size in bytes: 4 = 32-bit, 8 = 64-bit

if ($PowerShellRunSpace -ne 4) {exit} #because you have a 64-bit PowerShell

Option 2:

if([Environment]::Is64BitProcess -eq $true) {exit} #because you have a 64-bit PowerShell
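
If you prefer to relaunch your script in 32 bits instead of just exiting: on a 64-bit Windows the 32-bit PowerShell binary lives (counterintuitively) under SysWOW64. The script name below is just a placeholder for your own file:

#start the 32-bit PowerShell host and run the Notes script in it
& "$env:windir\SysWOW64\WindowsPowerShell\v1.0\powershell.exe" -File .\MyNotesScript.ps1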

Now that this is done, we can start connecting to Notes. To do that, you have to understand how Notes “thinks” and works in the background, because that is the way the client behaves, too. To be able to work with Notes you need a Notes session, which can be opened by using your Notes ID and the corresponding password. Once this session is open, you can do anything your ID is allowed to do: open Notes databases, create Notes databases, read and modify documents, … And how does all this work? With some kind of in-line LotusScript! Yes! That means if you can develop LotusScript and PowerShell, then you can do anything in PowerShell which you can do with LotusScript. So let’s look at how the connection to the Domino Directory works:

$NotesSession = New-Object -ComObject Lotus.NotesSession #open the Notes session

$NotesSession.Initialize("mySecretNotesPassword") #provide the password for the ID file

#$NotesSession.Initialize() #if no password is provided, Notes will ask for the password

$NotesAddressbook = $NotesSession.GetDatabase("Domino01/NotesOrg", "names.nsf", 0)

You see, the above code is essentially LotusScript: it opens the names.nsf file from the Domino server “Domino01/NotesOrg”. Now you can continue to open views and documents, edit them, save your changes and do anything you want.
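
To give you an idea what that looks like in practice, here is a minimal sketch that walks the standard ($Users) view and prints every person’s FullName item. The view and item names are the Domino Directory defaults, so adjust them if your design differs:

$NotesView = $NotesAddressbook.GetView('($Users)') #open a view by its name or alias
$NotesDocument = $NotesView.GetFirstDocument() #first document in the view
while ($NotesDocument -ne $null)
{
   #GetItemValue always returns an array, even for single-value items
   Write-Host $NotesDocument.GetItemValue("FullName")[0]
   $NotesDocument = $NotesView.GetNextDocument($NotesDocument)
}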

The COM object is really cool, but it has one very big disadvantage: it is very slow. So I highly recommend using methods like GetDocumentByKey() and GetAllDocumentsByKey() as often as possible instead of iterating over whole views. This will drastically increase the performance of your script.
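
For example, instead of walking the whole view as in the sketch above, a keyed lookup jumps straight to the matching document. The $true argument requests an exact match against the view’s first sorted column, so treat the key below as an illustration only:

$NotesView = $NotesAddressbook.GetView('($Users)')
$NotesDocument = $NotesView.GetDocumentByKey("Mustermann", $true) #exact-match lookup
if ($NotesDocument -ne $null) { Write-Host $NotesDocument.GetItemValue("FullName")[0] }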

Happy Notes-PowerShell-Scripting!