Migrating BANAMEX

7. September 2007 04:14 by Mrojas in General  //  Tags: ,   //   Comments (0)
Yesterday someone tipped me off, and I checked out El Financiero online. There's an article about ArtinSoft there. In case you don't know about us: we do software migration, and BANAMEX decided to upgrade their platform from VB6 to .NET. It's a huge amount of code, and a very interesting project :) See: Article

Quoted by El Financiero on an article about moving to 64-bits

30. August 2007 10:45 by Jaguilar in General  //  Tags: , ,   //   Comments (0)

Some time ago I was interviewed (via email) by El Financiero, a weekly business-oriented newspaper from Costa Rica, regarding 64-bit technologies. A small quote from the interview was published a couple of weeks ago, along with some information I gave them on the advantages of moving to 64 bits.

The technical journalist from the newspaper did an article on how the Costa Rican Central Bank, BCCR, is moving their payments system (SINPE) from 32-bit to 64-bit servers, and the benefits they are getting from the move. These benefits include enhanced speed and database performance, given the large memory capacity of the new architecture. This is a fairly large system that handles over 3 million financial transactions per month.

ArtinSoft had some involvement in moving this system from Visual Basic 6.0 to Visual Basic .NET some time ago, at the dawn of the .NET era. There is even a published case study on the system – you can find it here.

Their current plan is to move all their systems to 64-bit gradually over a period of two years.

You can check out the article here:  BCCR ajustó tecnologías (you may need to be registered with the site).

The importance of the Ready analysis

29. August 2007 10:57 by Jaguilar in General  //  Tags: ,   //   Comments (0)

A large percentage of the work I do here at ArtinSoft is related to what we call  Ready programs. The Ready program is part of the Ready-Set-Go methodology – a migration project methodology developed here at ArtinSoft that has given us great results.

The Ready assessment program, or Ready, is the first stage of this methodology. As you are probably aware, the more planning you put into a project, the higher the probability that the project will be successful. Well, with the Ready, we do an in-depth analysis of the project before we start, and come up with a detailed project plan that takes into account any risks and possible issues with the migration.

The first step in a Ready is a thorough assessment of the size, complexity, migration goals, and testing procedures for your current application. This step involves a 5- to 10-day on-site analysis of the application. During that time, we interview the development, PM and testing teams to get a feel for the project and gather enough information to proceed. Once we come back to our office, we work alongside the development team to create an accurate estimate of the effort required to perform the migration. This normally includes any customization of the migration tools necessary to minimize the manual effort in the project.

The final product of the Ready program is a detailed written report that includes a fixed-cost proposal for completing the migration. This is usually delivered two or three weeks after the on-site visit. This report, on its own, has tremendous value for the organization. It summarizes the requirements for the migration, and the issues that need to be addressed even if the project is not performed by ArtinSoft. It can also help in justifying the need to modernize outdated applications.

The Ready program is a low-cost, low-risk approach to getting detailed information on your migration project. For more information on the Ready, and on the overall methodology, check out the Ready-Set-Go methodology page at ArtinSoft's website.

I/O x86 Virtualization at last!

28. August 2007 06:54 by Jaguilar in General  //  Tags:   //   Comments (0)

Ever since we started working on the Virtual Server seminars, we’ve been hearing about I/O virtualization, and how it will improve the virtualization landscape just as the VT instructions did. Well, today Intel unveiled its vPro platform, which includes this new technology.

The technology is called Virtualization Technology for Directed I/O, or VT-d. VT-d controls access from virtual machines to memory at the physical page level, preventing one VM from accessing another VM’s memory. This has the side effect of virtualizing interrupts and DMA transfers, which in turn should increase the performance of virtual machines, since the VMM no longer needs to trap and emulate that behavior.

For more information, check out this article at Intel’s website, which contains a detailed explanation of the platform. For a more digested approach, check out the coverage at Ars Technica.

Call PHP from C#

23. August 2007 12:00 by Mrojas in General  //  Tags:   //   Comments (0)
Sometimes you might have a VB application that you want to migrate to C#, but you also depend on some PHP code. Sounds familiar? Well, it isn't really a typical scenario, haha. But anyway, if you ever have the urge to do that, the Phalanger project can help you with it; just check it out. And if you need help with the VB6 to C# part, just write me any question you have :)
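
To make that a bit more concrete, here's a minimal sketch of what the interop could look like. This is not an official Phalanger recipe: it assumes you have already compiled your PHP sources into a .NET assembly with Phalanger, and the assembly, class and method names below are hypothetical placeholders. Plain .NET reflection is enough to explore and call whatever types the compiler emitted:

```csharp
using System;
using System.Reflection;

class PhpFromCSharp
{
    static void Main()
    {
        // Load the assembly Phalanger produced from the PHP sources.
        // "MyPhpLibrary.dll" is a made-up name -- use your own output file.
        Assembly phpAssembly = Assembly.LoadFrom("MyPhpLibrary.dll");

        // First, see what the compiler actually emitted; the type names
        // depend on your PHP code and the Phalanger compilation mode.
        foreach (Type t in phpAssembly.GetTypes())
            Console.WriteLine(t.FullName);

        // Then invoke a method on one of those types. "MyPhpClass" and
        // "Greet" are placeholders for whatever your PHP code defines.
        Type phpType = phpAssembly.GetType("MyPhpClass");
        object instance = Activator.CreateInstance(phpType);
        object result = phpType.InvokeMember("Greet",
            BindingFlags.InvokeMethod, null, instance,
            new object[] { "world" });
        Console.WriteLine(result);
    }
}
```

Depending on the Phalanger version and compilation mode, the generated types may even be directly referenceable from a C# project, which would make the reflection detour unnecessary.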

Geek Survival Kit

23. August 2007 09:35 by Mrojas in General  //  Tags:   //   Comments (0)
Like any geek, I live frustrated by the lack of good tools in Windows; Notepad and Paint are just a couple of examples. I kept dreaming of nice, light replacements for those applications. Well, recently somebody wrote a nice page with amazing freeware that you can use: Amazing Tools

MS Virtualization has a new home

21. August 2007 06:33 by Jaguilar in General  //  Tags:   //   Comments (0)

In case you missed it, Microsoft recently unveiled a new Virtualization Website. This website centralizes the information about all of Microsoft’s virtualization products.

I especially like the page about the different Virtualization Solutions offered by the company. It also caught my attention that they now have SoftGrid application virtualization fully integrated with the virtualization stack.

Why upgrade from VB6 to .NET – Part 2: Migration benefits

16. August 2007 06:32 by enassar in General  //  Tags:   //   Comments (0)
In my last post I mentioned some of the motivations of one of ArtinSoft’s largest customers for upgrading their critical Visual Basic 6.0 and ASP applications (around 5M total LOC and 9,000 total users!) to the .NET platform. They expect more than US$40M in accumulated benefits over 5 years as a result of this investment in software migration, and considering the Total Property Benefit (TPB) and the Total Cost of Ownership (TCO), the payback period is 4 years in the most probable scenario.

In this particular case, the following migration benefits would help reduce current costs, increase income, steer clear of new costs, and avoid losing market position:

1- Reducing the number of incidents and the total cost associated with the fixes: With around 3-4 incidents per year at US$3,500-US$4,000 per fix, it was estimated that a migration would reduce this number by 60-70%.

2- Avoiding business disruption: By preventing an increase in the number of incidents, business disruption is avoided. Migration averts a negative impact on the company’s value chain caused by the degradation of a business process supported by the system.

3- Providing a competitive advantage: Migration provides an advantage over competitors, allowing the quick development of new system functionalities demanded by customers. Some of the .NET characteristics that facilitate this are distributed technologies support, Web Services, Remoting and Windows Communication Foundation. It is estimated that the effect of losing competitiveness ranges between 4-5% of revenue per year.

4- Reducing new development efforts: With about 300 new developments per year, a migration to the .NET platform would reduce the effort by 18-22%. Some of the .NET characteristics that allow this are the elimination of registry configuration and DLL registration, better deployment (side-by-side sharing of multiple DLL versions, XCOPY, incremental installations, ClickOnce) and increased productivity (Just-In-Time (JIT) compilation, the Common Language Runtime (CLR), the Common Type System (CTS), the .NET Framework Class Library (FCL), garbage collection, the integrated development environment, the Task List).

5- Increasing new development reuse: Some of the .NET integration characteristics that allow this are the easy interaction between .NET components and legacy systems, and COM interop, which allows reusing components from the original application (see the sketch after this list). Reuse of new developments would range between 16-20%.

6- Improving system performance: the .NET platform provides several improvements in this area, such as multithreading and ASP.NET.
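
To make the reuse mentioned in item 5 a bit more tangible, here is a hedged sketch of the kind of COM interop it refers to: late-bound C# code calling into a legacy VB6 COM component. The ProgID and method name are hypothetical stand-ins for whatever the original application actually exposes:

```csharp
using System;
using System.Reflection;

class ComReuse
{
    static void Main()
    {
        // Look the legacy component up by its ProgID.
        // "LegacyApp.Calculator" is a made-up VB6 component name.
        Type comType = Type.GetTypeFromProgID("LegacyApp.Calculator");
        object calculator = Activator.CreateInstance(comType);

        // Late-bound call into the original VB6 code -- no rewrite needed.
        object total = comType.InvokeMember("AddTax",
            BindingFlags.InvokeMethod, null, calculator,
            new object[] { 100.0 });
        Console.WriteLine("Total with tax: {0}", total);
    }
}
```

For components used heavily, generating an interop assembly with tlbimp.exe gives you early binding and IntelliSense instead of the reflection calls shown here.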

This is just an example of the reasons and some of the expected benefits for a particular migration project; this varies from case to case. However, let’s be honest: any upgrade is generally a complex task, but in most cases the alternative is to fall behind the competition and go out of business. So this might be a good time to assess your investment in business applications.

Moving to a new home

15. August 2007 16:31 by Csaborio in General  //  Tags:   //   Comments (0)
I will now be blogging via WordPress. This is just a matter of convenience for me. Please update your bookmarks / RSS feeds.

New blog address:  http://csaborio.wordpress.com/


How to Find Which Nodes are running your Tasks in the Compute Cluster Scheduler

15. August 2007 13:06 by Csaborio in General  //  Tags:   //   Comments (0)
When you submit a job to be run on a Compute Cluster Server, you will find some information about the running tasks in the bottom pane. Information such as error output, task name, and so on is shown, but there is one vital piece of information that should (IMHO) be shown by default and is not: which nodes are running the current task?

Luckily, this can easily be solved by right-clicking on the column headers, selecting Add/Remove Columns and adding the Allocated Nodes column. This will make it easier to know where to look for output. The following clip shows how it is done (BTW, if you have not checked out Jing, make sure you do, it's amazing):


Oops... never mind, apparently our Community Server blog cannot embed objects correctly :(

Check out the video in this link:

Finding Nodes of your Job

Java Books for business managers

14. August 2007 04:17 by Jaguilar in General  //  Tags: ,   //   Comments (0)

The other day I had a meeting with a client who is considering converting his Informix 4GL application to Java using our tools. It was an interesting situation, since he was in business development, and not really a programmer. He asked me if I could recommend some books on Java from a business perspective. I agreed, without knowing how difficult that task could be.

I couldn’t find any recent books that would give an overview of the platform for non-technical personnel, especially decision-making managers. I found a couple of books from the start of the Java era, 1996/1997, that talked about what I was looking for, but they are out of print and it doesn’t look like they were ever updated. They are:

Does anybody know a good Java book that meets the criteria I’m looking for?
Leave a comment with your recommendations!

How to Specify Boot Order when Using Parallels

13. August 2007 09:18 by Csaborio in General  //  Tags:   //   Comments (0)
Today, while trying to do a V2V migration from Parallels to VMware, I needed to boot my current VM into BartPE so I could create an image with True Image. When the VM started up, I tried all sorts of F-key combos to get to the BIOS screen. In Virtual Server/Virtual PC, pressing F2 does the trick, but in Parallels it was not working.

After taking a trip to the VM preferences, I found that this is set within the VM properties, and is actually quite simple:


How to Debug MPI Applications in Visual Studio 2005 (Part 2)

9. August 2007 06:30 by Csaborio in General  //  Tags:   //   Comments (0)
To recap, in my last post we went through some of the steps that need to be taken when debugging an MPI application, namely:

-Install the x64 remote debugger
-Copy mpishim to an accessible location
-Modify the registry to avoid UNC path problems in the future

Let's go ahead and finish the rest of the steps in order to debug an MPI application.

Step 4: Configure an Empty Job with the Job Scheduler

The job scheduler is the utility through which all jobs submitted to the cluster are managed. If you want the cluster to do something for you, you need to go through the job scheduler. Debugging is no exception, as you need to create an empty job that will host the application you are debugging.

To get started, open the job scheduler and from the File menu, select Submit Job:



Name your job "Debugging Job" and move over to the Processors tab. Select the number of processors you would like to use for this job and then (this is actually quite important) check the box that says "Run Job until end of run time or until cancelled". Failure to check this box will cause the empty job to run and finish, which is not what we want. We want the job to run continuously, so that Visual Studio can attach the processes it launches to this specific job. Don't forget to check this box:



Next, you need to move to the Advanced tab and select which nodes will be part of your debugging scheme.  In this case, I will only use 2 nodes, namely Kim03a (the head node) and Kim02a:



Click on Submit Job and you should see your job running. Make sure you write down the ID of the job (in this case, it is 3), as you will need this info later on!



Step 5: Configure Visual Studio

Open Visual Studio and the project you are working on. Go to the project properties and access the Debugging section. There, instead of the Local Debugger, select the MPI Cluster Debugger:



The following screenshot shows my debugger properties window with all necessary values filled in:



Let's go ahead and talk about each of these values:

MPI Run Command: This needs to be mpiexec for MPI applications.

MPIRun Arguments:
The first argument, "-job 3.0", specifies which job in the scheduler to use. In my case, the ID was 3 when I created the job, and the 0 specifies the task, which every job has by default. We then have "-np 2", which specifies that two processes will be launched for this job. Finally, you see I have "-machinefile \\kim03a\bin\machines.txt". The "-machinefile" argument is used to specify the UNC location of a text file that contains the names of the machines that will be part of this job, one machine name per line.
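
Putting those pieces together for this example (the job ID, process count and UNC path are the values from the walkthrough above), the whole arguments field reads:

```
-job 3.0 -np 2 -machinefile \\kim03a\bin\machines.txt
```

And \\kim03a\bin\machines.txt is just a plain text file listing the participating machines, one per line:

```
kim03a
kim02a
```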

MPIRun Working Directory: Use this to specify the path where any output will be written. Remember NOT to use local absolute paths, but rather UNC paths, to make sure this location is available to every node.

Application Command: This is the UNC path to the MPI application that you would like to debug. This application HAS to be compiled as 64-bit, and the debugging symbols should be in that same directory as well.

MPIShim Location: In this location, specify the path to the mpishim.exe binary that you copied in step 2 of this tutorial.  Remember, mpishim should exist on each and every one of the machines at the specified local path.
MPI network security mode: I usually change it to "Accept connections from any address" to avoid problems.

You probably also noticed that there is an Application Arguments field. There you would specify any additional arguments you would like to pass to the application.

Apply the settings, hit F5, and you should be ready to go and debug your processes. While trying to get this to work, I experienced pretty much every error out there, so post in the comments if you have any issues and I will help you resolve them. Happy debugging!

How to download the Microsoft Compute Cluster Pack

8. August 2007 13:26 by Csaborio in General  //  Tags:   //   Comments (0)
The Compute Cluster Pack can be downloaded from Microsoft's site; however, it is not as trivial as it sounds. These steps will hopefully make it easier to obtain the bits:

  1. Go to http://www.microsoft.com/windowsserver2003/ccs/default.aspx
  2. Click on the Get the Trial Software link:


  3. Click on the big blue button that says Get Started Today
  4. Sign in to Microsoft
  5. Select your country from the list
  6. Fill out the information that is being requested
  7. Review your order total (a whopping $0.00), agree to the terms and conditions and click Place Order
  8. You will get a receipt and can now click on the link to go to the installation instructions
  9. You will then be presented with the option to download the Compute Cluster Pack:



Enjoy!

How to Debug MPI Applications in Visual Studio 2005 (Part 1)

8. August 2007 10:25 by Csaborio in General  //  Tags:   //   Comments (0)
While assisting some customers at a High Performance Computing event, I had the need to remember how to debug an MPI application. See, when you create distributed applications that will run on various computers (nodes), you need to use special tools to debug them. Think about it: you want a centralized Visual Studio instance and the ability to debug each process within the same IDE. Even though the idea sounds demented, the implementation is actually quite simple, provided you follow the steps carefully. Let's get started.

This is a lengthy tutorial, so it will most likely be split into several posts. Edit: It is now a 2-part tutorial; Part 2 is found here.

Step 1 : Install the Remote Debugger

You need to install the Remote Debugger on EACH of the nodes that will run the application you are trying to debug. The remote debugger is included on the Visual Studio 2005 distribution media within the “\vs\Remote Debugger\x64” folder.
 
You need to install it on each of the compute nodes (and on the head node if it is going to be working as a compute node).  Once you install it, make sure you fire it up so that it will be awaiting connections.

You need to use the x64 remote debugger. Distributed applications on Windows Server 2003 Compute Cluster Edition NEED to be 64-bit if you would like to debug them with mpishim.



Step 2: Make mpishim Easily Accessible

When you install the remote debugger, mpishim is installed along with it. Mpishim is the binary responsible for launching the processes on each of the nodes for debugging. The default location for mpishim is "C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\Remote Debugger\x64". The trick here is to copy all the binaries from that x64 folder to a place that is easier to specify (such as c:\windows\system32). By doing so, you do not need to specify the whole path of mpishim when modifying the project's debug properties (which will be done later on).

Furthermore, you want to make sure that you copy mpishim to the same location on EACH compute node. That is, if you copied mpishim to c:\windows\system32 on Node 1, then you must copy it to the exact same directory on the rest of the nodes as well.



It is a good idea to copy all of the files within that directory in order to avoid missing a dependency that mpishim may have.
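
If copying the folder to every node by hand gets tedious, a throwaway C# snippet like the following can push it out over the administrative shares. The node names are the ones used in this tutorial, and it assumes the default C$ admin share is reachable from wherever you run it:

```csharp
using System.IO;

class DeployMpishim
{
    static void Main()
    {
        // Replace with your own compute node names.
        string[] nodes = { "kim03a", "kim02a" };
        string source = @"C:\Program Files\Microsoft Visual Studio 8" +
                        @"\Common7\IDE\Remote Debugger\x64";

        foreach (string node in nodes)
        {
            // Same local path on every node, reached through the admin share.
            string target = @"\\" + node + @"\C$\Windows\System32";
            foreach (string file in Directory.GetFiles(source))
                File.Copy(file, Path.Combine(target, Path.GetFileName(file)), true);
        }
    }
}
```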

Step 3: Modify the Registry

Cmd.exe has an issue with UNC paths. MPI debugging relies on these paths, so just to be safe and make sure nothing breaks, carry out the following modification on each of the nodes. Access the following registry key:

HKEY_CURRENT_USER\Software\Microsoft\Command Processor

Add a DWORD entry entitled “DisableUNCCheck” and set the value to 1:
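
If you would rather script this than click through regedit on every node, a minimal C# equivalent looks like this (standard Microsoft.Win32 registry APIs; note that it writes to HKEY_CURRENT_USER, so run it under the account that will be doing the debugging):

```csharp
using Microsoft.Win32;

class DisableUncCheck
{
    static void Main()
    {
        // Opens (or creates) the Command Processor key and sets the DWORD.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
            @"Software\Microsoft\Command Processor"))
        {
            key.SetValue("DisableUNCCheck", 1, RegistryValueKind.DWord);
        }
    }
}
```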



That about covers the first half; in my next post I will cover what needs to be done at the scheduler and Visual Studio level. Read the second part in this link.

Why upgrade from VB6 to .NET – Part 1: Compelling reasons to migrate

7. August 2007 11:42 by enassar in General  //  Tags:   //   Comments (0)
One of ArtinSoft’s most recent customers performed a thorough analysis around upgrading all their critical Visual Basic 6.0 and ASP applications, accounting for about 5 million lines of code, to C# and ASP.NET 2.0. End of life for Visual Basic 6.0 and the Sarbanes-Oxley Act (SOX) played a big role in their decision. The end of extended support for VB 6.0 on March 31, 2008 means that there will be no access to new technologies that allow the company to take full advantage of its hardware investment, and that the business will find it hard to react to market changes. Plus, their own corporate IT management policies state that all business areas should only use software and operating systems supported by an approved provider and have an appropriate patch/upgrade configuration mechanism.

A migration aligns with the corporate strategies in many ways. Business-wise, they highly value any investment in initiatives that contribute to improving customer loyalty, increasing employee productivity and reducing costs, and it was estimated that improving system performance would increase user productivity between 0.04 - 0.05%. And regarding their systems, they look forward to minimizing the learning curve and required training, procuring an extensible and easily maintainable code base, and maintaining knowledge through the preservation of business rules, comments and cross references.

But why choose ArtinSoft’s VB to .NET automated conversion solutions and Microsoft’s .NET as the target platform? Well, they listed several reasons.

I’m not going to evangelize on the benefits of .NET, but I would like to mention a couple of our customer’s motivations. The .NET platform is the base of a complete strategy that integrates all of Microsoft’s products, from the OS to market applications. It is a response to a growing market of web-based business processes. In the long run, it seems that Microsoft’s intention is to replace the Win32 API (the Windows API) with the .NET platform. This could be because of the Windows API’s lack of detailed documentation, uniformity and cohesion between its components, which causes issues when developing Windows-based applications. .NET addresses most of these issues by providing a single, easily extensible group of interconnected blocks that eases the development of robust applications. In general, the .NET Framework has a lot of advantages for increasing productivity.

On the other hand, they chose ArtinSoft’s solution basically because it proved to have a much lower risk and overall cost than the alternatives. They’ve been developing most of their applications since 1990, and an automated migration allows them to preserve all the business rules that exist in a core-business application.

In my next post I’m going to provide more details about the expected benefits of this specific migration project.

7z what?

4. August 2007 01:59 by Csaborio in General  //  Tags:   //   Comments (0)


Sending files can sometimes be a bit pesky. Depending on the size limit set by your SMTP server, you will usually end up splitting the file into smaller chunks or following some other method. There are various compression mechanisms out there that work OK, such as zip or rar. Lately, I have found a new compression format called 7z.

I have been compressing archives, and the level of compression of 7z (especially with Word files that have embedded images) is amazing. I was able to compress a 24 MB Word file down to 4 MB in less than 5 seconds. I really don't know if RAR offers the same (or better) compression ratio, but AFAIK there is no RAR compression solution for OS X.

7z has various clients available for lots of platforms out there. My personal favorite for OS X is 7zX. Download one of them and give it a try, you might save a byte or two.

VHDMount and TrueImage - Be Careful when formatting drives

26. July 2007 05:59 by Csaborio in General  //  Tags:   //   Comments (0)
Today I was helping a colleague restore an image that was created with TrueImage to an empty VHD. The process can be narrowed down to the following steps:

  1. Create the image using TrueImage
  2. Create two VHDs on the target VM (one to store the image, and the other to restore the image to)
  3. Mount the VHD that will hold the TrueImage image using vhdmount and format it
  4. Copy the Image from step 1 onto the VHD
  5. Unmount the image and commit the changes
  6. Boot into drive image and restore the image

Sounds easy? Well, usually it is, but today I faced a problem with step 3 above. When I mounted the VHD and formatted it, everything seemed fine, but when TrueImage tried to read the VHDs, it barfed and said they were "unsupported". This was very weird, since I had mounted the VHDs time and time again without problems and the contents were there. The VHDs were formatted as NTFS, so there really was no reason for the "unsupported" file format.

So we decided to format the partitions from within TrueImage using its new hardware wizard. After formatting them, I mounted the one that would hold the TrueImage image, copied the file over, and then tried to restore the image once again.

To my surprise, this time around TrueImage read both drives and the image I had just copied onto the VHD. After that, it was just a matter of specifying the source and target, and the P2V migration was underway.

Bottom line, if you are planning on using empty VHDs with TrueImage (the bootable CD), make sure that you format the VHD with the utility itself and not within Windows using vhdmount.  Why?  Dunno, but if someone would like to shed some light on this issue, please be my guest!


Migrate SQL Server 2000 to SQL Server 2005

19. July 2007 17:20 by Mrojas in General  //  Tags:   //   Comments (0)

The first time you have to upgrade your database from SQL Server 2000 to SQL Server 2005, you might think it is a simple process.

And, in a very simplistic way of looking at the world, it is.

But a REAL migration, from a REAL production database, can be a lot more complex than what you might imagine.

A good friend of mine has recently experienced the various twists and turns of a migration like this, and I really recommend his blog; he gives "Jedi"-style tips you must consider if you want the Force to be with you.


For all things related to VB6 to C# or VB.NET software migration, be sure to visit ArtinSoft's website.
