Resize VHDs

13. March 2007 07:39 by Jaguilar in General  //  Tags:   //   Comments (0)

I just saw over at the Virtual PC Guy’s Blog that a new tool is available that allows you to resize existing VHDs. You can check it out at the vmToolkit website.

According to the announcement, this tool lets you resize VHDs (both increase and decrease their size), and can also be used to convert between Fixed and Dynamic disks.

Virtualization's downsides and how to minimize them/overcome them

9. March 2007 09:01 by Jaguilar in General  //  Tags:   //   Comments (0)

Today I saw an article over at ComputerWorld that talks about Virtualization's downsides. It brings out some interesting points, but I think you can easily overcome them. Here are my comments on some of the points they criticize.
(Since I am more familiar with Virtual Server than with competing products, I will concentrate on Virtual Server’s features)

  • Increased uptime requirements: This is something that has been on my mind since I started working with Virtual Server. Recently I had the chance to set up a Virtual Server host cluster, and I think that is the way to go to minimize this concern. With Windows’ clustering capabilities, you can take out a node in the cluster, and another node will continue running the virtual machines. The virtual machines will only be unavailable for a few seconds while the resource group moves from one node to the other.
  • Bandwidth problems: One of the recommended scenarios for running Virtual Server is to remove everything from a NIC on the host server except for the virtualization driver. You can extend this recommendation and use several NICs on the host, each NIC associated with just one VM. Still, if you have something like a very high-traffic website, you’ll probably be better off running it on a physical server instead.
  • Cost accounting - license compliance: Microsoft has released new licenses for their server products that make them virtualization-friendly. You can review them here, and see how many licenses you need for your planned configuration. Licenses for other applications may be messy, though.
  • Vendor support: This is something that the industry will have to sort out. Something similar happened with the move to multi-core systems. Some vendors treated dual-core CPUs as two processors (thus requiring two licenses), while others adopted a “per-socket” approach. Others adopted even stranger policies (dual-core = 1.75 licenses ???). Again, the market will have to adapt to the new virtualization paradigm.
  • Management Complexity: Managing virtual machines as opposed to physical machines is definitely more complex – you basically have to perform all the management tasks that you normally do for physical boxes, PLUS the overhead of managing virtual machines. Management tools are still in their infancy, but with the eventual release of “Carmine” (System Center Virtual Machine Manager) plus a fully WMI-based API, the management effort should be reduced significantly.

Then again, there are some workloads that don’t work well when virtualized. Database servers are a classic example of this – IMHO it is a better idea to have one large database server that hosts several databases in a single SQL Server instance (for example) than to have several virtual machines, each with a separate SQL Server instance hosting different databases.

Dispelling VB6 Migrations Fear Uncertainty and Doubt

8. March 2007 10:46 by Fzoufaly in General  //  Tags:   //   Comments (0)

Jeffrey Hammond of Forrester Research yesterday released a piece to help dissipate the FUD around Visual Basic upgrades.

The executive summary of the Trends piece "Keys to Successful VB6 Migration: Dispelling App Dev Professionals' Fear, Uncertainty, And Doubt" can be found at: http://www.forrester.com/Research/Document/Excerpt/0,7211,41746,00.html

I agree with Jeffrey's view of the situation.  In a few words, VB6 customers should seriously consider a migration (it's about time!) and with the proper project consideration migrations can be executed almost painlessly.  I certainly recommend the research!

Multiple Remote Desktops (on Windows)

8. March 2007 10:21 by Csaborio in General  //  Tags:   //   Comments (0)
I use CoRD for multiple desktops on OS X - it works like a charm.  But what about Windows users, what can they use?  After reading a bit on Dugie's Perspective, I came across MuRD (Multiple Remote Desktops), which does exactly this trick. 

Further reading on his blog points to vXCopy, which is like xcopy on steroids (and xcopy was like the copy command on steroids).

I haven't tested either of the two, but I am sure the virtualization crew at the upcoming Munich event will put vXCopy to the test when copying VMs over the network.

Longhorn Hypervisor in Action (and an Intel intro to Virtualization)

5. March 2007 10:49 by Csaborio in General  //  Tags:   //   Comments (0)

Longhorn Server will have a virtualization role that will basically replace the need to use Virtual Server when working with Longhorn.  If you want to play around with the new virtualization role right now, you can't: this feature is not present in any public beta. 

If you are curious about what this "role" is and want a little more information on how it will impact the way we run virtual machines, take a look at the following demo done by Jeff Woolsey:


Video: Longhorn - Windows Server Virtualization

Try this link instead for no-registration hassle


If you are still a bit lost in terms of what virtualization is, check the Intel webcast below:

These videos introduce virtualization technology and point out the differences between software- and hardware-based virtualization. They explore existing and emerging usage models of virtualization: server consolidation, disaster recovery, development/testing, load balancing, fast provisioning, etc. They introduce the next generation of Intel® Virtualization Technology (Intel® VT) and VT-d (Directed I/O) and suggest usage models for the future enterprise.

The Vista Story...

5. March 2007 10:07 by Csaborio in General  //  Tags:   //   Comments (0)

Following on my previous post, I have finished installing Vista on Parallels.  There were various things I had to do to successfully finish the upgrade:

  • Expand the size of my virtual disk to at least 15 GB.  The Vista installer will expand various files and needs this space.
  • Increase the memory size of the VM to 512 MB.
  • Upgrade to the latest version of Parallels and use the menu option that tricks Vista into believing that the machine is OK for installing Vista:


First off, all my applications worked after the upgrade; I was very impressed by this.  On the other hand (I don't know if this is a Vista or Parallels issue), things are kind of slow.  Windows XP running with 256 MB of RAM ran a lot faster than Vista with 512 MB of RAM.  It seems like my CPU usage is higher when using Vista under Parallels.

These issues have made me revert to Windows XP, which I will keep using until an upgrade for either Vista or Parallels that addresses this issue is released. 

VMWare Fusion with Graphic Card Support

5. March 2007 09:55 by Csaborio in General  //  Tags:   //   Comments (0)

At this point, Parallels is one of the key players when it comes to virtualizing any x86 operating system on OS X.  I use it on a daily basis and cannot live without it.

VMWare has been working on its own version of VMware Player for OS X, called Fusion.  The betas that I have worked with have not been that impressive in terms of performance.   VMWare claims that some of the sluggishness comes from debug code embedded in the application.  On the other hand, Fusion offers some innovative features:

  • Support for 64-bit VMs
  • Ability to assign one CPU/core to the host and the other to the virtual machine

Recently they upgraded their beta to support 3-D acceleration directly from the graphics card.  This means that very soon, Mac users will be able to play 3-D games on Windows within OS X.  Considering that the list of Mac games is extremely short, this will really make things interesting for the Mac gaming market.  There is a video on YouTube with a demo of the application running:

http://www.youtube.com/watch?v=xF_CoXsXtk4 

Your move, Parallels ;)

A lightweight PDF Reader

5. March 2007 09:44 by Csaborio in General  //  Tags:   //   Comments (0)

One of the things I really like about OS X is that it has a built-in PDF viewer.  What's so special about this PDF viewer?  I think what I like most is the fact that it is not bloated.  Adobe's Acrobat Reader can take a while to launch, requires around 20 MB of downloads to install, and can perform really poorly on older machines.

I stumbled upon Sumatra PDF over the weekend, and I think it is a great replacement for Adobe's Reader on Windows.   I think you cannot beat Sumatra, especially when it comes to the price (free!).

Do you know of any other PDF readers that are fast and lightweight?

Virtualization Events in Europe

1. March 2007 05:20 by Jaguilar in General  //  Tags:   //   Comments (0)

In a couple of weeks we will be teaching two Virtualization for Developer Labs in Europe. The first one will be in Munich, on March 13–15, and the second one will be the following week in Reading, on March 20–23.

In these labs we show you in great detail how to leverage Virtual Server’s COM API and WMI methods in your own management applications. You will also learn how to create scripts to automate the management of Virtual Server installations, and you’ll get to use the betas of System Center Virtual Machine Manager.

For more information, don’t forget to check out the Virtualization Events website.

HP Developer Workshops - Long Beach, CA

1. March 2007 04:58 by Jaguilar in General  //  Tags:   //   Comments (0)

The first HP Integrity Developer Workshop for 2007 is coming up in a couple of weeks. It will be held in Long Beach, CA, on March 13-15.

In this workshop you get 3 days of training in your OS of choice (HP-UX, OpenVMS, Linux or Windows), plus an HP Integrity rx2620 server with the new dual-core “Montecito” Itanium chip. It is an excellent deal, so make sure to check out the Workshop overview and the agenda (PDF link) and reserve a space today!

Effective Lines of Code in Visual Basic migrations

28. February 2007 15:46 by jpena in General  //  Tags:   //   Comments (0)

One of the most important metrics that we use to measure the size of a Visual Basic 6.0 code base to be migrated is called Effective Lines of Code.  This measurement represents all those lines that will require a certain amount of migration effort and in the case of Visual Basic 6.0 to .NET migrations, it includes the following:

  • Visual Basic 6.0 code lines: this is the main component of the code base to be migrated and denotes all those VB6 code lines written by the programmers of the source application.
  • Visual lines of code: this includes all the code that is automatically generated by the VB6 forms designer.  This code belongs to .frm and .ctl files and is not visible to the programmer (if you open a .frm or .ctl file in a text editor such as Notepad, you will see this visual code at the beginning of the file).  The reason we include this as part of the code base to be migrated is that the VB6 user interface also represents a manual migration effort, together with the VB6 source code.

Naturally, the Effective Lines of Code metric does not include blank or comment lines since they do not imply any migration effort.
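As a rough illustration of the idea (this is a simplified sketch, not our actual counting tool), an effective-line counter for VB6 source might look like this:

```python
def effective_lines(vb6_source: str) -> int:
    """Count effective lines in VB6 source: skip blanks and full-line comments.

    Simplified on purpose: real VB6 counting would also need to handle
    trailing comments, line continuations (_), and string literals.
    """
    count = 0
    for line in vb6_source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue  # blank line: no migration effort
        if stripped.startswith("'") or stripped.lower().startswith("rem "):
            continue  # full-line comment: no migration effort
        count += 1
    return count


sample = """\
' Reads a value
Dim x As Integer

x = 42
Rem done
"""
print(effective_lines(sample))  # 2 effective lines: the Dim and the assignment
```

The same skip-blanks-and-comments rule applies equally to the designer-generated "visual" code at the top of .frm and .ctl files.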


The myth of the working day

23. February 2007 16:18 by jpena in General  //  Tags:   //   Comments (0)

Working days, or business days, are usually said to be 8 hours long.  On an average day, you may get to work at 8:30 am and leave by 5:30 pm, having 1 hour for lunch (although this differs from one culture to another, just ask someone from Mexico and you’ll see what I mean!).

Anyway, people usually spend 8 hours at work every day.  However, this doesn’t mean that people are 100% productive on their assigned tasks during those 8 hours.  During a normal working day, people also check email, make phone calls, talk to their coworkers and do other things that are not necessarily related to the tasks they are working on.  As a result, working days are pretty much like soccer games: while a soccer game is said to last 90 minutes, the effective playing time is usually much less than that.  Likewise, the effective working time in an 8-hour business day is lower than 8 hours.

The number of effective hours may vary from one organization to another and from one individual to another.  Organizations that keep good project metrics may have a better idea of their average number of effective working hours per day.  The important thing to keep in mind is that even when a team member is assigned full-time to a task, it cannot be assumed that he or she will devote 8 hours per day to it.  Therefore, it makes sense to expect that a task with an estimated effort of 16 person-hours will take a little more than 2 days to complete with one resource.
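To make the arithmetic concrete, here is a trivial sketch; the 6 effective hours per day figure is purely an illustrative assumption, and each organization should substitute its own measured average:

```python
def calendar_days(effort_hours: float, effective_hours_per_day: float = 6.0) -> float:
    """Convert an effort estimate into elapsed working days for one resource.

    The default of 6 effective hours per day is an illustrative assumption,
    not a measured value.
    """
    return effort_hours / effective_hours_per_day


# A 16 person-hour task is "2 days" on paper (16 / 8),
# but closer to 2.7 calendar days at 6 effective hours/day:
print(round(calendar_days(16), 1))
```

This is exactly the soccer-game effect: the schedule should be driven by effective playing time, not by the nominal length of the game.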

PaXQual: a silly language for analyzing and rewriting Web Pages

23. February 2007 12:42 by CarlosLoria in General  //  Tags:   //   Comments (0)

Let us gently start meeting PaXQuAL, a prototype language that we are shaping and adjusting for the emergent purpose of symbolically expressing simple analysis and transformation tasks around Web pages, all circumscribed in the context of refactoring, as we have done in previous posts.

And as we have already done days before, sometimes we just want to digress a little bit from practical and realistic issues, just to expose some theoretical ideas we find somehow interesting (and probably nobody else does). I can only promise that no Greek letter will be used, at all (in part because that font is not allowed by the publishing tool, I confess).

Anybody (if any) still reading this post is allowed to right away express the nowadays classical: “Yet another language? How many do we count by now?” The claim is justified, because everybody knows that every problem in Computer Science is solved by proposing a (the) new one. Now it is my turn, why not. It’s a free world. For the interested reader, a technical paper will hopefully be available with further details at this site, soon.

Actually PaXQuAL (Path-based Transformation and Querying Attribute Language is its real name; it is pronounced Pascual) is not that new and different from many other languages developed by real researchers in academia and industry. We wanted to imagine a language for querying and transforming structured data (e.g. XML, HTML), and of that sort we have many available, as we know. What new material can be proposed in this field by someone like us? Actually, what we really want is to operationally relate CSS with a special sort of theoretical weird artifact we had been exploring some years ago, which we may dare to call Object-Oriented Rewrite Systems, or Term-Rewriting Systems (TRS) with extra variables and state (a result of some work developed by, and jointly with, actual researchers some years ago).  Considering TRS in this case is natural, because CSS is indeed a kind of TRS, and that field has a rich offering of tools for useful automated reasoning. And we can find them useful here, we guess.

The question that pushed us back to the old days is: given an interesting, simple and practical language, like CSS is, what kind of object-oriented rewriting logic can be used to describe its operational semantics? You may not believe it, but this is a very important issue if we are interested in reasoning about CSS and HTML for refactoring purposes, among others. And we are, aren’t we?

CSS is rule-based, includes path-based pattern matching and is feature-equipped (semantically attributed), which all together yields a nice combination. CSS can be considered “destructive” because it only allows adding or changing (styling) attributes of tags, while the remaining “proper content” is not destructively rewritten. For the same reason, it is not generative (in contrast to XSLT and XQuery). And that leads to an interesting paradigm. For instance, the following is a typical simple CSS rule for setting some properties of every tag of the kind body.

body {
     font-family: Arial, Helvetica, sans-serif;
     background-color: #423f43;
     text-align: center;
}

Of course, more explicit rules like this one can be declared, but furthermore, an inheritance (cascading) mechanism implicitly allows attributes to be pushed down or synthesized, as we know from attribute grammars.

That is all nice, but we feel we have to be original, and we want to propose the crazy idea of using something similar to CSS for purposes beyond setting style attributes: for instance, for expressing classification rules that allow recognizing patterns like the ones we explained in previous posts, e.g. that a table is actually a sort of layout object, navigation bar or menu, among others. Hence, we would have a human-readable querying and transformation language for Web pages, a sort of CSS superset (keeping CSS as a metaphor, which we think might be a good idea):

Let us by now just expose some examples (where we note that the concrete syntax of PaXQuAL is not yet definitive). For instance, we may want to eliminate the bgcolor attribute of any table having it, because it is considered deprecated in XHTML. We use the symbol “:-” for denoting execution of the query/transformation, as in Prolog.

 :- table[bgcolor]{bgcolor:null;}

We may want to add a special semantic attribute to every table directly hanging from a body, indicating it may be a surface object for some later processing. We first must statically declare a kind of table, “sTable”, accepting a surface attribute, because we are attempting to use static typing as much as possible (“Yes, I am still a typeoholic”).

@:- sTable::table[surface:boolean;]{surface:false}

Symbol “@:-” is like “:-” but operating at the terminological level. And then we have the rule for classifying any table instance hanging from the body tag, directly:

:- body sTable{surface:true;}
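Since PaXQuAL itself is only a prototype, a rough Python sketch (using the stdlib html.parser; the class and counter names here are illustrative, not part of PaXQuAL) may help show what the two rules above would do operationally:

```python
from html.parser import HTMLParser


class TableRules(HTMLParser):
    """Sketch of the two rules: drop the deprecated bgcolor attribute from
    tables, and classify tables hanging directly from <body> as 'surface'."""

    def __init__(self):
        super().__init__()
        self.stack = []          # open-tag stack, so we know each table's parent
        self.out = []            # rewritten HTML fragments
        self.surface_tables = 0  # tables classified as surface objects

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            # Rule 1: table[bgcolor]{bgcolor:null;} -- remove the attribute.
            attrs = [(k, v) for k, v in attrs if k != "bgcolor"]
            # Rule 2: body sTable{surface:true;} -- parent must be <body>.
            if self.stack and self.stack[-1] == "body":
                self.surface_tables += 1
        self.stack.append(tag)
        rendered = "".join(f' {k}="{v}"' for k, v in attrs)
        self.out.append(f"<{tag}{rendered}>")

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)


doc = '<body><table bgcolor="#fff"><tr><td>x</td></tr></table></body>'
rules = TableRules()
rules.feed(doc)
print("".join(rules.out))     # the bgcolor attribute is gone
print(rules.surface_tables)   # one table classified as a surface object
```

The point of PaXQuAL is precisely to replace this kind of ad-hoc traversal code with declarative, CSS-like rules.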

Many more "interesting" issues and features still need to be introduced; we will do that in forthcoming posts. Hence, stay tuned.

smbclient

22. February 2007 18:04 by Mrojas in General  //  Tags:   //   Comments (0)
I like Linux and consider it a very interesting OS alternative. However, sometimes there are simple things that I just do not know how to do. Windows is still everywhere, and getting things from a Windows box to a Linux box in particular can be tricky. For example, recently I had to restore a database from a Windows DB2 to a Linux DB2, so I had the backup and needed to move it to the Linux box. So use smbclient! But how?

To connect to a Windows share, do something like this:

smbclient -U domain/username //machine/sharename

This will ask for your password, and then you are connected. smbclient works just like an FTP client. But how do you copy a whole directory? I found these instructions on the internet:

smb: > tarmode
smb: > lcd /tmp
smb: > recurse
smb: > prompt
smb: > mget your_directory/

As simple as that. I hope that helps.
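For repeated transfers, the same interactive session can be scripted. Below is a small sketch that builds the equivalent non-interactive smbclient invocation using its -c option; the share, user and directory names are just the placeholders from above, and it assumes smbclient is installed on the Linux box:

```python
import subprocess


def smb_fetch_dir(share, user, directory, dest="/tmp", run=False):
    """Build (and optionally run) a non-interactive smbclient command that
    mirrors the interactive session above: tarmode, lcd, recurse, prompt, mget.

    share/user/directory are placeholders; smbclient will still prompt
    for the password when the command is executed.
    """
    commands = f"tarmode; lcd {dest}; recurse; prompt; mget {directory}/"
    argv = ["smbclient", share, "-U", user, "-c", commands]
    if run:
        subprocess.run(argv, check=True)
    return argv


print(smb_fetch_dir("//machine/sharename", "domain/username", "your_directory"))
```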

Virtual Machine Additions for Linux download link

22. February 2007 04:26 by Jaguilar in General  //  Tags:   //   Comments (0)

For some reason, several people have told me that they are no longer getting the link to download the Virtual Machine Additions for Linux on Microsoft Connect. If that happens to you, try the direct link: https://connect.microsoft.com/content/content.aspx?ContentID=1475&SiteID=154 . You will still need to enter your Passport (Live ID) in order to access the Connect website, but that link should take you directly to the Linux Additions page.

Going Vista

20. February 2007 09:13 by Csaborio in General  //  Tags:   //   Comments (0)

So what is the story with Vista?  You've read the hype, you've seen the reviews, but I bet not many have messed around with it.  I will take the challenge and not only install it, but upgrade my current Windows XP virtual machine running in Parallels to Vista Ultimate.

Basically I am doing this because I do not want to install Vista on a clean image and have to reinstall all the software, which would require re-configuration.  What I have on my VM that I hope does not break in Vista is the following:

  • Live Writer
  • Visual Studio 2005
  • Office 2004
  • Visual Source Safe

Not too bad, huh?  I will keep posting my progress as I move to the new OS by Microsoft.

As of now, I have just upgraded to the latest Parallels version, which should let me upgrade to Vista... launch the installer and TADA:


One click later, I get my first obstacle:


Turning off the VM and increasing its memory... BBL

Virtual PC 2007 FINAL is out!

19. February 2007 08:36 by Jaguilar in General  //  Tags:   //   Comments (0)

Today Microsoft released the final version of Virtual PC 2007. You can download it here. This version fully supports Vista, both as a host and a guest, supports AMD and Intel hardware virtualization, and also supports 64-bit host operating systems.

You can get some more information at the Virtual PC Guy’s WebLog, or directly on the VPC 2007 homepage.

Bad software is terrible for business and the economy.

18. February 2007 02:11 by Fzoufaly in General  //  Tags:   //   Comments (0)

A recent article by Jason Pontin in the New York Times began with exactly these words.  It went on like this:

... Software failures cost $59.5 billion a year, the National Institute of Standards and Technology concluded in a 2002 study, and fully 25 percent of commercial software projects are abandoned before completion. Of projects that are finished, 75 percent ship late or over budget.

The reasons aren’t hard to divine. Programmers don’t know what a computer user wants because they spend their days interacting with machines. They hunch over keyboards, pecking out individual lines of code in esoteric programming languages, like medieval monks laboring over illustrated manuscripts.

Worse, programs today contain millions of lines of code, and programmers are fallible like all other humans: there are, on average, 100 to 150 bugs per 1,000 lines of code, according to a 1994 study by the Software Engineering Institute at Carnegie Mellon University. No wonder so much software is so bad: programmers are drowning in ignorance, complexity and error.” ...


Doesn't the above paragraph look like the perfect reason why an automated migration of a valuable application makes sense?  When you are doing an automatic migration, you do not have to guess the intentions of the users.  You have the perfect specification: you have a working application!  Then you might ask, if it is working, why should you even touch it in the first place?  And we are back to the reasons to migrate.  Typically, an application becomes a candidate for migration if it continues to support the business but needs to evolve, and it is written in a technology/platform that does not provide the best advantages in the current business scenario.  When an application meets these characteristics, one of the common decisions is to throw it away and re-build it!  Here is where we again enter the cycle in which most projects go down an unmanageable spiral of over time and over budget, mainly because of the issues in translating business requirements into working code.  An alternative is automated migration.  Take the best possible specification (the app itself), use the computer to upgrade it to a more modern environment (automatic migration), and take advantage of the latest techniques that software development tools can provide (.NET).  Why start from scratch when all you need is to continue to evolve a working app on a modern set of technologies?  VB6 applications can be moved to .NET; it is possible to extend their life, recover all the embedded knowledge and continue to extract value from them for years to come.

If programming is as hard as the New York Times implies, why shouldn't we use techniques to reduce the amount of required programming?  Automatic migration is one of those techniques.  The article goes on to illustrate another potential solution: Intentional Programming.  The idea is to capture the intentions of the users and translate them into usable programs.  Again, more support for my thesis: why not use a working application as the source of intentions?

Time reporting fun, Part II

17. February 2007 13:43 by jpena in General  //  Tags:   //   Comments (0)

In order to get the most from a time reporting system, several requirements should be met.  Historical data that is accumulated in the system will be more accurate and meaningful depending on the way team members report their hours.


Periodicity is very important.  Time reports are usually more accurate when team members update their hours daily.  If reporting is done weekly, team members will hardly remember how much time they spent on each task at the beginning of the week.


Insist that team members report their hours accurately.  Sometimes people don’t work exactly 8 hours a day, so when time reports are “flat” (i.e. 8 hours every day), you may be looking at a symptom of inaccurate reporting.


It is also important to break down tasks into meaningful sub-tasks, so team members won’t be confused when reporting their hours.  If possible, include a description of the tasks; this description will serve as a future reference for post-mortem analysis and historical data retrieval.


Finally, it is not advisable to use reported hours as a criterion to reward team members.  Doing so may introduce more biases into the reports and may cause a negative team response.

Time reporting fun

17. February 2007 12:54 by jpena in General  //  Tags:   //   Comments (0)

Establishing a time reporting system within an organization can be a very challenging task, but it presents important advantages once the system is in place.  Let’s face some hard facts about time reporting:

  • Time reporting is necessary: if you don’t know how much time the team is spending on each task, you’ll hardly know if the original effort estimates for the tasks were correct.  Also, time reporting allows you to keep historical data that will be helpful in estimating future projects and optimizing your processes.
  • Time reporting is overhead: of course time reporting is not part of the main tasks that your team members are supposed to execute.  Because of this, your time reporting system must be fast, friendly and easy to use.  If a team member spends more than five minutes reporting his/her hours, then something’s wrong with the system.  Also, if a project manager or team leader has to spend hours chasing team members that haven’t reported their hours on time, then there is definitely a problem.
  • Time reporting is cultural: nobody really likes time reporting when it is first introduced.  If there is no plan to communicate the advantages of time reporting to team members, they will probably be reluctant to report their hours in a periodic and timely manner.  Basically, you have to sell everybody the idea that time reporting is important and key to the project’s success.  Organizations that have succeeded at creating a time reporting culture now possess extensive historical data and knowledge about their own processes.
