Wednesday, December 3, 2008

Permission Denied - SQL Express 2005 Problem

I installed SQL Express 2005 on my Windows Vista PC, and a SQL Server Express 2005 instance with the name GOPINATHM-PC\SQLEXPRESS was created successfully.

As a first step in using the new SQL Server Express 2005 instance, I started creating a database with the following query:

CREATE DATABASE TestDB

GO

Unexpectedly, execution of the above query failed with the following error message:

Msg 262, Level 14, State 1, Server GOPINATHM-PC\SQLEXPRESS, Line 1
CREATE DATABASE permission denied in database 'master'.

The error message indicates that I don't have enough permissions to create the database. The login I'm using to access my Windows Vista PC has administrative privileges, but I'm still not granted administrative privileges on the SQL Server instance.

Looking through the documentation of SQL Server Express, I found that

Windows Vista users that are members of the Windows Administrators group are not automatically granted permission to connect to SQL Server, and they are not automatically granted administrative privileges.

Now it is very clear that even though I'm an administrator on my Windows Vista OS, I don't have administrative rights on the SQL Express 2005 server. So I need to get administrative rights.

How to Grant Administrative Rights on SQL Express 2005?

  1. Log in to Windows Vista using your administrative account.
  2. Open the SQL Server Surface Area Configuration application (Start --> All Programs --> Microsoft SQL Server 2005 --> Configuration Tools --> SQL Server Surface Area Configuration).
  3. Click the Add New Administrator link (pointed out in the image below).

CREATE DATABASE Permission Denied - SQL Express 2005 - Image 1

  4. A new window titled 'SQL Server User Provisioning on Vista' pops up and displays the available permissions in the left panel.
  5. Select the permission 'Member of SQL Server SysAdmin role on SQLEXPRESS' in the left panel and add it to the right panel using the add button (the button with the > text).

CREATE DATABASE Permission Denied - SQL Express 2005 - Image 2

  6. Click the OK button to save the changes.
  7. That's all; your Windows login now has administrative privileges on the SQL Server instance.
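
As an optional sanity check, you can run the original CREATE DATABASE statement from code. This is a minimal C# (ADO.NET) sketch, assuming the instance name GOPINATHM-PC\SQLEXPRESS and Windows (integrated) authentication; adjust the connection string for your machine.

using System;
using System.Data.SqlClient;

class CreateTestDb
{
    static void Main()
    {
        // Assumes the local SQL Express instance name and Windows authentication.
        const string connectionString =
            @"Data Source=GOPINATHM-PC\SQLEXPRESS;Initial Catalog=master;Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("CREATE DATABASE TestDB", connection))
        {
            connection.Open();
            command.ExecuteNonQuery();   // Fails with Msg 262 if the login lacks CREATE DATABASE permission.
            Console.WriteLine("TestDB created.");
        }
    }
}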

HAPPY DATA-BASING

A Tour of 3rd Party Tools

1. DevExpress
2. Infragistics
3. Telerik
4. ComponentOne
5. Caphyon Advanced Installer
6. Aqua Data Studio (for E-R designs)
7. FarPoint Spread
8. DotNetBar
9. SourceOffSite
10. Multisite
11. ClearQuest
12. Fiddler
13. Report Sharp-Shooter

Monday, December 1, 2008

Top Reasons Why We Should Have Requirements Documents

1. Requirements are not features. You simply must know what your users must have to do their jobs. Distinguish thoughtfully and carefully between what is needed and what the vendor is offering. At the same time, you must be able to factor into the requirements documentation other considerations, such as budget limitations and regulatory constraints. You will need to train your users how to specify their requirements, which takes tact, training, and patience. A request usually starts with: "All I really need is " and then wanders away. A vague or ambiguous statement -- regardless of its intent -- will consume user time, IT time and cause some other loss, such as the opportunity to do another project. This issue is methodology neutral: Agile methods will have the same problem if user time is not focused and users are not trained. You cannot fulfill the requirements until they're precise and complete and you understand why they're important to the business.
2. Features are not requirements. Eliminate decisions made by squeaky wheels --- sometimes referred to as "design by whine" -- where the loudest department gets the most resources. Your goal is to make certain that business decisions made with regard to requirements (and their associated expenditures) will generate the return you're expecting. Start by working backward, to match the problems solved with existing software to identify the problems that must be and can be solved only with new software. Don't lose anything you have now that is essential to the business; it's easy to forget "hidden" requirements that oftentimes are taken for granted or ignored.
3. Work with your resources and manage with discipline. Developing an effective requirements document costs money and managing resources requires discipline. A politically expedient decision, even if it's cost effective in the short run but is not the right decision in the long run, will cost time and money -- eventually. From a strategic point of view, ask if the requirements process itself is going to cost more than it's worth. You will find that using the enterprise requirements documentation process will help you avoid useless meetings, which saves time and money, but more about that later. When the appropriate constituents are communicating with each other, they accept the value of their participation because they can track the project's payback to the entire organization.
4. On-going documentation is essential. It's necessary that you continue to manage your requirements, because sometimes internal departments continue to push for a project, application or feature that was denied in the last upgrade or purchase. Perhaps it wasn't worth the cost, or its payback wasn't acceptable to the enterprise as a whole. By maintaining continuity in your requirements documentation, you can see why a prior request was denied. If the reason hasn't changed, it's likely there is no need to investigate it again -- and you can avoid bringing in a "new" solution to solve an equally "new" problem. Referencing prior documentation also can help you eliminate ongoing costs for applications and hardware that are no longer used.
5. Identify and understand the workarounds and desk drawer systems. You need to understand what in your current application made a workaround or desk drawer system an attractive solution. Then, make certain the upgrade eliminates these systems and their causes, with this caveat: You might learn, in the course of such an investigation, that a certain desk drawer system has a unique function or user, and that it's best left alone.
6. Focus your users' attention. You want to learn from them what can be done better, faster or more efficiently, but realize that users don't always know their current business needs. In fact, users at different levels of an organization have different perceptions of business needs, priority and urgency. They tend to segregate by department or management level -- dismissing problems because "it's not an IT issue" -- but an effective requirements document requires integration and impact analysis for all departments. Know, too, that the needs of other stakeholders (senior management, operations personnel and human resources, for example) must be considered. User frustration comes from asking for one thing and receiving another; help them articulate their needs carefully and fully. When what a user asks for is not possible, feasible or within budget, say so, because unmet expectations foster dissatisfaction. Always ask what's the relative value of a new "requirement" to avoid spending $10 to save a dime. And don't forget: users take what they have for granted, so make sure it's carried forward in the new request.
7. Benefit from "outside" expertise. An objective outsider, can, will and should ask the questions an insider cannot. What works? What doesn't? What frequent requests create problems because they're difficult to meet with the existing application? That's the outsider's function -- to have no vested interest in how the problem is solved. It's a common pitfall in requirements documentation to describe a problem in terms of its supposed solution -- which might not be the best, or most cost effective, approach. An outsider also can provide useful guidance on how to handle regulatory requirements that affect information systems, but that most business people are not aware of, such as the prohibition against retail businesses retaining credit card numbers after a transaction.
8. Consider your upgrade as an investment and apply metrics to it. A comprehensive requirements document will help you decide if the benefits of the new application or upgrade justify the cost. How much more productive can the organization be with the upgrade, compared to the current application? Track paybacks for projects and information assets to determine when to re-invest or stop further investment. It's possible that an application was a good investment -- at the time. When it was installed, perhaps it saved every one of your 5,000 clerks 30 minutes a day -- but today, you have 5 clerks and it saves them three minutes a day. That feature probably isn't worth the cost. And some stand alone features -- such as desktop publishing -- aren't needed now, thanks to the tools available in word processing applications.
9. Save time. Reduce rework by spending time in the requirements gathering and analysis process, where it's much cheaper to eliminate a "need" than realizing it costs too much (or isn't cost effective) downstream in the design, development or application stage. Time invested in the initial steps of the process provides more time for decision making at the right time. More control of the process will help keep it moving; participants will know what to expect as you move from blue sky thinking to brainstorming to understanding to decision making. Tracking the results of your meetings will document what you've agreed to do and what you've agreed not to do and who made the decision.
10. Save money. To start, you will eliminate ongoing costs for applications and hardware that your requirements information gathering tells you are used no longer. With the right subject matter experts and decision makers involved at the right time, you will not be able to move forward on ideas that sound promising but contain potentially expensive problems that would have to be solved later in the process -- and at a greater cost. You will open many avenues to managing costs, whether it's eliminating unnecessary systems, consolidating a value-added system or evaluating long-term versus short-term value. You might learn, for example, that in the year required to acquire and install a new system the opportunity to use it will disappear. One of the most promising benefits of developing enterprise requirements is the strong possibility you will establish standard solutions to many of your seemingly "unique" problems, obviating the need for custom solutions -- and that can be a real time and money saver.

Wednesday, November 12, 2008

Version Control with Tortoise SVN (part - 2)

TortoiseSVN Visual Studio Integration

Download the following Subversion.zip file, which contains settings files for VS2005 and VS2008:
http://www.esnips.com/doc/6fe6f31e-ae60-4838-9be7-6fe22661d0e0/subversion

The settings file gives you a new toolbar, a new menu, and new items on the context menus in the solution view.  You get some nice TortoiseSVN standard icons to make it even easier.

This all works with Visual Studio 2005 and 2008 (and even the Express Editions), along with a standard installation of TortoiseSVN.  Unless you want to edit the command line in the External Tools, I recommend installing TortoiseSVN to the default location.

It’s quick to set up: just go to your “Tools” menu, choose “Import and Export Settings” and follow the instructions. Ensure the tick boxes for the settings to import are all ticked.  If the settings don’t work the first time, try again, as Visual Studio can be a bit flaky when loading.  The import will replace your External Tools because they have to be stored in the correct numbered slots due to the way they are implemented in Visual Studio.

Files contained in the Subversion.zip file:

SubversionMenu.vssettings

This is a settings file for Visual Studio 2008.  This adds a menu to the IDE for TortoiseSVN with the appropriate icons.

SubversionMenuToolbar.vssettings

This is a settings file for Visual Studio 2008.  This adds a menu for TortoiseSVN as well as a toolbar using the appropriate icons.

SubversionMenuToolbarContexts.vssettings

This is a settings file for Visual Studio 2008.  This adds not only the menu and toolbar, but also adds the items to the appropriate context menus for files and solutions.

SubversionMenuToolbarContextsVS2005.vssettings

This provides the menu, toolbar and context menus for TortoiseSVN in Visual Studio 2005.

SubversionInstall.vbs

Use this file if you only want the External Tools; they will be appended to the existing list.  Otherwise use the settings files.  Don’t use this script in addition to a settings file.

This simply installs a set of External Tools into Visual Studio for common TortoiseSVN operations. It can be installed on versions above Visual Studio.NET (version 7.0). Currently it is configured for Visual Studio 2008 (version 9.0), to make it work on other versions change the variable “strVisualStudioVersionNumber” as outlined in the file’s comments.

Also, if you have installed TortoiseSVN in a non-default location, make sure that you change the variable “strTortoiseSVNBin” to the correct binary path. Make sure that the backslashes are doubled up.

Supported TortoiseSVN operations

The following Subversion/TortoiseSVN features are covered in the integration:

  • Commit - Commit the files to the repository
  • Update - Update the current working version
  • History - Get the history for the selected file
  • Diff - Get the diff compared to the base version
  • Blame - Find out who committed the crimes in the file
  • Revert - Undo local changes
  • Modifications - Check to see if any files have been modified
  • Edit Conflicts - Edit the conflicts that arise from merging/updating
  • Resolve - Mark the file as resolved for conflicts
  • Repository - View the repository on the server
  • Project History - Get the history of the entire project
  • Add Solution - Add the solution being edited to source control
  • Branch/Tag - Perform a branch or tag on the current working copy
  • Settings - Set up TortoiseSVN

 

References:
1. TortoiseSVN
https://www.projects.dev2dev.bea.com/Subversion%20Clients/TortoiseSVN.html

Tuesday, November 11, 2008

Some Diff/Merge Tools

If the tools we provide don't do what you need, try one of the many open-source or commercial programs available. Everyone has their own favourites, and this list is by no means complete, but here are a few that you might consider:
WinMerge
WinMerge [http://winmerge.sourceforge.net/] is a great open-source diff tool which can also handle directories.
Perforce Merge
Perforce is a commercial RCS, but you can download the diff/merge tool for free. Get more information from Perforce [http://www.perforce.com/perforce/products/merge.html].

KDiff3
KDiff3 is a free diff tool which can also handle directories. You can download it from here [http://kdiff3.sf.net/].
ExamDiff
ExamDiff Standard is freeware. It can handle files but not directories. ExamDiff Pro is shareware and adds a number of goodies including directory diff and editing capability. In both flavours, version 3.2 and above can handle Unicode. You can download them from PrestoSoft [http://www.prestosoft.com/].
Beyond Compare
Similar to ExamDiff Pro, this is an excellent shareware diff tool which can handle directory diffs and Unicode. Download it from Scooter Software [http://www.scootersoftware.com/].
Araxis Merge
Araxis Merge is a useful commercial tool for diffing and merging both files and folders. It does three-way comparison in merges and has synchronization links to use if you've changed the order of functions. Download it from Araxis [http://www.araxis.com/merge/index.html].
SciTE
This text editor includes syntax colouring for unified diffs, making them much easier to read. Download it from Scintilla [http://www.scintilla.org/SciTEDownload.html].
Notepad2
Notepad2 is designed as a replacement for the standard Windows Notepad program, and is based on the Scintilla open-source edit control. As well as being good for viewing unified diffs, it is much better than the Windows notepad for most jobs. Download it for free here [http://www.flos-freeware.ch/notepad2.html].

Sunday, November 9, 2008

Virtualization : Tools and Terminologies

Virtual machine technology applies to both server and client hardware. Virtual machine technology enables multiple operating systems to run concurrently on a single machine. In particular, Hyper-V, a key feature of Windows Server 2008, enables one or more operating systems to run simultaneously on the same physical system. Today, many x86-based operating systems are supported by Virtual PC 2007, Virtual Server 2005, and Hyper-V.


What is virtual machine technology used for?
Virtual machine technology serves a variety of purposes. It enables hardware consolidation, because multiple operating systems can run on one computer. Key applications for virtual machine technology include cross-platform integration as well as the following:
Server consolidation. If several servers run applications that consume only a fraction of the available resources, virtual machine technology can be used to enable them to run side by side on a single server, even if they require different versions of the operating system or middleware.
Consolidation for development and testing environments. Each virtual machine acts as a separate environment, which reduces risk and enables developers to quickly recreate different operating system configurations or compare versions of applications designed for different operating systems. In addition, a developer can test early development versions of an application in a virtual machine without fear of destabilizing the system for other users.
Legacy application re-hosting. Legacy operating systems and applications can run on new hardware along with more recent operating systems and applications.
Simplify disaster recovery. Virtual machine technology can be used as part of a disaster recovery plan that requires application portability and flexibility across hardware platforms.
Moving to a dynamic datacenter. Hyper-V, along with systems management solutions, helps you create a dynamic IT environment that not only lets you react to problems more efficiently but also build a proactive, self-managing IT infrastructure.

Virtual PC lets you create separate virtual machines on your Windows desktop, each of which virtualizes the hardware of a complete physical computer. Use virtual machines to run operating systems such as MS-DOS, Windows, and OS/2. You can run multiple operating systems at once on a single physical computer and switch between them as easily as switching applications—instantly, with a mouse click. Virtual PC is perfect for any scenario in which you need to support multiple operating systems, whether you use it for tech support, legacy application support, training, or just for consolidating physical computers.

Virtual PC provides a time-saving and cost-saving solution anywhere users must run multiple operating systems. Use Virtual PC in the following scenarios:
Ease Migration: Run legacy applications in a virtual machine instead of delaying the deployment of a new operating system just because of application incompatibility. Test your migration plans using virtual machines instead of actual physical computers.
Do More in Less Time: Support staff can run multiple operating systems on a single physical computer and switch between them easily. They can also restore virtual machines to their previous state almost instantly. Train students on multiple operating systems and virtual networks instead of purchasing and supporting additional computers.
Streamline Deployment: Test software on different operating systems more easily. One crashing application or operating system doesn’t affect others.
Accelerate Development: Increase quality assurance by testing and documenting your software on multiple operating systems using virtual machines. Decrease time-to-market by reducing reconfiguration time.



Configurability
After installing Virtual PC, you can configure it to suit your requirements. Virtual PC has a number of settings that control how the product interacts with the physical computer, allocates resources, and so on.
Easy installation
Virtual PC is simple to install. Any administrator can run the Virtual PC guided setup program, and installation doesn’t require a reboot. The first time Virtual PC starts, it guides you through the process of creating the first virtual machine.
Standardization
Configure and test upgrades and installations on virtual machines, and then you can deploy throughout your company a standard configuration that avoids problems caused by minor differences between hardware platforms.
Convenience
Users switch between operating systems as easily as they switch between applications. They simply click the window containing the virtual machine. They can pause individual virtual machines so they stop using CPU cycles on the physical computer. They can also save virtual machines to disk and restore them at a later time. The restoration process normally takes a few seconds—much faster than restarting the guest operating system.
Host integration
Users can copy, paste, drag, and drop between guest and host. Virtual PC provides additions that you install in a guest operating system to enable this functionality.















In virtualization technology, a hypervisor is a software program that manages multiple operating systems (or multiple instances of the same operating system) on a single computer system. The hypervisor manages the system's processor, memory, and other resources to allocate what each operating system requires. Hypervisors are designed for a particular processor architecture and may also be called virtualization managers.

Tuesday, October 14, 2008

An introduction to Mocking

Mocking

When you want to test a function or feature that depends on something which is slow or not available at the time, you can mock that particular module or procedure and still get the result you need to continue writing your unit test. For example, suppose some features are only available when an internet connection is established, and you are in a situation where you can't continue writing your unit test because no connection is available. You can mock the internet connection and write your unit test against the mock object. The mock object behaves as if there is an internet connection even though, in reality, there is none. You can control its return values and its outwardly visible behaviour, but it is not the real object.

Mocking is an integral part of unit testing. You can run your unit tests without mocking, but doing so will drastically slow down their execution time and make them dependent on external resources.
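
To make this concrete, here is a minimal sketch using Moq (one of the frameworks listed below) together with NUnit. The IInternetConnection interface and SyncService class are hypothetical names invented for illustration; the point is only that the test runs without any real network.

using Moq;
using NUnit.Framework;

// Hypothetical dependency: the real implementation would actually probe the network.
public interface IInternetConnection
{
    bool IsAvailable { get; }
}

public class SyncService
{
    private readonly IInternetConnection _connection;
    public SyncService(IInternetConnection connection) { _connection = connection; }

    public string Upload(string data)
    {
        return _connection.IsAvailable ? "uploaded: " + data : "queued: " + data;
    }
}

[TestFixture]
public class SyncServiceTests
{
    [Test]
    public void Upload_WhenConnectionIsAvailable_SendsData()
    {
        // The mock pretends a connection exists, so the test needs no real network.
        var connection = new Mock<IInternetConnection>();
        connection.Setup(c => c.IsAvailable).Returns(true);

        var service = new SyncService(connection.Object);

        Assert.AreEqual("uploaded: report", service.Upload("report"));
    }
}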

There are different types of Mocking frameworks available which allow us to create Mock Objects.
1. Rhino Mock
2. NMock2
3. TypeMock.NET
4. EasyMock.NET
5. Moq

TypeMock is not free.

Links
1. http://www.gridviewguy.com/Articles/447_Introduction_to_Mocking.aspx

Wednesday, September 24, 2008

MVC architecture

Model-view-controller (MVC) is an architectural pattern as well as a design pattern used in software engineering.
The main aim of the MVC architecture is to separate the business logic and application data from the presentation shown to the user. In MVC the presentation layer is further separated into the view and the controller.
A number of frameworks in use today are based on this pattern, including: Java Struts, Maverick (mav.sourceforge.net), Ruby on Rails, Microsoft Smart Client Software Factory (CAB), Microsoft Web Client Software Factory, and the recently announced ASP.NET MVC framework.

Smalltalk, one of the earliest object-oriented languages, gave developers a platform to develop best practices for object-oriented systems. The classic Model-View-Controller (MVC) design pattern grew out of this research.
Now the question is what is the benefit of separating the Business Logic and Application data from Presentation Layer? The answer is as follows:
User interfaces change often, especially on the web where look and feel is a competitive issue. Also, the same information is presented in different ways. The core business logic and data is stable.
So to overcome this problem, we use the software engineering principle of "separation of concerns" to divide the application into 3 different areas :
1. Model (the Data Layer): represents the core structure of the data and functionality in an application. The model informs the view that it should update its representation of the data when that data changes.
Models are responsible for maintaining the state of the application, often by using a database. A DataSet or typed DataSet (sometimes a business object, object collection, XML, etc.) is the most common form of the model in a .NET application.
2. View (the Presentation Layer): represents the presentation of the data. There can be many views of the same data. These components are strictly for displaying data; they provide no functionality beyond formatting it for display. The ASPX and ASCX files generally handle the responsibilities of the view in a .NET application.
3. Controller (the Business Logic Layer): accepts input from the user, makes requests of the model, and selects the appropriate view based on user preferences and model state. Controllers are the central communication mechanism in an MVC application.
The handling of events, or the controlling, is usually done in the code-behind class in a .NET application.

As the "Elements of Reusable Object-Oriented Software" book has defined MVC, the reason why MVC has gained a great deal of popularity is because it offers decoupling of presentation (View) and data (Model). That is exactly what developers dreamed about for their web applications: modifying the data or the presentation of the web application without impacting the other layer. Additionally, with the MVC design pattern, you could have several Views for a Model or different Models for a single View.

Example of MVC :
The classic example of this strategy is a spreadsheet program that runs on a personal computer. Using MVC architecture, the Model stores the formulae and other data. When you issue a command to save to a file (or load from a file), the Model handles this action. It also handles the specific logic, like recalculating the entire sheet. The View draws the familiar grid that shows a part of the data (depending on the scroll bar's position). The Controller deals with any process in which the user changes something.

One of the main points with the MVC pattern is that there is no direct communication between the model and view. This allows for reuse of model and controller code in different types of applications. The logic for a WebApplication could easily be applied to a Windows application by changing the view components. The controller components could also be exposed as WebServices for SOA without affecting the view. Of course the model could also be changed without affecting the controller; for instance, if the database itself or the scheme were changed. Another benefit to separating the functions is to allow for better testing. Since the view is only concerned with display, all of the logic that needs tested is in the controller. Unit tests can easily be incorporated to test these functions.
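
To make the three roles concrete, here is a minimal, framework-agnostic C# sketch. The class names (ProductModel, ProductView, ProductController) are hypothetical; the point is that the controller mediates, so the model and view never talk to each other directly, and the model and controller can be reused with a different view.

using System;
using System.Collections.Generic;

// Model: holds application data and state; knows nothing about how it is displayed.
public class ProductModel
{
    private readonly List<string> _products = new List<string>();
    public IEnumerable<string> Products { get { return _products; } }
    public void Add(string name) { _products.Add(name); }
}

// View: only formats data for display; no business logic.
public class ProductView
{
    public void Render(IEnumerable<string> products)
    {
        foreach (var p in products)
            Console.WriteLine("* " + p);
    }
}

// Controller: accepts user input, updates the model, and refreshes the view.
public class ProductController
{
    private readonly ProductModel _model;
    private readonly ProductView _view;

    public ProductController(ProductModel model, ProductView view)
    {
        _model = model;
        _view = view;
    }

    public void AddProduct(string name)   // would be triggered by a button click, a URL, etc.
    {
        _model.Add(name);
        _view.Render(_model.Products);
    }
}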

Advantages :

1. It provides a clean separation of concerns.
2. It decouples code, which allows for easier unit testing and easier modifications.
3. It also isolates changes to a large degree.
4. It lets developers write and debug modules independently.
5. MVC simplifies the creation of multiple user interfaces with the same data.
6. It increases the opportunity for code reuse.

Separating the Model from the View gives the following advantages:
1. Easy to add multiple presentations of the same data.
2. Facilitates adding new types of data presentation as technology develops.
3. Model and View components can vary independently, enhancing maintainability, extensibility and testability.

Separating the Controller from the View gives the following advantages:
- Permits run-time selection of appropriate views based on workflow, user preferences or model state.

Separating the Controller from the Model gives the following advantages:
- Allows configurable mapping of user actions on the controller to application functions on the Model.
Drawbacks of MVC:
1. It requires highly skilled, experienced professionals who can identify the requirements in depth up front, before the actual design.
2. It requires a significant amount of time to analyze and design.
3. This design approach is not suitable for smaller applications; it is overkill for them.


MVP (Model View Presenter)
The original MVP was published in 1996 by Mike Potel while working for Taligent, Inc. as a next-generation programming model for C++ and Java applications. It was based on the Smalltalk 'Classic MVC' programming model and was intended to better fit the richer IDEs introduced at the time and to support client-server applications.
Taligent was started by Apple Computer, Inc. as a joint venture with IBM (and later joined by Hewlett Packard) before becoming the wholly owned subsidiary of IBM in late 1995.
Components :
The Model refers to the data and business functionality of the application.
The View is the visual representation of the Model and is comprised of the screens and widgets used within an application.

The Presenter is a component which contains the presentation logic which interacts with the Model.

How MVC is different from MVP (Model View Presenter)
With MVC, it’s always the controller’s responsibility to handle mouse and keyboard events. With MVP, GUI components themselves initially handle the user’s input, but delegate the interpretation of that input to the presenter. This has often been called “Twisting the Triad”, which refers to rotating the three elements of the MVC triangle and replacing the “C” with “P” in order to get MVP.

With the MVC pattern it's possible to separate your presentation information from your behind-the-scenes business logic. Think along the lines of XHTML/CSS and separating your content from your presentation. A brilliant concept that works quite well, but is not without its faults.
In MVC, the model stores the data, the view is a representation of that data, and the controller allows the user to change the data. When the data is changed, all views are notified of the change and they can update themselves as necessary (think EventDispatcher).
MVP is a derivative of MVC, mostly aimed at addressing the "Application Model" portion of MVC and focusing around the observer implementation in the MVC triad. Instead of a Controller, we now have a Presenter, but the basic idea remains the same - the model stores the data, the view is a representation of that data (not necessarily graphical), and the presenter coordinates the application.
In MVP the Presenter gets some extra power. Its purpose is to interpret events and perform whatever logic is necessary to map them to the proper commands to manipulate the model in the intended fashion. Most of the code dealing with how the user interface works is coded into the Presenter, making it much like the "Application Model" in the MVC approach. The Presenter is then directly linked to the View so the two can function together "mo' betta".
Basically, in MVP there is no Application Model middle-man since the Presenter assumes this functionality. Additionally, the View in MVP is responsible for handling the UI events (like mouseDown, keyDown, etc), which used to be the Controllers job, and the Model becomes strictly a Domain Model.
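
Here is a small C# sketch of that shape; the names (ILoginView, AccountModel, LoginPresenter) are hypothetical. The view handles the raw UI event (the button click) but delegates its interpretation to the presenter, which talks to the model and pushes the result back through a view interface. That interface is also what makes the presenter unit-testable with a fake view.

// View contract: the presenter only knows this interface, never a concrete form or page.
public interface ILoginView
{
    string UserName { get; }
    string Password { get; }
    void ShowMessage(string text);
}

// Model: plain domain logic.
public class AccountModel
{
    public bool Validate(string user, string password)
    {
        return user == "admin" && password == "secret"; // placeholder rule
    }
}

// Presenter: interprets the view's events and coordinates the model.
public class LoginPresenter
{
    private readonly ILoginView _view;
    private readonly AccountModel _model;

    public LoginPresenter(ILoginView view, AccountModel model)
    {
        _view = view;
        _model = model;
    }

    // The view calls this from its button-click handler: the view catches the UI event,
    // but the presenter decides what it means.
    public void LoginClicked()
    {
        var ok = _model.Validate(_view.UserName, _view.Password);
        _view.ShowMessage(ok ? "Welcome!" : "Invalid credentials.");
    }
}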

Variations of MVC:

1. Active MVC (the original MVC)
2. Passive MVC
3. MVP

MVC is built up of Observer + Command + Adapter + Strategy + Composite + Decorator + Factory patterns.

Proxy pattern

Strategy lets the algorithm vary independently from clients that use it.
The Proxy design pattern is used when we have a very heavy object and don’t want to load the whole object in one go, in order to avoid slow performance and excessive memory consumption.
For example, when you view Google Earth it doesn’t show the complete image of the globe at once; it loads only the part of the image you are looking at.
Another example: when you load a very heavy document (such as a PDF), it loads the first few pages, and when you navigate to other pages it loads those pages into memory.
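
A minimal virtual-proxy sketch in C#, using a hypothetical document viewer as the heavy object: the proxy implements the same interface but defers creating the real object until a page is actually requested.

using System;

public interface IDocument
{
    string GetPage(int number);
}

// The "heavy" object: expensive to construct (imagine parsing a large PDF).
public class PdfDocument : IDocument
{
    public PdfDocument(string path)
    {
        Console.WriteLine("Loading entire document from " + path + " ...");
    }

    public string GetPage(int number) { return "contents of page " + number; }
}

// Virtual proxy: same interface, but creates the real document only on first use.
public class PdfDocumentProxy : IDocument
{
    private readonly string _path;
    private PdfDocument _real;

    public PdfDocumentProxy(string path) { _path = path; }

    public string GetPage(int number)
    {
        if (_real == null)
            _real = new PdfDocument(_path);   // lazy load on demand
        return _real.GetPage(number);
    }
}

class ProxyDemo
{
    static void Main()
    {
        IDocument doc = new PdfDocumentProxy("report.pdf"); // nothing loaded yet
        Console.WriteLine(doc.GetPage(1));                  // real document loaded here
    }
}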

Thursday, September 18, 2008

Observer Pattern

The observer design pattern allows us to observe the state of an object in an application. “Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.”
The best way to think about it is the publish-subscriber pattern. In this pattern you have one object that is being observed called the subject, and a group of objects watching the subject called observers. This pattern is an excellent example of loose coupling, because our classes can interact with very little knowledge of each other. There are 3 things that a subject needs to be concerned with: 1. Registering an Observer 2. Removing an Observer 3. Notifying Observers of Event
The pattern is used when it is necessary to ensure that multiple components (observers or subscribers) are kept in sync with a master set of data (the subject or publisher). Where is it used?
To maintain state and notify other objects of changes. Some examples include a stock-ticker application, weather data service, and machine health and status (e.g., CPU temp, fan speed, etc.).
Participants:
Subject - keeps track of its observers; provides an interface for attaching and detaching Observer objects.
Observer - defines an interface for update notification.
ConcreteSubject - the object being observed; stores state of interest to ConcreteObserver objects; sends a notification to its observers when its state changes.
ConcreteObserver - the observing object; stores state that should stay consistent with the subject's; implements the Observer update interface to keep its state consistent with the subject's.


However, in C# you can implement the same idea using delegates and events, which is a more concise and elegant way of writing this pattern.
using System;

namespace Patterns
{
    delegate void StateChangeHandler(State newState);

    enum State { State1, State2, State3 }

    class Product
    {
        private State _state;

        public State MyState
        {
            get { return _state; }
            set
            {
                if (_state != value)
                {
                    _state = value;
                    Notify();
                }
            }
        }

        private void Notify()
        {
            if (_onChange != null)
                _onChange(_state);
        }

        private event StateChangeHandler _onChange;

        public event StateChangeHandler OnStateChange
        {
            add { _onChange += value; }
            remove { _onChange -= value; }
        }
    }
}

Take a look at the previous code. The Product class has an important piece of information called _state, encapsulated in the property MyState. The class expects that other classes may be interested in observing changes to MyState, so it adds another member, an event (_onChange) of type StateChangeHandler, encapsulated in the event property OnStateChange. In the setter of MyState a small check is made to see whether the new value is different from the old value; if it is, the event gets fired. A typical class which makes use of the Product class will look similar to this:
using System;

namespace Patterns
{
    class Program
    {
        static void Main(string[] args)
        {
            Product myProduct = new Product();
            myProduct.OnStateChange += new StateChangeHandler(myProduct_OnStateChange);
            myProduct.MyState = State.State3;
        }

        static void myProduct_OnStateChange(State newState)
        {
            Console.WriteLine("State changed to {0}", newState);
        }
    }
}

Wednesday, September 17, 2008

What do you mean by Extreme Programming (XP)?

Extreme Programming (XP) is a well-known agile method; it emphasizes collaboration, quick and early software creation, and skillful development practices. It is founded on four values:
1. Communication

One of the key factors of software development teams that are highly successful is their ability to communicate effectively. Teams that communicate often in an open and honest environment are able to make effective decisions and mitigate problems more quickly than teams that don’t have that type of communication.

Communication comes in several forms: written, spoken, gestures, and body posture. Traditionally, these types of communications can be executed in several fashions: formal documentation, e-mail, telephone, video conferencing, and face-to-face conversation. While all of these forms of communication are useful, XP favors face-to-face communication.

2. Simplicity

Another key factor of software development teams that are highly successful is their ability to make what they do as simple as possible. This includes the code they develop, the documentation they produce, the processes they use, and the form of communication they choose. Simplicity forces the team to build what is needed to satisfy requirements as they are defined today, as opposed to building unnecessary features that may never be needed.

The result of keeping things simple is a reduction in code, processes, and documentation, which, in turn, leaves additional time to incorporate more features into the system. After all, the project stakeholders are paying for system features, not code with functionality they did not request.

3. Feedback

Feedback is the XP value that helps the team members know if they are on the right track. This is why feedback needs to be highly repetitive and frequent. In XP, feedback comes not only from individuals and their interactions, but also from the automation of tests, the progress charts that the tracker generates, and the successful acceptance of user stories.

Constant feedback keeps the development team from missing the project target. It ensures that the software the team is developing is high quality and stable. Feedback also gives the project stakeholders the confidence that what they will receive is what they need and expect.

4. Courage

It takes an enormous amount of courage to try something new. That is because individuals seem to naturally resist change and fear the unknown. XP teams need courage when they encounter resistance to what they are trying to do.

It also takes courage to expose your weaknesses. That is what developers are doing when they pair-program. It takes courage for the development team members to tell the project stakeholders that they are not going to complete all of the user stories in a given iteration.

Extreme Programming, XP for short, is an Agile software development methodology that is made up of a collection of core values, principles, and practices that provide a highly efficient and effective means of developing software. At the core of XP is the desire to embrace the change that naturally occurs when developing software. XP differs from other Agile methodologies because it defines an implementation strategy for practicing the above four core Agile values on a daily basis.

What is SCRUM methodology in development

Definitions
1. User Stories: User stories are a set of sentences that describe what the user experiences while using the software. These can be derived from the wireframes; design comps aren’t a requirement.
2. Backlog: These user stories go into a “Backlog”. This backlog is a list of all the user stories, say 50… or a hundred. How many user stories you have is irrelevant. What’s important is that they are all documented in one place called the Backlog. You need to ensure all of the user stories are validated with the stakeholders. It also helps to go over them with your entire team to ensure they make sense to everyone involved.
3. Point System: Once you are done with your backlog, you then have your team go through and assign points to each user story. To do that, you need a Point System. A point system is defining a metric for “how challenging something is”. In simple terms, a 1 is easy and a 5 is hard. It doesn’t have to be 1 through 5; it could be 1 through 10 or whatever. It’s best if they are numeric values because you use the points to measure a variety of statistics about your project. Additionally, each team may have their own definitions of what is challenging. While it’s best to use the same 1-whatever metric for the client and server team, the server team will most likely think creating a new server-side method a “1-easy” whereas a client developer a “5-hard”.
4. Sprint: Sprints are a period of time that your team works on their assigned user stories. A sprint is however long your team decides it should be to get a working build with the user stories assigned. This could be one week, two, or a month. It’s important to note the “uninterrupted” part. The client and project manager won’t interject user stories mid-sprint, nor modify your work load. Basically, they don’t bother you. This allows your team to focus on the task at hand, prevent fire drills, and stay productive.
5. Daily Stand-ups: Each team member goes around talking about what they completed yesterday, what they did this morning, and what they plan for the rest of the day. The team also cites any roadblocks or issues, and if one involves another team member, they just schedule a separate call, or time to talk over IM, with that specific person.
Roles
Several roles are defined in Scrum; these are divided into two groups; pigs and chickens, based on a joke about a pig and a chicken.
· “Pig” roles
The Pigs are the ones committed to the project in the Scrum process – they are the ones with “their bacon on the line.”
1. Product Owner
The Product Owner represents the voice of the customer. He/she ensures that the Scrum Team works with the “right things” from a business perspective. The Product Owner writes user stories, prioritizes them and then places them in the product backlog.
2. ScrumMaster (or Facilitator)
Scrum is facilitated by a ScrumMaster, whose primary job is to remove impediments to the ability of the team to deliver the sprint goal. The ScrumMaster is not the leader of the team (as the team is self-organizing) but acts as a buffer between the team and any distracting influences. The ScrumMaster ensures that the Scrum process is used as intended. The ScrumMaster is the enforcer of rules. A key part of the ScrumMaster’s role is to protect the team and keep them focused on the tasks in hand.
3. Team
The team has the responsibility to deliver the product. A team is typically made up of 5–9 people with cross-functional skills who do the actual work (designer, developer, tester, technical communicator, etc.).
· “Chicken” roles
Chicken roles are not part of the actual Scrum process, but must be taken into account. An important aspect of an Agile approach is the practice of involving users, internal business groups and stakeholders into portions of the process. It is important for these people to be engaged in the outcome of the project by providing feedback into the development, its review and planning for each sprint.
1. Users
The software is being built for someone. “If software is not used”—much like “the tree falling in a forest” riddle—”was it ever written?”
2. Stakeholders (customers, vendors)
These are the people who enable the project and for whom the project will produce the agreed-upon benefit[s], which justify its production. They are only directly involved in the process during the sprint reviews.
3. Managers
People who will set up the environment for the product development organizations.
Meetings
1. Daily Scrum
Each day during the sprint, a project status meeting occurs. This is called a “scrum”, or “the daily standup”. The scrum has specific guidelines:
§ The meeting starts precisely on time. Often there are team-decided punishments for tardiness (e.g. money, push-ups, hanging a rubber chicken around your neck)
§ All are welcome, but only “pigs” may speak
§ The meeting is timeboxed at 15-20 minutes depending on the team’s size
§ All attendees should stand (it helps to keep meeting short)
§ The meeting should happen at the same location and same time every day
During the meeting, each team member answers three questions:
§ What have you done since yesterday?
§ What are you planning to do by today?
§ Do you have any problems preventing you from accomplishing your goal? (It is the role of the ScrumMaster to remember these impediments.)
2. Sprint Planning Meeting
At the beginning of the sprint cycle (every 15–30 days), a “Sprint Planning Meeting” is held.
§ Select what work is to be done
§ Prepare the Sprint Backlog that details the time it will take to do that work, with the entire team
§ Identify and communicate how much of the work is likely to be done during the current sprint
§ Eight hour limit
3. Sprint Review Meeting
At the end of a sprint cycle, two meetings are held: the “Sprint Review Meeting” and the “Sprint Retrospective“
§ Review the work that was completed and not completed
§ Present the completed work to the stakeholders (a.k.a. “the demo”)
§ Incomplete work cannot be demonstrated
§ Four hour time limit
4. Sprint Retrospective
§ All team members reflect on the past sprint.
§ Make continuous process improvement.
§ Two main questions are asked in the sprint retrospective: What went well during the sprint? What could be improved in the next sprint?
§ Three hour time limit
Artifacts
1. Product backlog
The product backlog is a high-level document for the entire project. It contains backlog items: broad descriptions of all required features, wish-list items, etc. prioritised by business value. It is the “What” that will be built. It is open and editable by anyone and contains rough estimates of both business value and development effort. Those estimates help the Product Owner to gauge the timeline and, to a limited extent, priority. For example, if the “add spellcheck” and “add table support” features have the same business value, the one with the smallest development effort will probably have higher priority, because the ROI is higher.
The product backlog is property of the Product Owner. Business value is set by the Product Owner. Development effort is set by the Team.
2. Sprint backlog
The sprint backlog is a greatly detailed document containing information about how the team is going to implement the features for the upcoming sprint. Features are broken down into tasks; as a best practice tasks are normally estimated between four and 16 hours of work. With this level of detail the whole team understands exactly what to do, and anyone can potentially pick a task from the list. Tasks on the sprint backlog are never assigned; rather, tasks are signed up for by the team members as needed, according to the set priority and the team member skills.
The sprint backlog is property of the Team. Estimations are set by the Team. Often an according Task Board is used to see and change the state of the tasks of the current sprint, like “to do”, “in progress” and “done”.
3. Burn down
The burn down chart is a publicly displayed chart showing remaining work in the sprint backlog. Updated every day, it gives a simple view of the sprint progress. It also provides quick visualizations for reference.
It should not be confused with an earned value chart.
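
The arithmetic behind a burn down chart is simple enough to sketch in a few lines of C#. The numbers below are hypothetical: for each day of the sprint we print the remaining hours alongside the ideal straight-line trend from the starting total down to zero.

using System;

class BurndownDemo
{
    static void Main()
    {
        // Remaining hours in the sprint backlog, re-estimated at the end of each day (hypothetical data).
        int[] remainingPerDay = { 80, 74, 70, 61, 55, 48, 40, 30, 18, 6 };
        int sprintDays = remainingPerDay.Length;
        double idealPerDay = (double)remainingPerDay[0] / (sprintDays - 1);

        for (int day = 0; day < sprintDays; day++)
        {
            double ideal = remainingPerDay[0] - idealPerDay * day;
            Console.WriteLine("Day {0,2}: remaining {1,3} h, ideal {2,5:F1} h",
                              day + 1, remainingPerDay[day], ideal);
        }
    }
}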

Monday, September 15, 2008

Application Domain in .NET

When we launch the Notepad program in Windows, the program executes inside of a container known as a process. We can launch multiple instances of Notepad, and each instance will run in a dedicated process. Using the Task Manager application, we can see a list of all processes currently executing in the system.
An Operating System Process: Each process has its own virtual address space (in virtual memory), executable code, and data. A Windows process cannot directly access the code or data of another Windows process. A Windows process runs only one application, so if an application crashes, it does not affect other applications. Processes are efficient at isolating applications, but they require expensive IPC (Inter-Process Communication) mechanisms to communicate. A process is a very low-level operating system construct; the exact behavior of a process is determined by the operating system. Thus a Windows 2000 process is very different from a Unix process.

But .NET goes one step further: multiple applications can run in the same process, with the process divided into application domains. Objects in different application domains can invoke one another through .NET Remoting.
Like a process, the AppDomain is both a container and a boundary. The .NET runtime uses an AppDomain as a container for code and data, just like the operating system uses a process as a container for code and data. As the operating system uses a process to isolate misbehaving code, the .NET runtime uses an AppDomain to isolate code inside of a secure boundary.
Note, however, that the application domain is not a secure boundary when the application runs with full trust. Applications running with full trust can execute native code and circumvent all security checks by the .NET runtime. ASP.NET applications run with full trust by default.

An AppDomain belongs to only a single process, but a single process can hold multiple AppDomains. An AppDomain is relatively cheap to create (compared to a process) and has relatively less overhead to maintain than a process. For these reasons, an AppDomain is a great solution for an ISP who is hosting hundreds of applications. Each application can exist inside an isolated AppDomain, and many of these AppDomains can exist inside a single process, which is a cost savings.
AppDomains are usually created by hosts. Examples of hosts are the Windows shell, ASP.NET and Internet Explorer. When you run a .NET application from the command line, the host is the shell, and the shell creates a new AppDomain for every application. AppDomains can also be explicitly created by .NET applications.
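
A short sketch of explicit AppDomain creation using the .NET Framework AppDomain API (the assembly path in the comment is hypothetical):

using System;

class AppDomainDemo
{
    static void Main()
    {
        // Create a second AppDomain inside the current process.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");

        Console.WriteLine("Default domain: " + AppDomain.CurrentDomain.FriendlyName);
        Console.WriteLine("New domain:     " + sandbox.FriendlyName);

        // Run an executable assembly inside the new domain (path is hypothetical).
        // sandbox.ExecuteAssembly(@"C:\Tools\SomeApp.exe");

        // Unloading the domain releases its code and data without killing the process.
        AppDomain.Unload(sandbox);
    }
}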

A single ASP.NET worker process will host both of the ASP.NET applications. On Windows XP and Windows 2000 this process is named aspnet_wp.exe, and the process runs under the security context of the local ASPNET account. On Windows 2003 the worker process has the name w3wp.exe and runs under the NETWORK SERVICE account by default.
Each ASP.NET application will have its own set of global variables: Cache, Application, and Session objects are not shared. Even though the code for both of the applications resides inside the same process, the unit of isolation is the .NET AppDomain. If there are classes with shared or static members, and those classes exist in both applications, each AppDomain will have its own copy of the static fields; the data is not shared. The code and data for each application is safely isolated inside a boundary provided by the AppDomain. In order to communicate or pass objects between AppDomains, you’ll need to look at techniques in .NET for communication across boundaries, such as .NET Remoting or web services.
The relationship between a process, the CLR, application domains, and assemblies is illustrated in Figure 3-1. Three assemblies have been loaded into the first application domain and one assembly into the second. As yet, no assemblies have been loaded into the third application domain.


Service Oriented Architecture



A service-oriented architecture is essentially a collection of services. These services communicate with each other. The communication can involve either simple data passing or two or more services coordinating some activity. Some means of connecting services to each other is needed. The .NET platform introduces SOA by means of web services.

“Service-Oriented Architecture (SOA) is a software architecture where functionality is grouped around business processes and packaged as interoperable services.”

"A service-oriented architecture can be defined as a group of services, which communicate with each other. The process of communication involves either simple data passing or it could involve two or more services coordinating some activity”

SOA can be used as a concept to connect multiple systems to provide services. It has a great share in the future of the IT world.
According to the imaginary diagram above, we can see how a service-oriented architecture is being used to provide a set of centralized services to the citizens of a country. The citizens are given a unique identifying card, which carries all the personal information of each citizen. Each service center, such as a shopping complex, hospital, station, or factory, is equipped with a computer system that is connected to a central server responsible for providing services to a city. As an example, when a customer enters the shopping complex, the regional computer system reports it to the central server and obtains information about the customer before granting access to the premises. The system welcomes the customer. When the customer finishes shopping and leaves the complex, he is asked to go through a billing process managed by the regional computer system. The payment is handled automatically using the details obtained from the customer's identifying card.
The regional system will report to the city (computer system of the city) while the city will report to the country (computer system of the country).

Service-oriented architectures are not a new thing. The first service-oriented architecture for many people in the past was the use of DCOM or Object Request Brokers (ORBs) based on the CORBA specification.
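
As a concrete .NET flavor of "functionality packaged as interoperable services", here is a minimal sketch of a service contract as it might be written with WCF. The service name and operation (ICitizenService, GetCitizenName) are hypothetical, chosen to echo the citizen-card example above.

using System.ServiceModel;

// Service contract: what the service promises to callers, independent of transport.
[ServiceContract]
public interface ICitizenService
{
    [OperationContract]
    string GetCitizenName(string cardId);
}

// Service implementation; a host (IIS, a Windows service, etc.) exposes it over an endpoint.
public class CitizenService : ICitizenService
{
    public string GetCitizenName(string cardId)
    {
        // In a real system this would look the citizen up in a central registry.
        return "Citizen #" + cardId;
    }
}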


Waterfall model

The waterfall methodology is a software development process that is broken up into a series of distinct phases, with each phase existing as an autonomous phase with respect to all subsequent phases. In a waterfall project, all phases of the process have a distinct beginning and end. When a phase is over, the subsequent phase begins. This stepped approach continues throughout the remaining phases of a project until it reaches completion. Several characteristics of the waterfall methodology often create some undesirable results:

• Each phase of a waterfall project must be complete prior to moving to the next phase. This approach makes it difficult for you to learn and adjust to changes in the project requirements.

• The waterfall methodology is very heavily focused on process. This approach often causes the team to concentrate more on managing the waterfall processes, as opposed to fulfilling the goals of the project.

• The waterfall methodology is focused on documentation as one of its primary forms of communication. Unfortunately, software development is often a complicated matter and is difficult to capture entirely on paper. Additionally, what is not said, or written in this case, can be as powerful as what is said, but that type of information is not captured in a document.

• Extensive documentation is also used as a means of trying to control scope. In the analysis phase, requirements documents are used as contracts between software developers and the project stakeholders. This requires the project stakeholders to know exactly what they want and to have those needs and wants remain constant throughout the development life cycle. This is rarely the case.

• The waterfall methodology assumes that a project can be managed by a predefined project plan. Months are spent perfecting the plan before any work on the project really begins. A lot of work is put into maintaining the plan throughout the project, and often the plan is out of date with the current status of the project. The end result is that the project plan tends to be more of a historical document than a working guide to the development team. Planning is not the problem; the problem is trying to predict and plan for the future.

While the waterfall approach does have problems, it did start with the best intentions: bringing order out of chaos. When waterfall methods were first employed, there was no software process at all in place. Having some processes, documentation, and a plan is not a bad thing. Unfortunately, the waterfall methodology swung the pendulum too far to the right. Software projects need to be manageable, but without becoming too brittle or complicated to implement, which is exactly what the first waterfall methods created. This swing resulted in the development of another group of methodologies, known as Agile methods.

Agile Development

Traditionally, software was developed like this: the customer approaches the development team, says "this is what I want", and hands over a thick requirements document, and the customer and developer sign it off. By signing, the customer agrees that this is what will be delivered within the next six months. Unfortunately, it is impossible for a customer to envision everything up front. So after a little bit of development he might say, "Actually, I also need this, but I didn't think about it, so it hasn't been signed off. We'll have to add it in phase 2." Because of that, the customer can end up with a product that is not exactly what he wanted. Agile development enables both the developer and the customer to be more agile: there is no huge up-front document, and the customer can refine the requirements and change them at any time. Agile is all about embracing change. As we know, software changes over time; needs change all the time, and Agile addresses that big problem and makes it possible to give the customer exactly what he wants.

Needs for Agile:
1. Rapidly changing requirements.
2. Changing environments.
3. Changing laws.
4. Changing people.
5. Mistakes in the initial planning stage.



How to start the Sprint 

Imagine it's the first day of your 2-week sprint (iteration, etc.), and you're eager to get started. Your burndown (or task list, etc.) has just 4 tasks:
Task - Estimate
1. Implement "Order Status" screen - 35 hours
2. Print username in system logs - 2 hours
3. Investigate clustering in Tomcat - 15 hours
4. Upgrade to newest version of GWT - 20 hours


The question is, then, how do you get started? What do you do on that first day? You've committed to completing all the tasks, so does it matter? Of course it does.
Two simple approaches are...
1. Start with the tasks that are most fun. "Investigating clustering" sounds interesting, so start with that! This makes for a more enjoyable sprint...at least for the first few days.
2. Knock off the low hanging fruit first, and get some quick gratification at the beginning of the Sprint. Just like some financial gurus say to pay off your lowest debts first, it's nice to build some confidence before you get to the tough stuff.
Honestly, I find it really tough not to use these strategies - I just naturally want to do the fun tasks or quick-wins first...but having lived through more than a few failed sprints, I've quickly learned better.
First, there are some tasks which other developers are depending on me for (e.g. "Upgrading GWT"), and so if I wait till the last few days to do this, I effectively hose my colleagues. Second, there are times when I don't finish all of my tasks (gasp!) - maybe I had to call in sick one day, or requirements shifted, etc. If I do the most fun tasks first, I may have left some high value tasks hanging.
So here are some more disciplined approaches:
3. Start with the task that has the most dependencies on it (either within my own task list or for other developers). This approach is the best for keeping you in the good graces of your team.
4. Find out from the business owners which tasks are the highest value and work from high value to low - that way if something doesn't get done, it will be less of a big deal. For example, the "Order Status" screen might be crucial to the business, but "Upgrading to GWT" is not as big a priority.
5. Dig into the task that is the most complex first, so you can identify and mitigate the biggest risks right away, and you'll have more time to address them during the sprint than if you waited to the end.
Now each of the last three approaches (which can be blended, of course) is significantly more disciplined than the first two, but there's still a problem...
Most developers (me included) typically like to work sequentially - complete a task, move on to the next one. I find this more gratifying (and less stressful), but not very effective. It's very possible that lingering within each of my tasks is some big gotcha, that needs time (i.e. calendar time, not hours of work) to be dealt with. For example, maybe there's some question about the "Order Status" screen that needs to be posed to a business owner, and that business owner is booked solid till Thursday. Or maybe investigating clustering requires the assistance of a sys admin, and he needs a week lead time. If I don't identify these dependencies early, I could easily put myself in a position where I can't complete my tasks.
Given this, I've found the most effective approach to starting the sprint is this...
6. On the first 2 days, take a spike through each of my tasks (similar to the XP concept), understand better the requirements, and pick out the tricky pieces that might require input from others. This may require writing some code, but not much. Once I have a good handle, at a conceptual level, of what each task entails, then I can use some blend of strategies 3, 4, and 5.
The biggest drawback to this approach is that on those first two days, I find myself barely burning any work down - because most of what I'm doing is asking questions and planning. After everything is in order though, I typically can roll smoothly through my work.



Additional material:

Websites:

1. https://www.scrumalliance.org/
2. https://www.scrumstudy.com/

Books

1. Fun retrospectives
https://www.dropbox.com/s/gym6tst8rc7lsxb/funretrospectives.pdf?dl=0

2. A bunch of pdf files:
https://www.dropbox.com/s/s6jpdgl001ge7gb/Agile%20Documents.rar?dl=0

Video tutorials List
https://www.dropbox.com/s/7terxxxfseqc0w1/Scrum%20Video%20List.xlsx?dl=0

 


Tuesday, September 9, 2008

Microsoft codenames

Product - Codename
1. SQL Server 2008 - Katmai/Akadia
2. SQL Server 2005 - Yukon
3. SQL Server 2000 - Shiloh (32-bit), Liberty (64-bit)
4. SQL Server 7.0 - Sphinx
5. SQL Server Reporting Services - Rosetta
6. Windows Presentation Foundation - Avalon
7. Windows Communication Foundation - Indigo
8. Windows CardSpace - InfoCard
9. Microsoft Surface - Milan
10. Visual Studio 2008 - Orcas
11. Visual Studio 2005 - Whidbey
12. Visual Studio 2003 - Everett
13. Visual Studio 2002 - Rainier
14. Windows XP - Whistler
15. Microsoft Forefront - Stirling
16. Visual Studio Team System 2008 - Rosario

Monday, September 8, 2008

What is sandbox

A sandbox is an environment in which code or content changes can be tested without affecting the original system.
In SQL Server, a sandbox is the place where we run programs or scripts that come from a third party, in a controlled way. There are three types of sandbox in which user code can run.
Safe Access Sandbox: here we can only create stored procedures, triggers, functions, data types, etc., but we do not have access to resources such as memory, disk, and so on.
External Access Sandbox: we can access file systems outside SQL Server, but we cannot play with threading, memory allocation, etc.
Unsafe Access Sandbox: here we can write unreliable and unsafe code.
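These three sandboxes correspond to the SQLCLR permission sets SAFE, EXTERNAL_ACCESS, and UNSAFE. As a minimal T-SQL sketch (the assembly name and file path are placeholders), registering a CLR assembly in a particular sandbox looks like this:

-- CLR integration must be enabled once per server
EXEC sp_configure 'clr enabled', 1
RECONFIGURE
GO

-- Register an assembly in the Safe Access Sandbox
CREATE ASSEMBLY MyClrLib
FROM 'C:\Assemblies\MyClrLib.dll'
WITH PERMISSION_SET = SAFE
GO

-- The other two sandboxes are chosen the same way:
--   WITH PERMISSION_SET = EXTERNAL_ACCESS   (External Access Sandbox)
--   WITH PERMISSION_SET = UNSAFE            (Unsafe Access Sandbox)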

useful open sources

Tortoise SVN: a source control tool.
OnTime 2008 Professional: a task management and bug tracking tool.
IBM ClearCase
Visual Paradigm
.NET Reflector
GoToMeeting

Notepad++
SlickRun
ISO burster
SharpDevelop(#Develop)
SQL Prompt (by red gate)

Version control with Tortoise SVN (part - 1)

Software configuration management is the process of identifying and defining the configuration items in a software system, controlling the release, versioning, and change of these items throughout the software system life cycle, recording and reporting the status of configuration items and change requests, and verifying the completeness and correctness of configuration items.

Version Control (or Revision Control, or Source Control) lets you track your files over time. The idea is that when you mess up, you can easily get back to a previous working version.

Whenever you bring up the subject of which version control system to use, there is always a list:
Microsoft Visual SourceSafe
SourceGear Vault
Perforce
VOODOO (Versions Of Outdated Documents Organized Orthogonally)
Borland StarTeam
BitKeeper
Monotone
OpenCM
GNU Arch
Serena PVCS Version Manager
MKS Source
CVS (Concurrent Versions System) and TortoiseCVS
Subversion and TortoiseSVN
Microsoft Team Foundation Server (TFS)
IBM Rational ClearCase

Why Use Subversion?
Subversion is a system designed to control your source code. You may occasionally see the acronym 'SCM' associated with Subversion and its kind; 'SCM' stands for 'software configuration management', and Subversion is also very good at managing plain-text configuration files. However, I will be focusing on source control.
There are a number of reasons why you may want to use a piece of software to manage your source code. If you are working collaboratively on a project, letting each developer have their own copy of the code on their local machine is great, and the version control system helps prevent one developer from overwriting another developer's changes. Of course, it will not stop two developers from changing the same API in incompatible ways, so it's worth noting that Subversion is not a replacement for communication.
But what if you are working on a project alone? You can still use Subversion. Source control management software also tracks changes to your code. If you break your application, and you cannot figure out why, you will always have the older (and functional) version to compare your changes against.

Subversion is a centralized system for sharing information. At its core is a repository, which is a central store of data. The repository stores information in the form of a filesystem tree—a typical hierarchy of files and directories. Any number of clients connect to the repository, and then read or write to these files. By writing data, a client makes the information available to others; by reading data, the client receives information from others.

Subversion Architecture :
To use Subversion, each “set of files” is called a “repository”. A centralized “Subversion server” must be used, and it may contain any number of file repositories. To access these files, any number of “Subversion clients” may be used, typically from different machines. Since Subversion is open source, a considerable amount of effort has been dedicated to making the system cross-platform. In general, a Subversion server may be set up on Linux, Windows, or Mac OS X, and Subversion clients exist similarly for Linux, Windows, and Mac.

subArch

When files are retrieved from the server to the client, it is called an “update”, and when new versions of the files are sent to the server from the client, it is called a “commit”.

A typical repository will go through a continuous cycle of update-edit-commit.

subArch1
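For an existing working copy, that cycle looks roughly like this on the command line (the commit message is just an example; TortoiseSVN exposes the same update and commit operations through the Explorer context menu):

# Pull the latest changes from the repository into the working copy
svn update

# ... edit files in the working copy ...

# Send the new versions back to the repository as a commit
svn commit -m "Describe what was changed"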

Features of Subversion

Directory versioning
Subversion implements a “virtual” versioned filesystem that tracks changes to whole directory trees over time. Files and directories are versioned. As a result, there are real client-side move and copy commands that operate on files and directories.
Atomic commits
A commit either goes into the repository completely, or not at all. This allows developers to construct and commit changes as logical chunks.
Versioned metadata
Each file and directory has an invisible set of “properties” attached. You can invent and store any arbitrary key/value pairs you wish. Properties are versioned over time, just like file contents.
Choice of network layers
Subversion has an abstracted notion of repository access, making it easy for people to implement new network mechanisms. Subversion's “advanced” network server is a module for the Apache web server, which speaks a variant of HTTP called WebDAV/DeltaV. This gives Subversion a big advantage in stability and interoperability, and provides various key features for free: authentication, authorization, wire compression, and repository browsing, for example. A smaller, standalone Subversion server process is also available. This server speaks a custom protocol which can be easily tunneled over ssh.
Consistent data handling
Subversion expresses file differences using a binary differencing algorithm, which works identically on both text (human-readable) and binary (human-unreadable) files. Both types of files are stored equally compressed in the repository, and differences are transmitted in both directions across the network.
Efficient branching and tagging
The cost of branching and tagging need not be proportional to the project size. Subversion creates branches and tags by simply copying the project, using a mechanism called cheap copies, similar to hard links in Linux/UNIX. Thus these operations take only a very small, constant amount of time, and very little space in the repository.
Hackability
Subversion has no historical baggage; it is implemented as a collection of shared C libraries with well-defined APIs. This makes Subversion extremely maintainable and usable by other applications and languages.
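As a small command-line illustration of the versioned metadata feature (the property name and file are invented for the example), arbitrary properties are set, read, and committed like any other change:

# Attach an arbitrary key/value pair to a versioned file
svn propset review:status "approved" src/OrderStatus.cs

# Read the property back
svn propget review:status src/OrderStatus.cs

# The property change is committed just like a content change
svn commit -m "Mark OrderStatus.cs as reviewed" src/OrderStatus.cs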

SVN

TortoiseSVN
TortoiseSVN is a free open-source client for the Subversion version control system, which was initiated in 2000 by CollabNet Inc. That is, TortoiseSVN manages files and directories over time. Files are stored in a central repository. The repository is much like an ordinary file server, except that it remembers every change ever made to your files and directories. This allows you to recover older versions of your files and examine the history of how and when your data changed, and who changed it. This is why many people think of Subversion, and version control systems in general, as a sort of “time machine”.
Some version control systems are also software configuration management (SCM) systems. These systems are specifically tailored to manage trees of source code, and have many features that are specific to software development - such as natively understanding programming languages, or supplying tools for building software. Subversion, however, is not one of these systems; it is a general system that can be used to manage any collection of files, including source code.


TortoiseSVN is a Windows shell extension that allows you to access SVN repositories within Windows Explorer. Basically, any folder on your hard drive can be turned into an SVN folder and used to store a revision of an SVN repository with just a few mouse clicks and some connections info.

Feature of Tortoise SVN

1. Shell integration
TortoiseSVN integrates seamlessly into the Windows shell (i.e. Windows Explorer). This means you can keep working with the tools you're already familiar with, and you do not have to switch to a different application each time you need a version control function!

And you are not even forced to use Windows Explorer. TortoiseSVN's context menus work in many other file managers, and in the File/Open dialog which is common to most standard Windows applications. You should, however, bear in mind that TortoiseSVN is intentionally developed as an extension for Windows Explorer. Thus it is possible that in other applications the integration is not as complete and, for example, the icon overlays may not be shown.

2. Icon overlays
The status of every versioned file and folder is indicated by small overlay icons. That way you can see right away what the status of your working copy is.

image

image A freshly checked out working copy has a green checkmark as its overlay. That means the Subversion status is normal.
image As soon as you start editing a file, the status changes to modified and the icon overlay then changes to a red exclamation mark. That way you can easily see which files were changed since you last updated your working copy and need to be committed.
image If during an update a conflict occurs then the icon changes to a yellow exclamation mark.

image If you have set the svn:needs-lock property on a file, Subversion makes that file read-only until you get a lock on that file. Read-only files have this overlay to indicate that you have to get a lock first before you can edit that file.

image If you hold a lock on a file, and the Subversion status is normal, this icon overlay reminds you that you should release the lock if you are not using it to allow others to commit their changes to the file.

image This icon shows you that some files or folders inside the current folder have been scheduled to be deleted from version control or a file under version control is missing in a folder.

image The plus sign tells you that a file or folder has been scheduled to be added to version control.

3. Easy access to Subversion commands
All Subversion commands are available from the explorer context menu. TortoiseSVN adds its own submenu there.


TortoiseSVN's History
In 2002, Tim Kemp found that Subversion was a very good version control system, but it lacked a good GUI client. The idea for a Subversion client as a Windows shell integration was inspired by the similar client for CVS named TortoiseCVS.
Tim studied the source code of TortoiseCVS and used it as a base for TortoiseSVN. He then started the project, registered the domain tortoisesvn.org, and put the source code online. Around that time, Stefan Küng was looking for a good, free version control system and found Subversion and the source for TortoiseSVN. Since TortoiseSVN was still not ready for use, he joined the project and started programming. Soon he had rewritten most of the existing code and begun adding commands and features, to the point where nothing of the original code remained.
As Subversion became more stable, it attracted more and more users who also started using TortoiseSVN as their Subversion client. The user base grew quickly (and is still growing every day). That's when Lübbe Onken offered to help out with some nice icons and a logo for TortoiseSVN. He also takes care of the website and manages the translations.

Microsoft VSS vs. TortoiseSVN
Subversion benefits over Visual Source Safe (VSS):
Database integrity
The Subversion developers place their highest emphasis on protecting data. VSS databases have a reputation for frequent corruption.
Locking of Database:
VSS uses the lock-modify-unlock approach, whilst Subversion uses the copy-modify-merge approach.
Security
Subversion is easy to deploy over an encrypted link. One can use svnserve over a secure shell (ssh) link, or the Apache Subversion module over Apache's SSL (HTTPS) protocol. This eliminates the need to use special VPN software to secure communication with the repository. Different parts of a repository can have different access policies. Multiple repositories can be served from a single Apache web server. For example, one could restrict commit rights to the main trunk to a small group of project leaders, while allowing each developer or team a separate branch to work within. Project leaders would review easily-identified changes in a team's branch and commit them to the trunk. This is in fact how Subversion itself is managed, ensuring high quality in an open source environment with many independent contributors.
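As a rough sketch of that path-based access control (repository, group, and user names are invented for the example), the authorization rules might look like this in the authz file read by Apache or svnserve:

[groups]
leaders = alice, bob
team1 = carol, dave

# Everyone can read the trunk, but only project leaders can commit to it
[MyRepos:/trunk]
* = r
@leaders = rw

# Each team has full access to its own branch
[MyRepos:/branches/team1]
@team1 = rw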
Performance over WAN
VSS was designed for a LAN, and requires massive disk activity for even simple operations. Subversion was designed for global clients. It strives to minimize network traffic. Many common operations can be performed without connection to the repository, such as comparing one's working copy to the version that was checked out.
True client/server
VSS is a peer-to-peer system in which every client is really a server, requiring full access to the underlying database. Faults in any peer can damage the database, and this is known to happen frequently. Subversion is normally deployed as a client/server architecture, with a single server having access to the actual database. If a fault happens in a client during a transaction, the server will roll back the transaction, protecting all other clients from corruption.
Cheap copies (i.e. branches and tags)
One can copy large parts of a repository to another path, and only the fact of the copy is stored, not the actual data. This makes it very cheap (almost free) to tag and branch. This in turn makes it cheap for developers to create private version-controlled "sandboxes" where large features can be developed without the need to coordinate with other groups. Only once the entire feature is tested is it merged into the trunk. The developer can merge well-tested trunk developments into her branch.
No downtime for "maintenance"
Normal maintenance is just backup. This is done with an administrative command ("svnadmin dump") that performs a normal database lock, so the repository remains highly-available.
Disconnected development
One doesn't have to be connected to the repository to start work, as long as one has a working copy. If you're on a plane or waiting for your train to the office, you can power up your laptop and immediately start editing. There is no need to lock files before you begin. Subversion makes this particularly easy because your working copy contains a pristine copy of the original checked-out files, taken before you started making changes. (This copy is normally kept in the .svn subdirectory under each working directory.) This makes it easy to review your own edits without using the network.
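Because of that pristine copy, several everyday operations work entirely offline; a quick sketch (the file name is only an example):

# All of these read the pristine copy in .svn and need no network connection
svn status             # which files have I modified?
svn diff               # what exactly did I change?
svn revert Order.cs    # throw away my local changes to one file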
Subversion developers "eat their own dog food"
Microsoft does not use VSS internally, and it is not an actively maintained product. The Subversion developers use Subversion to manage the development of Subversion. It's in their own interest to make it the best system it can be.

 

Versioning Models for Version Control Systems:
All version control systems have to solve the same fundamental problem: how will the system allow users to share information, but prevent them from accidentally stepping on each other's feet? It's all too easy for users to accidentally overwrite each other's changes in the repository.
The Problem of File-Sharing
Consider this scenario: suppose we have two co-workers, Harry and Sally. They each decide to edit the same repository file at the same time. If Harry saves his changes to the repository first, then it's possible that (a few moments later) Sally could accidentally overwrite them with her own new version of the file. While Harry's version of the file won't be lost forever (because the system remembers every change), any changes Harry made won't be present in Sally's newer version of the file, because she never saw Harry's changes to begin with. Harry's work is still effectively lost - or at least missing from the latest version of the file - and probably by accident. This is definitely a situation we want to avoid!

image
The Lock-Modify-Unlock Solution
Many version control systems use a lock-modify-unlock model to address this problem, which is a very simple solution. In such a system, the repository allows only one person to change a file at a time. First Harry must "lock" the file before he can begin making changes to it. Locking a file is a lot like borrowing a book from the library; if Harry has locked a file, then Sally cannot make any changes to it. If she tries to lock the file, the repository will deny the request. All she can do is read the file, and wait for Harry to finish his changes and release his lock. After Harry unlocks the file, his turn is over, and now Sally can take her turn by locking and editing.

image

The problem with the lock-modify-unlock model is that it's a bit restrictive, and often becomes a roadblock for users:

• Locking may cause administrative problems. Sometimes Harry will lock a file and then forget about it. Meanwhile, because Sally is still waiting to edit the file, her hands are tied. And then Harry goes on vacation. Now Sally has to get an administrator to release Harry's lock. The situation ends up causing a lot of unnecessary delay and wasted time.
• Locking may cause unnecessary serialization. What if Harry is editing the beginning of a text file, and Sally simply wants to edit the end of the same file? These changes don't overlap at all. They could easily edit the file simultaneously, and no great harm would come, assuming the changes were properly merged together. There's no need for them to take turns in this situation.
• Locking may create a false sense of security. Pretend that Harry locks and edits file A, while Sally simultaneously locks and edits file B. But suppose that A and B depend on one another, and the changes made to each are semantically incompatible. Suddenly A and B don't work together anymore. The locking system was powerless to prevent the problem - yet it somehow provided a false sense of security. It's easy for Harry and Sally to imagine that by locking files, each is beginning a safe, insulated task, which inhibits them from discussing their incompatible changes early on.
The Copy-Modify-Merge Solution
Subversion, CVS, and other version control systems use a copy-modify-merge model as an alternative to locking. In this model, each user's client reads the repository and creates a personal working copy of the file or project. Users then work in parallel, modifying their private copies. Finally, the private copies are merged together into a new, final version. The version control system often assists with the merging, but ultimately a human being is responsible for making it happen correctly. Here's an example. Say that Harry and Sally each create working copies of the same project, copied from the repository. They work concurrently, and make changes to the same file "A" within their copies. Sally saves her changes to the repository first. When Harry attempts to save his changes later, the repository informs him that his file A is out-of-date. In other words, file A in the repository has somehow changed since he last copied it. So Harry asks his client to merge any new changes from the repository into his working copy of file A. Chances are that Sally's changes don't overlap with his own; so once he has both sets of changes integrated, he saves his working copy back to the repository.

image
But what if Sally's changes do overlap with Harry's changes? What then? This situation is called a conflict, and it's usually not much of a problem. When Harry asks his client to merge the latest repository changes into his working copy, his copy of file A is somehow flagged as being in a state of conflict: he'll be able to see both sets of conflicting changes, and manually choose between them. Note that software can't automatically resolve conflicts; only humans are capable of understanding and making the necessary intelligent choices. Once Harry has manually resolved the overlapping changes (perhaps by discussing the conflict with Sally!), he can safely save the merged file back to the repository. The copy-modify-merge model may sound a bit chaotic, but in practice it runs extremely smoothly. Users can work in parallel, never waiting for one another. When they work on the same files, it turns out that most of their concurrent changes don't overlap at all; conflicts are infrequent. And the amount of time it takes to resolve conflicts is far less than the time lost by a locking system. In the end, it all comes down to one critical factor: user communication. When users communicate poorly, both syntactic and semantic conflicts increase. No system can force users to communicate perfectly, and no system can detect semantic conflicts. So there's no point in being lulled into a false promise that a locking system will somehow prevent conflicts; in practice, locking seems to inhibit productivity more than anything else. There is one common situation where the lock-modify-unlock model comes out better, and that is where you have un-mergeable files. For example, if your repository contains some graphic images and two people change the same image at the same time, there is no way for those changes to be merged together; either Harry or Sally will lose their changes.

What does Subversion Do?
Subversion uses the copy-modify-merge solution by default, and in many cases this is all you will ever need. However, as of Version 1.2, Subversion also supports file locking, so if you have unmergeable files, or if you are simply forced into a locking policy by management, Subversion will still provide the features you need.
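A brief sketch of how that locking support is used from the command line (the file names and messages are only examples):

# Mark a binary file so that it must be locked before editing
svn propset svn:needs-lock '*' images/logo.png
svn commit -m "Require a lock for logo.png" images/logo.png

# Take the lock, edit the file, then commit (the commit releases the lock by default)
svn lock -m "Editing the logo" images/logo.png
svn commit -m "New logo" images/logo.png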


Checkouts and Commits in SVN
When a developer wishes to work with SVN version-controlled source code, he or she must first 'check out' the current version of the code (or possibly an older version, if necessary). 'Check out' describes the process of the TortoiseSVN client connecting to the SVN server, and downloading a version of the code in a repository. Once the code is checked out, it can be worked with just like un-versioned code. After some milestone has been reached (or the workday has ended), the updated code can then be 'committed' back to the SVN repository as a new version of the source code, and subsequent attempts to check out the latest version of the code will acquire this newer, updated version.
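In command-line terms (the URL, folder, and message are placeholders in the same style used later in this post), that first check out and the commit back look like this:

# Check out the current version of the code into a local working copy
svn checkout http://MyServerName/svn/MyRepos/trunk MyProject
cd MyProject

# ... work on the code as usual ...

# Send the changes back to the repository as a new version
svn commit -m "Finished the order status screen"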

 

Branching / Tagging in Subversion
One of the features of version control systems is the ability to isolate changes onto a separate line of development. This line is known as a branch. Branches are often used to try out new features without disturbing the main line of development with compiler errors and bugs. As soon as the new feature is stable enough then the development branch is merged back into the main branch (trunk).
Another feature of version control systems is the ability to mark particular revisions (e.g. a release version), so you can at any time recreate a certain build or environment. This process is known as tagging.
Subversion does not have special commands for branching or tagging, but uses so-called cheap copies instead. Cheap copies are similar to hard links in Unix, which means that instead of making a complete copy in the repository, an internal link is created, pointing to a specific tree/revision. As a result, branches and tags are very quick to create, and take up almost no extra space in the repository.
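On the command line those cheap copies are created with svn copy; for example (the URLs are placeholders):

# Create a development branch from the trunk
svn copy http://MyServerName/svn/MyRepos/trunk http://MyServerName/svn/MyRepos/branches/new-feature -m "Create branch for the new feature"

# Tag a release
svn copy http://MyServerName/svn/MyRepos/trunk http://MyServerName/svn/MyRepos/tags/release-1.0 -m "Tag release 1.0"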

 

Creating The Repository With TortoiseSVN
1. Open Windows Explorer.
2. Create a new folder and name it, e.g., SVNRepository.
3. Right-click on the newly created folder and select TortoiseSVN -> Create Repository here...

image
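If you prefer the command line to the TortoiseSVN menu, the equivalent is svnadmin, using the same folder name as in the steps above:

# Create an empty repository in the SVNRepository folder
svnadmin create SVNRepository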

 

Accessing the Repository

Right-click on the desktop and, from the menu, select TortoiseSVN -> Repo Browser, as shown in the figure.

Repo1

In the next screen, type the URL of the repository, say http://MyServerName/svn/MyRepos or svn://MyServerName/MyRepos, and click OK.

repoURL

It will display an authentication screen, as shown below, where you can provide a user ID and password to log in to the repository. The Subversion administrator will provide a user ID and password for your repository access.

repoAuthentication

Check 'Save authentication' to save the user name and password, and click OK. The following screen will then appear, displaying the repository contents.

RepoBrowser
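The command-line equivalent of the repository browser is svn list, pointed at the same repository URL:

# Show the top-level contents of the repository
svn list http://MyServerName/svn/MyRepos

# Browse a sub-folder without checking anything out
svn list http://MyServerName/svn/MyRepos/trunk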

 

(cont...)