There is considerable confusion between smoke and sanity testing; many people conflate the two terms. There is, however, a subtle but clear difference between them.

• Smoke testing originated in the hardware testing practice of powering on a new piece of hardware for the first time. In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested briefly, without going into depth.
• It is well scripted, using either written test routines or an automated test suite.
• A smoke test is intended to touch every part of the application in a brief manner.
• Smoke testing is conducted to ensure that the most essential functions of a program are working, without concern for the finer details.
• Smoke testing is like a routine health check-up of a build of the application before it is taken into in-depth testing.

Now let us look at another fundamental testing term: “sanity testing”.


• A sanity test is a narrow regression test that focuses on one or a few areas of functionality. It is usually narrow and deep.
• A sanity test is typically unscripted.
• A sanity test is used to determine that a small section of the application is still working after a minor change.
• Sanity testing is also brief; it is performed whenever a cursory check is sufficient to prove that the application is functioning according to specifications.
• Sanity testing confirms whether the requirements are met for the area under test, rather than by checking every feature.
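The contrast between the two approaches can be sketched in code. The toy `App` class and its methods below are hypothetical, invented purely for illustration: the smoke suite touches every major area once without depth, while the sanity check drills into the single area affected by a change.

```python
# Toy application standing in for a real system (hypothetical, for illustration).
class App:
    def login(self, user, password):
        return user == "admin" and password == "secret"

    def search(self, query):
        return [item for item in ["widget", "gadget"] if query in item]

    def checkout(self, items):
        # Assume a flat price of 10 per item.
        return {"total": 10 * len(items), "status": "ok"}

def smoke_suite(app):
    """Shallow and wide: touch every major area once, no depth."""
    return {
        "login": app.login("admin", "secret"),
        "search": bool(app.search("widget")),
        "checkout": app.checkout(["widget"])["status"] == "ok",
    }

def sanity_check_checkout(app):
    """Narrow and deep: after a change to checkout, probe that one area only."""
    assert app.checkout([])["total"] == 0
    assert app.checkout(["a"])["total"] == 10
    assert app.checkout(["a", "b"])["total"] == 20
    return True

app = App()
print(all(smoke_suite(app).values()))   # every area responds
print(sanity_check_checkout(app))       # the changed area behaves correctly
```

Note how the smoke suite never inspects the checkout total at all, while the sanity check never touches login or search.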


Agile testing follows a more fluid, continuous process which takes place hand-in-hand with development and product management. An agile team doesn’t do all of the requirements work for a system, then all of the development work and then all of the testing work consecutively. Instead, the agile team takes a small piece of the system and works together to complete that piece of the system. The piece may be infrastructure-related, feature development or a research spike. Then the team takes on another small piece and completes that piece. The project marches toward completion piece by piece.

Completing a piece of the system, referred to as a story or backlog item, means that product management, development and testing work together toward a common goal. The goal is for the story to be ‘done.’ Stories are identified and prioritized by the product owner, who manages the backlog. Stories are selected based on their priority and effort estimate. The effort estimate is another team activity, which also includes testers. The team also identifies dependencies, technical challenges, or testing challenges. The whole team agrees on final acceptance criteria for a story to determine when it’s ‘done.’

During an iteration, several stories may be in various stages of development, test, or acceptance. Agile testing is continuous, since everyone on an agile team tests. However, both the focus and the timing of testing differ depending on the type of testing being performed. Developers take the lead on code-level tests, while the tester on the agile team provides early feedback during all stages of development, helps with or stays cognizant of the code-level testing being performed, takes the lead on acceptance test automation, builds regression test plans, and uncovers additional test scenarios through exploratory testing.

In addition, the agile tester ensures acceptance test coverage is adequate, leads automation efforts on integrated, system-level tests, keeps test environments and data available, identifies regression concerns and shares testing techniques. Additional testing, such as performance and regression testing, that falls outside the scope of story-level testing, can be addressed through test-oriented stories, which are estimated, planned and tracked just like a product-oriented story.
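As a concrete illustration of story-level acceptance test automation, the sketch below encodes a story's agreed acceptance criteria as executable checks. The story, the criteria, and the `apply_discount` function are all hypothetical assumptions invented for this example, not taken from the text.

```python
# Story (hypothetical): "As a shopper, I get 10% off orders of 100 or more."
# The team's agreed acceptance criteria, encoded as automated checks.

def apply_discount(order_total):
    """Production code under test (assumed implementation)."""
    if order_total >= 100:
        return round(order_total * 0.9, 2)
    return order_total

def test_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99

def test_discount_at_threshold():
    assert apply_discount(100) == 90.0

def test_discount_above_threshold():
    assert apply_discount(250) == 225.0

# Once the story is 'done', these checks join the regression suite and run
# every iteration, so the story stays done as the code evolves.
for check in (test_no_discount_below_threshold,
              test_discount_at_threshold,
              test_discount_above_threshold):
    check()
print("acceptance checks passed")
```

Because the criteria are executable, "done" becomes an objective, repeatable decision rather than a judgment call.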


SaaS offering attributes

Integration with External Applications: Simple Object Access Protocol (SOAP)-based Service-Oriented Architecture (SOA), Extract-Transform-Load (ETL) and Online Analytical Processing (OLAP) Application Programming Interfaces (APIs)
Manageability: Multi-tenant architecture to support clients from a single instance in order to reduce the costs of infrastructure, hosting and management
Performance: Distributed data caching and code optimization tools for improving performance and response time
Scalability: Meta-database and load balancing for scalability
Security: Multi-tiered, multi-layered, role-based security model
Time-to-Market: Distributed Agile methodology and platform (GlobalLogic Velocity™) to accelerate time-to-market and provide shorter release cycles
Usability: AJAX-based APIs to provide interactive, professional-looking Graphical User Interfaces (GUIs) supported by a dedicated team of usability experts
Compatibility: Portability experts to provide consistent support across a variety of browser platforms
Availability: 24/7 in-house support services to ensure uptime and continuous availability
Expertise on Open Source: Use of tools to reduce total cost of ownership

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure “in the cloud” that supports them.

Cloud computing customers do not generally own the physical infrastructure serving as host to the software platform in question. Instead, they avoid capital expenditure by renting usage from a third-party provider. They consume resources as a service and pay only for resources that they use. Many cloud-computing offerings employ the utility computing model, which is analogous to how traditional utility services (such as electricity) are consumed, while others bill on a subscription basis. Sharing “perishable and intangible” computing power among multiple tenants can improve utilization rates, as servers are not unnecessarily left idle (which can reduce costs significantly while increasing the speed of application development).


The majority of cloud computing infrastructure consists of reliable services delivered through data centers and built on servers with different levels of virtualization technologies. The services are accessible anywhere that provides access to networking infrastructure. Clouds often appear as single points of access for all consumers’ computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers and typically offer SLAs. Open standards are critical to the growth of cloud computing, and open source software has provided the foundation for many cloud computing implementations.

Key Characteristics:

  • Agility improves with users able to rapidly and inexpensively re-provision technological infrastructure resources.
  • Cost is greatly reduced and capital expenditure is converted to operational expenditure. This lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and minimal or no IT skills are required for implementation.
  • Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using, e.g., PC, mobile. As the infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.
  • Multi-tenancy enables sharing of resources and costs among a large pool of users, allowing for:
    • Centralization of infrastructure in areas with lower costs (such as real estate, electricity, etc.)
    • Peak-load capacity increases (users need not engineer for highest possible load-levels)
    • Utilization and efficiency improvements for systems that are often only 10-20% utilized.
  • Reliability improves through the use of multiple redundant sites, which makes it suitable for business continuity and disaster recovery. Nonetheless, most major cloud computing services have suffered outages and IT and business managers are able to do little when they are affected.
  • Scalability via dynamic (“on-demand”) provisioning of resources on a fine-grained, self-service basis in near real-time, without users having to engineer for peak loads. Performance is monitored, and consistent, loosely coupled architectures are constructed using web services as the system interface.
  • Security typically improves due to centralization of data, increased security-focused resources, etc., but raises concerns about loss of control over certain sensitive data. Security is often as good as or better than traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible.
  • Sustainability comes about through improved resource utilization, more efficient systems, and carbon neutrality. Nonetheless, computers and associated infrastructure are major consumers of energy.
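The multi-tenancy and utilization points above can be made concrete with back-of-envelope arithmetic. The figures below are assumptions chosen for illustration: twenty tenants on dedicated servers averaging 15% utilization, versus a shared pool operated at a 70% utilization target.

```python
import math

tenants = 20
avg_utilization = 0.15     # each dedicated server ~15% busy (assumed)
target_utilization = 0.70  # safe operating point for shared servers (assumed)

dedicated_servers = tenants                 # one server per tenant
total_load = tenants * avg_utilization      # ~3 servers' worth of actual work
shared_servers = math.ceil(total_load / target_utilization)

print(f"dedicated: {dedicated_servers} servers, shared pool: {shared_servers} servers")
```

Under these assumed numbers, the same aggregate workload drops from 20 dedicated servers to a shared pool of 5, which is where the cost and energy savings of multi-tenancy come from.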

It is well understood that unit testing improves the quality and predictability of your software releases. Do you know, however, how well your unit tests actually test your code? How many tests are enough? Do you need more tests? These are the questions code coverage measurement seeks to answer.

Coverage measurement also helps to avoid test entropy. As your code goes through multiple release cycles, there can be a tendency for unit tests to atrophy. As new code is added, it may not meet the same testing standards you put in place when the project was first released. Measuring code coverage can keep your testing up to the standards you require. You can be confident that when you go into production there will be minimal problems because you know the code not only passes its tests but that it is well tested.

In summary, we measure code coverage for the following reasons:

* To know how well our tests actually test our code
* To know whether we have enough testing in place
* To maintain the test quality over the lifecycle of a project

Code coverage is not a panacea. Coverage generally follows an 80-20 rule. Increasing coverage values becomes difficult, with new tests delivering less and less incrementally. If you follow defensive programming principles, where failure conditions are often checked at many levels in your software, some code can be very difficult to reach with practical levels of testing. Coverage measurement is not a replacement for good code review and good programming practices.

In general you should adopt a sensible coverage target and aim for even coverage across all of the modules that make up your code. Relying on a single overall coverage figure can hide large gaps in coverage.
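The point about a single overall figure hiding gaps is easy to demonstrate numerically. The module names and line counts below are invented for illustration, not real measurements:

```python
# Hypothetical per-module line-coverage data (assumed numbers, for illustration):
# the overall figure can look acceptable while one module is barely tested.
modules = {
    "billing":   {"covered": 450, "total": 500},   # 90%
    "reporting": {"covered": 380, "total": 400},   # 95%
    "importer":  {"covered": 30,  "total": 300},   # 10% -- the hidden gap
}

def pct(covered, total):
    return 100.0 * covered / total

overall_covered = sum(m["covered"] for m in modules.values())
overall_total = sum(m["total"] for m in modules.values())

print(f"overall: {pct(overall_covered, overall_total):.1f}%")
for name, m in modules.items():
    print(f"  {name}: {pct(m['covered'], m['total']):.1f}%")
```

The overall number lands near 72%, which might clear a project-wide target, yet the `importer` module sits at 10%. This is why aiming for even coverage across modules matters more than chasing one aggregate figure.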

IT must treat SaaS like on-premise deployments — collaboratively working with the business to ferret out requirements, getting involved in vendor selection to make sure integration and customization capabilities can meet the needs, and doing business justifications for new projects. But IT must also adjust to the unique aspects of SaaS when thinking about:
  • Contracts. IT departments experienced with ISV and outsourcing contracts will be in a better position than most business users to negotiate with SaaS vendors, but additional considerations apply. Some vendors like NetSuite offer a standard SLA, but many do not. Additionally, because SaaS solutions are periodically taken down for maintenance and upgrades, firms must specify whether the SLA covers total downtime or only downtime outside the vendor’s planned downtime windows. Firms must also consider data ownership issues. Make sure you get your data back for free at the end of the relationship, and make sure that you can get data dumps as needed for backup or analytics.
  • Integrations. Don’t reinvent the wheel. Many vendors offer prepackaged integrations to systems like Oracle, SAP, and Siebel, including some integration-as-a-service offerings. Ask the vendor to provide customer references for integration and learn from their mistakes and successes. Also take advantage of online developer communities when you have questions. Most importantly, find out whether the solution can support the type of integration you need before you make a buying decision. Most customers that we’ve spoken with are not doing integration in real time but are doing nightly or weekly updates instead.
  • Customizations. Unlike on-premise or outsourced software where you own the code and have full liberty to modify it, with SaaS you are limited to the customization tools exposed by the vendor, add-on solutions, and custom scripts that the vendor supports. Firms must be able to make the solution fit their business needs — so that they don’t end up adapting processes to the software. Firms should take advantage of industry templates where possible so that they don’t have to start from scratch. Siebel offers five deeper vertical solutions for an extra fee; salesforce.com offers eight lighter templates for free. When you have to do more advanced customizations yourself, take advantage of the free sample code and developer communities available online.
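The batch-style integration that most customers described (nightly or weekly updates rather than real time) can be sketched as follows. The record shapes, field names, and sync window here are hypothetical, chosen only to show the pattern:

```python
from datetime import datetime

def extract_changes(records, since):
    """Pick up only records modified since the last nightly run."""
    return [r for r in records if r["modified"] > since]

# Hypothetical CRM extract (assumed data, for illustration).
crm_records = [
    {"id": 1, "name": "Acme",   "modified": datetime(2024, 1, 2)},
    {"id": 2, "name": "Globex", "modified": datetime(2024, 1, 9)},
]

last_sync = datetime(2024, 1, 8)
batch = extract_changes(crm_records, last_sync)

# In a real integration this batch would be pushed to the on-premise system
# (e.g. through the vendor's API or an integration-as-a-service offering);
# here we simply show which records would ship in tonight's run.
print([r["name"] for r in batch])
```

Tracking a high-water mark like `last_sync` keeps each nightly run incremental, which is what makes the batch approach cheap enough that most customers prefer it over real-time integration.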
Software-as-a-service — from vendors like NetSuite and salesforce.com — has democratized the software industry from RFP to purchase to rollout. Unlike complex on-premise deployments that carry a hefty price tag and require weeks for vendor selection and months for rollout, SaaS provides a quicker, lower-risk alternative to traditional licensed software that empowers the business unit by:
  • Enabling business units to own the buying cycle. Business users now have a software option that allows them to control the buying cycle, in contrast to pre-SaaS application purchases that required IT involvement — to short-list, demo, and approve solutions — as well as corporate involvement for budget sign-off. With SaaS, line-of-business heads ranging from the vice president of sales to the vice president of HR can single-handedly own the decision — by taking advantage of free trial offers on Web sites to evaluate solutions and paying a monthly or quarterly rate low enough to stay off the corporate radar screen.
  • Eliminating dependence on IT. SaaS allows business units to roll out and manage day-to-day application needs, providing an alternative for users who don’t have necessary IT resources available — or IT’s cooperation. One customer we spoke with said, “Our division looked at SaaS since we couldn’t use our headquarters’ IT department. They weren’t willing to support us at all.” Since part of the SaaS appeal is freedom from IT, vendors have focused on creating easy-to-use, point-and-click tools so that business users can set up and configure solutions with little technical knowledge and minimal specialized training. For many users, SaaS wizards for creating custom reports, changing roles and access rights, and building custom layouts means an end to waiting on an IT project list for days or weeks until resources become available.
  • Facilitating ongoing development and innovation networks. Because firms running on a multitenant architecture are all running the same code base, user firms, channel partners, and SIs can choose to reapply one deployment’s customizations to another through templates, creating economies of scale not possible with on-premise implementations with modified code. Moreover, this co-development speeds innovation and enhancement by letting developers build off each other’s work. Vendors like salesforce.com and Salesnet support publicly accessible developer sites that provide free sample code, best practices, and forums for developer discussion. salesforce.com’s developer community has gone one step further by growing an open source developer forum on SourceForge.net.
Software Industry Shifts To Respond To New Customer Demands
As appetite for SaaS continues to grow, vendors must rush to match supply with demand. Although the midmarket shows the most demand for SaaS — with more than half of firms saying they seriously consider SaaS when buying new software — firms of all sizes are attracted to the model, and ISVs as well as SIs are getting the message.