User Accounts in Windows

Local user accounts allow users to log on to, and gain access to resources on, only the computer where the local user account was created.
When you create a local user account, Windows XP creates the account only in that computer's security database, which is called the local security database. Windows XP does not replicate local user account information to domain controllers. After the local user account exists, the computer uses its local security database to authenticate the local user account, which allows the user to log on to that computer.

Do not create local user accounts on computers running Windows XP that are part of a domain, because the domain does not recognize local user accounts. Therefore, the user is unable to gain access to resources in the domain and the domain administrator is unable to administer the local user account properties or assign access permissions for domain resources.

Built-In User Accounts:

Windows XP automatically creates accounts called built-in accounts. Two commonly used built-in accounts are Administrator and Guest.

1. Administrator

Use the built-in Administrator account to manage the overall computer and domain configuration, such as creating and modifying user accounts and groups, managing security policies, creating printers, and assigning the permissions and rights that user accounts need to gain access to resources.
If you are the administrator, you should create a user account that you use to perform non-administrative tasks.

Note: You can rename the Administrator account, but you cannot delete it. As a best practice, you should always rename the built-in Administrator account to provide a greater degree of security. Use a name that does not identify it as the Administrator account, so that an attacker cannot easily tell which user account has administrative privileges.

2. Guest

Use the built-in Guest account to give occasional users the ability to log on and gain access to resources. For example, an employee who needs access to resources for a short time can use the Guest account. Note: The Guest account is disabled by default. Enable it only in low-security networks and always assign it a password. You can rename the Guest account, but you cannot delete it.

Multiprocessor Semaphore

Shared memory semaphores are basic tools for interprocessor synchronization. Although
self-imposed design constraints can often reduce synchronization requirements, semaphores offer significant flexibility to multiprocessor system designers. The implementation presented here illustrates some fundamental issues of multiprocessor concurrency and demonstrates the tremendous value of a multitasking OS like DSP/BIOS.
Many variations on this theme are possible. Most obviously, we can modify the semaphore to handle multiple tasks on one processor or tasks on more than two processors. And because our wait operation handles notification interrupts and task wake-up, we can implement any scheduling policies that make sense for more generalized versions.

MULTIPROCESSOR SEMAPHORE
==============================================
Multiprocessing architectures are becoming pervasive. DSPs are widely used in dense multiprocessor arrangements at the network edge, and systems-on-chip often include DSP cores to accelerate math-intensive computation. Although DSP/BIOS provides a standard, efficient, robust API for uniprocessor applications, we sometimes encounter situations where interprocessor synchronization mechanisms would be very useful. Here we will take a look at multiprocessor mutual exclusion and discuss a method for implementing interprocessor semaphores using DSP/BIOS.
Many multiprocessor DSP systems are designed to share a physical pool of memory, in the sense that each processor sees the memory as directly addressable, whether a DSP shares a region of memory with a host processor or with other DSPs.
A common architecture uses a large region of single-port RAM shared by all devices, including the host, although arbitration issues complicate the hardware design. A second architecture uses dual-port RAM (DPRAM) between processors. The downside here is the relatively high cost and small storage capacity of these devices - large banks of expensive DPRAM are seldom practical. But in applications that use segmented data transport or small data sets, where small amounts of DPRAM are sufficient, this method is very useful. DPRAM is relatively fast, designer-friendly and, unlike FIFOs, can store shared data structures used for interprocessor communication.
Shared memory brings a caveat: when processors have on-chip cache or a system uses write posting, software designers must pay attention to shared-variable coherence. Depending on the processor, programmers can disable cache, use cache bypass, or flush cache to ensure that a shared location is in a proper state. The cache control API in TI's comprehensive Chip Support Library, for example, provides an ideal tool for cache subsystem management. Solutions to write-post delay problems are system-specific.
Our discussion here assumes that two processors use a common shared memory buffer to pass data or to operate cooperatively on a data set. In either case, one or more tasks on the processors might need to know the state of the buffer before accessing it, and possibly to block while the buffer is in use. As in the case of single-processor multitasking, we need a mutual exclusion mechanism to prevent inappropriate concurrent operations on the shared resource. Let's start with a quick review of mutual exclusion to better understand multiprocessor issues.
Shared resource management is a fundamental challenge of multitasking. A task (or thread, or process) needs the ability to execute sequences of instructions without interference so it can atomically manipulate shared data. These sequences, known as critical sections, are bracketed by entry and exit protocols that satisfy four properties - mutual exclusion, absence of deadlock, absence of unnecessary delay and eventual entry (no starvation). Our focus here is mutual exclusion - the remaining properties are detailed in any number of textbooks and will be satisfied by our multiprocessor semaphore.
Relative to a shared resource, mutual exclusion requires that only one task at a time execute in a critical section. Critical section entry and exit protocols use mechanisms such as polled flags (often called simple locks or spin locks) or more abstract entities such as blocking semaphores. Simple locks can be used to build protection mechanisms of greater complexity.
Semaphores
The semaphore is a system-level abstraction used for interprocess synchronization. It provides two atomic operations, wait (P) and signal (V), which are invoked to manipulate a non-negative integer within the semaphore data structure. The wait operation checks the value of the integer and either decrements it if positive or blocks the calling task. The signal operation either unblocks a task waiting on the semaphore or increments the semaphore if no tasks are waiting. A binary semaphore, with value limited to 0 and 1, can be used effectively by an application to guard critical sections.
A multiprocessor semaphore can be implemented by placing its data structure in shared memory and using RTOS services on each processor to handle blocking. Before outlining an implementation, let's look at two aspects of semaphores that cause complications in a multiprocessor environment. One is low-level mutual exclusion to protect shared data within a semaphore and the other is wake-up notification when a semaphore is released.
Low-level mutual exclusion
At its core, a semaphore has a count variable and possibly other data elements that must be manipulated atomically. System calls use simple mutual exclusion mechanisms to guard very short critical sections where the semaphore structure is accessed. This prevents incorrect results from concurrent modification of shared semaphore data.
In a uniprocessor environment, interrupt masking is a popular technique used to ensure that sequential operations occur without interference. Interrupts are disabled at the entrance to a critical section and re-enabled on exit. In a multiprocessor situation, however, this isn't an option. Even if one processor could disable the interrupts of another (rarely the case), the second processor would still execute an active thread and might inadvertently violate mutual exclusion requirements.
A second technique uses an atomic test-and-set (or similar) instruction to manipulate a variable. This variable might be the semaphore count itself or a simple lock used to guard critical sections where semaphore data is accessed. Either way, a specialized instruction guarantees atomic read-modify-write in a multitasking environment. Although this looks like a straightforward solution, test-and-set has disadvantages in both uniprocessor and multiprocessor scenarios. One drawback is dependence on machine instructions. These vary across processors, provide only a small number of atomic operations and are sometimes unavailable. A second problem is bus locking. If multiple processors share a common bus that doesn't support locking during test-and-set, processors might interleave accesses to a shared variable at the bus level while executing seemingly atomic test-and-set instructions. And a third problem is test-and-set behavior in multi-port RAM systems. Even if all buses can be locked, simultaneous test-and-set sequences at different ports might produce overlapped accesses.
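To make the test-and-set idea concrete, here is a minimal spin lock sketched with C11 atomics. This is only an illustration of the general technique, not part of the implementation developed below; it assumes a toolchain that provides <stdatomic.h> and hardware where the atomic exchange is honored across all processors that share the flag, which is exactly where the bus-locking and multi-port caveats above come into play.

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;    /* shared lock flag */

    static void spin_lock(void)
    {
        /* atomic_flag_test_and_set returns the previous value, so keep
           spinning until we are the one who set the flag */
        while (atomic_flag_test_and_set(&lock)) {
            /* busy-wait */
        }
    }

    static void spin_unlock(void)
    {
        atomic_flag_clear(&lock);                  /* release the lock */
    }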
Now consider two approaches that are very useful in shared memory scenarios. One relies on simple atomic hardware locks and the other is a general-purpose software solution known as Peterson's algorithm.
Hardware flags
In shared memory systems, hardware-assisted mutual exclusion can be implemented with special bit flags found in multi-port RAMs. DPRAM logic prevents overlap of concurrent operations on these hardware flags, forcing them to maintain correct state during simultaneous accesses. And because processors use standard-issue read/write instructions to manipulate the flags, special test-and-set-like instructions are not required. But this is still a limited solution - software engineers often encounter shared memory systems that lack this feature. So let's take one more step to arrive at a general-purpose hardware-independent method.
Peterson's Algorithm
Peterson's algorithm, published in 1981, provides an elegant software solution to the n-process critical section problem and has two distinct advantages over test-and-set spin locks. One is that atomic test-and-set is not required - the algorithm eliminates the need for special instructions and bus locking. The other is eventual entry - a task waiting for entry to a critical section won't starve in a weakly fair (typical) scheduling environment. Although Peterson's algorithm looks deceptively simple, it's a culmination of many attempts by researchers to solve the critical section problem.
The following pseudo-code shows the entry and exit protocols used to enforce two-process mutual exclusion. Note that Peterson adds a secondary "turn" variable - this prevents incorrect results from race conditions and also ensures that each waiting task will eventually enter the critical section.
Listing 1: Peterson's Algorithm

initialization:
    P1_wants_entry = P2_wants_entry = FALSE
    turn = P1

task P1:
    P1_wants_entry = TRUE                        /* set lock */
    turn = P2                                    /* grant turn to other task */
    loop while (P2_wants_entry and turn == P2)   /* busy-wait for lock */

    critical section                             /* execute critical section */

    P1_wants_entry = FALSE                       /* release lock */

task P2:                                         /* same logic as P1 */
    P2_wants_entry = TRUE
    turn = P1
    loop while (P1_wants_entry and turn == P1)

    critical section

    P2_wants_entry = FALSE
We can easily imagine situations where more than two processes try to enter their critical sections concurrently. Peterson's algorithm can be generalized to n processes and used to enforce mutual exclusion between more than two tasks. And other n-process solutions, such as the bakery algorithm, are readily available in computer science textbooks. Our discussion here is limited to the two-process case only for clarity and brevity. Pseudo-code for the n-process Peterson algorithm can be found on the Electric Sand web-site.
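As a concrete rendering of Listing 1, the two-process entry and exit protocols might look roughly as follows in C. This is a sketch under assumptions of our own: sequentially consistent C11 atomics stand in for whatever shared-memory ordering guarantees a particular DSP platform provides, and the cache coherence caveats discussed earlier still apply when the variables live in a shared region.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Both flags and the turn variable must be visible to both processors. */
    static atomic_bool wants_entry[2];      /* wants_entry[i]: side i wants in */
    static atomic_int  turn;                /* whose turn it is to wait */

    void peterson_entry(int self)           /* self is 0 or 1 */
    {
        int other = 1 - self;
        atomic_store(&wants_entry[self], true);     /* set lock */
        atomic_store(&turn, other);                 /* grant turn to the other side */
        while (atomic_load(&wants_entry[other]) &&
               atomic_load(&turn) == other) {
            /* busy-wait for the lock */
        }
    }

    void peterson_exit(int self)
    {
        atomic_store(&wants_entry[self], false);    /* release lock */
    }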
Blocking mechanism
Now that we have a low-level mutual exclusion tool to safely manipulate shared data within a semaphore, consider the other key ingredient of semaphores, blocking. Assuming that each processor runs DSP/BIOS or another multitasking OS, we develop our wait operation using services that are already available on each individual processor. DSP/BIOS provides a flexible semaphore module (SEM) that we use in our implementation.
When the owner of a uniprocessor semaphore releases it with a signal system call, the local scheduler has immediate knowledge of the signal event and can unblock a task waiting on the semaphore. In contrast, a multiprocessor semaphore implies that the owner and the requestor can reside on different processors. Because a remote kernel has no implicit knowledge of signal calls to a local kernel, the remote kernel needs timely notification of local signal events. Our solution uses interprocessor interrupts to notify other processors of local activity involving a shared semaphore.
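As a sketch of how the local blocking piece might look with the DSP/BIOS SEM module (header names, attribute details and the exact ISR plumbing vary with the DSP/BIOS version, so treat this as illustrative rather than as the article's code):

    #include <std.h>
    #include <sys.h>
    #include <sem.h>

    static SEM_Handle mbs_local_sem;              /* one local semaphore per shared MBS */

    void mbs_init(void)
    {
        mbs_local_sem = SEM_create(0, NULL);      /* created empty: first pend blocks */
    }

    void mbs_block(void)                          /* called from MBS_wait when unavailable */
    {
        SEM_pend(mbs_local_sem, SYS_FOREVER);     /* sleep until the notification ISR posts */
    }

    void mbs_wakeup(void)                         /* called from MBS_interrupt (notification ISR) */
    {
        SEM_post(mbs_local_sem);                  /* make the waiting task ready to run */
    }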
Semaphore implementation
This implementation of a multiprocessor binary semaphore (MBS) assumes that the hardware supports interprocessor interrupts and that a task won't try to acquire a semaphore while another task on the same processor owns it. The latter restriction simplifies the example and can easily be removed with some additional design work.
The wait operation
MBS_wait is invoked to acquire a shared memory semaphore. If the semaphore is available, MBS_wait decrements it and continues. If the semaphore is already owned, the requestor task blocks within MBS_wait until a release notification interrupt makes it ready to run. Once the interrupt occurs and higher priority tasks have relinquished the CPU, the task waiting on the semaphore wakes up within MBS_wait and loops to re-test it. Note that the task doesn't assume ownership immediately when unblocked. Because a remote task might re-acquire the semaphore by the time the requestor wakes up, MBS_wait loops to compete for the semaphore again.
When MBS_wait determines that a semaphore is unavailable, it sets a notification request flag in the shared semaphore data structure to indicate that the processor should be interrupted when the semaphore is released elsewhere in the system. To avoid a race condition known as the lost wake-up problem, MBS_wait atomically tests the semaphore and sets the notification request flag if the semaphore is unavailable.
Code for the wait operation is divided into two distinct parts. MBS_wait contains the blocking code and is called by an application. MBS_interrupt runs in response to the notification interrupt and posts a local signal to the task waiting on the semaphore. This is very similar to a device driver model, where the upper part of a driver suspends a task pending I/O service and the interrupt-driven lower part wakes it up.
The signal operation
MBS_signal releases a semaphore by incrementing its value and posting an interrupt to a processor that requested release notification. This causes MBS_interrupt to execute on the remote processor where a task is blocked on the semaphore. Note that this varies slightly from the uniprocessor signal operation described earlier where the semaphore is incremented only if no tasks are blocked.

Pseudo-code
Now that we have a notion of the shared semaphore architecture, let's look at pseudo-code describing the wait and signal operations. Keep in mind that this example applies to a two-processor version where only one task at a time on each processor tries to acquire the semaphore. More general implementations, servicing more processors and sharing the semaphore between multiple tasks on each processor, can be built by using the n-process Peterson algorithm and modifying the MBS operations.
Note that the critical sections, enforced by Peterson's algorithm (Peterson entry and Peterson exit), are very short instruction sequences used to manipulate the semaphore data structure. The details of Peterson's algorithm are not shown - these are implicit in the Peterson entry/exit operations. The lock and turn variables used in Peterson's algorithm are distinct from the semaphore data elements accessed in the critical sections.
The critical sections are preceded with DSP/BIOS TSK_disable calls to prevent task switching. A task switch during a critical section could cause another processor to spin indefinitely in Peterson entry if it tries to acquire the same semaphore. The critical sections should be executed as quickly as possible.
Also note that the example omits error checking, return values and timeouts. The pseudo-code is meant to highlight discussion topics rather than provide a detailed implementation template.

MBS_wait () {
    success = FALSE                       /* local variable, not part of semaphore */
    while (success == FALSE) {            /* repeat semaphore acquisition attempt */
        TSK_disable ()                    /* prevent DSP/BIOS task switch */
        Peterson entry                    /* Peterson's entry protocol */
                                          /* critical section begins */
        if (sem_value > 0) {              /* semaphore available? */
            sem_value = sem_value - 1     /* yes - take it */
            success = TRUE
        }
        else {
            notification_request = TRUE   /* no - ask to be interrupted on release */
        }
        Peterson exit                     /* end critical section */
        TSK_enable ()                     /* re-enable DSP/BIOS scheduler */
        if (success == FALSE) {           /* local variable shows result */
            SEM_pend ()                   /* sleep using DSP/BIOS semaphore */
        }
    }
}

MBS_interrupt () {
    SEM_post ()                           /* local wake-up signal using DSP/BIOS */
}

MBS_signal () {                           /* release the semaphore */
    TSK_disable ()
    Peterson entry
                                          /* critical section begins */
    sem_value = sem_value + 1             /* increment the semaphore */
    if (notification_request == TRUE) {   /* notify a remote task? */
        send notification interrupt       /* yes - send an interrupt */
    }
    Peterson exit                         /* end critical section */
    TSK_enable ()
}
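For reference, the shared data manipulated inside these critical sections might be laid out as a single structure placed at an agreed address in the shared region. The field names below are illustrative only (the article does not specify a layout); the sketch assumes one notification flag per processor so the signaler knows which side to interrupt.

    typedef struct {
        volatile int sem_value;                /* binary semaphore count: 0 or 1 */
        volatile int notification_request[2];  /* per-processor "interrupt me on release" flags */
        volatile int wants_entry[2];           /* Peterson lock flags, one per processor */
        volatile int turn;                     /* Peterson turn variable */
    } MBS_Obj;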

Why Data Loss? Hardware or System Malfunction

We understand what happens when the information you have been storing for safekeeping suddenly becomes inaccessible.

Losing information that was once accessible is referred to as data loss. The threats that lead to data loss come in many forms, from a simple mistake to a massive natural disaster.

Now that we know what data loss is, let us read on to find out what causes data loss and what measures we need to follow to prevent it from happening.

Hardware or System Malfunction

The biggest factor leading to data loss is hardware malfunction or hard disk failure. It is estimated that 44% of all data losses are the outcome of hardware or system malfunction. Hard disks are mechanical devices and are therefore prone to wear and tear; the estimated average life of a hard disk is about 3 years. Hardware fails to function properly for one of the following reasons:

(a) Head/Media Crash: Head/media collisions account for a large percentage of hardware malfunctions.

Picture this: a series of hard disk platters rotating 150 times per second, with read/write heads, separated from the platters by submicron distances, moving over them. Even the slightest disturbance inside the disk can cause the entire disk to stop functioning properly. When a read/write head touches the rotating platter of a hard disk, the result is a head crash.

Even the smallest speck of dust that enters the sealed drive unit and settles on the magnetic surfaces can get stuck in the thin gap between the head and the disk.

Dropping a disk on the floor may also cause hard disk malfunction. Even a slight jerk or vibration can unsettle alignments and lead to hardware malfunction.

(b) Sudden catastrophic failure: When the hard drive is not detected in the CMOS setup, or the operating system cannot locate the hard drive, it is logical to conclude that data loss has occurred. This kind of POST failure leaves the hard drive inactive, and is often accompanied by a clicking noise from the drive. Sudden temperature variations can also directly contribute to data loss.

(c) Electrical Failure: Another issue ultimately resulting in the loss of information is electrical failure, which can occur at any time. Electrical failure is usually the direct effect of failure of the circuit board located on the bottom of the hard drive. A faulty component, electrostatic discharge, or circuitry damaged during installation are some of the causes leading to electrical failure. It is important to keep the system clean and well ventilated; otherwise electrical components may fail to function properly. Hence, it is recommended to keep the system in cool conditions.

(d) Controller Failure: If you try to boot the system and instead receive an error message displaying “HDD Controller Failure”, be prepared for the loss of your data; this is another common cause of data loss.

Hard disk controller failure occurs due to one of the following reasons:

• The adapter is not firmly seated in the slot.
• A failing CMOS battery, or accidental user intervention, results in incorrect data in the CMOS setup.
• An IRQ conflicts with other devices.
• The IDE drives are not mastered/slaved properly.
• The MBR (Master Boot Record) or the partition table is corrupted.
• The IDE drives (installed as master and slave) are incompatible.
• The hard disk drive is not connected or set up properly.
• The hard drive cable has gone bad (loose, twisted or has a broken wire).
• The hard drive or the motherboard has gone bad.

When you notice the following changes taking place with your system, prepare yourself for the attack called “data loss”.

• The hard drive stops spinning.
• You receive an error message stating that the device is not recognized by the system.
• You are unable to access information that you could access previously.
• You hear a scraping or rattling sound from the system.
• Your system or hard drive suddenly stops functioning.

This is what you could do in order to avoid the above-mentioned changes taking place:

• Connect your system to a UPS (Uninterruptible Power Supply) to protect it against power surges, which are a main cause of electrical failures.
• Keep your computer in a dry, shaded room that is clean and free of dust.
• A head crash is one of the most common hardware failures resulting in data loss. Do not shake the hard drive or jolt the computer, since even a slight jerk can result in a head crash or misalignment of the platters. Never remove the casing of the hard drive.

However, if data loss occurs, don't panic - call a data recovery specialist for a data recovery service.
Stellar Information Systems Ltd., a company with over a decade of experience in data recovery software and services, has an ultra-modern Class-100 clean room facility in Gurgaon. Stellar's Data Recovery Services (DRS) division operates from this environmentally controlled, dust-free area to ensure safe and effective recovery from damaged hard disks.

Storage Consolidation Hurdles for India

A consistent challenge within the IT organization in 2007 is the need to keep costs down. Management is not asking IT not to spend money, but to spend it as if nothing more is coming - in other words, to spend where it will bring the most benefit in the shortest possible time.

So it should come as no surprise that a recent survey of IT practitioners across India reflects exactly this market condition.

Referring to chart 1, 64 percent of companies surveyed say they are expanding to new markets and new geographies. Second on the list of business concerns is competition: about 57.1 percent say that competition is a constant worry, both in their current markets and in the new markets they are eyeing or already entering.

In the process of expanding, engaging new potential customers, and staving off competition, companies must keep close watch over how they spend their limited resources. Prudent spending in the face of competition and the need to keep growing revenue to demonstrate shareholder value is pushing 56.7 percent to constantly monitor cost.

Closely trailing these top 3 concerns is the imperative to raise productivity with 53.9 percent continually looking for ways to raise productivity.

Top IT Challenges in 2007-2008

The issues facing IT managers reflect the business concerns of their companies. Because any failure on the part of IT is a showstopper, IT managers put security (52 percent) as their number one priority (see chart 2).

The second (48.3 percent) and fourth (40.4 percent) challenges are related. The rapid proliferation of new and emerging technologies, coupled with the imperative to keep costs down, means that IT must be sure about where IT investments are going. The old adage "no one gets fired for buying IBM" is no longer valid. The IT management chain (top to bottom) shares the responsibility of ensuring that the right choice is made each time.

The perennial shortage of qualified staff ranks as the third IT challenge among companies in India with 45.2 percent claiming that keeping qualified staff is just as difficult as finding them.

Current IT Initiatives in Place

In a bizarre twist of fate, the top three IT initiatives currently in place are all about availability. Close to 75 percent of respondents say they have a backup/restore strategy in place, and security comes a close second at 72.7 percent. Because business continuity often involves other departments or business units, the third most deployed IT initiative (business continuity and disaster recovery) is a distant 47.4 percent.

Wireless connectivity bested network storage by 0.2 percent as the fourth most commonly deployed IT initiative at 44.8 percent.

Business benefits of Storage Consolidation

How easy is it to explain to a non-technical person the merits of migrating from a proprietary platform to one that supports open standards? By the same token, explaining the complexity of consolidating storage and servers to the uninitiated (and possibly uninterested) presents a challenge in itself.

That said, 57.6 percent of IT respondents say that simplifying the management of the data center is the number one benefit that can be derived from a storage consolidation exercise.

This is followed by 53.4 percent seeing an improvement in service levels to users as data becomes centralized. This signals IT's understanding that customer satisfaction is an important part of their mandate.

One of the biggest selling points of storage consolidation (and the whole premise behind the success of network attached storage and storage area networks) is the problem of storage underutilization and server proliferation. In the days when the only option was direct attached storage, companies would buy additional servers simply because a server had reached its storage limits.

Storage consolidation allows companies to buy additional storage without increasing the number of servers in the data center. Survey respondents put better storage utilization as the third most important benefit of consolidation, with 52.5 percent affirming this.

The fourth most important benefit (42.2 percent) is a strong selling point to both technology and business managers - the idea of standardizing on specific technologies and processes. By standardizing on a few instead of many, companies are able to simplify the management of their infrastructure.

Why Say NO to Consolidation?

With all the benefits of storage consolidation, you start to wonder why companies would choose to ignore this path. Over 70 percent of respondents listed "limited understanding of the technology and its benefits" as the single biggest hindrance to agreeing to consolidate their storage infrastructure.

This correlates to lack of in-house expertise as the second most cited reason (63.8 percent) for not taking the consolidation path.

Any storage consolidation project will involve costs, from redesigning the data center to migrating content hosted on silos of compute servers onto storage-only platforms. With concerns over keeping costs down, it is no surprise that "perceived high project cost" is third on the list at 59.7 percent.

Lack of standards (49.2 percent) and Lack of management support (42 percent) round out the objections to undertaking a storage consolidation exercise.

Profile of Respondents

The respondents to the survey represent mid to senior IT managers working for large enterprises in India. Enterprise Innovation received 631 responses. While only 24.9 percent describe their role as being the final decision maker with regard to storage strategies and acquisitions, 32.8 percent influence the choices made by their company. A further 25.8 percent evaluate the technologies from which decisions are made. As expected, only 16.5 percent point to external forces as guiding any storage acquisition, indicating the level of trust management places in local talent.

Predicting what you will need in the future is always a challenge, and this is more acute when it comes to storage. While IT has no problem projecting server computing requirements based on ongoing and future IT projects, storage requirements are often dictated by how successful campaigns are and by the target audience.

When a global bank launched its first online IPO product in Hong Kong, it quickly realized that it had underestimated its storage requirements, forcing the bank to review future storage purchases.

This reflects the changing market dynamics. In the survey, close to 30 percent of respondents have no idea when their next storage purchase will be, but 58.6 percent believe that they will have to purchase new storage within a 12-month window.

Understanding the Shift Toward Network-based Video Surveillance in Asia

Security threats have continued to pervade the global market since September 11. Bombings and threats promising mayhem and destruction have led to a surge in investments in security and surveillance systems. This is fueling a change in how we capture, store, and monitor video.

According to Shivanu Shukla, an industry analyst at Frost & Sullivan, "There has been strong interest in being able to remotely monitor surveillance cameras, run video analytics, and integrate surveillance with other physical security systems."

Shukla notes that network-based video surveillance systems are becoming popular. Frost & Sullivan estimates the video surveillance market will grow from $992.1 million in 2006 to $3,956.7 million in 2013.

Analog vs. digital

Analog video surveillance systems consist of analog cameras connected via cables to multiplexers, which are in turn connected to monitors and keyboards. But what happens when the area that needs to be monitored is a significant distance away and there is a need to record around the clock?

Network surveillance solutions allow existing analog cameras to be connected to a video server, which is connected to the network, and monitored by any computer that is on the network, or the existing control room.

"Storage of the video can be done by network video recorders (NVRs), which can be anywhere on the network, as opposed to digital video recorders (DVRs), which need to be placed close to the cameras or the switcher/multiplexer. In a complete network surveillance solution, network cameras are used to connect directly to the IP network, without the need for an external encoder," says Shukla.

Video surveillance deployments in Asia are mostly analog based, due in part to the market's price sensitivity. But this is changing as security threats remain high on the radar of both the commercial and public sectors.

Kiran Kumar, a Frost Research Associate, notes that government and transportation sectors are spearheading video surveillance deployments, with large projects for airports, city surveillance, and other critical infrastructure surveillance.

"Fast developing physical infrastructure such as airports, seaports, highways, and rail networks is a key driving force for the strong adoption for video surveillance systems," says Kumar.

There are three main factors limiting the continuing growth of analog video surveillance systems:

Cost: Set-up and installation costs of traditional coaxial or fiber-based cabling for analog video systems over large areas are very high. Large-scale projects for city surveillance and monitoring of harbors and ports play a significant role in driving the change to network surveillance.

Scalability: Although DVRs have improved the recording quality of analog cameras, there is still the physical restriction that they must be installed near the analog matrix.

Flexibility: Integration of analog video surveillance systems with other systems can be cumbersome. Analog surveillance systems are limited to centralized video analytics, which requires additional hardware and cabling and is difficult to scale.

Benefits of network surveillance

Digital technology is helping extend the capability of surveillance beyond what can be achieved with traditional systems.

Technology now allows us to monitor an area from any location in the world in real-time without any significant investment.

Storage of video can be done on NVRs that can be anywhere on the network. How much video we can store digitally is limited only by the amount of hard disk space. And because the video traverses through the network, backups can be done remotely.

Scalability of network surveillance systems is easy and inexpensive. Network cameras can be connected to the network without rewiring.

With network surveillance systems, intelligence can be distributed either directly at the camera or encoder, or centralized on the NVR or a separate server.

Network surveillance systems are cheaper to build and maintain. Reuse of existing IP network infrastructure, high scalability with little incremental cost, low maintenance costs, and the ability to reuse legacy surveillance cameras and other display and monitoring equipment are key factors driving the adoption of digital surveillance techniques.

Limitations of going digital

Not everything is bright and rosy. Due to its dependence on the network, security teams will need the support of the IT department.

"The key challenge to adoption is to get the security and IT teams to adopt network surveillance. Existing network infrastructure makes the proposition of network surveillance stronger. However, organizations where such infrastructure is less developed would be slow to move to network surveillance," says Shukla.

He concedes that network surveillance adoption is changing the dynamics between the security personnel and the IT teams within enterprises, hindering its adoption rate. The introduction of network surveillance implies the participation of the IT division in security matters.

"Security personnel are typically more conservative and not open to major changes in their environments. Network surveillance adoption would depend on the successful interactions and communication between the two teams within an enterprise," notes Shukla.

Although Frost & Sullivan expects the trend towards network surveillance to be strong, adoption of analog systems will continue to grow as well, albeit more slowly than network surveillance deployments.

"While remote access, scalability, and distributed intelligence are the key drivers for network video surveillance, price, perceived reliability, and conservative nature of security teams to change and adopt new technologies will hinder adoption," says Kumar.

Traditionally, cameras have been the point of entry for vendors into the market; their offerings have subsequently grown to include DVRs, NVRs, encoders, and software, together with switchers and multiplexers.

Increasingly, due to the emergence of network surveillance solutions, there is an effort from vendors to approach the surveillance solution from the NVR or DVR front, by offering better management software, virtual matrix systems and video content analytics as a solution package.

As traction for network video surveillance picks up in Asia Pacific, providing complete end-to-end surveillance solutions is expected to become key to succeeding in the market.

Developing an Effective Ecommerce Website for a Lucrative Online Business

Ecommerce, in simple terms, can be described as online business. Business, whether online or offline, has the sole objective of making a profit. In an online business, most visitors to your site are browsers, and convincing them to make a purchase at your website is the real trick of the trade. When a browser lands on your website, the website is the representative of the business organization; it is therefore important that it have enough appeal to bring significant sales to the organization and prove itself an asset rather than a liability for the company.

Keeping some key points in mind while designing the website can help it serve this lucrative purpose:

* Ease of navigation - The website should be designed as simply as possible so that users do not have to waste time looking for information or the product they are searching for. Complicated websites may exasperate users and prompt them to switch to another website. Including a sitemap can be of great help in this matter.

* Product information - Including information about the products and services offered by the website can be of great help in persuading the target audience to make a purchase.

* Easy shopping process - A simple checkout process is very helpful; complications at a later stage can prompt the customer to abandon the purchase. Offering multiple payment options and security assurance for the online purchase is part of an ecommerce solution and a necessity for every online business.

* Features included - It is imperative for an effective web design to include features like email notification, auto responders, and encryption for secure credit card dealings. An ecommerce shopping cart is also required, so that customers can collect whatever items they desire while exploring the catalogue made available online. Developing an ecommerce website design that includes all these tools will ensure a secure and satisfying experience for customers.

* Multiple-browser support - Another essential point is to make sure that the website is accessible in multiple browsers and is not restricted to a particular browser.

* SEO friendly - This is one of the primary requirements that makes a website easy for Internet users to find. Several SEO techniques, such as keyword-rich content, original content, article submission, and link exchanges, should be used.

Hence, following these steps can help you develop an effective ecommerce solution that will take your online business to the zenith of success, and hiring a professional web design company to get there is definitely a good move.

Mysql Backup Ideas

MySQL backups of databases with mixed table types have always been a nightmare for me. When it came to the Win32 port serving a client application, which was a recent client requirement, we at Saturn had to grease our brains quite a bit. Finally, all the requirements were met.

The requirements were that all triggers, procedures and functions, along with table definitions, permissions and data, should be backed up. The backup and restore should run with maximum efficiency, and triggers should not fire while restoration is happening.

Considering all the above points, we finally decided to go with the following solution. The structure of each table is taken using SHOW CREATE TABLE, parsed, and formatted into a particular markup consisting of a CREATE TABLE, a LOAD DATA INFILE, and a set of ALTER TABLE statements.

Because the tables are of mixed types (some InnoDB, others MyISAM), InnoDB tables are converted to InnoDB only after the data has been populated, and any indexes (there should be some), even auto_increment primary keys, are built after the data is populated. The data itself is exported as CSV using SELECT ... INTO OUTFILE. The triggers are taken as files from the data folder and copied to a temporary folder.
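A minimal sketch of the structure-plus-data dump, written here with the MySQL C API purely to show the SQL statements involved (our actual client used Connector v5; the table and file names below are made up, and error handling is omitted):

    #include <stdio.h>
    #include <mysql.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);
        mysql_real_connect(conn, "localhost", "user", "password", "clientdb",
                           0, NULL, 0);

        /* 1. Capture the table definition so it can be re-created on restore. */
        mysql_query(conn, "SHOW CREATE TABLE orders");
        MYSQL_RES *res = mysql_store_result(conn);
        MYSQL_ROW row = mysql_fetch_row(res);
        printf("%s;\n", row[1]);              /* column 1 holds the CREATE TABLE text */
        mysql_free_result(res);

        /* 2. Dump the data as CSV on the server side; a restore would run the
           CREATE TABLE, then LOAD DATA INFILE, then ALTER TABLE ... ENGINE=InnoDB. */
        mysql_query(conn,
            "SELECT * FROM orders INTO OUTFILE '/tmp/orders.csv' "
            "FIELDS TERMINATED BY ',' ENCLOSED BY '\"'");

        mysql_close(conn);
        return 0;
    }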

The point where we really had to break our heads was the functions and procedures, because the MySQL Connector v5 we were using did not support the DELIMITER statement. In the end, we dumped the proc table from the mysql database using the same method. All the resulting files are zipped together with some meta information.

When the restore was developed, we faced another problem: simply dropping the .TRG files back in place did not activate the triggers; we needed to restart the MySQL server.

Finally, all is well and the client is satisfied. The backups taken this way are full backups only. For incremental backups, it would be better to enable the replication service (binary logging) and use mysqlbinlog.exe.

GetItNext Releases Version 1.1 for eBay Searches

GetItNext, an innovative new search solution that dramatically improves the buying experience of the eBay marketplace, today announced it has released version 1.1 of its online search application. The latest release comes 5 months after the company launched its service to the public and features a significant number of upgrades based on feedback received from its online user community.

While GetItNext’s innovative and proprietary approach still allows users to quickly and easily eliminate any unwanted eBay listings, the site now features a wide array of new tools that vastly improve the user experience. The most significant new features incorporated into this release are:

1) Homepage redesign - A new, award-winning look and feel.
2) Find a deal - Search for items with 0 bids and less than 4 hours remaining. Zero competition = bargains for users.
3) Bulk deal - Easily search for lots, multiples and wholesale items.
4) Refine your search - Add keywords and categories to make your search results even better.
5) Email it - Found a great deal on an item and want to share it? Email it to your friends so they can start saving too!

“We are extremely pleased and excited about this release,” said Ron Stewart, President and CEO at GetItNext. “Our users now have access to new and upgraded tools, further enhancing their experience searching for items on eBay. The main goal for GetItNext continues to be enabling our users to make informed decisions with all details at hand when making a purchase. The features included in this release accomplish just that - plus we’ve built in a number of recommendations we have received from our users since we launched GetItNext.”

In only 5 months, GetItNext has accomplished a number of significant milestones, including being awarded the 2007 IMA award for the design of its website as well as steady growth in the number of unique site visits, registered newsletter recipients and overall site usage. “We’re thrilled with the response our service has received thus far,” Stewart continues. “It shows that our community values what we provide, and in return we want to make sure we listen to what they have to say so we can incorporate the feedback into future releases.”

About GetItNext:
GetItNext is an easy-to-use search solution that dramatically improves the buying experience of the eBay marketplace. In contrast to current first-generation eBay tools and techniques, GetItNext’s Web 2.0 design gives users an experience closer to a desktop application than a traditional Web page. As a result, GetItNext’s proprietary search techniques and industry-leading tools make eBay easier to use and save users time and money.

What is Data Management?

Data management is the comprehensive series of procedures followed to develop and maintain quality data, using the technology and resources available. It can also be defined as the execution of architectures, under certain predefined policies and procedures, to manage the full data lifecycle of a company or organization. It comprises all the disciplines related to managing data as a resource.

The following are the key stages, procedures, or disciplines of data management:

1. Database Management system

2. Database Administration

3. Data warehousing

4. Data modeling

5. Data quality assurance

6. Data Security

7. Data movement

8. Data Architectures

9. Data analysis

10. Data Mining

1. Database Management system:

A database management system is computer software designed specifically for the purpose of data management; various types and brands are available these days, for example MS Access, MS SQL Server, Oracle, and MySQL. The selection of any one of these depends upon company policy, expertise, and administration.

2. Database Administration:

Database administration is performed by a group of experts who are responsible for all aspects of data management. The roles and responsibilities of this team depend upon the company's overall policy towards database management. They implement systems, using software protocols and procedures, to maintain the following properties:

a. Development and testing of the database,

b. Security of the database,

c. Backups of the database,

d. Integrity of the database and its software,

e. Performance of the database,

f. Ensuring maximum availability of the database.

3. Data warehousing

Data warehousing, in other words, is the system for organizing and storing historical data. This system contains the raw material for query support systems, such that analysts can retrieve any type of historical data in any form: trends, time-stamped data, complex queries and analysis. These reports are essential for any company to review its investments and business trends, which in turn are used for future planning.

Data warehousing is based on the following principles:

a. The databases are organized so that all the data elements relating to the same events are linked together,

b. All changes to the databases are recorded, for future reports,

c. Data in the databases is never deleted or overwritten; it is static and read-only,

d. The data is consistent and contains all organizational information.

4. Data modeling

Data modeling is the process of creating a data model by applying model theory to produce a data model instance. Data modeling means defining, structuring and organizing the data using a predefined protocol. These structures are then implemented in the data management system; in addition, they impose certain constraints on the database within that structure.

5. Data quality assurance

Data quality assurance is the set of procedures implemented in data management systems to remove anomalies and inconsistencies from the databases. It also involves cleansing databases to improve their quality.

6. Data Security

Also called data protection, this is the system or protocol implemented to ensure that databases are kept fully safe and that no one can corrupt them, through access control. Data security also provides privacy and protection for personal data; many companies and governments around the world have created laws to protect personal data.

7. Data movement

Data movement is broadly related to data warehousing through ETL (Extract, Transform and Load). ETL is the process by which data is loaded into the warehouse, and it is a very important part of data warehousing.

8. Data Architectures

This is the most important part of the data management system: the procedure of planning and defining the target states of the data. It describes how data is processed, stored and utilized in a given system in order to realize the target state, and it establishes the criteria that make it possible to design data flows and to control the flow of data in that system.

Basically, data architecture is responsible for defining the target states and alignment during initial development, and is then maintained through implementations of minor follow-ups. While defining the states, data architecture is broken into sub-levels and parts and then built up to the desired form. Those levels can be created under the three traditional data architectural processes:

a. Conceptual, which represents all business entities,

b. Logical, which describes how these business entities are related,

c. Physical, which is the realization of the data mechanisms for a specific database function.

From the above, we can conclude that data architecture includes a complete analysis of the relationships between functions, data types, and technology.

9. Data analysis

Data analysis is the series of procedures used to extract required information and produce conclusion reports. Depending upon the type of data and the query, this might include the application of statistical methods, trending, and selecting or discarding certain subsets of data based on specific criteria. In essence, data analysis is the verification or disproval of an existing data model, or the extraction of the parameters necessary to fit a theoretical model to reality.

10. Data Mining

Data mining is the procedure for extracting previously unknown but useful patterns from data. It can also be defined as the series of procedures for extracting useful and desired information from large databases: sorting through large amounts of data and selecting the relevant and required information for specific purposes.

Backup on Lto Tape

The value of your data is greater than that of the hardware, and we realize this when we lose our data; that is why backing up data has become a golden rule in the computer world. We need to back up data once a day to avoid panic at the time of data loss. Data backups, hard drive backups, and email backups prove beneficial when you lose important data.

In such a case you can trust the backed-up data. But how do we create a backup of our data, and what backup hardware can we use that is reliable and easy to use? Capabilities such as dynamic rate matching and dual-mode compression enhance tape drive performance and product life.
Released in 2007, LTO-4 tapes have a native capacity of 800 GB, which can go up to 1.6 TB when compressed (2:1). The data transfer rate has gone up to 120 MB/s, and 256-bit AES-GCM drive-level encryption has been added. LTO-4 Ultrium also features backward compatibility with LTO-2 and LTO-3 drives. Another advantage of using LTO tapes is that future versions are in development, so there will be a chance to update your system instead of it being phased out of use. Multiple sources for LTO media and drives reduce production bottlenecks and also ensure investment protection for OEMs and end users alike. Related products available at Tape4Backup are: LTO 1, LTO 2, LTO 3, LTO Cleaning Cartridge, LTO Barcode Labels, LTO Empty Cases and LTO Cartridge Memory Reader.
Backup tape users are increasingly sensitive to data security in the wake of high profile data loss incidents, and encryption techniques have appeared to help ensure security. For example, if an unencrypted tape is lost or stolen, its data is at risk. But, if an encrypted tape is lost or stolen, its data is still considered to be secure. Thus, the use of encryption has a profound effect on corporate liability and reporting obligations.

The Mythology of Data Governance and Data Stewardship

It sounds almost like a cliché to say that companies thrive on information. Go through the front page of any newspaper, watch any weather channel, sit in on a press conference, or go over any annual report, and you will see how data dominates organizations today. Its importance has grown so large that companies have to scour data from previous years to ascertain their business efficiency. From the marketing department to the operations department, organizations rely on the data of every segment to make smart predictions, store historical records, and read consumer behavior.

The volume of data grows with the accumulation of customer information; indeed, the amount of data captured at any point in time multiplies every second year. Packaged applications are the norm in the present world, and external data is an indispensable component of every organization. However, old tactics and processes have given way to new ones when it comes to managing data. While boardrooms are abuzz with vociferous discussions on data among executives, companies are waking up to the reality that they must upgrade their data management processes and systems.

This quest for advanced data management has given rise to the concepts of data stewardship and data governance. However, the chaos and confusion over the respective roles of business and Information Technology continues. Customer data integration (CDI) and master data management (MDM) are two important initiatives that promise to relieve business experts from the labor of defining and maintaining customer data.

The Dilemma of Data Governance
If anything has been badly misused in business, it is the phrase "data governance." IT organizations have always tried to deploy data governance to engage the business in legitimate ownership discussions. Ironically, vendors use the phrase to describe data management practices from modeling to quality automation. Worse still, the term has been used synonymously with knowledge management and CRM, and even IT executives mistake data stewardship for data governance.
However, data stewardship and data governance are two different concepts. Data governance differs from data stewardship in that it implies a level of organizational supervision that encompasses not only business but also information and technology. Moreover, it involves executives who want to be a part of, or are engaged in, defining their companies' policies. While they handle internal and external regulations on one hand, they implement customer-focused strategies on the other. Data governance can best be defined as the mechanisms and decision-making structures for treating data as an asset, implementing formal policies and administering the management of corporate data.

The process of data governance itself depends on an executive committee which institutes policies, sorts out conflicts and questions, considers customer commitments and evaluates success. Many companies that lack the processes or skills to manage their data deploy data governance too early. However, unless and until legitimate data management and stewardship take hold, data governance will remain just a matter of discussion.