
All About Business 4 you

Saturday, February 6, 2010

On The Line: Our "Dead" Strategic Plan

This month we examine ways to make strategic plans "come alive".

Problem/Question

I was asked recently to be part of a strategic planning session for our department. Since the department is quite large, it isn't possible to include all staff in the process. I think strategic planning is very important, but in past years, all the work put into it seems to have gone to waste. The strategic planning document seems to get put in the back of the drawer, and I suspect that most staff don't even read their copies. Since we put a lot of time and effort into the process, can you suggest any ways that we might make it worthwhile?

Answer

What you describe is probably the norm in organizations that do strategic planning. It is rare that plans of any sort are made to "come alive". Understanding why this happens is the first step toward changing it. Strategic planning can be one of the backbones of organizational functioning, serving to:

* inform decision-making (e.g. what we do and what we don't)
* help staff set both work unit and individual employee objectives
* inform the staff development and personnel functions
* form a basis for continuous improvement

One major reason for its failure is that it is often seen as an event, unlinked to anything else. One of the keys is to link it to the many other organizational functions through action, not just talk. If we consider strategic planning as long range planning, work units need to use it as a basis for their own shorter-term operational planning. If the larger department does its strategic plan once a year, each work unit should be using that plan as the foundation for setting its own goals and objectives for the upcoming fiscal year.

We use the term cascading to refer to the effects that a meaningful strategic plan can have. The departmental plan informs the divisional or work unit plan, which, in turn, directly affects the allocation of resources and the objectives for individuals.

The advantage of cascading lies in the use of the unit/individual objective setting process to focus on why the department is there, and to link, through concrete action planning, the departmental plan to the everyday activities of each staff member. Individual managers need to be held accountable for the integration of their own plans with the overall departmental plan. So that is often the best place to start: with those managers.

Many managers have inadequate experience in integrating strategic plans into everyday work. It may be a good idea, prior to the strategic planning process, for all participants to get together to plan out how they will make the plan come alive. While we often think of strategic plans in terms of formal release and distribution, the place where real success takes place is the everyday world. If each manager, in any decision-making conversations with staff, refers to the strategic plan as a guidepost for action, then staff begin to realize that it is not a "dead" document, but one that has practical and real relevance to their everyday worklife.


Here are some specific suggestions:

* Treat the strategic plan like a spider-web, with strands into all aspects of your organization, including budget, human resource development, objective setting and performance management, all decision-making at the everyday level, etc. It is part of a system of management, not a stand-alone piece.
* Use every opportunity to relate whatever is being discussed to the strategic plan.
* Do your best to allow sufficient time for the strategic planning process and to do all the pieces. (About once a year we publish a model of integrated strategic/shorter-term planning that you will find useful; see next month's newsletter.)
* The strategic planning process is essentially a stepped process. That is, it is not something that can be completed in a single one day session. As such it is a good idea to communicate with staff and involve them at every step. At each step, participants go back to those not present, discuss the tentative decisions, and obtain input to bring back to the next planning session.

Improving Communication -- Tips For Managers

Research indicates that managers spend somewhere between 50% and 80% of their total time communicating in one way or another. This isn't surprising, since communication is so critical to everything that goes on in an organization. Without effective communication there can be little or no performance management, innovation, understanding of clients, or coordination of effort, AND, without effective communication it is difficult to manage the expectations of those who are in a position to make decisions about your fate.

It can also be said that many managers do not communicate well, and do not set an organizational climate where communication within the organization is managed effectively. This isn't surprising, since a manager who communicates ineffectively and does not encourage effective organizational communication is unlikely to hear about it. Poor communication is self-sustaining, because it eliminates an important "feedback loop". Staff are loath to "communicate" their concerns about communication because they do not perceive the manager as receptive. Both staff and management play out a little dance.

In short, you may be fostering poor communication, and never know it. You may see the symptoms, but unless you are looking carefully, you may not identify your own involvement in the problem. What can you do about it?

Your Role In Communication Improvement
Effective organizational communication, regardless of form, requires three things.

First, all players must have the appropriate skills and understanding to communicate well. Communication is not a simple process, and many people simply do not have the required depth of understanding of communication issues.

Second, effective organizational communication requires a climate or culture that supports effective communication. More specifically, this climate involves trust, openness, reinforcement of good communication practices, and shared responsibility for making communication effective.

Third, effective communication requires attention. It doesn't just happen, but develops as a result of an intentional effort on the part of management and staff. Too often, communication, whether it is good or bad, is taken for granted.

We can define your role in improving communication with respect to each of these. First, if you want to improve communication, you will need to ensure that you and staff have the skills and knowledge necessary to communicate effectively. This may mean formal training is in order, or it may mean that you coach staff and provide feedback so that they can improve.

Second, you play a critical role in fostering and nurturing a climate that is characterized by open communication. Without this climate, all the skills in the world will be wasted.

Finally, you must bring communication to the forefront of organizational attention. If you make the effort to improve communication, your staff will recognize that it is important. If you ignore it, so will staff.

Some Specific Tips:

1) Actively solicit feedback about your own communication, and communication within the organization. Ask staff questions like:

* When we talk, are you generally clear about what I am saying?
* Do you think we communicate well around here?
* Have you got any ideas about how we could communicate better?

Consider including these questions (or similar ones) in your performance management process, or staff meetings.

2) Assess your own communication knowledge and understanding

(See self-assessment instrument on Page 5-sorry, not available online).

3) Working with your staff, define how you should communicate in the organization. Develop consensus regarding:

* a) How disagreements should be handled.
* b) How horizontal communication should work (staff to staff).
* c) How vertical communication should work (manager to staff, staff to manager).
* d) What information should be available and when.

Once consensus is reached, support the achievement of these goals through positive reinforcement and coaching.

4) Consider how the structure of your organization affects communication. Indirect communication (communication relayed from person to person) is notorious for causing problems. Look at increasing direct communication, where the person with the message to send delivers it directly to the receiver.

5) Learn about, and use active listening techniques. This will set a tone and contribute to a positive communication climate. If you don't know what active listening is, find out. It's important.

6) Consider undertaking a communications audit. (see sidebar).

Conclusion

We only have space to give you a few tips, and communication is a very complex process. We suggest that you take the communication self-assessment checklist on the following page, to assess your own understanding and application of communication principles.

If you would like to increase awareness of and attention to communication, consider copying the self-assessment checklist and distributing it to staff.

Suggest that they complete it for their own use, and follow it up by discussing organizational communication in a staff meeting.

Be aware that exploring communication patterns and effectiveness can bring to the surface a number of resentments and perceptions. If you aren't prepared to deal with these, it is best to look to an outside consultant.

Monday, February 1, 2010

Business History

History of business & finance. The concepts of business and finance have been an integral facet of human activity since the development of the earliest civilizations. The earliest business transactions were based on a system of trade known as the barter system. In the barter system, prior to the emergence of systems of currency, goods were exchanged for other goods that were deemed to be of similar value. After the development of currency, goods were exchanged or sold for an agreed upon value in the form of money. The goods and the currency were thus interchangeable, so when goods were not plentiful, replacements could be readily purchased by using currency. The links included herein relate to the history and the human experience of business and finance.

Saturday, January 30, 2010

The Importance of Leadership In Managing Change

Front and Center - Leadership Critical To Managing Change

When change is imposed (as in downsizing scenarios), clearly the most important determinant of "getting through the swamp" is the ability of leadership to...well, lead. The literature on the subject indicates that the nature of the change is secondary to the perceptions that employees have regarding the ability, competence, and credibility of senior and middle management.

If you are to manage change effectively, you need to be aware that there are three distinct time periods where leadership is important. We can call these Preparing For The Journey, Slogging Through The Swamp, and After Arrival. We will look more carefully at each of these.

The Role of Leadership

In an organization where there is faith in the abilities of formal leaders, employees will look towards the leaders for a number of things. During drastic change, employees will expect effective and sensible planning, confident and effective decision-making, and regular, complete, and timely communication. Also during these times of change, employees will perceive leadership as supportive, concerned and committed to their welfare, while at the same time recognizing that tough decisions need to be made. The best way to summarize is that there is a climate of trust between the leader and the rest of the team. The existence of this trust brings hope for better times in the future, and that makes coping with drastic change much easier.

In organizations characterized by poor leadership, employees expect nothing positive. In a climate of distrust, employees learn that leaders will act in indecipherable ways that do not seem to be in anyone's best interests. Poor leadership means an absence of hope, which, if allowed to go on for too long, results in an organization becoming completely nonfunctional. The organization must deal with the practical impact of unpleasant change, but more importantly, must labor under the weight of employees who have given up and have no faith in the system or in the ability of leaders to turn the organization around.

Leadership before, during and after change implementation is THE key to getting through the swamp. Unfortunately, if you haven't established a track record of effective leadership, by the time you have to deal with difficult changes, it may be too late.

Preparing For The Journey

It would be a mistake to assume that preparing for the journey takes place only after the destination has been defined or chosen. When we talk about preparing for the change journey, we are talking about leading in a way that lays the foundation or groundwork for ANY changes that may occur in the future. Preparing is about building resources, by building healthy organizations in the first place. Much like healthy people, who are better able to cope with infection or disease than unhealthy people, organizations that are healthy in the first place are better able to deal with change.

As a leader you need to establish credibility and a track record of effective decision making, so that there is trust in your ability to figure out what is necessary to bring the organization through.

Slogging Through The Swamp

Leaders play a critical role during change implementation, the period from the announcement of change through the installation of the change. During this middle period the organization is the most unstable, characterized by confusion, fear, loss of direction, reduced productivity, and lack of clarity about direction and mandate. It can be a period of emotionalism, with employees grieving for what is lost, and initially unable to look to the future.

During this period, effective leaders need to focus on two things. First, the feelings and confusion of employees must be acknowledged and validated. Second, the leader must work with employees to begin creating a new vision of the altered workplace, and to help employees understand the direction of the future. Focusing only on feelings may result in wallowing. That is why it is necessary to begin the movement into the new ways or situations. Focusing only on the new vision may result in the perception that the leader is out of touch, cold and uncaring. A key part of leadership in this phase is knowing when to focus on the pain, and when to focus on building and moving into the future.


After Arrival

In a sense you never completely arrive, but here we are talking about the period where the initial instability of massive change has been reduced. People have become less emotional, and more stable, and with effective leadership during the previous phases, are now more open to locking in to the new directions, mandate and ways of doing things.

This is an ideal time for leaders to introduce positive new change, such as examination of unwieldy procedures or Total Quality Management. The critical thing here is that leaders must now offer hope that the organization is working towards being better, by solving problems and improving the quality of work life. While the new vision of the organization may have begun while people were slogging through the swamp, this is the time to complete the process, and make sure that people buy into it, and understand their roles in this new organization.

Conclusion

Playing a leadership role in the three phases is not easy. Not only do you have a responsibility to lead, but as an employee yourself, you have to deal with your own reactions to the change, and your role in it. However, if you are ineffective in leading change, you will bear a very heavy personal load. Since you are accountable for the performance of your unit, you will have to deal with the ongoing loss of productivity that can result from poorly managed change, not to mention the potential impact on your own enjoyment of your job.

Leadership

The Responsive Manager/Leader


For more help and information on leadership, including advice, articles, leadership tools, and leadership experts, visit The Leadership Development Resource Center.
The Responsiveness Paradigm outlined elsewhere in this newsletter is applicable at a number of levels. For example, it applies to organizations in general, and the ability of the organization to respond to the needs of customers, staff and other stakeholders (e.g. politicians). It applies to non-supervisory staff, and their ability to respond to the needs of their managers, customers and co-workers. This month we are going to look at responsiveness as it applies to managers, leaders and/or supervisors.
Influence Of The Responsive Manager

The responsive manager tends to succeed by building bonds of respect and trust with those around him/her. Staff respond positively to responsive managers; they work more diligently, work to help the manager and the organization succeed, and will go the extra mile when necessary. That is because responsive managers act on the principle that their job is to help their staff do their jobs. So a basic inter-dependence emerges, based on behaviours that show concern, respect and trust.

Responsive managers also influence those above them in the hierarchy. Because responsive managers have the ability to read and act upon the needs of their "bosses", they are perceived as helpful and reliable, or in a simple way, very useful. This allows them to get the "ear" of people above them in the system, and further helps get things done when needed.

Contrast this with the limited influence of the UNresponsive manager. The unresponsive manager is restricted in influence because those around him/her do not respect or trust him/her to look out for their welfare. Influence is largely limited to the power of the formal position, and to fear, a motivator that is hard to sustain over time. Unresponsive managers tend to be perceived as self-interested, or at best uninterested in the needs of those around them. They also tend to be perceived by those above them as less reliable and less useful, due to their focus on empire building, organization protection, and self-interest, rather than on getting done what needs to be done.
How Do They Do It?

Responsive managers apply a number of specific skills and abilities to the task (as outlined generally in The Responsiveness Paradigm article). Above all, they appear to be "withit". Withitness has a number of components. First, withit managers are able to put aside their own concerns to listen to (and appear to listen to) those around them. As a result, they know what is going on, and know both what is said and what is said between the lines. They have the knack of appearing to know what people need even if those needs are not expressed directly.

However, knowing what is going on, and identifying the needs of those around them is not sufficient. The responsive manager also acts upon that knowledge, attempting to help fulfil the needs of employees, superiors, etc. Responsive managers wield influence to solve problems for those around them, often before even being asked.

Here's an example:

I was responsible for automating an office system in a government department. As happens sometimes, the Management Information Systems people were not keen on our going our own way on the project, despite the fact that they had indicated they could not do it for us in the near future. As a result their cooperation (needed for the project) was patchy. As team leader, I faced a number of roadblocks, despite the fact that our Assistant Deputy Minister wanted to see this project come to fruition. I regularly reported back to our Director, outlining progress and roadblocks. Every time I communicated roadblocks to the Director, they were removed within a short time, despite the fact that I did not request direct action. In addition, the Director advised and counselled me on how to deal with the "systems people" so I could have maximum impact. Despite the roadblocks, the project was completed on time and was very successful, much to the chagrin of some of the systems people, who I think were hoping we would fail.

This is a simple story, but one full of meaning. In this situation the Director was able to identify the project leader's needs with respect to the project, listening carefully, and identifying actions she could take to "smooth the path". Not only was the Director able to remove obstacles and fulfil the need of the project leader, but the Director responded on a deeper level, helping to teach the Project Leader methods of becoming more effective, fulfilling yet another need. All of this was assumed to be the proper role of the Director, and was done without expressing all of the needs specifically or explicitly.

We can contrast this with the unresponsiveness of the MIS people. They lectured, they fussed, they predicted dire consequences, rather than offering consistent, responsive help. They focused not on responding to the needs of their clients, but on other factors having to do with control and their own needs. Eventually, their lack of responsiveness resulted in the very thing they did not want: loss of control of the project. As a result of this project, their overall status in the organization suffered, simply because at both an organizational and individual level they were seen as barriers rather than as useful.

Let's look at one more example.

An employee had been working for a government branch for about a year, having moved to the city as a new resident. In a casual conversation, the supervisor noted that the employee wasn't looking at his best, and asked how he was feeling. The employee explained that he hadn't been feeling well lately, and sounded very tired and overwhelmed. The supervisor determined that the staff member didn't have a local family doctor, asked if he would like the supervisor to arrange an appointment, and proceeded to do so immediately. The problem turned out to be a minor one.

In this example we see again the ideas of "withitness" and responsiveness. The supervisor was able to identify that the staff member was in need of some help, despite the fact that the staff member did not state this explicitly. Note that the supervisor didn't pressure the staff member to go to the doctor, but identified needs, checked them out, and then acted upon them. In this case, help consisted of direct, helpful action.

These two examples are the stuff of loyalty and commitment. They are remembered years and years after the fact, and continue to extend the influence of managers. In this sense responsiveness is a critical component of management success, because it allows managers and supervisors to get things done, for the benefit of all players.

In the limited space we have, we have attempted to give you a feel of what responsiveness means. You might want to extend your own understanding by considering some of the following questions.

Conclusion

1. If you are a manager or supervisor, how can you modify your own behaviours so that you become and are perceived as more responsive by a) your staff, b) your boss and c) your customers?

2. Again, if you are a manager or supervisor what is your definition of the "responsive employee"? Can you identify your "favourite employees", and consider how they are responsive to you? Our bet is you will find that your most valued employees are responsive.

3. If you are non-management, what would you need to do to be perceived as more responsive by the people around you?

Monday, January 25, 2010

Tips for large scale business process outsourcing contracts

For some years now, the "sexy" part of the outsourcing industry has been very much business process outsourcing (BPO), covering such diverse areas as HR, finance and accounts, logistics, back office administration and even legal services.



In contrast, IT outsourcing has been increasingly seen as more of a commodity service, and as such is more mature and probably better understood.

However, the reality is that just as businesses began to fully appreciate how much they were dependent upon their IT systems once they had been outsourced, so it is clear that the majority of BPO services are similarly dependent upon the IT systems which deliver them. What then are the implications of this for a typical BPO contract?

Establishing the service levels

A frequent challenge in BPO deals is working out what kind of contractual service levels should be set. Unlike in IT outsourcing deals (where internal departments will frequently have been measuring their performance in terms of things such as fix times and levels of availability for some time, as part and parcel of good practice), customers can sometimes struggle to find metrics which are genuinely reflective of the "quality" of the BPO services, or to provide details of what the relevant levels of performance were before the BPO contract was signed.

However, it will frequently be the case that many of the types of service levels commonly seen in IT outsourcing deals (eg availability of particular applications/systems, times to resolve particular problems or to provide workarounds for them) will still appear, if only because the delivery of the relevant BPO services is dependent upon the integrity of the underlying IT systems and networks.

In any outsourcing deal, it will be essential for the customer to ensure that it has the right to make available to its supplier any particular materials or software which the supplier will, in turn, use in its provision of services to the customer.

Many forms of software licence now routinely provide that the scope of use extends to the licensed customer and any entity which it utilises to provide outsourced services to it, provided that such use is then limited to the provision of such outsourced services.

However, such provisions are less common outside of the "core" kinds of applications which the IT department is used to dealing with, and may especially be lacking in relation to types of licences or contracts where outsourcing was not foreseen as a possibility (including, in particular, many forms of non-IT business processes).

Particular care will accordingly be required in order to assess how many existing suppliers/licensors will need to be approached in order to give their specific consent to the use of their licensed products for the purposes of the envisaged BPO project.

Future licence rights

Many BPO services are provided from the supplier's own systems/facilities. For example, an outsourced HR payroll service may be hosted and run from a supplier's shared service centre. Whilst this may be simpler from a customer perspective and help ensure enhanced service levels, it raises difficult issues surrounding the customer's continuing licence rights (if any) to the system (and any modifications made to it on its behalf) following the end of the BPO contract.

By that point, the customer may have become largely dependent upon the supplier, and will at the very least need some time in order to migrate across to a replacement supplier, and for them to put in place a comparable system.

Some level of transitional services and continuing licence rights to access and utilise the original supplier's systems and software are accordingly highly likely to be required (albeit that suppliers will inevitably be keen to ensure that such licence rights extend no wider, and last no longer, than is absolutely necessary, given that some of the real commercial differentiators for their service may well be the underlying software products which they have developed, and which they will accordingly not want their competitors to have access to).


DR and business continuity

In most large scale IT outsourcing contracts, the supplier's DR and business continuity obligations will be a key part of the contract, on the basis that the supplier will be expected to ensure that it has in place the necessary infrastructure to be able to reinstate the services in the event of a disaster event.

Whilst this may still be the case with BPO arrangements, one frequently finds that the customer itself is the most effective and convenient "fall back" option, and so may itself take on the BCP/DR arrangements.

If this is the case, the IT department will need to ensure that it arranges for regular checks and tests of its BCP plan, involving not just its own staff and systems but also those which interface with (or would need to replace) those of the BPO supplier.

Conclusion

IT issues and related services remain at the core of most BPO projects. Great care must accordingly be taken to make sure that the same issues that would be considered in the context of an IT outsourcing deal are assessed for likely relevance, and dealt with accordingly in the eventual BPO contract.

The customer's IT department must likewise be recognised as a key stakeholder in the overall process (much as the various business units should have had a major voice in connection with any proposed outsourcing of the IT function!), and consulted accordingly.

Managing Large-scale Business Intelligence Solutions

Business intelligence is a cornerstone of every successful enterprise. Business-critical processes rely on business intelligence environments: demand-planning applications, marketing campaigns, personalized Web site content, and more are driven by knowledge obtained from business intelligence systems.
Access to business intelligence has gradually grown from tens of users to hundreds and even thousands, including fully automated processes that drive business processes. The largest business intelligence databases have grown from about 5 TB in 1998 to almost 30 TB in 2003 (as independently verified by Winter Corporation). Databases may well run into the hundreds of terabytes. Managing these large database environments, and guaranteeing their availability and performance with shrinking budgets, presents several challenges for IT.
We explore those demands, and suggest approaches to tackle them, in this article.

The Problem

Business intelligence is about using data to obtain competitive advantage. The more data you have, the better your decisions. You may also want to include all possible dependencies in order to make the right decisions. As a result, there is a natural tendency to consolidate data in one place.

Consolidation results in:

* More users accessing the (single) data set
* Queries accessing more data and hence using more resources in order to make good, qualified decisions

If you don't carefully manage this multi-dimensional growth, you may end up with an overloaded system that benefits no one.
The Challenges

From a manageability perspective, every IT department faces three major challenges:

Performance is key to the end users' experience when they use their business intelligence tools or applications to run queries. How long does it take to retrieve results?

Scalability is key to supporting ever-growing data volumes as well as growing numbers of users running complex queries. Your end-user population must be served appropriately despite growing data volumes and increased system demand.

Availability is also key, since more and more business-critical processes rely on consolidated information in data warehouses. Availability of business intelligence depends on reliability of infrastructure components (servers, networks, etc.) and, of course, access to information stored within data warehouses.

In every business intelligence deployment, these three requirements go hand in hand. Every IT organization managing a large-scale business intelligence solution must ensure that all three requirements are met while keeping one important factor in mind: cost. Despite increasing data volumes and a growing end-user population, IT organizations don't get additional budget to address the requirements.

This article explains how organizations can implement today's technologies to address the requirements for a large-scale business intelligence deployment with a flat (or even reduced) IT budget.

Performance

At a high level, there are two ways to address performance requirements in a large-scale business intelligence implementation:

* Add resources to the individual queries or data loads so they can finish more quickly.
* Be smart about the amount of resources used to satisfy the queries or perform the data loads.

Add Resources: Parallel Execution
Parallelism is the ability to apply multiple CPU and I/O resources to the execution of a single command. In a parallel query, the data is distributed across multiple CPUs that perform computations independently before the data set is combined to perform any remaining operations and be presented to the end user.

In a single-server environment, parallelism is implemented on multiple CPUs in the same server, each contributing to the execution of the command. In a multi-server clustered environment, multiple CPUs across multiple servers in the cluster can contribute to the execution of the command.

Today's database technology can automatically and dynamically set the degree of parallelism for a query, depending on query complexity, table size, hardware configuration, and the active system load. By executing complex queries and workloads as multiple parallel processes (as opposed to a single process), query execution, data loading, and other database operations can be executed much more rapidly.
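To make the idea concrete, here is a minimal sketch, in Python rather than inside a database engine, of splitting one aggregation across several worker processes and combining the partial results. The rows and the degree of parallelism are hypothetical stand-ins for what the database does automatically.

```python
from multiprocessing import Pool

def partial_sum(rows):
    # Each worker computes a partial aggregate over its slice of the data.
    return sum(amount for (_region, amount) in rows)

def parallel_total(rows, degree_of_parallelism=4):
    # Split the data, aggregate the slices independently, then combine.
    chunk = max(1, len(rows) // degree_of_parallelism)
    slices = [rows[i:i + chunk] for i in range(0, len(rows), chunk)]
    with Pool(degree_of_parallelism) as pool:
        partials = pool.map(partial_sum, slices)
    return sum(partials)  # the final combine step runs as a single process

if __name__ == "__main__":
    sales = [("east", 100.0), ("west", 250.0), ("east", 75.0), ("west", 30.0)]
    print(parallel_total(sales))  # 455.0
```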

Be Smart: Query Optimization
Query optimization techniques are crucial for good performance. Common optimization techniques include the use of indexes and summary tables. Queries that scan or access massive amounts of data to produce a query result can use summary tables and indexes to reduce the total resource consumption. Taking advantage of summary tables and indexes can save computation resources as well as I/O throughput.

Databases use internal statistics, such as the number of rows in a table and the estimated number of rows a query would retrieve, to choose the execution strategy. As a result, queries can be dynamically rewritten if the database optimizer decides that using summary tables and indexes is more efficient than retrieving the data directly from tables.
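As a rough illustration of the summary-table idea (a toy example, not a real optimizer), the sketch below answers a monthly total from a small precomputed summary instead of scanning the detail rows; the data and names are invented for the example.

```python
detail_sales = [("2004-09", "east", 100.0), ("2004-09", "west", 250.0),
                ("2004-10", "east", 75.0)]  # hypothetical detail rows

# Maintained once (e.g. at load time), like a materialized summary table.
summary_by_month = {}
for month, _region, amount in detail_sales:
    summary_by_month[month] = summary_by_month.get(month, 0.0) + amount

def monthly_total(month):
    # The "rewritten" query: one lookup in the summary instead of a full scan.
    return summary_by_month.get(month, 0.0)

print(monthly_total("2004-09"))  # 350.0
```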

Be Smart: Partitioning
Partitioning large tables and indexes provides several benefits for query performance. Consider, for example, a table that contains sales data for the last three years, and assume the data has been partitioned by month. A query that retrieves sales figures for September 2004 will access only one out of 36 partitions, with a performance improvement of up to 36 times. Also, note that it does not matter whether the table contains three years of data versus five or 10 years—the query to retrieve September 2004 sales data will only access the partition that contains the September 2004 data.
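The arithmetic above can be sketched very simply: if the data is kept in per-month partitions, a query for one month touches only its own partition, regardless of how much history the table holds. The layout below is a hypothetical illustration, not a particular database feature.

```python
partitions = {}  # month -> list of sale amounts, e.g. "2004-09" -> [...]

def add_sale(month, amount):
    partitions.setdefault(month, []).append(amount)

def sales_for_month(month):
    # Pruning: only the single matching partition is scanned, no matter how
    # many years of history the table holds.
    return sum(partitions.get(month, []))

add_sale("2004-09", 120.0)
add_sale("2004-10", 80.0)
print(sales_for_month("2004-09"))  # 120.0, touching one partition only
```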

Partitioning is extremely complementary to parallel processing. When a database assigns resources to queries, it takes partitions into account. Queries will run faster when separate processes access separate partitions.

Scalability and performance are further enhanced when queries join multiple tables that share the same partitioning attributes. The join condition between the tables allows partitions to be eliminated, so less data is accessed and the query runs faster. Different query requirements typically call for different partitioning mechanisms to give the most granular access to individual partitions. Common partitioning techniques include range partitioning (often time-based) and hash partitioning, which spreads data evenly across partitions.

Be Smart: Data Compression
Data compression is a readily available technology that can address cost-effective storage of large data volumes online. Databases enable data to be stored in a compressed format to reduce data volume. The compression comes at a small performance cost, but the hit is only taken when the data is loaded in a compressed format. Query retrieval actually benefits from the data being compressed because less data needs to be read from the disks. In most business intelligence applications, it is disk I/O that slows performance.

Data compression works hand in hand with partitioning. Data in business intelligence environments typically remains active for a certain period, after which it does not change any more. For example, a retailer may allow a 30-day return period after the purchase, after which the records will not change. Once data is "frozen," it can be compressed and made available for query-only purposes. A time-based partitioning scheme helps in identifying the data set that can be compressed.
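A hedged sketch of this "freeze then compress" idea follows: once a time-based partition is past its active window, it is compressed and kept for query-only access. Here zlib stands in for the database's native compression, and the partition structures are hypothetical.

```python
import json, zlib

active_partitions = {"2010-01": [{"sku": "A1", "amount": 19.99}]}  # hypothetical
compressed_partitions = {}

def freeze_older_than(cutoff_month):
    # Partitions past the active window are compressed and become query-only.
    for month in [m for m in active_partitions if m < cutoff_month]:
        raw = json.dumps(active_partitions.pop(month)).encode()
        compressed_partitions[month] = zlib.compress(raw)

def read_partition(month):
    # Queries still work; compressed partitions are simply decompressed on read.
    if month in active_partitions:
        return active_partitions[month]
    return json.loads(zlib.decompress(compressed_partitions[month]))

freeze_older_than("2010-02")
print(read_partition("2010-01"))  # [{'sku': 'A1', 'amount': 19.99}]
```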

Be Smart: Cut Costs
In a typical business intelligence system, active and less-active data can be distinguished. Costs can be reduced if less-active data is stored on low-cost, typically lower-performing storage systems. Active (and typically more-recent) data can still sit on more expensive, high-performance storage to satisfy end-user queries. As data becomes less active, it can move to low-cost storage.

It’s also wise to consider compressing the data on low-cost storage to reduce the I/O bandwidth requirements when the data is still accessed.

If the less-active data is identified using the partitioning scheme, the partition can be used as a unit of data for movement and compression. For example, if a monthly partition scheme is used and data older than a year is considered less active, then every month it can be moved and compressed to a one-month partition.
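The tiering policy described above might look something like the following sketch, where partitions older than a threshold are moved from a hot tier to a low-cost tier and compressed; the storage tiers, month format and threshold are assumptions made for the example.

```python
import zlib

hot_storage = {"2010-01": b"recent rows", "2008-12": b"old rows"}  # hypothetical
cold_storage = {}

def months_old(partition_month, today_month):
    ty, tm = (int(x) for x in today_month.split("-"))
    py, pm = (int(x) for x in partition_month.split("-"))
    return (ty - py) * 12 + (tm - pm)

def apply_tiering(today_month, threshold_months=12):
    # Move (and compress) partitions older than the threshold to low-cost storage.
    for month in [m for m in hot_storage if months_old(m, today_month) > threshold_months]:
        cold_storage[month] = zlib.compress(hot_storage.pop(month))

apply_tiering("2010-02")
print(sorted(hot_storage), sorted(cold_storage))  # ['2010-01'] ['2008-12']
```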
Scalability

Scalability in a business intelligence environment covers two distinct but related aspects:

* Number of users
* Data volume

Number of Users
Database systems can handle hundreds or thousands of concurrent users. Database technology has progressed in several areas to make sure these users get all the resources they need when they need them.

Scale up or scale out. The terms "scale up" and "scale out" typically describe the differences between a single-server environment and a clustered system.

In a single-server environment, the system can be scaled by adding more CPUs or replacing single-core CPUs with dual-core CPUs. This strategy is called scaling up. The same approach can be used in a clustered environment, but adding servers to the cluster to increase computing resources is also an option. This strategy is called scaling out. Both scaling up and scaling out are common strategies in business intelligence environments.

The limits of the scale-up approach are determined by the hardware limits of the system: the number of CPUs, the number of dual-core CPUs, and how much memory fits in the server before it must be replaced. Most business intelligence systems have not outgrown the hardware restrictions of today's biggest servers, but large servers (as well as components for these servers) are expensive.

Clustered environments involve more—and different—hardware components. In general, the limits in a clustered environment are more software- than hardware-related. For example, how well does the system scale when nodes are added? How many servers can be included in a cluster?

Today's database technology is definitely ahead of the current requirements for business intelligence environments. A clustered database can provide a cost-effective infrastructure for large-scale business intelligence implementations.

Automatic memory management. Good memory management is key to supporting large numbers of users running concurrent, complex queries, often in conjunction with data loading. Database engines require memory to perform query and loading operations, and every query or load process needs memory to execute. Traditionally, DBAs would tune memory parameters and allocate memory for database operations and memory for every process that executes inside the database. This isn't the best approach because some processes require more memory than others. Besides, some users or processes may be more important than others.

Today, databases have built-in techniques to allocate memory to processes dynamically based on the system workload. DBAs define coarse limits for the maximum amount of memory that the engine is allowed to consume, and the database guarantees optimal execution within these boundaries. The result is more efficient memory consumption and better service (i.e., faster response times) to end users running business intelligence queries.
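As a toy illustration of this principle (not any particular vendor's implementation), the sketch below has the DBA set only a coarse overall cap, with memory granted to processes dynamically within that boundary; the names and numbers are hypothetical.

```python
class MemoryBroker:
    def __init__(self, total_mb):
        self.total_mb = total_mb   # coarse limit set by the DBA
        self.in_use_mb = 0

    def request(self, wanted_mb):
        # Grant as much as possible without exceeding the overall cap.
        granted = min(wanted_mb, self.total_mb - self.in_use_mb)
        self.in_use_mb += granted
        return granted             # a process may get less than it asked for

    def release(self, mb):
        self.in_use_mb -= mb

broker = MemoryBroker(total_mb=4096)
print(broker.request(3000))  # 3000
print(broker.request(3000))  # only 1096 left under the cap
```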

Resource management. DBAs need tools and utilities to manage the relative importance of users and processes. Resource managers can indicate that certain users or processes have to be restricted in their resource consumption for other processes to be serviced appropriately.

Resource managers also provide query governors to monitor ongoing activities in databases. Limits can be set for particular queries; if a query goes beyond its limit, an error is returned. Resource management tools should be applied to control memory allocations as well as CPU and I/O resources, enabling DBAs to automatically put system resources where they are most needed.
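A simple sketch of the query-governor idea, with invented consumer groups and cost units, might look like this: a query whose estimated cost exceeds its group's limit is rejected before it runs.

```python
class QueryGovernor:
    def __init__(self, max_cost_per_group):
        self.max_cost = max_cost_per_group  # e.g. {"adhoc": 1000, "batch": 10000}

    def check(self, consumer_group, estimated_cost):
        limit = self.max_cost.get(consumer_group)
        if limit is not None and estimated_cost > limit:
            raise RuntimeError(
                f"query exceeds {consumer_group} limit ({estimated_cost} > {limit})")

governor = QueryGovernor({"adhoc": 1000, "batch": 10000})
governor.check("batch", 5000)      # allowed
try:
    governor.check("adhoc", 5000)  # rejected before it runs
except RuntimeError as err:
    print(err)
```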

Data Volume
Today’s database technology can easily handle almost any imaginable data volume. The restrictions are not in data size, but rather in the type of analysis performed. From a data-size perspective, DBAs want to make sure they do not waste time managing the data volume.

Data management for a large business intelligence system is not really different from data management for any other system. As a general rule, data should be striped across many disks in order to get the throughput needed for queries. Storage software (or even database software) can take care of storage striping across disks. Data can automatically be rebalanced when disks are added (or removed). DBAs only need to tell the system that a disk or logical volume should be added to the storage pool and the software automatically does the rebalancing.
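To illustrate the striping and rebalancing idea (which real storage or database software performs automatically), here is a minimal sketch in which data blocks are placed on disks by hash and placement is recomputed when a disk is added; disk names and keys are hypothetical.

```python
def place(key, disks):
    # Stripe by hash: each block of data lands on one disk.
    return disks[hash(key) % len(disks)]

def rebalance(data_keys, disks):
    # Recompute placement for every key against the current disk list.
    return {key: place(key, disks) for key in data_keys}

keys = [f"block-{i}" for i in range(8)]
layout = rebalance(keys, ["disk1", "disk2", "disk3"])
layout = rebalance(keys, ["disk1", "disk2", "disk3", "disk4"])  # disk added
print(layout)
```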

Data compression is also an important factor to keep the storage costs under control. Data that is not frequently accessed can take full advantage of the available storage because it does not have the same throughput requirements that frequently accessed data has. Frequently accessed data should sit on many disks in order to get sufficient throughput.

Backup and recovery. Data size also poses a challenge for backup and recovery strategies. Organizations want to make sure they can recover from a disaster, but they don't want to back up their full 100+ TB business intelligence database. Consider the amount of resources necessary to perform a full system backup of that size within a reasonable time!

Today, database backup and recovery utilities need only make a full database backup once. From that point, the utility tracks the incremental changes so that it never needs to take a full backup again. A restore operation restores the initial full backup and rolls forward any changes.
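A rough sketch of this "full backup once, then incrementals" strategy follows; the dictionaries below are hypothetical stand-ins for real backup utilities and change tracking.

```python
full_backup = {}   # snapshot taken once
incrementals = []  # change sets captured after the full backup

def take_full_backup(database):
    global full_backup
    full_backup = dict(database)

def record_incremental(changes):
    incrementals.append(dict(changes))

def restore():
    restored = dict(full_backup)
    for change_set in incrementals:  # roll the changes forward in order
        restored.update(change_set)
    return restored

db = {"row1": "a", "row2": "b"}
take_full_backup(db)
record_incremental({"row2": "b2", "row3": "c"})
print(restore())  # {'row1': 'a', 'row2': 'b2', 'row3': 'c'}
```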
Availability

Availability in the context of a business intelligence system has two dimensions. First, the system must be available to end users. Second, data must be available when users want to access it.

System Availability
Clustering technologies have always been praised for the high availability achieved through server redundancy. If a server in a cluster fails, other servers take over the workload, and with transparent application fail-over capabilities, the impact to end users is minimized.

The impact of a server failure in a cluster is proportional to the number of servers in the cluster. If one server in a two-node cluster fails, then 50 percent of the computing power is lost. However, if one server in an eight-node cluster fails, only 12.5 percent of the computing resources are lost.
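The capacity arithmetic is worth making explicit: assuming equally sized servers, losing one server in an n-node cluster removes 1/n of the computing power.

```python
# Capacity lost when servers fail in an n-node cluster (equal-sized nodes assumed).
def capacity_lost(cluster_size, failed_servers=1):
    return failed_servers / cluster_size

print(f"{capacity_lost(2):.1%}")  # 50.0% for a two-node cluster
print(f"{capacity_lost(8):.1%}")  # 12.5% for an eight-node cluster
```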

High-availability solutions also include complete fail-over sites. Every new commercial database contains utilities to set up a fail-over site using data replication. Fail-over can be used in single-server environments to increase system availability. However, this is generally not the most cost-effective approach. More recently, multi-node business intelligence systems on low-cost components running Linux have been implemented, proving that highly available clusters can be implemented at an extremely attractive cost.

Data Availability
Data availability is key to a successful business intelligence system. Business decisions must be made that take into account all relevant parameters.

Clusters: Shared-Disk versus Shared-Nothing. At a high level, there are two common storage architectures for clustered databases:

* Shared disk: in a shared-disk environment all servers can access the full data set.
* Shared nothing: in a shared-nothing environment, individual servers bear responsibility for a particular data slice. A hash-based distribution is applied to the data to achieve an equal data distribution across all servers. Note that queries in a shared-nothing architecture have a minimum degree of parallelism equal to the number of servers owning the data being accessed by the query.

Overall, system availability benefits from a cluster of servers. If one server fails, the remaining server(s) continue to serve queries and other operations. In a shared-disk architecture, all that is lost is the computing resources that the failed server provided. In a shared-nothing architecture, however, the data that the server was responsible for can be lost. Depending on the query, the full data set requested may not be accessible.

Shared-nothing architectures typically do have an answer to keeping the entire data set highly available, usually by mirroring each server's data slice on another server. However, the disk capacity would have to be doubled in order to implement such a solution.
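The contrast can be sketched as follows, assuming (hypothetically) four servers and no mirroring: in a shared-nothing layout a row becomes unreachable when its owning server fails, while in a shared-disk layout any surviving server can still reach all of the data.

```python
servers = ["node1", "node2", "node3", "node4"]  # hypothetical cluster

def owner(row_key):
    # Shared nothing: each row is owned by exactly one server, chosen by hash.
    return servers[hash(row_key) % len(servers)]

def reachable_shared_nothing(row_key, failed):
    # The row is lost (absent mirroring) if its owning server is down.
    return owner(row_key) not in failed

def reachable_shared_disk(row_key, failed):
    # Any surviving server can access the full data set on the shared disks.
    return len(failed) < len(servers)

print(reachable_shared_disk("order-42", {"node2"}))               # True
print(reachable_shared_nothing("order-42", {owner("order-42")}))  # False
```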

Disk Failure. Disks (and any other component) may fail in a business intelligence infrastructure. As more disk space (perhaps in low-cost storage units) is used, the likelihood of a failure increases. It's important that the data be safe even if that happens; the system should continue to run without interruption.

The most common approach to handling disk corruption is a data-mirroring technique. Obviously, a data mirror requires additional disk space, but multiple disks are generally needed anyway to satisfy throughput requirements. When a disk turns out to be bad, it must be taken offline and data must be rebalanced across the remaining disks, while the system continues to operate without interruption. Today's storage-vendor and database software can handle such a scenario.

Data Volume. Sheer data volume is a challenge for a large business intelligence implementation. However, business decisions benefit from the availability of large amounts of data. Trend analysis may not be useful if only one year of data is available, but with five or more years' worth of data it may suddenly yield valuable information.

Data compression, the ability to implement low-cost storage solutions, and partition-based data management all enable organizations to give users access to more data.

Backup and Recovery. In the ideal world, systems never break. Clustering eliminates server failure as a common cause of unplanned downtime, but this can still happen during power outages or natural disasters. The reality is that systems do fail, and businesses need to be prepared. They must implement a backup and recovery strategy that efficiently manages protection and restoration of business intelligence solutions.

Because of the sheer volume of data, this task deserves careful attention—something database vendors know. For the successful management of large-scale business intelligence backup and recovery processes, the IT department must consider:

* Large business intelligence systems should not be taken offline unnecessarily. The system has to remain available while the backup is running, even though the backup should run during less active hours. Online logging capabilities can track smaller changes that occur during the backup cycle so a complete environment can be restored from the last backup, and a point-in-time recovery can restore changes that occurred since the last data load. Note that some of the data loads are reproducible, so data loads may not explicitly write logs to enable a point-in-time recovery. One would restore the backup and re-run the data load process to reach the same situation.
* Make partitions of the database read-only. Data in business intelligence environments typically remains active for some time, after which it does not change. Unchanged data only needs to be backed up once. Backup utilities take the read-only property into account during a backup-and-restore scenario.
* In conjunction with read-only capabilities, data compression reduces the volume of data to be written to backup devices.

In clustered environments, backups can be managed by assigning certain servers in the cluster to perform backups, while other servers remain busy servicing the needs of end users. Whichever strategy or best practice is implemented, be aware that backing up and recovering tens of terabytes is a much greater challenge than doing so for (smaller-volume) OLTP systems.
Cost-Saving Opportunities

Business intelligence solutions are valuable to businesses. However, an organization must avoid letting its business intelligence solution become so expensive that it has to be scaled down, or the business intelligence effort eliminated altogether.

This article has explored several approaches to reducing expenses:

* Implement clusters on low-cost components
* Use low-cost storage solutions for less active data
* Implement data compression to reduce the data volume
* Take advantage of the infrastructure's self-managing capabilities

In addition, there are cost-saving opportunities that are less obvious at first glance but may well be feasible in the mid- to long-term. The opportunities below reduce the number of components to manage:

* Take advantage of the database as an ETL engine. Database technologies have advanced. While a separate ETL engine was the obvious choice a number of years ago, you may now be able to leverage the database engine itself and eliminate an entire separate environment that would otherwise have to be managed and maintained to cover the ETL requirements.
* Databases have become smarter and smarter, and provide sophisticated analytical capabilities. Why not use these instead of extracting data into a separate environment in order to perform analytic calculations or mine the data?
* Take advantage of database technology to manage data storage, high availability, and backups.

Conclusion

Managing large-scale business intelligence solutions can be a difficult and challenging task. Business intelligence environments have three major requirements: performance, scalability, and availability. Nothing is impossible if there is an unrestricted budget; but realistically, IT departments still face cost restrictions.

When planning large-scale business intelligence deployments, size matters. Think carefully about available database-related technologies capable of meeting performance, scalability, and availability requirements.