By David Nicoll, Ph.D.
Competency, as currently used in organizational design, is considered to be an underlying capacity: the ability to perform a particular task at a certain level of expertise. It is akin to the foundation of a house. A house cannot stand without a firm foundation. Similarly, a person cannot perform his or her job without the requisite skill, knowledge and cast of mind required by that job. When identifying the competencies associated with a task or activity, consultants usually divide the job they're evaluating into two specific areas. The first area is called "core" competencies; these are the behavioral attributes essential to the successful completion of the job. They are the bottom-line, minimal skills and abilities a person must have to even attempt the task.
"...the competency movement has demonstrated its merit "
"Performance" competencies are the second area of focus; these are the skills and abilities associated with effective task execution. High levels of performance competency purport to indicate that a person is capable of superior levels of task performance. The combination of core and performance competencies associated with a job creates a descriptive set of skills and abilities that are seen as predictive of effective performance. The experts call this a "competency profile."
Questions can be raised about the effectiveness of existing competency programs on a number of fronts. For example, the available literature suggests that many firms develop competency profiles using "brainstorming" methods. That is, a knowledgeable and experienced group of people is assembled and asked to use its existing knowledge and experience to spontaneously list all the behavioral routines it believes are associated with performing a specific job. The list generated from this work is then compiled, vetted and used as the job's "competency profile."
This approach, when compared to the way job evaluations were made in prior years, is a step forward. However, it falls far short of being an optimal method. Without a shared and well-developed conceptual foundation, there can be no assurance that any brainstorming process can produce either a complete or accurate profile. Without these assurances, there can be no certainty that profiles based on these methods are an optimal or even a reliable predictor of performance.
A small number of firms recognize this issue and have used supplementary tests to probe the accuracy of their competency profiles for certain positions. For example, one firm surveyed present and past holders of a position for the core competencies in its profile. The firm reasoned that if anyone had ever successfully held the job being reviewed without having a listed core competency, then that competency cannot be "core." By definition, a core competency is one that must be present for a job to be executed at all. The ability of anyone to execute the job without a given competency proves that the competency is not absolutely necessary.
Using the same logic, the firm also decided that if there ever has been an individual who has performed a job at high proficiency absent a particular "performance" competency in their profile, then the importance of this competency has probably been misstated and is worthy of serious reassessment.
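The logic of this screening test is simple enough to express directly. The sketch below is purely illustrative, using hypothetical jobholders and competencies rather than the firm's actual records or method. It shows the two checks: a "core" competency is disconfirmed by any successful jobholder who lacked it, and a "performance" competency is flagged for reassessment if any high performer lacked it.

```python
# Hypothetical sketch of the firm's validation logic. The jobholders,
# competencies and profile below are illustrative only, not real data.

jobholders = [
    {"name": "A", "competencies": {"diagnostics", "documentation"}, "high_performer": True},
    {"name": "B", "competencies": {"diagnostics"},                  "high_performer": False},
]

profile = {"core": {"diagnostics", "documentation"},
           "performance": {"root-cause analysis"}}

# A "core" competency survives only if every successful jobholder had it.
validated_core = {c for c in profile["core"]
                  if all(c in p["competencies"] for p in jobholders)}

# A "performance" competency is flagged if any high performer lacked it.
suspect_performance = {c for c in profile["performance"]
                       if any(c not in p["competencies"]
                              for p in jobholders if p["high_performer"])}

print("Confirmed core:", validated_core)           # {'diagnostics'}
print("Needs reassessment:", suspect_performance)  # {'root-cause analysis'}
```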
"Experimentation and iteration were his favored ways ... "
Assessing previous jobholders in this fashion gave the firm some measure of assurance that its new profiles were reasonably accurate for the positions examined. The cost, however, was high and the scope of the inquiry was necessarily limited. The example does demonstrate that it is possible to design a profiling methodology rigorous enough to give reasonable assurance of accuracy, but it is probably not worth the cost. In most firms there are simply too many jobs to make so rigorous an approach practical. Rather, reliance on the interpersonal agreement of knowledgeable people probably provides a reasonable tradeoff. Still, approaching competency profiling with casual methods is rather like trying to hit a target with blurred vision: the approach will work sometimes, but not always. This limit should be kept in mind as profiles are applied in real-world situations. Things may not be as certain as the profile might suggest.
Other flaws also exist in the way competency profiling is conducted. For example, the construction of a reliable competency profile can depend on establishing the right relationships between the individual factors within a profile. For instance, a semi-literate programmer might perform exceptionally when his job is confined to writing the bits and bytes of machine code. That same programmer may be hopelessly inadequate if he is also required to document the code he has written. Or, when repairing an old Buick coupe, an auto mechanic's general diagnostic skill may compensate for the fact that she knows little about this particular type of car.
The above examples demonstrate that the relative importance of a particular competency can be triggered by specific and/or transient conditions within a task or activity. This makes it unlikely that a completely stable competency profile can ever be constructed — even when each individual factor can be accurately identified and measured. In other words, even perfectly constructed competency profiles can NEVER be 100% accurate in all situations.
While the current approach to competency specification has its flaws, these flaws are not fatal. Specifying job competencies allows firms to take a step toward a greater reliance on observable, testable facts. This alone is sufficient to improve the quality of job assignments and the inherent fairness of the system. It works to the benefit of all involved.
The availability of well-articulated profiles also serves the interest of people aspiring to new positions. It makes it easier for these people to prepare for new jobs. Once a position is obtained, the person has a greater probability of functioning efficiently and effectively within it. The firm benefits from the superior job performance. The individual benefits from the personal success associated with proficient execution. Everyone wins. Profiling can also contribute to the perceived fairness of compensation systems since wages can be structured to more closely reflect job requisites and skills. Recruitment and hiring can benefit from the more precise statement of qualifications that can be matched to candidate characteristics. Succession planning might be improved by identifying exposure points in the skill and ability inventories.
Overall, the competency movement has demonstrated its merit and is well worth furthering— so long as it is taken as an indicative support tool and not as a law-like mandate of required skills.
"...their competency in running a competency program was called into question..."
One flaw of current competency programs is not as benign as those cited above. It can be fatal to an individual's career, damage the integrity of the people in charge of the program and, in certain strategic instances, harm the firm as a whole.
This overlooked factor is the task setting within which job competencies are expressed. It has been systematically omitted from the design of every program with which this author is familiar. The flaw rests on an obvious, almost self-evident fact: in today's business, jobs are performed within a work group. This— the work group— is the overlooked factor.
Work groups typically employ an organized sequence of work. Group members interact with each other in specific ways to produce particular products or outcomes. In short, the workgroup is the arena within which individuals exercise and display their job competencies. Omitting this group factor is the equivalent of designing a computer chip without taking into account the motherboard within which it will fit.
The risks involved in overlooking the group can be substantial. Anyone trying to make job assignments without an accurate understanding of both the job profile and the nature of the group context will inevitably make serious mistakes. A decision-maker relying solely on job profiles is working with only half the equation. The other half— the fit between the proposed candidate and his or her workgroup— is, more often than many of us suspect, the real determining factor in successful job performance.
The damage done by ignoring the hidden half of the competency equation is itself hidden. People using only the competency factor part of the equation will have some successes. But the key word is "some." Neglecting the second half of the equation inevitably results in unexpected and unexplainable failures.
More frequently, competency programs yield mediocrity. The benefits of identifying explicit competencies will be overwhelmed by the strong influence of group interactions. It is only a matter of time before someone questions why money is being spent on a program that does not do what it claims to be able to do. When this occurs, those involved in the creation and operation of the program can expect to take on some level of personal damage.
Many people reading this article will instantly recognize the exposure. For those who do not, a story drawn from real experience can illustrate the importance of the group context.
A major automotive firm had two production groups using the same basic engineering workflows. The jobs in these two workflows had been evaluated and their competency profiles were virtually identical. The major difference was that one group was responsible for exhaust systems and the other was charged with the design of fuel injection systems. The firm had a young engineer— let's call him Joe— who was newly assigned to the Fuel Injection Team. Joe was innovative and saw creativity as good in and of itself. For Joe, ideas seemed to come "out of the blue," often involving previously unseen relationships between totally unrelated things. Joe reveled in his ability to quickly resolve seemingly impossible problems using unique and different methods.
Early in his tenure with the Fuel Injection team, Joe was asked to calculate the surface area of a particularly complex part. This was a tough problem involving convoluted angles, arcs and depressions. The task appeared to require difficult measurement, advanced calculus, and much time. Joe went looking for a better method.
Joe found one. He took a blueprint of the part in question and, with an Exacto knife, cut out its outline. He then took the blueprint of another part whose area was known and cut it out as well. He took these slivers of paper— one for the part whose area was known and one for the part whose area was not— to a precision laboratory scale and weighed them! Joe then used a simple ratio comparing the weight of the sliver with the known area to the weight of the sliver whose area was unknown. The result was an estimate sufficient for the task at hand. Joe did not even need to use his engineering competencies. Clever, to say the least: surface area was estimated from nothing more than the weight of slivers of blueprint paper. Joe had arrived at a fast, inexpensive and creative solution to a matter of concern.
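The arithmetic behind Joe's shortcut is a simple proportion: if the paper stock is uniform, the weight of a cutout is proportional to its area. The numbers in the sketch below are invented purely to illustrate the calculation.

```python
# Joe's proportion: with uniform paper stock, weight is proportional to area,
# so area_unknown / area_known = weight_unknown / weight_known.
# All numbers below are made up for illustration.

known_area_cm2   = 120.0   # area of the reference part's cutout
known_weight_g   = 0.96    # weight of the reference cutout
unknown_weight_g = 1.38    # weight of the complex part's cutout

estimated_area_cm2 = known_area_cm2 * (unknown_weight_g / known_weight_g)
print(f"Estimated surface area: {estimated_area_cm2:.1f} cm^2")  # ~172.5 cm^2
```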
Not only was Joe clever but, as can be inferred from this story, he also had a penchant for quick action. He avoided using analysis to figure out if something would work. Experimentation and iteration were his favored ways to arrive at an acceptable solution. Joe's basic style can be illustrated by his approach to a relatively daunting challenge. Joe was given the task of developing a computer model of a particular process. The process was extremely complex. It had many variables that interacted with each other— but only under certain conditions. While the internal process was complex, the inputs were known and the output could be measured to determine if the results were accurate.
Joe responded to the challenge by simply "throwing together" his first model. He then began iterating solutions on his computer, making modifications to the model as he went. After a few hundred runs, the model began to stabilize, producing fewer and fewer errors. Joe continued to iterate and finally took the "finished" model to management and told them that his model had not failed in the last 1000 runs.
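For readers who want to picture the approach, the loop below is a deliberately simplified stand-in; the actual process and model Joe worked with are not described here. It captures only the shape of his method: start with a rough guess, perturb it, keep changes that reduce error against the known outputs, and stop once the model has survived a long streak of acceptable runs.

```python
import random

# Toy stand-in for "throw it together and iterate." The process being modeled
# and the model itself are placeholders, not Joe's actual engineering model.

def true_process(x):                 # the process with known inputs and outputs
    return 3.0 * x + 7.0

data = [(x, true_process(x)) for x in range(-10, 11)]

def error(params):
    a, b = params
    return sum(abs((a * x + b) - y) for x, y in data)

params = [random.uniform(-5, 5), random.uniform(-5, 5)]   # the "thrown together" model
best = error(params)

streak, runs = 0, 0
while streak < 1000:                 # stop after 1000 consecutive acceptable runs
    runs += 1
    candidate = [p + random.gauss(0, 0.1) for p in params]
    e = error(candidate)
    if e < best:                     # keep modifications that reduce error
        params, best = candidate, e
    streak = streak + 1 if best < 10.0 else 0   # "good enough" threshold (illustrative)

print(f"Stabilized after {runs} runs; params {params}, error {best:.2f}")
```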
During the review, Joe told management that there could be no assurance his model would not fail on the very next run. He observed that if management needed absolute certainty, his model would at least give the more analytical types in the department a "head start." The associations embedded in his code, he said, worked most of the time and could be used by the analysts to figure out more precisely the relationships involved. Effort would be required, but much less than if the project had been addressed by the analysts starting with nothing but a blank piece of paper.
The conclusion? Management decided to do both. Joe's tentative model was immediately put to use. It was not, however, trusted on matters of high significance; in those cases, confirmatory indications were required. Other staff members were assigned the task of developing the model to a point where a greater degree of certainty could be systematically realized. Joe had produced a useful tool that yielded immediate gains. He had also paved the way for others of a more rigorous, analytical bent to refine it to deliver greater degrees of certainty.
For more than a year, Joe worked in the Fuel Injection Team. His teammates saw much merit in his approach. They enjoyed the fast pace he set and, like him, they had no difficulty working on an issue before they knew all the details or could estimate a probable outcome. Things in the Fuel Injection Team moved fast. There were lots of false starts and avoidable failures but, in the final analysis, the team's work was finished on time and the product functioned as required.
Given Joe's scores in the company's competency evaluation program and his performance with the Fuel Injection group, Joe was moved to the Exhaust Team. The intent was to broaden his experience and expand the network of people who knew him first hand. The effort worked, but not entirely as expected.
Joe was initially seen as an exciting addition to the team. However, it only took the people on the Exhaust Team three weeks to judge him to be sloppy, inattentive, and given to acting before an item was fully thought through. Whereas his old teammates saw him as creative and insightful, his new teammates saw him as inefficient, wasteful and confused. Joe was, at least in their eyes, not performing effectively.
Within the first month, the Director of the Exhaust Team had several talks with Joe, trying to "bring him around" to his new team's "way of working." These efforts met with only limited success. After a while, his new team members began limiting Joe's area of responsibility and visiting him often to make sure he was doing his work "properly." Life was miserable for everyone. Their product was delivered on time and fully met specification. But Joe was not a major contributor. In fact, the experience had poisoned Joe's view of the firm and he began a search for other options outside the firm.
These two situations, both with the same engineer, are real. The same person with the same competencies, working in mirror-image workflows, had yielded markedly different outcomes. The firm's competency program had identified him as competent and qualified, as someone who would be a high performer. Yet Joe's work in two different teams yielded drastically different results. Why?
Members of the firm's Competency Evaluation Program met shortly after Joe left the Exhaust Team. They wanted to discover why he had failed. After much discussion, they concluded that the cause of Joe's problems was the difference in products. This was the only distinction they could see. They believed that both jobs had been "properly" profiled, so the only answer that made sense was that Joe could perform well working on one kind of product but not on the other. This judgment was reached even though the competency evaluation showed the jobs to be virtually identical.
The situation damaged all involved. Joe's record was smudged, and his future with the firm had dimmed. Those who were running the competency program fared no better. Their explanation for the failure was weak, and consequently, their competency in running a competency program was called into question by the line-managers involved. The firm lost on all fronts; everyone was confused and unhappy, contributions that might have been made were lost, and ultimately the company lost Joe to a competitor.
For those educated in Organizational Engineering (OE), the reason why Joe "failed" with the Exhaust Team is obvious. Joe's problem rested not in a mismatch with any job competency profile or product to which he was assigned. Rather, it rested in the mismatch between Joe's and the Exhaust Team's information processing styles.
Joe's first group, the Fuel Injection Team, was attuned to spontaneous efforts. Intuition and iteration were their favored methods, and experimentation was their preferred way of testing results in the "real world." This team worked in and through what, in OE terms, is known as the "Changer" pattern. They were geared to working quickly and were comfortable making decisions based only on essential details. Their rapid-fire, spontaneous and iterative style saved on planning and analysis, but they paid for these savings in a number of false starts and failures.
The Fuel Injection group often "hit it lucky" and solved problems in an early iteration. When this happened, they would finish their project in record-breaking time and save considerable amounts of development money. If they were not "lucky," they paid for their experimentation in wasted effort, excess use of resources, and high levels of negative management attention. As indicated above, Joe's style was tailor-made for this group's strategic pattern. Not only did he have all the behavioral competencies necessary for the job, his personal processing style was a perfect fit with that of his Fuel Injection teammates.
The Exhaust Team had a totally different processing style. As a team, they paid attention to detail, meticulously identified all options and thoroughly examined potential outcomes. They liked new ideas but favored addressing them through planning and specification.
They were constantly searching for an optimal outcome and always tried to achieve it first time out of the box. This style saved on false starts and wasted effort, but paid for these advantages in planning and specification time. For this team, there was no opportunity to hit it lucky. But, the probability of ultimate success was virtually assured once their processes had been completed. Joe's processing style was a complete mismatch for this team's approach. This misalignment is what caused Joe to fail.
This group factor— a team's collective information processing profile— was something the company's competency evaluation program had not considered. If it had, its professionals would have known that the teams had different but equally rational and effective approaches. They would have recognized that the Exhaust Team's information processing pattern would not easily commingle with Joe's strategy. They probably would not have placed Joe on this team— at least not without some careful preparation.
As it turned out, it appeared to everyone involved that Joe was "fired" for poor performance. In reality, he was fired because he was a poor fit with the other members of the Exhaust Team.
Joe's story clearly demonstrates that there are two different competencies to evaluate. The first are those competencies associated with the job under review. The second are those competencies associated with the team's information processing style. For a good "fit," a match on both competencies is required.
Organizational Engineering argues that the competencies associated with a given job are only valuable if they can be expressed through the group. In today's world, very few of us work alone. Rather, our individual competencies have to be expressed in a way that is useable by the group in which we work. This is the piece that was missed in Joe's situation. It's the piece that's been missed in too many competency evaluation programs.
Organizational Engineering has shown how individuals use different information processing styles in addressing issues both at work and home. OE shows how these styles can be collapsed into four basic strategic styles as described in Table 1. Each strategic posture carries its own unique behavioral pattern. These are the behaviors that co-workers see and to which they react.
The behavioral patterns in Table 1 are documented in the seminal books on the subject (Salton, 1996, 2000). Numerous articles demonstrate that the principles outlined in these books operationally and reliably define how groups function in the "real" world (Stepanek, 1997; Daly & Nicoll, 1997; Slabey & Austrom, 1998; Ungvari, 1998a, 1998b; Salton & Fields, 1999; Salton & Fuhrmann, 1999; Lapides & Matthews, 1999; Matthews, 2000; Leach, 1999, 2000). This body of knowledge explains exactly why Joe performed differently in the two automotive teams.
In the fuel injection case, the team and Joe shared the same RS and RI styles, the styles underlying the "Changer" pattern described above. Everyone was comfortable using similar kinds of input, all of them processed the information received in roughly the same way, all were working at about the same speed, and all were targeting more or less the same degree of outcome optimality. The unspoken assumptions and preferences of one team member dovetailed with those of the others. Given this— and the fact that Joe had the job competencies required— it is not surprising that he was successful in his first assignment. These factors virtually ensured the effortless integration of the people on the team.
The reason for Joe's difficulty with the Exhaust Team is equally obvious. This team consisted of people who used HA and LP type behaviors. As a team they were deliberate. They were thorough and careful. They believed that there was an optimal solution and also believed that they could use analytical means to find it. They reinforced this way of working in one another. This uniformity produced a faith that they had discovered the "right" way of approaching issues in their area.
Joe ran head-on into the Exhaust Team's heavily reinforced processing pattern. Joe's somewhat random approach was uniformly judged illogical. And risky. Joe's tendency to ignore detail was seen as sloppy rather than efficient. Joe's bias toward action was seen as impulsive rather than decisive. The problem was not Joe's strategic style— it had worked famously with the Fuel Injection team. The only problem was that Joe's style did not match the one dominating the Exhaust Team.
Joe was not aware of this dangerous mismatch. There was no reason for him to alter his own natural style when he joined the Exhaust Team. Similarly, the Exhaust Team did not have a clue that a structural incompatibility existed. Everybody involved assumed that matching the job competency profile was all that was required. They were wrong. The root of the problem is the failure to recognize the existence of the "group" variable. With the first team, Joe's information processing preferences were a "competency match." With the second team, Joe was a "competency mismatch." The "villain" in this piece is an incomplete competency program. The program matched Joe's job competencies to the tasks required. But it failed to match Joe's processing style to the team into which he was placed. In this case half a loaf was not better than none. Rather, it was a tragedy for all involved.
If a group's information-processing style is so important, why, you might ask, don't competency-based job evaluations fail more often?
The answer is that most people have elements of multiple strategic styles within their behavioral repertoire. In fact, in over 10,000 recorded scorings there has never been a case of a person scoring 100% commitment to any single strategic style. Most of us can draw on these capacities to work with people who favor strategic styles different from our own.
While most of us have capacities in all areas, we also tend to favor one or two styles in actual practice. The distribution of strategic styles in the population is shown in Graphic I.
In many teams, there is a mixture of strategic preferences. One person may favor study and analysis while another may favor quick action. When a person like Joe joins such a group, a natural ally is already in place. Others can relate to someone like Joe by tapping into the secondary preferences they already use with Joe's natural ally. Things can get bumpy, but they "kinda" work. This explains why placements based only on competency work as well as they do. This is not a formula for optimality but it is probably one for sufficiency.
Having said this, it is also true that teams heavily biased in favor of one or another of OE's four strategic styles are not unusual. The prevalence of these singular approaches varies by level and by function. However, an overall estimate of 1 in 5 would probably not be an exaggeration. When this type of dominance occurs, the ability of a team to see merit in a contrasting style can be seriously compromised. This is what happened to Joe when he was assigned to the Exhaust Team.
Managers of competency programs intuitively understand that the group context is important. Even managers unskilled in Organizational Engineering sense that the group matters. They know something significant is going on inside their work groups. They just don't know what. Or how. Or why. Consequently, they don't know what they can do about it. In this situation the unskilled managers do the best that they can— they ignore it.
Organizational Engineering opens another channel for the managers of competency programs. They can now design their programs for optimality, not mere sufficiency. They can avoid the tragedy that befell Joe and the Exhaust Team. The full specification of all of the options and tools available is beyond the scope of this article. However, simply outlining four of the basic discoveries can help anyone better engineer the effective expression of job competencies in existing workgroups.
The first discovery is that most (but not all) human beings have a preferred information processing style. Survival demands that people find ways of coping with the mass of information assaulting them. People cope by developing strategies for handling the information overload. These strategies form a basis on which psychology is built. For example, a person using a short information processing horizon (e.g., a Reactive Stimulator) will tend to be judged "impatient." Impatience does not cause a short horizon. Rather, a short horizon necessarily generates the psychological attribute of impatience.
A person's preferred strategic style creates a reasonably consistent pattern of assumptions, methods and viewpoints that they bring to work. It is the pattern people use to do their job and to interact with their team members. The symphony of interactions that each person employs creates the character of the group and ultimately the work product being sought by the organization. Recognizing that this exists and accepting it as part of their responsibility would go far toward mitigating the kind of problems identified in this paper.
The second discovery is that there are only four basic strategies. The strategies outlined in Table 1 are the cornerstone of Organizational Engineering. These strategic styles can be measured and have proven to be accurate, reliable and useful descriptions of a person's behavioral propensities, both at home and at work.
The simple awareness that these strategies are in play would allow the competency manager to avoid some, but not all, of the dangerous problems. For example, a simple glance at the Exhaust Team's typical work posture would have "red flagged" the risk being assumed in attempting to place Joe on the team.
The third discovery is the instrument that measures the strength of strategic styles. The instrument is a survey, not a test, and has been validated using all eight of the accepted measures of statistical validity (Soltysik, 2000). It produces measurements that can be relied upon, quantified and attached to a competency profile along with the other core and performance competencies. The survey is fast (about 6 minutes to complete), inexpensive (less than $11) and can give more sophisticated companies the ability to make more refined judgments on potential exposures in their placements.
The fourth discovery is a method of combining individual profiles together to arrive at an accurate estimate of the propensities and predispositions of entire groups. As with the survey instrument, the group assessment methods also have been validated and can be trusted.
The method of combining people merely requires overlaying the strategic profiles of the individuals in a group. For small groups, this can be done visually and without additional cost. In these cases, competency managers can quickly ascertain the probable outcome of a potential placement before commitments are made and exposures are generated. All that is required is the measurement from the survey and the knowledge obtainable from an introductory book on the subject (Salton, 2000). The method can be visually demonstrated using a free computer program available from Professional Communications Inc. Larger or more complex groups require the use of a computer program. This tool, named TeamAnalysis™, calculates various dimensions of group preferences. It then translates these calculations into business-oriented English that is accessible to anyone with a 12th grade (i.e., high school) education.
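As a rough illustration of the overlay idea (not the actual OE survey scoring or the TeamAnalysis™ calculations), individual profiles can be pictured as shares of commitment to the four styles mentioned above (RS, LP, HA and RI), averaged across a team and checked for a dominant style. All of the numbers and the dominance threshold in the sketch are hypothetical.

```python
# Hypothetical sketch of overlaying individual strategic profiles into a team
# profile. Scores, members and the dominance threshold are illustrative only.

STYLES = ("RS", "LP", "HA", "RI")

exhaust_team = [
    {"RS": 10, "LP": 35, "HA": 40, "RI": 15},
    {"RS": 15, "LP": 40, "HA": 35, "RI": 10},
    {"RS": 10, "LP": 30, "HA": 45, "RI": 15},
]
joe = {"RS": 40, "LP": 10, "HA": 15, "RI": 35}

def group_profile(members):
    """Average each style's share across the members of a group."""
    return {s: sum(m[s] for m in members) / len(members) for s in STYLES}

def dominant_styles(profile, threshold=30.0):
    """Styles that exceed the (illustrative) dominance threshold."""
    return [s for s in STYLES if profile[s] >= threshold]

team = group_profile(exhaust_team)
print("Team profile:", team)
print("Team's dominant styles:", dominant_styles(team))   # ['LP', 'HA']
print("Joe's dominant styles:", dominant_styles(joe))      # ['RS', 'RI']
```

Run against these made-up numbers, the team profile comes out heavy on LP and HA while Joe's comes out heavy on RS and RI, exactly the kind of mismatch a manager would want flagged before a placement is made.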
The availability of sophisticated analysis allows the competency manager to go beyond simple "go/no-go" decisions. Insights into the group's operation, strengths and exposures are routinely reported. These can help a competency manager "fine tune" a group. For example, a TeamAnalysis™ can help decide how much analytical talent might be installed in an action-oriented team without stepping over a boundary and creating a thought-oriented team. It also allows the manager to engineer productive coalitions within a team or to prevent existing coalitions from compromising a team's productivity.
Organizational Engineering technology not only tells the Competency Program manager what's there, it gives the manager the ability to change a team in a way that furthers the team's performance.
Organizational Engineering's four discoveries provide competency managers with the tools they need for designing teams in which individuals and group competency profiles mesh properly. The manager can accomplish this by taking into account both individual and group effects. Even without using OE's more sophisticated technologies, knowledge of OE can dramatically improve the success rate of the entire competency program. Even tough interventions, like installing a capacity for disciplined, sustained action in an impulsive team, or doing the opposite for an analysis-bound team, can often be designed without extensive study.
The full level of sophistication is not needed for every job, team, or workflow. These technologies are tools that can be used when they are needed. The only real "must" is that a competency manager recognize the impact of group effects and "call out" the level of knowledge appropriate to the issue being faced. This small step can convert mediocrity punctuated with occasional disasters into consistent, high-level performance.
The competency initiative is well founded. The expressions of competency factors are what allow an organization to convert raw material into products that benefit society and earn a return for all involved in the effort. The efforts to identify, catalog and organize these factors should be applauded and encouraged. But, it should also be recognized that a competency program confined to this level will be incomplete.
The reason for the deficiency is that competency means nothing if it is not allowed full expression. What good is it to have an ability to act decisively if you are placed in a group that demands thorough review and meticulous planning? What good is it to be a brilliant planner if instant action is demanded and no time is provided for the planning process to occur? Only a moment's reflection is needed to see that the group context is an essential component of the success of any competency program. Ignoring it condemns the program to substandard performance.
Organizational Engineering provides a fast, inexpensive and proven vehicle for addressing the vital second half of the competency equation. People responsible for competency programs owe it to themselves and the firms to which they belong to instruct themselves in Organizational Engineering and then employ it for the benefit of their programs, their employees, their work teams, and the organization as a whole. It is a win-win strategy that benefits all involved.
Dr. David Nicoll is a principal in the consulting firm of Merlin & Nicoll, Inc. Dr. Nicoll also teaches at the University of California. He can be reached by email at dnicoll@mediaone.net or by telephone at 310-839-3620.
Daly, Richard E. and Nicoll, David. "Accelerating a Team's Developmental Process," OD Practitioner, a journal of the Organization Development Network, Vol. 29, No. 4, 1997.
Lapides, Joseph & Matthews, William. "Organizational Engineering: A New Paradigm for Understanding Individual Differences," Papers in Performance Technology, International Society for Performance Improvement, 1999.
Leach, Frances Mae. Quoted in "Focus: Coaching and Mentoring," Workforce Strategies, Vol. 17, No. 6, Bureau of National Affairs, May 31, 1999, pp. ws26-ws30.
Leach, Frances Mae. "Engineering Coaching and Mentoring Programs," Journal of Organizational Engineering, Vol. 1, No. 3, October 2000.
Matthews, William. "Facilitating with Style: Using Organizational Engineering to Facilitate Teamwork and Collaboration." Presentation at International Association of Facilitators, Toronto, April 2000.
Salton, Gary J. Organizational Engineering: A new method of creating high performance human structures. Ann Arbor, Professional Communications, Inc. 1996.
Salton, Gary J. Managers Guide to Organizational Engineering. Amherst: HRD Press, Inc. 2000.
Salton, Gary J and Fuhrman, Charles E. "Enhancing and Expanding Six Hat Thinking with Organizational Engineering," OD Practitioner, a journal of the Organization Development Network, Vol. 31, No. 3, 1999.
Salton, Gary J. and Fields, Ashley. "Understanding and Optimizing Team Learning," The Journal of Applied Management and Entrepreneurship, Vol. 5, No.1, September 1999, pp. 48-60.
Slabey, William and Austrom. "Organizational Engineering Principles in Project Management," Project Management Journal, a professional journal of the Project Management Institute, Vol. 29, No. 4, December 1998, pp. 25-34.
Soltysik, Robert. Validation of Organizational Engineering Instrumentation and Methodology. Amherst: HRD Press, Inc. 2000.
Stepanek, John. "Organization Optimization at Tampa Electric," OD Practitioner, a journal of the Organization Development Network, January 1997.
Ungvari, Steve. "(TRIZ)OE = Improving TRIZ Results by Dynamically Matching Tools to Teams," The TRIZ Journal, <http://www.triz-journal.com>, October 1998.
Ungvari, Steve. "Engineering the New Product Development Cycle," Visions: Product Development and Management Association Journal, Vol. XXII, No. 3, July 1998.
(C) 2001, Organizational Engineering Institute. All rights reserved