Measuring streamlining models

Companies know where they want to go. We have found that for companies to build value and provide compelling customer experiences at lower cost, they need to commit to a next-generation operating model.


The critical closing pressure (P CRIT), a quantitative assessment of upper airway collapsibility, is derived from the pressure-flow relationship during sleep. The Starling resistor model is a mechanical analog of the upper airway which assumes an infinitely collapsible segment and the absence of neural activity in the surrounding structure. An illustration of completed P CRIT analysis shows one series of pressure and airflow data points (blue circles). A mathematical model can be used to detect inspiratory flow limitation during sleep.

Streamlining Tasks to Improve Efficiency

The same concern is relevant for productivity measurement. Further, spending years of research looking for it would probably not be of much use. However, in recent years, the big, multi-million-dollar wind tunnels are being used less and less. On the other hand, the operational measure may be insensitive to changes in output. This can be used to quantify sweep-out between injectors and a producer, as well as provide allocation factors for each well. A brief description and discussion of each of the eight components follows. Maintaining personnel discipline reflects the degree to which negative behavior, such as substance abuse at work, law or rule infractions, or excessive absenteeism, is avoided. The following issues are critical. Under some conditions, it is possible to use a front-tracking simulator, rather than a finite-difference simulator, as a basis for drawing the streamlines. Facilitating peer and team performance reflects the degree to which the individual supports his or her peers, helps them with job problems, and acts as a de facto manager. By whatever definition, productivity is composed of major components that are distinct enough to preclude talking about it in the singular as one thing.

The critical closing pressure (P CRIT), a quantitative assessment of upper airway collapsibility, is derived from the pressure-flow relationship during sleep.
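Under the Starling resistor model, P CRIT is commonly obtained as the zero-flow intercept of a linear fit of peak inspiratory flow against nasal pressure. The sketch below shows that arithmetic only; the function name and the pressure-flow values are hypothetical, not drawn from any clinical series.

```python
# Illustrative sketch: estimating P_CRIT as the zero-flow intercept of a
# linear pressure-flow regression, per the Starling resistor model
# (flow_max = (P_nasal - P_CRIT) / R_upstream). All values are hypothetical.

def estimate_pcrit(pressures, flows):
    """Fit flow = slope * pressure + intercept by least squares and
    return the pressure at which predicted flow reaches zero (P_CRIT)."""
    n = len(pressures)
    mean_p = sum(pressures) / n
    mean_f = sum(flows) / n
    sxx = sum((p - mean_p) ** 2 for p in pressures)
    sxy = sum((p - mean_p) * (f - mean_f) for p, f in zip(pressures, flows))
    slope = sxy / sxx
    intercept = mean_f - slope * mean_p
    return -intercept / slope  # x-intercept: pressure where flow = 0

# Hypothetical nasal pressures (cm H2O) and peak inspiratory flows (mL/s)
pressures = [2.0, 0.0, -2.0, -4.0]
flows = [350.0, 250.0, 150.0, 50.0]
print(estimate_pcrit(pressures, flows))  # -5.0
```

The estimate is meaningful only when the fitted points come from flow-limited breaths, which is why the analysis is restricted to one series of pressure and airflow data points at a time.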

  • The key principle behind streamline modeling is that the streamlines are tangent to the flow velocity.
  • To measure the aerodynamic effectiveness of a car in real time, engineers have borrowed a tool from the aircraft industry -- the wind tunnel.
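The tangency principle in the first bullet can be sketched numerically: a streamline is traced by repeatedly stepping in the direction of the local velocity (a simple Euler integration). The velocity field here, a solid-body rotation, and the step parameters are illustrative choices, not part of the original text.

```python
# Minimal sketch of streamline tracing: each step moves tangent to the
# local flow velocity. The field v = (-y, x) is a simple rotation about
# the origin, chosen purely for illustration.

def velocity(x, y):
    return -y, x  # circular flow around the origin

def trace_streamline(x0, y0, step=0.01, n_steps=100):
    """Euler-integrate a particle path through the velocity field."""
    path = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        vx, vy = velocity(x, y)
        x, y = x + step * vx, y + step * vy
        path.append((x, y))
    return path

# Starting on the unit circle, the traced path stays close to it,
# since streamlines of a rotation field are circles.
path = trace_streamline(1.0, 0.0)
```

A real streamline simulator would use a higher-order integrator and an interpolated velocity field, but the tangency idea is the same.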


This operating model is a new way of running the organization that combines digital technologies and operations capabilities in an integrated, well-sequenced way to achieve step-change improvements in revenue, customer experience, and cost. A simple way to visualize this operating model is to think of it as having two parts, each requiring companies to adopt major changes in the way they work. Many organizations have multiple independent initiatives underway to improve performance, usually housed within separate organizational groups.

Tangible benefits to customers—in the form of faster turnaround or better service—can get lost due to hand-offs between units. These become black holes in the process, often involving multiple back-and-forth steps and long lag times. Instead of working on separate initiatives inside organizational units, companies have to think holistically about how their operations can contribute to delivering a distinctive customer experience.

The best way to do this is to focus on customer journeys and the internal processes that support them. These naturally cut across organizational siloes—for example, you need marketing, operations, credit, and IT to support a customer opening a bank account. Journeys—both customer-facing and end-to-end internal processes—are therefore the preferred organizing principle. Transitioning to the next-generation operating model starts with classifying and mapping key journeys.

At a bank, for example, customer-facing journeys can typically be divided into seven categories: signing up for a new account; setting up the account and getting it running; adding a new product or account; using the account; receiving and managing statements; making changes to accounts; and resolving problems.

We often find that companies fall into the trap of simply trying to improve existing processes. Instead, they should focus on entirely reimagining the customer experience, which often reveals opportunities to simplify and streamline journeys and processes that unlock massive value. Concepts from behavioral economics can inform the redesign process in ingenious ways. One major European bank, for example, announced a multiyear plan to revamp its operating model to improve customer satisfaction and reduce overall costs by up to 35 percent.

Three design guidelines are crucial. The first is using each lever to maximum effect. Having something already under way is a truism: everyone has something under way in these kinds of domains, but it is the companies that press to the limit that reap the rewards. Executives need to be vigilant, challenge their people, and resist the easy answer. In the case of analytics, for example, maxing out the potential requires using sophisticated modeling techniques and data sources in a concerted, cross-functional effort, while also ensuring that front-line employees then execute in a top-flight way on the insights generated by the models.

The second guideline is implementing each lever in the right sequence. That means, in practice, figuring out which lever depends on the successful implementation of another. Systematic analysis is necessary to guide decision making. The next step is to use a structured set of questions to evaluate how much opportunity there is to apply each of the remaining levers and then to estimate the potential impact of each lever on costs and customer experience.

This systematic approach allows executives to consider various sequencing scenarios, evaluate the implications of each, and make decisions that benefit the entire business. Finally, the levers should interact with each other to provide a multiplier effect. For example, one bank saw significant impact from its lean and digitization efforts in the mortgage application journey only after both efforts were working in tandem. A lean initiative for branch offices included a new scorecard that measured customer adoption of online banking, forums for associates to problem-solve how to overcome roadblocks to adoption, and scripts they could use with customers to encourage them to begin mortgage applications online.

This, in turn, drove up usage of online banking solutions. Software developers were then able to incorporate feedback from branch associates, which made future digital releases easier to use for customers. This in turn drove increased adoption of digital banking, thereby reducing the number of transactions done in branches.

These maps include estimates, for each journey, of how much costs can be reduced (measured in terms of both head count and financial metrics) and how much the customer experience can be improved. Companies find heat maps a valuable way to engage the leadership team in strategic discussions about which approaches and capabilities to use and how to prioritize them. In insurance, a key journey is when a customer files a claim, known in the industry as first notice of loss (FNOL).

This company improved response times by using digital technologies to access third-party data sources and connect with mobile devices. With these new tools, the insurer can now track claimant locations and automatically dispatch emergency services.

Customers can also upload pictures of damages, and both file and track claims online. The insurer also allows some customers to complete the entire claims process without a single interaction with a company representative. Advanced analytics. Now able to apply the latest modeling capabilities to better data, the company is using advanced analytics to improve decision making in the FNOL journey.

Intelligent process automation (IPA). Once digital and analytics were in place, IPA was implemented. Automation tools were deployed to take over manual and time-consuming tasks formerly done by customer-service agents, such as looking up policy numbers or data from driving records. In addition to reducing costs, IPA sped up the process and reduced errors. By combining four levers—lean plus digital, analytics, and IPA—this insurer drove a significant uplift in customer satisfaction while at the same time improving efficiency by 40 percent.

Senior leaders have a crucial role in making this all happen. They must first convince their peers that the next-generation operating model can break through organizational inertia and trigger step-change improvements. With broad buy-in, the CEO or senior executive should align the business on a few key journeys to tackle first. Finally, there is the work of actually implementing the model. Transformation cannot be a siloed effort. The full impact of the next-generation operating model comes from combining operational-improvement efforts around customer-facing and internal journeys with the integrated use of approaches and capabilities.


The next-generation operating model for the digital world.


The car or plane inside never moves, but the fans create wind at different speeds to simulate real-world conditions. Customer satisfaction data can be gathered from surveys, registered complaints and other feedback. Taken together, the two kinds of studies should provide considerable information about why particular strategies that are used to improve IT productivity succeed or fail. This information can be used to optimize field performance. Second, what performance determinants should be allowed to operate and which should be controlled? Also, units of measure must be defined.

Measuring Drag Using Wind Tunnels

In such an instance, the measurement goal is to determine whether the specified technical skills have in fact been mastered, not whether the individual chooses to use them in the actual job setting. In general, the critical issue is whether the measure allows the relevant determinants to influence scores. Perhaps another example would help illustrate the point. It is generally agreed that many commercial airline accidents are the result of faulty "cockpit management."

If a simulator is used to measure performance, there are two major considerations at the outset. First, does the simulator allow performance on the two components to be observed?

Second, what performance determinants should be allowed to operate and which should be controlled? For example, one frequently critical determinant in cockpit management is the hesitation of a junior crew member to question the actions of the senior pilot if he or she appears to be in error.

To bring this determinant into the simulator, the simulator "crew" should re-. To serve a different objective, the measurement procedure could choose to control for the motivational determinants so as to evaluate the effects of knowledge or skill differentials without being confounded with motivational differences.

Again, the choice of measurement operations is very dependent on the measurement objectives. Individual and group productivity are surely influenced by a number of other things besides individual knowledge, skill, and motivation, and legitimately so.

For example, the translation of individual output to group output is a function of such things as the nature of the task, the structure of the organization, the nature of the technology, and a number of management considerations.

Chapter 3 provided an excellent summary of what is known, and not known, about how to model the linkage between individual and organizational productivity. Taken in concert with Chapters 5 and 6 , Chapter 3 makes it possible to at least outline the basic determinants of organizational productivity.

At perhaps the highest level of abstraction, the list of basic determinants might be as follows:

  • Technology, in this case IT, as discussed in Chapter 9.
  • The interaction of technology and individual capabilities, in the sense that certain kinds of technologies, in combination with certain kinds of individuals, may have a much greater or lesser effect on productivity than would be expected from the sum of the main effects.

  • Organizational structure, as it applies to the individual-organizational linkage. The parameters of organizational structure were discussed in Chapters 3 and 5.
  • The interaction between technology and organizational structure; that is, some technologies may be very inappropriate and even counterproductive when implemented within certain kinds of organizational structures, and vice versa. For example, installing and maintaining a computerized project management system may detract from the performance of a nonhierarchical research and development team that interacts closely on a daily basis.

  • Management functions, as in the expertise with which planning, coordination, goal setting, monitoring, control, and external representation are carried out. The overall effects of "management" on organizational productivity are very much a function of who does it and how well. Also, there are undoubtedly a number of critical interactions among individuals, technologies, and the procedures by which the management functions are executed. For example, the empowered work group may be an effective "manager" only for individuals at a certain level of performance.

The critical issue is that the specific determinants of a specific component of organizational productivity constitute the linkage with which this report is concerned. For a productivity measure to be useful in studying linkage phenomena, it must be capable of being influenced by the appropriate determinants.

The basic change strategies might be outlined as follows. On at least one occasion, even one of the largest corporations in the world (the Ford Motor Company) has been the unit of analysis (Banas). The pieces of this strategy go by various names, such as autonomous work groups, self-managed work teams, employee empowerment, and the high-involvement organization (Goodman; Goodman et al.).

The central concern of this very large domain of research and practice is how the contributions of individuals to the effectiveness of the larger unit can be optimized by the strategy of decentralizing the management functions. Such a strategy should lead to better communication, coordination, and problem solving, and to higher motivation and commitment. If a particular change strategy aimed at a particular determinant, or set of determinants, of organizational productivity fails to exhibit any effects, it is useful to keep in mind that such a result could occur for any of several reasons.

Among them are the following:

  • There is a certain lag between the time of implementation and the time the effect will be realized; the productivity indicator was measured too soon (see Chapter 3).
  • Changes in the productivity indicator are a function of so many other things that even if the change strategy is a good one, its effects will be masked (see Chapter 4).
  • The productivity indicator is not a measure of productivity; that is, it is a reliable measure of something else.

To cite one strategy of major interest, a question considered by a number of analysts (see Chapter 2; also Loveman) is whether the large investments in IT by firms or industries have improved the productivity of the firm or the productivity of the industry.

In the opinion of many people, the investment has not yielded much of a return. So many other factors are at work, however, that the effect is hard to isolate. The difficulty is not symmetrical, because the influence of a type II error (saying that a technological change has no effect on productivity when in fact it does) operates only one way. One could fail to find a significant relationship because of a low N. It is perhaps little wonder that so few relationships are detected.
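The low-N problem just described can be illustrated with a toy simulation: even when a technology has a genuine effect, a small sample and large extraneous variance make a type II error (missing the effect) the most likely outcome. Every parameter below is a hypothetical choice for illustration, not an estimate from any study.

```python
# Illustrative simulation: a real productivity effect can fail to reach
# significance when the sample is small and other sources of variance
# are large. All parameters are hypothetical.

import random

random.seed(0)

def detects_effect(n, true_effect, noise_sd, threshold=2.0):
    """Crude two-group comparison: True if the observed mean difference
    exceeds `threshold` standard errors of the difference."""
    control = [random.gauss(0.0, noise_sd) for _ in range(n)]
    treated = [random.gauss(true_effect, noise_sd) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = noise_sd * (2.0 / n) ** 0.5
    return abs(diff) > threshold * se

# With few units (low N) and high extraneous variance, a genuine effect
# is usually missed -- a type II error.
hits = sum(detects_effect(10, 1.0, 5.0) for _ in range(1000))
print(hits / 1000)  # detection rate well below 1 under these assumptions
```

Raising N or reducing extraneous variance (better-targeted indicators, measured closer to the intervention) is what pushes the detection rate up.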

These issues are not unique to the implementation of information technologies. The problems associated with the implementation of change have been major topics for research and practice for many decades. Chapter 4 summarized a number of the issues and demonstrated that it is unreasonable to expect a specific intervention directed at part of the organization to substantially affect the overall productivity of the entire organization, as reflected in a summary index several steps removed from the direct effects of the new technology.

Finally, there is sometimes an implication that the goal of modeling the individual-organizational productivity linkage is to be able to determine how much of the variance in group or organizational productivity is due to individual productivity and how much is due to other sources. That is, the goal is to account for all significant sources of variance in organizational productivity and to determine the proportion of variance accounted for by each source.

Using such a comprehensive analysis-of-variance framework would pose measurement problems. In reality, estimates of the variance in organizational productivity accounted for by individuals will always be a function of the specific sample.

That is, by definition, there is no general answer to the question of how much of the variance in organizational productivity is due to variation in individual productivity. Instead, as pointed out in Chapter 3, the goal should be to learn as much as possible about how each determinant operates under various conditions. In summary, the proposed model for the measurement of IT productivity incorporates the following notions.

A guiding definition of IT productivity must be agreed on. For example, is the central concern performance, effectiveness, productivity in the conventional sense (Mahoney), utility, or something else? Are there different domains of IT that require a different definition? By whatever definition, productivity is composed of major components that are distinct enough to preclude talking about it in the singular as one thing.

Specification of the major determinants of each productivity component is of critical importance. In particular, for IT productivity, are the effects of differences in individual knowledge and skill, individual motivation, IT, job and organizational structure, and the management functions all of interest, or just some of them?

A measure will be valid to the extent that (1) the variables to be measured are defined appropriately, (2) the content of the measure matches the content of the variable, and (3) the determinants of score differences on the measure accurately reflect the measurement objectives.

The score variation on measures of individual productivity should be under the control of the individual.

The score variation on measures of unit productivity should be under the control of the unit. Measurement should minimize the opportunity for productivity "scores" to be influenced by sources of contamination having nothing to do with the objectives of measurement. Given these implications, the next section outlines steps that should be taken to enhance understanding of the nature of IT productivity and its measurement.

The analyses of IT productivity measurement issues in this chapter point to a number of questions that can be addressed only through further research. To achieve truly effective measurement of IT productivity for purposes of modeling the linkage between individual and organizational productivity and evaluating the effects of productivity improvement strategies, the following steps should be taken.

First, a representative panel of relevant experts should be convened to define the domain of interest. The possibilities range from the productivity of software design organizations to the productivity of operational information processing systems themselves. A taxonomy of the critical types of IT organizations or systems and their relevant goals would add considerable clarity to all these issues. Second, for each type of IT organization, an additional expert panel should be assembled to consider all available information and formulate an initial statement of what the basic components of productivity are within that context.

These would be substantive specifications, not abstractions. To proceed with measurement, the enterprise simply must know what it wants to measure.

As used here, expert does not refer to academics or other experts in organizational research or measurement. The experts of interest are the people who have responsibility for using IT itself. To the fullest extent possible, the panel(s) should also attempt to specify the major determinants for each relevant component of productivity.

For certain specific productivity components, it may be impossible for changes in a specific determinant to have any effect. In effect, these two steps would generate a working "theory of productivity" in each context for which IT productivity is a critical issue.

Third, the above steps would feed directly into a program of research and development on productivity measurement itself. Chapters 6 and 7 outlined specific procedures for such measure development and offered relevant examples. It is the group's responsibility to make sure that the appropriate determinants are reflected by the measures and that there is no serious contamination by extraneous influences.

That is, by design, output measures must be identified or developed that directly reflect the performance of the unit in question and are directly relevant for the organization's goals.

There is really no way to sidestep these judgments. There is no standardized set of commercially available operational measures that can be purchased and used. This is as true for organizational productivity as it is for unit or individual productivity (Campbell; Pritchard et al.). This might be a very eye-opening exercise for organizations. Achieving large gains on some components of productivity may be of little value, while even small gains on some other measure might be judged to have tremendous value; and the differences in marginal utility need not be highly correspondent with a dollar metric.

In an ideal world, the specific goals and specific measures that are specified for a particular organization would be congruent with the theory of productivity articulated in the second step above. If they are not, revisions to the model should be considered.

Over time, this interplay between a conceptual framework and specific measurement applications should steadily increase understanding of IT productivity and how it should be measured. One way to aid such investigation would be to develop an IT productivity measurement manual that would incorporate the working model and a set of procedures such as those suggested in Chapters 6 and 7.

Fourth, to enhance understanding of the linkage of certain determinants to specific components of IT productivity, it would be useful to conduct two kinds of exploratory investigations, using the working theory developed in the second step above. Both kinds of studies would seek simply to describe the critical events in specific organizations that seemed to have a positive or negative effect on productivity.

One type would be the straightforward case study. The second type of study would collect accounts of critical incidents from several panels of people within each organization. The general instructions to the writers of the accounts would ask them to describe specific examples of incidents that illustrate positive or negative effects. This is a proven strategy for identifying specific individual training needs (Campbell). Taken together, the two kinds of studies should provide considerable information about why particular strategies that are used to improve IT productivity succeed or fail.

The two types of studies are exploratory in nature. It is also true that some of the reasons why new technology does not have the intended effects are already well known. The case studies and critical incident data gathering are not meant to reinvent the wheel.

The intent is simply to provide additional specific information as to how changes in IT can succeed or fail. Aggregating such information over a large number of instances may indeed lead to an expansion of the understanding of how to improve IT productivity.

References

Ackerman, P. Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological Bulletin.
Standards for Educational and Psychological Testing. Washington, D.C.
Anderson, J. Cognitive Psychology and Its Implications, 2nd ed. New York: W.
Banas, P. In J. Campbell and R. Campbell, eds. San Francisco: Jossey-Bass.
Bennis, W., Benne, and R. The Planning of Change.
Campbell, J. On the nature of organizational effectiveness. In Goodman and J. Pennings, eds.
Campbell, J. Productivity enhancement via training and development.
Campbell, J. Modeling the performance prediction problem in industrial and organizational psychology. In Dunnette and L. Hough, eds. Palo Alto, Calif.
Cascio, W. Applied Psychology in Personnel Management, 4th ed. Englewood Cliffs, N.J.
Goodman, P. Change in Organizations.
Goodman, P. Designing Effective Work Groups.
Goodman, P., Devadas, and T. Groups and productivity: Analyzing the effectiveness of self-managing teams.
Kanfer, R. Motivation and cognitive abilities: An integrative aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psychology.
Landy, F. A process model of performance rating.
Lawler, E. High Involvement Management.
Loveman, G. Sloan School of Management, Cambridge, Mass.
Mahoney, T. Productivity defined: The relativity of efficiency, effectiveness, and change.
Nathan, B. A comparison of criteria for test validation: A meta-analytic investigation. Personnel Psychology.
Nissen, M. Attentional requirements of learning: Evidence from performance measures. Cognitive Psychology.
Pritchard, R., Jones, P., Roth, K., Stuebing, and S. The evaluation of an integrated approach to measuring organizational productivity.
Sackett, P., Zedeck, and L. Relations between measures of typical and maximum job performance.
Schmitt, N., Gooding, R., Noe, and M. Meta-analyses of validity studies and the investigation of study characteristics.

By one analysis, there has been a 12 percent annual increase in data processing budgets in the U.S. This timely book provides some insights by exploring the linkages among individual, group, and organizational productivity. The authors examine how to translate workers' productivity increases into gains for the entire organization, and discuss why huge investments in automation and other innovations have failed to boost productivity.

Leading experts explore how processes such as problem solving prompt changes in productivity and how inertia and other characteristics of organizations stall productivity. The book examines problems in productivity measurement and presents solutions. Also examined in this useful book are linkage issues in the fields of software engineering and computer-aided design and why organizational downsizing has not resulted in commensurate productivity gains.

Important theoretical and practical implications contribute to this volume's usefulness to business and technology managers, human resources specialists, policymakers, and researchers.


Before a process improvement is introduced, a baseline measurement is taken. At the end of the project, the process is measured again. The amount of improvement is then calculated.
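The baseline / re-measure arithmetic above can be sketched as a percent-improvement calculation; the function name and the cycle-time figures are hypothetical.

```python
# Sketch of the before/after improvement calculation described above.
# Figures are hypothetical.

def percent_improvement(baseline, after, lower_is_better=True):
    """Relative improvement of the post-project measurement over baseline."""
    change = baseline - after if lower_is_better else after - baseline
    return 100.0 * change / baseline

# e.g. cycle time fell from 40 minutes to 30 minutes
print(percent_improvement(40.0, 30.0))  # 25.0

# e.g. daily output rose from 100 units to 120 units
print(percent_improvement(100.0, 120.0, lower_is_better=False))  # 20.0
```

The `lower_is_better` flag matters because some metrics improve by falling (cost, cycle time) and others by rising (output, satisfaction).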

Connect your primary metric to a business metric. The business metric measures how an operational improvement achieves one of the company's goals.

For example, if the primary metric is to manufacture the product faster, the business metric might be increased profit or reduced fixed costs. There is a cause-and-effect relationship between the primary and the business metrics.

It demonstrates why the improvement in the primary metric is good for business. Consider the possibility of unforeseen outcomes. There may be collateral damage caused by the process improvement project.

If the primary metric measures what must be improved, there must be another metric, the consequential metric, to measure what must not change. Consequential metric data should be collected before, during, and after the project. Of the many possible consequential metrics in a given project, only the top few that impact the quality of outputs should be considered. If the primary metric is faster production, for example, the proportion of damaged products might be the consequential metric.

Establish a financial metric. This should not be confused with an accounting of the cost of the project. Rather, the financial metric should be a tool for analyzing the financial benefit of the project. Many companies continue to monitor the financial metric for up to one year after the completion of the project. A company would monitor revenue, along with other factors like profit, from the time the changes were instituted onward to measure how the change in process affected them. Measure time.

Process time measures how long it takes to complete the steps of creating a product or service. Metric calculations may include the percent on-time delivery. Reducing process time allows for greater production and delivers the product or service sooner. Consider furniture manufacturing: all things being equal, customers would rather receive their new couch or dining room set sooner rather than later.
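The percent on-time delivery calculation mentioned above can be sketched as follows; the order data are hypothetical.

```python
# Sketch of the percent on-time delivery metric: the share of orders
# delivered on or before the promised date. Data are hypothetical.

def percent_on_time(promised_days, actual_days):
    """Percentage of orders whose actual delivery day met the promise."""
    on_time = sum(1 for p, a in zip(promised_days, actual_days) if a <= p)
    return 100.0 * on_time / len(promised_days)

promised = [10, 14, 7, 21, 10]   # promised delivery day per order
actual   = [9, 15, 7, 20, 12]    # actual delivery day per order
print(percent_on_time(promised, actual))  # 60.0
```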

If you can reduce process time, you improve your chances for repeat orders and new business. Measure costs. Cost metrics assess the total cost of the production process. They also measure operational costs relative to production levels.

Cost per transaction measures the cost to produce one unit. Cost savings measures reduction in costs per transaction.
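A minimal sketch of these two cost metrics; the numbers are invented for illustration.

```python
def cost_per_transaction(total_cost: float, units_produced: int) -> float:
    """Total production cost divided by the number of units produced."""
    return total_cost / units_produced

def cost_savings(before: float, after: float) -> float:
    """Reduction in cost per transaction achieved by the improvement."""
    return before - after

before = cost_per_transaction(120_000, 10_000)  # 12.0 per unit
after = cost_per_transaction(110_000, 11_000)   # 10.0 per unit
print(cost_savings(before, after))  # → 2.0
```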

Labor savings measures a reduction in labor hours needed to produce the product or service. For example, when Bank of America experienced a downturn in lending and trading revenue, it decided to reduce its workforce in order to continue delivering returns to shareholders.

Measure quality. Quality metrics measure customer satisfaction. Customer satisfaction data can be gathered from surveys, registered complaints, and other feedback. Quality metrics also assess whether the process creates value for the customer, and they look at the frequency of errors and the need for rework. The percent complete and accurate rate measures how often no mistakes are made. Quality improvement must be data-driven. In health care, for example, analysts look at financial and clinical data to identify variations in how care is delivered by providers.
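The percent complete and accurate rate reduces to a simple ratio; a sketch with made-up counts:

```python
def percent_complete_and_accurate(units_ok: int, units_total: int) -> float:
    """Share of units that passed through the process with no mistakes and no rework."""
    return units_ok / units_total * 100

# 930 of 1,000 claims were processed with no errors and no rework.
print(round(percent_complete_and_accurate(930, 1000), 1))  # → 93.0
```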

They break down the process and find areas of waste or redundancy in order to create a process that delivers a high-quality clinical outcome. The cost reduction is not valuable unless it increases or maintains the current level of quality provided. Efficiency refers to the amount of resources necessary to deliver a product or service. Effectiveness refers to how well the objectives of the product or service have been achieved.

Measure output. Output metrics measure the quantity of products or services produced in a given time period. The production goal should align with customer demand. Output metrics also look at backlogs and excess inventory, which should be minimal. Finally, work in progress is measured; this determines the number of products or services in the pipeline.

For example, in the automobile industry, auto makers have a standard method for assembling cars. Manufacturers can standardize production processes with the goal of increasing output.

Metrics can help to analyze how well a process improves the output for the manufacturer. Measure process complexity. This metric measures how many steps are in the production process.

The critical closing pressure (P CRIT), a quantitative assessment of upper airway collapsibility, is derived from the pressure-flow relationship during sleep. Data analysis was performed independently using three paradigms: 1) the PAS (Igor Pro; median regression), 2) a non-graphic statistics application (SAS; spline regression), and 3) manual spreadsheet calculations (Excel; linear regression).

The reliability and accuracy of the PAS were examined through the agreement between each approach, using Bland-Altman plots of the mean difference, and through within-individual variation, using the intra-class correlation (ICC). There was a mean difference of 0. PAS preserves the reliability and accuracy of the original P CRIT analysis methods while vastly improving their efficiency through a graphic user interface and automation of analytic processes.

Providing a standardized platform for physiologic data processing offers the ability to implement quality assurance and control procedures for multicenter studies, as well as cost savings by improving the efficiency of complex repetitive tasks.

Upper airway collapsibility is commonly measured by the critical pressure, P CRIT, which is the pressure at which flow ceases (3-8). While there are currently efficient methods in place for collecting sleep data, the analytic generation of the pressure-flow relationships remains a time-consuming and cumbersome task (17-). In general, trained researchers and technicians are expected to mark events in the sleep recording, extract the data into text files and spreadsheets, sort through many rows and columns of data, extensively review individual numbers, switch software platforms to perform statistical analysis of P CRIT, and separately graph the data to visually display a pressure-flow relationship.

The process is seldom linear, and researchers often need to repeat analytic steps. Needless to say, these steps carry considerable time commitments. Inefficiencies in the data analysis process limit the applicability of physiological measurements in epidemiologic and genetic cohorts; thus there is a need to streamline methods.

The data processing steps of physiological measurement can be reasonably repetitive; therefore, a one-time solution is possible for streamlining data export, graphically displaying the extracted physiological values, categorizing experimental conditions, and integrating relevant calculations and statistics. This makes immediate review of the data possible, allowing effortless error detection and correction while creating an organized structure for database entry.

The data analysis software, a numerical computing environment with a 4th-generation programming language, was chosen for its built-in commands that facilitate generating graphical displays of large data sets.

In the current paper, we describe the components of a streamlined P CRIT analysis system and its application to investigating upper airway properties. Improving efficiencies in physiological measurements will enhance the applicability of their integration in large clinical and community cohorts.

The Starling resistor model is a mechanical analog of the upper airway that assumes an infinitely collapsible segment and the absence of neural activity in the structure. The segments upstream and downstream of the collapsible site have fixed diameters and resistances across all upstream and downstream pressures. Several features are important to the use of inspired airflow to estimate upper airway collapsibility. It has been shown that flow cannot occur until the pressure upstream from the collapsible segment exceeds the upper airway collapsing pressure P CRIT (22-).

Under conditions of airflow limitation, maximal flow (V I max) is determined by the gradient between the upstream nasal pressure (P N) and P CRIT, and by the upstream resistance (R N), as described in the equation. Flow is independent of changes in the downstream hypopharyngeal pressure (22, 29-). Under flow-limited conditions, pressure downstream to the site of collapse no longer influences maximal inspiratory airflow (V I max).
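In the notation above, this flow-limited relationship takes the familiar Starling resistor form:

```latex
\dot{V}_{I\,max} = \frac{P_N - P_{CRIT}}{R_N}
```

so that V I max falls linearly as P N is lowered toward P CRIT, the pressure at which flow ceases.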

Rather, V I max varies solely with changes in the upstream nasal pressure (21) and increases proportionately with elevations in nasal pressure, allowing us to determine P CRIT under the assumption of a constant-slope pressure-flow relationship. Since V I max during inspiratory airflow limitation is determined solely by upper airway characteristics rather than by the level of ventilatory drive, any increase in the level of mean inspiratory airflow can be attributed to increases in upper airway neuromuscular responses or to upper airway biochemical and physical properties.

Similarly, during the inspiratory airflow-limited condition, a change in airflow can be used as a marker for responses of upper airway properties to a given stimulus. The P CRIT of the upper airway can be extrapolated by plotting the maximal inspiratory airflow (V I max) against P N during sleep-induced flow-limited inspiration (27). Such flow limitation is identified by the presence of a flattened flow-vs.-time profile. The flow-limited segment of the V I max vs. P N relationship can be identified using a median segmented regression approach between the upper and lower inflection points of the sloped flow-limited segment (see Figure 1).

Periods of the recording are selected to perform breath-by-breath analysis that marks inspiration from each breath and extracts the peak inspiratory airflow. A total of 17 subjects participated in the current study. Seven of the patients were non-apneic (4 male, 3 female) and ten were previously diagnosed with sleep apnea (7 male, 3 female).

Sleep apnea was defined as a respiratory disturbance index (RDI) greater than 10 events per hour (Table 1). Written informed consent was obtained from each participant for this study, which was approved by the Johns Hopkins Medical Institution Human Investigational Review Board. Intermittently, P N is reduced in subsequent pressure drops until a level at which there is no flow. Flow measurements are extracted from several breaths at each P N level (pressure drop) for later analysis.

The routines described can be performed using a technical graphing and data analysis software package for Macintosh and Windows (Igor Pro, version 6). Minor adjustments may be needed to adapt the program to run on the Macintosh operating system. Default parameters, such as the path to executable files, must also be established during setup. It is also possible to incorporate workflow features such as database integration and server-side run-time analysis procedures to complement this program.

Custom plug-in software was developed using the Somnologica SDK 3. In brief, analysis is targeted to select periods of the airflow trace (or the complete trace if required): first, to mark the inspiratory phase of the respiratory cycle; second, to detect the peak inspiratory airflow; and third, to export the breath parameters.

An example of the breath-by-breath output, which is saved in spreadsheet file format, is displayed in Table 2. Saving the data into a database format is an alternate method for archiving individual breath-by-breath variables. An illustrative representation of maximal inspiratory airflow (V I max) versus nasal pressure shows distinct regions of the pressure-flow curve as described by the Starling resistor model: a completely occluded segment (section I), a flow-limited segment (section II), and a non-flow-limited segment (section III).

Regression analysis of the flow-limited section of the pressure-flow plot is used to establish the P CRIT as the nasal pressure at which airflow approaches zero.

The main approach employed to import data into the PAS is to directly load breath-by-breath data from a spreadsheet file as shown in Table 2. A file selector panel acts as the primary data explorer to provide a flexible directory search option.

Users may select an experimental data folder, which serves as a primary root folder, in order to search for files of a known type and specify the depth of folders to inspect within the primary root folder. It is possible to narrow the file search criteria by limiting the returned file extensions. From the file selector panel, the chosen file can be either loaded into the PAS or opened in its native application (e.g., Somnologica or RemLogic) for viewing. Alternatively, a completely flexible browse option allows files to be manually imported into the PAS.

Data for direct import into PAS must follow the given format in Table 2. The PAS is constrained to a maximum of 60 pressure drops and 8 series of pressure drops. Prior to importing data into the PAS, radio button and variable settings may be selected or modified from a settings panel to optimize the analysis process.

Once the data are imported, the PAS creates an overview plot, a pressure-airflow plot, and a data selection panel. Combined, these displays allow a user to obtain a snapshot of the breath-by-breath data extracted from the raw pressure and airflow traces, with respect to the standard sleep study analysis from RemLogic.

The overview plot also serves as a graphical reference point for data analysis of upper airway collapsibility in the PAS. The pressure-airflow plot depicts the relationship between maximal inspiratory flow (V I max) and P N. In the PAS, the pressure-airflow plot is a dynamic display and the focus for data output: it presents groupings of breaths at set pressure levels (bins) for generating the pressure-airflow median regression analysis, the medians of each pressure bin, and the slopes between medians.

Each point on the pressure-airflow plot corresponds to one on the overview plot, allowing informational display as well as deletion, alteration, or restoration of selected data points from either plot. Users may customize pressure assignments and threshold median slope values on a bin-by-bin basis to determine which data points are incorporated in the median regression.

Further customization is possible through a data selection panel, which allows users to determine the breaths (data points) included in the pressure-airflow plot. The collection of data display, review, and interactive windows provides the tools for easy data revision and finalization. Users may wish to compare pressure-airflow plots under various experimental conditions or between individuals.

The state of pressure-airflow plots can be copied and preserved, allowing simultaneous review of multiple data sets (Figure 3), such that comparisons within and between P CRIT studies are straightforward. PAS menus provide a list of commonly used tables for easy data display, and shortcuts allow keystroke interfacing with the program: opening, closing, and refreshing windows, saving and loading files, and resetting the program. An illustration of completed P CRIT analysis for one series of pressure and airflow data points (blue circles).

The multicolored horizontal bars indicate the individual range of pressure levels used to perform median regression for the entire pressure-flow data series. To determine the utility of the streamlined approach for determining upper airway collapsibility using the PAS, we used three separate approaches to generate passive pressure-flow curves for each upper airway physiology study. The approaches were performed in random order and independently of one another. The analytical paradigms employed were 1) standard linear regression using Excel (Microsoft, Redmond, Washington), 2) segmented regression (spline) analysis using SAS 9.

Second, the intraclass correlation (ICC) between measurements, an assessment of the within-individual variation, was determined using the method described by Deyo et al. An ICC of 1.0 indicates perfect agreement. Third, Bland-Altman plots of the difference in approaches vs. their mean were examined. Demographic, anthropometric, and polysomnographic characteristics for all 17 subjects are displayed in Table 1.
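A sketch of the Bland-Altman computation (bias and 95% limits of agreement); the paired P CRIT estimates below are invented, not the study's data.

```python
from statistics import mean, stdev

# Hypothetical paired P CRIT estimates (cm H2O) from two analysis paradigms.
method_a = [-5.1, -3.2, -7.8, -1.0, -4.5]
method_b = [-5.0, -3.5, -7.6, -1.2, -4.4]

# Bland-Altman examines the per-subject differences: the bias is the mean
# difference, and the 95% limits of agreement are bias +/- 1.96 SD.
diffs = [a - b for a, b in zip(method_a, method_b)]
bias = mean(diffs)
lower, upper = bias - 1.96 * stdev(diffs), bias + 1.96 * stdev(diffs)
print(round(bias, 2))  # → 0.02
```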

The ICC was 0. The Bland-Altman analysis did not demonstrate a systematic bias between either set of measurements (Figure 4A-B). We have described a visual analysis system for processing complex physiological parameters in a streamlined fashion. The findings in this study demonstrate that the software is both valid and reliable in streamlining P CRIT measurements, as there is no significant difference in the calculated values for upper airway collapsibility between the Igor-based P CRIT analysis software and the formerly established methods of linear regression and spline analysis.

While the precision of the measurement is consistent across the three paradigms, the PAS provides a considerable boost in efficiency by eliminating many redundant and time-consuming steps through the use of a graphic user interface and automation of analytic processes.

These steps include the manual organization and sorting of physiological protocol steps, manipulation of large data spreadsheets to identify specific outlying values, and data point re-exportation and recovery, to name a few. In the current study we have shown that, while maintaining the same accuracy and precision, streamlining analytic processes in physiological studies using a software solution can overcome time-consuming methods and ultimately facilitate efficient review.

This improved efficiency, accuracy, and reliability may translate into cost savings and reduce the amount of training investigators need to generate pressure flow relationships from sleep data.

The PAS provides a standardized platform for physiologic data processing, which offers the ability to implement quality assurance and control procedures for multicenter studies. Its main utility is to visually aid the analysis process with an overview that assists quality control and expedites data review.

Figure 3, Panel A visually captures the input data for the entire study and provides a display that allows the user to easily identify inconsistencies. The x-axis of the overview panel is equivalent to the row number in Table 2 for each breath identified in the raw data illustrated in Figure 2.

The first of the four plots on the overview panel displays the number of breaths on the y-axis for each nasal pressure drop, with a color spectrum starting from the red end used to segregate subsequent drops.

Second from the top is the level of nasal pressure for each breath, grouped using the color of its series of pressure drops. The color of each nasal pressure series on the overview panel corresponds to the series checkbox filters on the selector panel displayed in Figure 3, Panel C. Figure 3, Panel B displays the nasal pressure (x-axis) versus the V I max (y-axis) for each breath in the series selected from the selector panel (C).

Immediately above the pressure-flow plot are indicator bars that signify a pressure-level bin of data for inclusion in the median regression analysis. This visual display of the data allows a technician or investigator to easily identify and correct errant values, thus expediting the analytic process. The PAS would be specifically beneficial to clinical investigations of sleep apnea.

Sleep apnea is due to physiologic disturbances in upper airway mechanics and neural control.
